AMD GPU will power all 3 major next gen consoles?

Page 4

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Even if the GTX 560 Ti consumed 170W, or even 200W or 250W, and the HD 6970 2GB consumed 170W, it is clear that the GTX 560 Ti has superior performance/watt in tessellation.

Apparently, that spec is irrelevant to the console manufacturers. As I've already stated, we don't just render triangles (geometry).

AMD's smaller design makes it cheaper to produce. As quantities go up, that difference increases. Add to that the fact that, in overall gaming performance, AMD uses less power. This means that the associated components can be smaller (and therefore cheaper). AMD adheres to open standards (3D, physics, etc.) rather than proprietary standards that require individual licensing.

Assuming both competitors are equal in their ability to meet a deadline and in the quality of their work, what it all comes down to is cost.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
Even if the GTX 560 Ti consumed 170W, or even 200W or 250W, and the HD 6970 2GB consumed 170W, it is clear that the GTX 560 Ti has superior performance/watt in tessellation.

GTX 560 Ti performance/watt
21962/170W = 129.18
21962/200W = 109.81
21962/250W = 87.84

HD 6970 performance/watt
9818/170W = 57.75
9818/200W = 49.09
9818/250W = 39.27

It is not difficult to understand why; it is like multicore CPUs.

No matter your IPC, more cores will be faster in multithreaded applications, and as I have said before, tessellation performance only goes up with more tessellators.

RussianSensation can argue all day long about the power usage of the HD 69xx series, but the clear fact is that GF100 Fermi and derivative designs are superior in tessellation processing to Cayman or any other current AMD design.

Remember, people, I'm not talking about cards in general, I'm talking about tessellation, and you will see that AMD will utilize the same principles, with multiple tessellators and improved compute processing capabilities (better physics, not PhysX), in their next-gen architecture (GCN).

You are using numbers from an old driver. They re-benched the 6970 with newer drivers and got 19398 vs. the 560's 21962. Take away 1GB of memory from the 6970 and their actual power consumption should be about the same.
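
For anyone who wants to sanity-check the arithmetic above, here is a minimal Python sketch of the same performance-per-watt division. The scores and wattages are just the figures quoted in this thread (including the newer 19398 result), not independent measurements.

```python
# A minimal sketch of the performance-per-watt arithmetic in this thread.
# All scores and wattages are the figures quoted above (TessMark "Normal" scores,
# assumed power draws of 170/200/250W), not independent measurements.

scores = {
    "GTX 560 Ti (old driver run)": 21962,
    "HD 6970 2GB (old driver run)": 9818,
    "HD 6970 2GB (newer drivers)": 19398,
}

for card, score in scores.items():
    for watts in (170, 200, 250):
        print(f"{card}: {score}/{watts}W = {score / watts:.2f} points per watt")
```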
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
A few points on the general choice: MS lost billions of dollars due to AMD GPUs overheating. The real cause of this was the shockingly stupid X-clamp design, but what actually caused the RRoD was the AMD-supplied part frying. That's just the reality of the situation. People point to RRoD as evidence of why the companies would be concerned about performance/watt; they then point to the company whose part failed as the solution to that problem.

As already mentioned, this is pure FUD. AMD did not manufacture the part, and is not responsible for the problems. On the other hand, Nvidia DID manufacture a defective part, never fixed it, and screwed over OEMs and end users. This is a big part of why many have become gun-shy when it comes to using Nvidia GPUs.

Combine that with Nvidia's reputation for being arrogant and difficult to work with, and it does not surprise me that Sony looks to be dumping NV for their next PlayStation.
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
Just take the results of "Normal" Tessellation

GTX560Ti = 21962
HD6970 2GB = 9818

GTX560Ti TDP = 170W according to NV
HD6970 2GB TDP = 250W according to AMD

Do you want me to explain or have you figured it out yet ??

Get rid of your arrogance because it is unwarranted for several different reasons.

First, it goes against the spirit of this technical forum.

Second, you're using an outdated benchmark.

Third, I specifically asked for context-specific data, and you don't have it, so using the TDP numbers is irrelevant. If we were to do that, then why not look at the HD 6950? Roughly the same tessellation performance, but at only 200W instead of 250W. Now I'll tell you why: because those numbers are pointless. In real gaming situations the 6950 uses less power than the 560 Ti.

Fourth, games render more than just tessellation. Do you know what happens when Nvidia's GPUs have to process data other than tessellation at the same time? They can't do as much tessellation, so the tess performance drops. This is why games and benchmarks like Unigine Heaven don't show nearly the performance discrepancy between AMD and Nvidia cards that Tessmark shows. Despite the 560's tessellation advantage, the 6950 can still outperform it in Heaven. So, as I said before, workload is important, and it's also a factor in power consumption.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
Fourth, games render more than just tessellation. Do you know what happens when Nvidia's GPUs have to process data other than tessellation at the same time? They can't do as much tessellation, so the tess performance drops. This is why games and benchmarks like Unigine Heaven don't show nearly the performance discrepancy between AMD and Nvidia cards that Tessmark shows. So, as I said before, workload is important, and it's also a factor in power consumption.
Exactly.

Nvidia's architecture is more flexible in some ways, so for specific workloads like doing only tessellation, the benchmarks look impressive. But when you have a normal gaming workload, the GPU can no longer dedicate a large % of its resources only to tessellation, so performance evens out. This is another example of how synthetic benches are in no way representative of actual usage. When AMD said "tessellation done right" the fanboys started frothing at the mouth, but it was a valid point. There is no need to toss insane levels of tessellation (with little to no visual improvement) at a game; use it properly and balance out your resources.

On the flip side, AMD's first DX11 implementation of tessellation was lacking, so they addressed it to some extent, but Nvidia still holds the lead.
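
To make the resource-sharing point above concrete, here is a purely hypothetical Python sketch. The 2x tessellation advantage and the millisecond costs are made-up numbers for illustration, not benchmark data; the point is only that the smaller the share of a frame spent on tessellation, the smaller the gap in overall frame rate.

```python
# Toy model (hypothetical numbers): frame time = tessellation/geometry time + everything else.
# GPU A is assumed to be 2x faster at the tessellation portion and identical elsewhere.

def fps(tess_ms, other_ms):
    """Frames per second for a frame costing tess_ms + other_ms milliseconds."""
    return 1000.0 / (tess_ms + other_ms)

OTHER_MS = 20.0  # shading, texturing, post-processing, etc. (assumed equal on both GPUs)

for tess_ms_slow in (20.0, 5.0, 2.0):   # heavy, moderate, light tessellation loads
    tess_ms_fast = tess_ms_slow / 2.0   # assumed 2x tessellation advantage
    fast, slow = fps(tess_ms_fast, OTHER_MS), fps(tess_ms_slow, OTHER_MS)
    print(f"tess cost {tess_ms_slow:4.1f} ms: fast-tess GPU {fast:5.1f} fps, "
          f"slow-tess GPU {slow:5.1f} fps ({fast / slow - 1:.0%} lead)")
```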
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
RussianSensation can argue all day long about the power usage of the HD 69xx series, but the clear fact is that GF100 Fermi and derivative designs are superior in tessellation processing to Cayman or any other current AMD design.

Remember, people, I'm not talking about cards in general, I'm talking about tessellation, and you will see that AMD will utilize the same principles, with multiple tessellators and improved compute processing capabilities (better physics, not PhysX), in their next-gen architecture (GCN).

OK... I don't think anyone here is arguing that AMD is better than or equal to Nvidia when it comes to tessellation specifically. The point is that when you account for overall performance -- not just tessellation -- AMD chips have significantly more performance per watt than Nvidia chips. And that is critical for a console. Tessellation is a nifty feature, but it's nowhere near as important as overall shading performance right now.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
Exactly.

Nvidia's architecture is more flexible in some ways, so for specific workloads like doing only tessellation, the benchmarks look impressive. But when you have a normal gaming workload, the GPU can no longer dedicate a large % of its resources only to tessellation, so performance evens out. This is another example of how synthetic benches are in no way representative of actual usage. When AMD said "tessellation done right" the fanboys started frothing at the mouth, but it was a valid point. There is no need to toss insane levels of tessellation (with little to no visual improvement) at a game; use it properly and balance out your resources.

On the flip side, AMD's first DX11 implementation of tessellation was lacking, so they addressed it to some extent, but Nvidia still holds the lead.

To be fair, part of the "tessellation done right" line is simply because AMD's tessellator can't push as many triangles. When AMD remakes their architecture and changes their tessellator, you can bet AMD won't complain about insane levels of tessellation.

On the other hand, it is true that there is basically little visual difference (I would say no difference, but there will always be people with sharp eyes).
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
AMD don't supply MS with any chips; they sold the design and get royalties for each 360 sold. Putting the blame on AMD for MS's problems with their GPU is pure BS.
Also, I remember there was an article where an engineer disassembled an Xbox 360 and pointed out flawed design problems all over the place, not just with the GPU.
 

Ghiedo27

Senior member
Mar 9, 2011
403
0
0
My hope is that the console designers are aiming to get 28nm parts. If that's the case, then AMD has a better track record of rolling out new designs at a price point that's palatable for inclusion in a console - say $125 for the graphics core or thereabouts. If they can spend a bit more than that it would be a wash, but if they're trying to cut costs, Nvidia just seems to price themselves out of the market.

Regarding tessellation, it seems a bit early to make it into consoles. It takes expensive hardware to push it into being a major feature at playable frame rates.

Just my 2c.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Also, I remember there was an article where an engineer disassembled an Xbox 360 and pointed out flawed design problems all over the place, not just with the GPU.
So Microsoft can't afford good engineers? Anybody can find fault with any design if they choose to. Maybe that engineer pointing out flaws should quit his modest $50k-a-year job and design his own perfect line of consoles. :D
 

gorobei

Diamond Member
Jan 7, 2007
3,957
1,443
136
So Microsoft can't afford good engineers? Anybody can find fault with any design if they choose to. Maybe that engineer pointing out flaws should quit his modest $50k-a-year job and design his own perfect line of consoles. :D

Actually, there was an MS engineer on the Xbox team who left before the 360 was released. He wrote a blog or letter somewhere commenting on the 360's inadequate cooling after the RRoD issues came out and were settled. The gist of his account was that the management team rushed the project, were told by their own engineers that the cooling wasn't enough and would likely fail, and that most of the recommended tolerances wouldn't have been prohibitively expensive to implement, but cost cutting took priority.

MS might have been racing to beat Sony to release, cut corners, and gambled that process improvements would solve the heat issue. It may have been worth it to beat the PS3 to release and get a better foothold in the console market, but it eroded some of the consumer confidence in the hardware. I certainly won't be buying any future Xbox (720, etc.) until a few revisions into the life cycle.
 

toyota

Lifer
Apr 15, 2001
12,957
1
0
Actually, there was an MS engineer on the Xbox team who left before the 360 was released. He wrote a blog or letter somewhere commenting on the 360's inadequate cooling after the RRoD issues came out and were settled. The gist of his account was that the management team rushed the project, were told by their own engineers that the cooling wasn't enough and would likely fail, and that most of the recommended tolerances wouldn't have been prohibitively expensive to implement, but cost cutting took priority.

MS might have been racing to beat Sony to release, cut corners, and gambled that process improvements would solve the heat issue. It may have been worth it to beat the PS3 to release and get a better foothold in the console market, but it eroded some of the consumer confidence in the hardware. I certainly won't be buying any future Xbox (720, etc.) until a few revisions into the life cycle.
Yeah, that probably sounds about right. I think we would be shocked if we found out the gritty details on many of the products we use.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
What exactly was downsized on the GTX580 in terms of GPGPU? The die is 520mm^2 vs. 530mm^2 on the 480. That's hardly an improvement. You still have 1/8 DP support, still 3B transistors. There are several reasons why the 580 runs cooler:

1) More mature 40nm process
2) Improved firmware
3) A redesigned vapor-chamber GPU cooler with a larger opening for the fan in the shroud

This is the article I read which gave me the idea that Nvidia engineers downsized the GF100's HPC qualities slightly in order to accommodate the extra SM unit, as well as to increase the clock speed.

We have had a hard time seeing how NVIDIA would be able to activate its sixteenth SM unit without severe problems with the power consumption. But with GF110, NVIDIA made an active choice and sacrificed the HPC functionality (High Performance Computing) that it talked so boldly about for Fermi, not only to make it smaller but also more efficient.

According to NordicHardware's sources, it can be as many as 300 million transistors that NVIDIA has been able to cut in this way. The effect is that GF110 will be a GPU targeting only retail and will not be as efficient for GPGPU applications as the older siblings of the Fermi Tesla family. Something few users will care about.

Looking back at it now, it was never actually confirmed by Nvidia, but I'm inclined to believe them; NordicHardware was very accurate with the GTX 580's final specs, so they must have had a good source.

Regardless, if Nvidia were to manufacture a GF110-based GPU for consoles, it would HAVE to be re-engineered to make it more suitable for consoles, i.e. cheaper, smaller, more energy efficient, and a lot less potent, but still more than viable enough to render amazing images, since programmers code a lot closer to the metal for console games than for PC games.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
Nvidia being potentially locked out of the big 3 next-gen consoles has less to do with technical ability and more to do with companies not being enamored of dealing with Nvidia and their inflexible terms and conditions. Nvidia has a less-than-stellar reputation for cooperating with their partners, and a not-so-good reputation for backing up their products when they turn out to be defective.

IMO Nvidia will have to work extra hard to smooth things over with Sony; otherwise Sony will dump them for sure.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Get rid of your arrogance because it is unwarranted for several different reasons.

First, it goes against the spirit of this technical forum.

First of all, my answer was not meant to be arrogant.


cusideabelincoln said:
Second, you're using an outdated benchmark.


The bench is not outdated, but the link I provided did have older drivers for both AMD and NV cards.

From the ASUS ROG MATRIX GTX 580 review
http://www.geeks3d.com/20110612/asus-rog-matrix-gtx-580-platinum-review-overclocking/8/

Have a look at the TessMark 0.3.0 scores at Normal Tessellation 16x; even the GTX 560 outperforms the HD 6970.

[Image: tessmark16x.png]




cusideabelincoln said:
Third, I specifically asked for context-specific data, and you don't have it, so using the TDP numbers is irrelevant. If we were to do that, then why not look at the HD 6950? Roughly the same tessellation performance, but at only 200W instead of 250W. Now I'll tell you why: because those numbers are pointless. In real gaming situations the 6950 uses less power than the 560 Ti.

You asked for power usage running that specific benchmark when you clearly know I can't find it, because nobody has tested it yet. I used the cards' TDPs in order to show the difference in performance a lower-TDP card like the GTX 560 Ti has over a higher-TDP card like the HD 6970.



cusideabelincoln said:
Fourth, games render more than just tessellation. Do you know what happens when Nvidia's GPUs have to process data other than tessellation at the same time? They can't do as much tessellation, so the tess performance drops.

I would very much like you to support that. Got any paper/review/technical analysis to back it up?


cusideabelincoln said:
This is why games and benchmarks like Unigine Heaven don't show nearly the performance discrepancy between AMD and Nvidia cards that Tessmark shows. Despite the 560's tessellation advantage, the 6950 can still outperform it in Heaven. So, as I said before, workload is important, and it's also a factor in power consumption.

Not all games/benchmarks are equal, so I will show you another game where a lower-TDP card like the GTX 560 (not the Ti) can outperform a higher-TDP card like the HD 6970.

I could also post HAWX 2, but I know that most users will dismiss that bench.

http://www.anandtech.com/show/4344/nvidias-geforce-gtx-560-top-to-bottom-overclock/8

[Image: 37890.png]


Since you spoke of outdated benchmarks, TechPowerUp still uses Heaven 2.0 when 2.5 is the latest version.

Next-gen consoles will use tessellation and physics (not PhysX), and I'm sure both MS and Sony will not use a 40nm chip like GF110/114 or Cayman. It could be a 28nm derivative design or a 28nm GCN/Kepler-derivative architecture chip.


From Microsoft,

http://msdn.microsoft.com/en-us/library/ff476340(v=VS.85).aspx#tessellation_benefits



Tessellation Benefits

Tessellation:

Saves lots of memory and bandwidth, which allows an application to render higher detailed surfaces from low-resolution models. The tessellation technique implemented in the Direct3D 11 pipeline also supports displacement mapping, which can produce stunning amounts of surface detail.

Supports scalable-rendering techniques, such as continuous or view-dependent levels of detail which can be calculated on the fly.

Improves performance by performing expensive computations at lower frequency (doing calculations on a lower-detail model). This could include blending calculations using blend shapes or morph targets for realistic animation or physics calculations for collision detection or soft body dynamics.

The Direct3D 11 pipeline implements tessellation in hardware, which off-loads the work from the CPU to the GPU. This can lead to very large performance improvements if an application implements large numbers of morph targets and/or more sophisticated skinning/deformation models. To access the new tessellation features, you must learn about some new pipeline stages.
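
A rough, hypothetical Python sketch of the memory/bandwidth point in the quote above: a tessellation factor of N lets the hardware amplify each control triangle into roughly N*N smaller triangles on the fly, so the detailed surface never has to be stored or streamed. The vertex size and mesh sizes below are made-up round numbers, not real assets.

```python
# Hypothetical illustration of the "saves memory and bandwidth" benefit quoted above.
# Made-up round numbers; the only real claim is that detail generated by the hardware
# tessellator does not have to live in (or move through) memory as explicit geometry.

BYTES_PER_VERTEX = 32       # e.g. position + normal + UV in a common-ish layout
base_triangles = 10_000     # low-resolution control mesh kept in memory
tess_factor = 16            # amplification done on the GPU (~N*N triangles per control triangle)

stored_low_res = base_triangles * 3 * BYTES_PER_VERTEX
equivalent_high_res = base_triangles * tess_factor**2 * 3 * BYTES_PER_VERTEX

print(f"low-res control mesh in memory: {stored_low_res / 1e6:7.2f} MB")
print(f"same detail stored explicitly:  {equivalent_high_res / 1e6:7.2f} MB")
print(f"memory/bandwidth saved:         {1 - stored_low_res / equivalent_high_res:.1%}")
```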
 

Ghiedo27

Senior member
Mar 9, 2011
403
0
0
Next-gen consoles will use tessellation and physics
It seems to me that exclusive software sells consoles, while exclusive hardware just makes it harder to port the throwaway titles that form the base of a console's catalog.

Regarding Microsoft's sales pitch, I don't think anyone has ever used tessellation to improve performance. I do like the idea of reduced geometry load for distant objects, though. I hope that will change as we move out of the 'supported games as tech demo platforms' stage.
 

ZimZum

Golden Member
Aug 2, 2001
1,281
0
76
Nvidia being potentially locked out of the big 3 next-gen consoles has less to do with technical ability and more to do with companies not being enamored of dealing with Nvidia and their inflexible terms and conditions. Nvidia has a less-than-stellar reputation for cooperating with their partners, and a not-so-good reputation for backing up their products when they turn out to be defective.

I would say it's a combination of both. The fact that Fermi is larger, hotter, and consumes more power than its AMD counterparts has certainly cost Nvidia in the OEM sector, especially the mobile market. They lost Apple, which was huge.
 

PingviN

Golden Member
Nov 3, 2009
1,848
13
81
Not all games/benchmarks are equal, so I will show you another game where a lower-TDP card like the GTX 560 (not the Ti) can outperform a higher-TDP card like the HD 6970.

I could also post HAWX 2, but I know that most users will dismiss that bench.

http://www.anandtech.com/show/4344/nvidias-geforce-gtx-560-top-to-bottom-overclock/8

[Image: 37890.png]

It is well known that Nvidia has a driver advantage in Civ 5 due to having managed to enable multithreaded rendering. It has (almost) nothing to do with Fermi's architecture; it simply comes down to drivers. Should AMD manage to do the same thing (i.e. enable multithreading, which will probably happen before BF3 is released), the picture would be different. In other words, arguing Fermi's tessellation superiority using the Civ V benchmark is not accurate. I'm not saying Fermi doesn't perform better than the HD 6900, but it's not mainly because of micro-architectural design.

[Image: 35204.png]


At moderate tessellation levels, the HD 6970 is still the faster card. Heaven might be a synthetic benchmark, but it still renders graphics rather than just measuring tessellation performance. At a moderate level (what we'll probably see in games), there is no doubt which card is better suited for the task in the same power envelope. In this particular test, the HD 6970 has a comfortable 25% advantage in framerate. Extreme levels are unplayable, and it's highly unlikely games will use those kinds of tessellation levels, due to the simple fact that by lowering tessellation levels [without a noticeable reduction in image quality] you get better performance.

Games will start taking advantage of tessellation, of that I'm sure, but not at an extreme level; the performance hit (on any of today's GPUs) just isn't worth it. Shading performance is still important, and AMD offers better performance/watt in everything but pure tessellation power, which just doesn't offer an accurate perspective on what matters when it comes to gaming performance.


My 5 cents
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
First of all, my answer was not meant to be arrogant.

It was a belittling comment no matter how you slice it.

I would very much like you to support that. Got any paper/review/technical analysis to back it up?

Look at all the DX11 game benchmarks.

Not all games/benchmarks are equal, so I will show you another game where a lower-TDP card like the GTX 560 (not the Ti) can outperform a higher-TDP card like the HD 6970.

I could also post HAWX 2, but I know that most users will dismiss that bench.

Civilization 5? Ryan Smith already theorized about the performance discrepancy in Civ 5, and he believes it has more to do with AMD's lack of multithreaded drivers. Here and here. Even your posted benchmark reflects that point: the 6900 cards have room for improvement on the driver front. If Civ 5 were tessellation-bound on AMD cards, then the 6870 should not be performing the same as the 6950.

If you posted HAWX 2, it would not support your theory either. In non-DX11 mode, HAWX 2 still runs faster on Nvidia hardware.

Since you spoke of outdated benchmarks, TechPowerUp still uses Heaven 2.0 when 2.5 is the latest version.

Outdated "benchmark" as in an outdated "benchmark run", which is the time context when the benchmark was run and which means using outdated drivers when there is a specific example showing newer drivers increase performance in that specific test, as Outrage pointed out.

Now please stop arguing semantics. From the context clues of my posts it was clear what I meant by "outdated benchmark."
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
Civ 5 is well known to have driver problems on AMD cards; it's not a hardware problem. I can easily cite examples of the reverse being true, with Nvidia cards having driver problems. Take Dragon Age II, for example -- a game that uses DirectX 11 for tessellation, as well as other effects, in its "very high" setting.

[Image: 130130216348lg1t140V_7_1.gif]


[Image: 130130216348lg1t140V_7_2.gif]


Here we see that the GTX 580 can't even match the HD 6870 in performance with Dragon Age II. Now, later Dragon Age patches and Nvidia driver releases have done a lot to fix the issue, but that just reinforces the point that it's a driver problem, not a hardware problem. Same with Civ V.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
But in the context of nV vs. AMD offerings, how do you justify putting a GTX460 instead of an HD6870 or say a GTX560 instead of an HD6950 2GB?

None of those parts are going to be used as they are produced today in a next gen console. There is zero chance. The largest chips we will see will be 28nm for the HD twins next generation.

Tessellation brings a major performance hit in games.

Tessellation brings a major performance hit in PC games. This seems to be a major issue for non-gamers, or PC-exclusive gamers, to wrap their heads around. Tessellation, if built in with proper support on the consoles, is likely to be used for everything it reasonably can be, which is a significant portion of scene data in any game. In the PC space we haven't seen anything that takes this approach, or anything close to it, in games. Console devs use the tools they are given far more effectively than the best PC devs could ever hope to; using Carmack's guidelines, you can expect an x00% performance boost on consoles doing the same tasks, 100% being the low side. Highly optimized code designed to run on one exact platform will always significantly outperform generic code, always.

AMD don't supply MS with any chips; they sold the design and get royalties for each 360 sold. Putting the blame on AMD for MS's problems with their GPU is pure BS.

nVidia has never made a GPU; TSMC does. Using your precise logic, bumpgate wasn't nVidia's fault. I honestly don't see the RRoD issue as being AMD's issue; it is the X-clamp that fails (fixed enough of them to know) - but if you blame nVidia for bumpgate and you have any integrity, you have to blame AMD for the RRoD issues and the billions of dollars of losses they caused.

My hope is that the console designers are aiming to get 28nm parts. If that's the case, then AMD has a better track record of rolling out new designs at a price point that's palatable for inclusion in a console - say $125 for the graphics core or thereabouts.

Your pricing is off by ~$100. Console makers are spending the majority of their costs for the entire system on one chip; there is zero chance of that happening. People should keep that in mind when discussing these things: is it worth it for the companies to spend the R&D on these chips given the potential RoI?

I am guessing there is a 95% likelihood the next gen PS4/720 will have an HD5000 or HD6000 derivative GPU.

Forgot to mention this one earlier: there is somewhere between 0% and 0% chance of that happening. The next-gen consoles will not launch with current parts.

The 8800 series launched less than a week before the PS3; the RSX was based on the core that was nVidia's highest-end offering a week prior to it shipping (it was scaled down). There is no chance of the next PS or XB hitting before the end of next year (and even then the odds are very low); the more likely scenario is 2013 or 2014. There is absolutely no chance whatsoever that the next-gen consoles are going to be using current GPUs, none.
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
Forgot to mention this one earlier: there is somewhere between 0% and 0% chance of that happening. The next-gen consoles will not launch with current parts.

The 8800 series launched less than a week before the PS3; the RSX was based on the core that was nVidia's highest-end offering a week prior to it shipping (it was scaled down). There is no chance of the next PS or XB hitting before the end of next year (and even then the odds are very low); the more likely scenario is 2013 or 2014. There is absolutely no chance whatsoever that the next-gen consoles are going to be using current GPUs, none.
Sony says they are already making the PS4; what GPU do you think they will use?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Sony says they are already making the PS4; what GPU do you think they will use?

The PS4 is in the design phase. The PS3 entered the design phase some five years before the RSX was built. The GPU they will end up using has not been released yet; that is the only thing I can say with any certainty.
 

Ghiedo27

Senior member
Mar 9, 2011
403
0
0
Your pricing is off by ~$100. Console makers are spending the majority of their costs for the entire system on one chip; there is zero chance of that happening. People should keep that in mind when discussing these things: is it worth it for the companies to spend the R&D on these chips given the potential RoI?
Are you saying that they're going to spend $225 on a video chip? Then what are they going to be spending on the CPU, memory, storage, mainboard, power supply, and enclosure? Yeah, they lose money on these things, but I don't think they want to bleed it out of their eyeballs.

*Just to be clear, here I'm talking about retail portion pricing, i.e. out of a $350 console, about a third might go to the CPU, another third to the GPU, and the rest split among everything else. Just a rough figure to ballpark from - if anyone has an educated guess, I'd love to hear it.