AMD GPU will power all 3 major next gen consoles?


tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Wouldn't that be cutting off their nose to spite their face? CUDA should be a specific selling point: key features of CUDA are integrated into the HW, and CUDA is far more mature than any software support GCN will have, much less AMD's VLIW support.

What console gamer cares about CUDA, let alone has heard of it?
 

Zanovar

Diamond Member
Jan 21, 2011
3,446
232
106
When are these new consoles due? I keep hearing different timeframes.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
At each price category NV GF100 (and derivative chips) has superior performance/watt in Tessellation.

You know that most consoles aren't using 400W+ of power, right? Can you please explain how we can fit a GTX570/580-style GPU inside a console?

You know you can put a passively cooled HD6850, which barely uses 100W, into a console? Can you put a passively cooled GTX460 in there? You keep missing the fact that in the same power envelope as a GTX460 1GB/560, I can fit an HD6950 2GB. I'd rather take an HD6950 2GB over a GTX560 1GB in a console.

GTX570 peaks at 210 Watts vs. 132 Watts for the 6950 2GB. You can talk about Tessellation performance all day, but there is no way they are going to add 80W of power to the console just to have superior Tessellation performance. AMD knows how to design a lean, power-efficient chip, which is better for consoles. On the desktop side, where I am using a 500W+ PSU, I don't really care if a videocard consumes 130W or 210W. But for consoles, that's a world of difference!

What console gamer cares about CUDA, let alone has heard of it?

I think people are forgetting the fact that a console's primary function is to play games, not encode video or run scientific applications in CUDA.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
What console gamer cares about CUDA, let alone has heard of it?
When did you stop beating your wife? Console gamers don't have to know anything more than where all the plugs and buttons are. They are unsupported end users. nVidia and ATI sell to MS, Nintendo, and...OK, really only Sony. The gamers don't matter. The companies footing the bill for the boxes matter most, with game publishers coming in second, and game developers third. Also, what else does nVidia have? Let me quote RussianSensation for that one:
AMD knows how to design a lean, power-efficient chip, which is better for consoles. On the PC, where I am using a 500W+ PSU, I don't really care if a videocard consumes 130W or 210W. But for consoles, that's a world of difference!
nVidia's got nothin', except superior parallel compute capability, and better software support for it, compared to AMD. Their one chance to look good would be that their GPU could be used to free up the CPU, like the Cell SPEs but far more capable. They can't meaningfully remove CUDA from the hardware. A few features here and there, but that's not going to make a highly efficient GPU, just a crippled one. They need to offer some combination of better and/or cheaper and/or more efficient than the competition.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
If, as a console developer, I am faced with a decision of choosing between a card that has better performance per watt in games vs. a card that only wins in 2% of cases once you enable Tessellation, I am choosing the card that has better performance per watt in games.

That makes my head hurt. If a console launches with extremely powerful tessellation, the devs are going to use it to make their titles look better. This is nothing remotely like the PC market; we don't get a gradual build-up of using a new feature over the course of many years. Console devs get a new platform and they want to push it as hard as they can out of the gate. Obviously over time they figure out new tricks to maximize performance, but if you hand them something like an extremely powerful tessellation unit they are going to utilize it heavily from day one (the same would be true of any feature you can name). eDRAM has never been a factor on desktop GPUs, yet 360 devs instantly relied on it heavily to enhance their games. That is how it works in the console space: you do the best with what you are given. Doesn't matter if you asked for it, whether it was ideal, or anything else. Those who best utilize the tools available to them are considered the top developers in the market, not the guy who attempts to please technology fan 345,392.

The GPU choice in the next consoles will be a business choice, much like this generation's was, barring someone utterly failing to deliver on their goals (which is how nV ended up in the last generation: Sony figured out through natural development processes what Intel spent billions to rediscover on its own, that software rendering is very dumb). AMD could end up being the choice for all of the next generation consoles; it could also end up being in just the WiiU, although that is highly unlikely.

A few points on the general choice: MS lost billions of dollars due to AMD GPUs overheating. The real cause of this was the shockingly stupid X-clamp design, but what actually caused the RRoD was the AMD-supplied part frying. That's just the reality of the situation. People point to the RRoD as evidence of why the companies would be concerned about performance/watt, then point to the company whose part failed as the solution to that problem.

In reality, no matter what the consoles launch at, the overwhelming majority of them are going to use significantly less power throughout the life cycle than the launch specs. Many die shrinks happen over the life cycle of a console. The out-of-the-gate cost per chip is going to be a *FAR* bigger factor than performance per watt, that is absolutely certain.

The most logical choice for the next generation is AMD for MS and nVidia for Sony, although not for the reasons some would think. The one constant we have had in the modern era of consoles is that the system that wins the largest portion of the consumer pie has had the best backwards compatibility. Every generation. Also true to date: the most powerful system has never finished on top in the console market. Hasn't happened.

I think if you talked to most of the developers they would want MS and Sony to simply release far more powerful versions of what they already have. I also think that at the end of the day those requests are insignificant compared to the bottom lines of cost and marketability.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
So far I've been disappointed with what Tessellation brought to the table - generally it is a minor increase (in many cases it is more a question of looking different rather than objectively better) in IQ at a huge performance cost.

Maybe it is a question of games not being built from the ground for tessellation since the tessellation ready user base is small?

Hardware PhysX, at first look, seems to take quite a big blow - all potential hardware physics effects seems to be locked on AMD platform, which won't be using hardware PhysX.

I'm not seeing developers changing their console hardware physics implementation to physX for their PC ports, which leaves NV with only whatever exclusives for PC decide to go hardware physX or more games where hardware physX is an afterthought.

I am curious why people keep talking about PhysX when I said physics. The API doesn't matter. This is the next frontier that will give us a much better gaming experience.

Tessellation has the potential to give us great gains in graphics with lower bandwidth requirements. Your personal opinion on what we have or haven't seen means nothing. We are just now seeing some games starting to use it. The gains won't be realized for 1-2 more generations. We go through this with every release of DX. It is becoming a tired old argument that people can't see a difference or benefit from the new API.
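As a rough illustration of the bandwidth point, here is a back-of-the-envelope sketch (the vertex counts, vertex size, and displacement-map size are assumed for illustration, not taken from any real game):

```python
# Why tessellating a coarse mesh on-chip can cut per-frame vertex traffic
# versus streaming an equally dense pre-built mesh. All sizes are assumptions.
BYTES_PER_VERTEX = 32  # assumed: position + normal + UV as 32-bit floats

def mesh_bytes(vertex_count):
    return vertex_count * BYTES_PER_VERTEX

dense_mesh_bytes   = mesh_bytes(1_000_000)  # hypothetical detailed mesh, fetched every frame
coarse_cage_bytes  = mesh_bytes(10_000)     # hypothetical control cage, amplified on-chip
displacement_bytes = 1 * 2**20              # assumed 1 MiB displacement texture for the detail

print(f"dense mesh streamed per frame : {dense_mesh_bytes / 2**20:5.1f} MiB")
print(f"coarse cage + displacement map: {(coarse_cage_bytes + displacement_bytes) / 2**20:5.1f} MiB")
```

The fine detail still has to come from somewhere (here, a displacement map read through the texture cache), but the per-frame vertex fetch drops by orders of magnitude in this toy comparison.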
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Tessellation has the potential to give us great gains in graphics with lower bandwidth requirements. Your personal opinion on what we have or haven't seen means nothing. We are just now seeing some games starting to use it. The gains won't be realized for 1-2 more generations. We go through this with every release of DX. It is becoming a tired old argument that people can't see a difference or benefit from the new API.

Crysis 2 DX11 is the first game to truly make use of tessellation in a meaningful way. Unlike Metro 2033, the effects are very noticeable and help improve graphical fidelity substantially.

Seriously, it's amazing how Crysis 2 went from being just another good looking shooter in DX9, to one of the best looking games ever imo.

Personally, I love the water rendering the most, and it uses tessellation for enhanced geometry on the waves.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That makes my head hurt. If a console launches with extremely powerful tessellation, the devs are going to use it to make their titles look better.

You can't put a faster NV GPU inside the console under the same power consumption scenario. I realize that the developers will use the features they are given. But in the context of nV vs. AMD offerings, how do you justify putting a GTX460 instead of an HD6870 or say a GTX560 instead of an HD6950 2GB? Tessellation brings a major performance hit in games. So you are suggesting implementing Tessellation on cards that are already slower to begin with? You are throwing an even greater workload on GPUs that are inferior to start with.

Tessellation makes sense only when the card is fast and can actually utilize it.

Let's look at what happens in a real-world scenario: Unigine at 1920x1200, 4xAA/16xAF, Tessellation Normal.

6850 = 22.8 fps
GTX460 1GB = 24.5 fps (+7.5%)

^ That's not going to make any difference whatsoever in a console.

But I can fit an HD6870 within the confines of GTX460's power consumption:

HD6870 1GB = 26.4 fps

So I either get a passively cooled 6850 or an HD6870. Why would I choose the GTX460?

What about faster cards?

GTX560 1GB = 29.4 fps
GTX560 Ti 1GB = 33.2 fps

But you can put an HD6950 1GB into the console within the same power envelope as a GTX560 Ti.

HD6950 1GB = 34.2 fps

So where is this "massive Tessellation" performance advantage that NV has? It comes into play if you enable Extreme settings, at which point all of these cards are a slideshow. Finally, it's also found in the GTX570/580 cards, but those are too hot and power hungry for consoles.
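For reference, here is a quick sketch that simply tabulates the Unigine numbers above, using the same-power-envelope pairings argued in this post (no power measurements are involved, just the quoted frame rates):

```python
# Unigine 1920x1200, 4xAA/16xAF, Tessellation Normal: figures quoted above.
fps = {
    "HD6850":        22.8,
    "GTX460 1GB":    24.5,
    "HD6870 1GB":    26.4,
    "GTX560 1GB":    29.4,
    "GTX560 Ti 1GB": 33.2,
    "HD6950 1GB":    34.2,
}

# Pairings where, per the argument above, the AMD card fits the same
# power envelope as the NV card it is compared against.
pairings = [("GTX460 1GB", "HD6870 1GB"), ("GTX560 Ti 1GB", "HD6950 1GB")]

for nv, amd in pairings:
    delta = fps[amd] / fps[nv] - 1
    print(f"{amd} vs {nv}: {delta:+.1%} in the same power envelope")
```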

Realistically speaking, if this next generation of consoles is going to stick around for another 6-7 years, then you start worrying about other bottlenecks in the Fermi architecture: a smaller amount of VRAM and lower texture filtering performance. I'd take an HD6950 2GB over a GTX560 Ti 1GB in a console any day of the week under that scenario.
 

Outrage

Senior member
Oct 9, 1999
217
1
0
A few points on the general choice: MS lost billions of dollars due to AMD GPUs overheating. The real cause of this was the shockingly stupid X-clamp design, but what actually caused the RRoD was the AMD-supplied part frying. That's just the reality of the situation. People point to the RRoD as evidence of why the companies would be concerned about performance/watt, then point to the company whose part failed as the solution to that problem.

AMD doesn't supply MS with any chips; they sold the design and get royalties for each 360 sold. Putting the blame on AMD for MS's problems with their GPU is pure BS.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Crysis 2 DX11 is the first game to truly make use of tessellation in a meaningful way. Unlike Metro 2033, the effects are very noticeable and help improve graphical fidelity substantially.

Seriously, it's amazing how Crysis 2 went from being just another good looking shooter in DX9, to one of the best looking games ever imo.

Personally, I love the water rendering the most, and it uses tessellation for enhanced geometry on the waves.

What? It's a cheap tack-on paid for by Nvidia to sell more of their cards.
 

-Slacker-

Golden Member
Feb 24, 2010
1,563
0
76
At each price category NV GF100 (and derivative chips) has superior performance/watt in Tessellation.

The link you provided for power consumption tests Crysis, not a DX-11 Tessellation game or benchmark. I clearly talked about DX-11 Tessellation performance/watt, and in order to see that, you really have to test a real Tessellation game or benchmark, not Crysis in DX-10.

Have a look at TessMark OpenGL, http://www.geeks3d.com/20110408/download-tessmark-0-3-0-released/

You really need to understand that the tessellation stage in the DX-11 API is fixed-function, not programmable (only the Hull Shader and Domain Shader around it are programmable), so the only way to get more performance is to put more tessellator units in the architecture of your graphics card. That's why NV GF100 is a multi-tessellator design, while from AMD only the 69xx has a dual-tessellator design; the HD58xx and HD68xx series are single-tessellator designs.

Next gen AMD architecture GCN (Graphics Core Next) will utilize a multi-Tessellation core design much like NV GF100/110.
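To make that scaling argument concrete, here is a toy back-of-the-envelope model. It assumes the fixed-function tessellator is the bottleneck and that each unit emits roughly one triangle per clock; the unit counts and clocks below are hypothetical, not vendor specifications:

```python
# Toy model: if geometry amplification is limited by fixed-function tessellator
# units, peak triangle output scales with unit count and clock.
# Hypothetical figures only; these are not measurements of any real GPU.
def tess_tris_per_second(units, clock_mhz, tris_per_clock_per_unit=1.0):
    return units * clock_mhz * 1e6 * tris_per_clock_per_unit

single_unit = tess_tris_per_second(units=1, clock_mhz=850)  # single-tessellator design
multi_unit  = tess_tris_per_second(units=4, clock_mhz=750)  # multi-tessellator design

print(f"1 unit  @ 850 MHz: {single_unit / 1e9:.2f} Gtris/s")
print(f"4 units @ 750 MHz: {multi_unit / 1e9:.2f} Gtris/s ({multi_unit / single_unit:.1f}x)")
```

Under that assumption, raising shader clocks or IPC alone does little for a tessellation-bound workload, which is the point of the multi-tessellator designs.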



I'm talking about Tessellation and you keep saying that AMD cards have better performance/watt in general use. Do you have a problem understanding what I'm talking about?

Why not test the HD6870 against the GTS550 in Tessellation and observe which card has better performance/watt? Even a GTS450 can keep up with an HD5870 or HD6870 in Tessellation.

http://www.gpu-tech.org/content.php...sellation-punch-does-the-Geforce-GTS-450-have

[Image: TessMark comparison table for GTS450, GTX460, GTX480 and HD5870]


I thought AMD's Catalyst blows at OpenGL; if true, wouldn't NV's massive performance lead in those graphs be due to more than just tessellation performance?

Also, why does having better tess performance per watt matter so much, when real world performance depends on so much more, even in a game that supports extreme tessellation?

Anyway, this could turn out to be great news for AMD; they REALLY need the money.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,697
397
126
PhysX seems to be pretty much dead if NVIDIA is out of the 3 consoles.

I doubt we can extrapolate the future of tessellation, or how current GPU offerings will perform in future games developed from the ground up with tessellation in mind, based on tessellation in current games, since in those games tessellation only seems to be used on top of already complex textures.

Is that better, Genx87?

I just used your post to enter the thread, since it touched on two things I wanted to talk about: tessellation, and how physics will be done in the future (i.e. not using hardware PhysX).
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
What? It's a cheap tack-on paid for by Nvidia to sell more of their cards.

Well, if that's your opinion, that's your opinion.

Although it's likely you haven't even played the game under DX11. Screenshots and compressed video footage definitely don't do the game justice.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
GTX570 peaks at 210 Watts vs. 132 Watts for the 6950 2GB. You can talk about Tessellation performance all day, but there is no way they are going to add 80W of power to the console just to have superior Tessellation performance. AMD knows how to design a lean, power-efficient chip, which is better for consoles. On the desktop side, where I am using a 500W+ PSU, I don't really care if a videocard consumes 130W or 210W. But for consoles, that's a world of difference!

I think people are forgetting the fact that a console's primary function is to play games, not encode video or run scientific applications in CUDA.

It seems you answered your own question.

The main reason why Fermi-based GPUs have higher power consumption than equivalent AMD GPUs is that they have additional GPGPU capabilities that the latter do not.

It would be silly for Nvidia to sell a GF100-based GPU to console makers with the GPGPU capabilities still intact, so they would most certainly gut the GPU to make it more power efficient.

That's what they did with the GTX 580. They downsized the GPGPU capabilities and were then able to increase the SP count and core/mem speeds (with some other architectural performance tweaks) without drastically increasing power consumption.

The GTX 580 uses less power under normal gaming scenarios than the GTX 480 despite being significantly faster, precisely because it's a more gaming oriented GPU.
 

tommo123

Platinum Member
Sep 25, 2005
2,617
48
91
Well, if that's your opinion, that's your opinion.

Although it's likely you haven't even played the game under DX11. Screenshots and compressed video footage definitely don't do the game justice.

Honestly? I haven't... but it's not a good enough game to go back in and replay it.

Crysis and Warhead are, and I have played them a few times, but Crysis 2 just isn't.
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
A few points on the general choice: MS lost billions of dollars due to AMD GPUs overheating. The real cause of this was the shockingly stupid X-clamp design, but what actually caused the RRoD was the AMD-supplied part frying. That's just the reality of the situation. People point to the RRoD as evidence of why the companies would be concerned about performance/watt, then point to the company whose part failed as the solution to that problem.
Not so sure about that. I've read a lot about it and suffered the problem myself. The part that failed was the GPU packaging's bond to the PCB. The cooling was also very inadequate, as anyone who has ever taken apart an Xbox would know. If the EU had never passed into law its legislation regarding the use of lead in children's toys, MS would never have had the problem, because they could have used lead-based solder. The X-clamps put flexural strain on the package, but not a whole lot; the X-clamp used in the 360 is pretty small and weak. I would say the #1 reason for failure was inadequate cooling and the #2 reason was lead-free solder.
 

Red Hawk

Diamond Member
Jan 1, 2011
3,266
169
106
If AMD does design the graphics chips for all 3 consoles, that would definitely shake up the PC market. Console games would be designed with AMD-specific technology in mind, and that would carry over to PC ports. More than that, though, AMD's graphics R&D would get a major boost. Many technologies first implemented in the 360's Xenos chip were later used in ATi's PC graphics. Nvidia won't have that boost anymore. They may have the mobile market, but advances in mobile, ARM-based technology would not benefit their PC development as much as console research.
 

cusideabelincoln

Diamond Member
Aug 3, 2008
3,275
46
91
At each price category NV GF100 (and derivative chips) has superior performance/watt in Tessellation.

[snip]

Your links only showed performance. They did not show the power consumption in the specific context of tessellation performance. As such your claim has not been validated, because GPUs will use varying amounts of power depending on the workload.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
That's what they did with the GTX 580. They downsized the GPGPU capabilities and were then able to increase the SP count and core/mem speeds (with some other architectural performance tweaks) without drastically increasing power consumption.

??? The GTX580 and 480 are artificially limited to 1/8th-rate DP so that NV can sell Tesla cards for $3-5k apiece. The GPU design is no different from their prosumer products.

What exactly was downsized on the GTX580 in terms of GPGPU? The die is 520mm^2 vs. 530mm^2 on the 480. That's hardly an improvement. You still have 1/8 DP support, still 3B transistors. There are several reasons why the 580 runs cooler:

1) More mature 40nm process
2) Improved firmware
3) A redesigned vapor-chamber GPU cooler with a larger opening for the fan in the shroud

Performance wise,

"NV has ported over GF104’s faster FP16 (half-precision) texture filtering capabilities, giving GF110/GTX580 the ability to filter 4 FP16 pixels per clock, versus 2 on GF100/GTX480. The other change ties in well with the company’s heavy focus on tessellation, with a revised Z-culling/rejection engine that will do a better job of throwing out pixels early, giving GF110/GTX580 more time to spend on rendering the pixels that will actually be seen. This is harder to quantify (and impossible for us to test), but NVIDIA puts this at another 8% performance improvement."

There is nothing "more gaming centric" about the 580 compared to the 480. They didn't carve out any extra room to move from 480 to 512 SP. GTX580 is how Fermi should have launched.

In fact, if you followed Fermi closely, you would have noticed that 8-10 months after release, the 480 chips ran cooler than the original April-June cards. Once yields improved and transistor quality went up, NV was able to get fully yielding 480 chips. Add the tweaks mentioned above, and you've got yourself a 580. That's not "magic".
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Your links only showed performance. They did not show the power consumption in the specific context of tessellation performance. As such your claim has not been validated, because GPUs will use varying amounts of power depending on the workload.

http://www.geeks3d.com/20110408/download-tessmark-0-3-0-released/

[Image: TessMark 0.3.0 scores chart, GTX590]


Just take the results for "Normal" Tessellation:

GTX560Ti = 21962
HD6970 2GB = 9818

GTX560Ti TDP = 170W according to NV
HD6970 2GB TDP = 250W according to AMD

Do you want me to explain, or have you figured it out yet?
 

Mopetar

Diamond Member
Jan 31, 2011
8,436
7,631
136
Tessellation is only one aspect of graphical capabilities, and probably a minor one compared to performance at a given die size, which impacts both cost and heat. You're banking a lot of value on Sony caring that much about tessellation to eschew the advantages in the latter areas that AMD can offer.

If both Nintendo and Microsoft have already selected AMD, either they've decided that they don't give two damns about tessellation or that what AMD provides is sufficient. If Sony goes with Nvidia it will be about money, not about tessellation performance.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Tessellation is only one aspect of graphical capabilities, and probably a minor one compared to performance at a given die size, which impacts both cost and heat. You're banking a lot of value on Sony caring that much about tessellation to eschew the advantages in the latter areas that AMD can offer.

If both Nintendo and Microsoft have already selected AMD, either they've decided that they don't give two damns about tessellation or that what AMD provides is sufficient. If Sony goes with Nvidia it will be about money, not about tessellation performance.

The same argument, flipped, is probably why Nintendo and whichever other console maker also choose AMD: the money. AMD has already announced they are tweaking their upcoming architectures, so who knows what version is going into unknown future hardware. LOL at the idea that they might go with Nvidia 'only because of money'.
Will there be a DisplayPort on the consoles? :)
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
You still didn't show what power is actually consumed during the test.

Even if the GTX560 Ti consumed 170W, or even 200W or 250W, and the HD6970 2GB consumed 170W, it is clear that the GTX560 Ti has superior performance/watt in Tessellation.

GTX560 Ti performance/watt:
21962 / 170W = 129.18
21962 / 200W = 109.81
21962 / 250W = 87.84

HD6970 performance/watt:
9818 / 170W = 57.75
9818 / 200W = 49.09
9818 / 250W = 39.27
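A quick way to reproduce those ratios (the quoted TDPs are used as a stand-in for power, since measured draw under TessMark isn't shown here):

```python
# TessMark 0.3.0 "Normal" scores quoted above, divided by assumed board powers.
scores = {"GTX560 Ti": 21962, "HD6970 2GB": 9818}

for card, score in scores.items():
    for watts in (170, 200, 250):
        print(f"{card:10s}: {score} / {watts} W = {score / watts:6.2f} points per watt")
```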

It is not difficult to understand why; it is like multicore CPUs:

No matter your IPC, more cores will be faster in multithreaded applications, and as I have said before, Tessellation performance only goes up with more tessellators.

RussianSensation can argue all day long about the power usage of the HD69xx series, but the clear fact is that GF100 Fermi and derivative designs are superior in Tessellation processing to Cayman or any other current AMD design.

Remember people, I'm not talking about cards in general, I'm talking about Tessellation, and you will see AMD utilize the same principles, with multiple tessellators and improved compute processing capabilities (better physics, not PhysX), in their next-gen architecture (GCN).