RTG 3rd March – Will Discuss Polaris, Fury X2, VR, DirectX 12 and More


AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
It's no different for GPUs either.

Since we are in VC&G, let's take GPUs:

Having a bigger die with more shaders at a lower frequency gives higher perf/watt than a smaller die with fewer shaders at higher frequencies.

edit: Assuming we want the same performance.
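
A back-of-the-envelope sketch of that claim, assuming performance scales with shaders × clock and dynamic power with shaders × clock × V², and that the higher clock needs more voltage; all numbers are illustrative, not measurements:

Code:
def throughput(units, freq_ghz):
    return units * freq_ghz

def rel_power(units, freq_ghz, volts):
    # crude dynamic-power model: units * f * V^2
    return units * freq_ghz * volts ** 2

wide_slow   = dict(units=4096, freq_ghz=1.0, volts=1.00)  # big die, low clock
narrow_fast = dict(units=2048, freq_ghz=2.0, volts=1.25)  # small die, high clock, needs more voltage

for name, cfg in (("wide/slow", wide_slow), ("narrow/fast", narrow_fast)):
    perf  = throughput(cfg["units"], cfg["freq_ghz"])
    watts = rel_power(**cfg)
    print(f"{name}: perf={perf:.0f}, relative power={watts:.0f}, perf/watt={perf / watts:.2f}")
# same throughput, but the narrow/fast chip draws about 56% more power in this toy model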
 
Last edited:

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Since we are in VC&G, let's take GPUs:

Having a bigger die with more shaders at a lower frequency gives higher perf/watt than a smaller die with fewer shaders at higher frequencies.

Assuming the shaders are equal.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Since we are in VC&G, let's take GPUs:

Having a bigger die with more shaders at a lower frequency gives higher perf/watt than a smaller die with fewer shaders at higher frequencies.

edit: Assuming we want the same performance.

If and if. Not just the shaders but the entire uncore part.

But you quickly reach limitations to that. Fury X vs Nano is a good example.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
If and if. Not just the shaders but the entire uncore part.

But you quickly reach limitations to that. Fury X vs Nano is a good example.


Examples,

Fury Nano vs R9 390X at 1440p or 4K

GTX 980Ti vs GTX 970 at 1440p or 4K

The larger cards (Nano and GTX 980Ti) have the higher perf/watt here.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Examples,

Fury Nano vs R9 390X at 1440p or 4K

GTX 980Ti vs GTX 970 at 1440p or 4K

The larger cards (Nano and GTX 980Ti) have the higher perf/watt here.

Any reason you picked the GTX970? And comparing the Nano to the 390X isn't exactly right either.

[TechPowerUp performance-per-watt charts at 1920×1080, 2560×1440 and 3840×2160]


So, GTX980 beats GTX980TI in performance/watt. And Nano beats a Fury X in performance/watt. Both entirely expected.

And that's all ignoring the 750TI wonderkid.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
@ ShintaiDK

You forgot something,

Since we are in VC&G, let's take GPUs:

Having a bigger die with more shaders at a lower frequency gives higher perf/watt than a smaller die with fewer shaders at higher frequencies.

edit: Assuming we want the same performance.

The GTX 750 Ti may have the highest perf/watt, but that is irrelevant if you cannot play the game (at 1440p/4K). The same goes for every GPU.

So, for the same performance, or if you fps-cap the game, the larger GPUs will have higher perf/watt.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Yes, until you reach the limit. Again, performance doesn't scale linearly with power usage.

And performance/watt is king. The one that dominates this wins everything.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
No, you are creating a strawman now.

I'm just trying to understand what you are saying. Clearly the GTX 750 Ti is the winner in perf/watt at 1440p according to the TPU graphs above.

Care to explain better what you mean?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I'm just trying to understand what you are saying. Clearly the GTX 750 Ti is the winner in perf/watt at 1440p according to the TPU graphs above.

Care to explain better what you mean?

What has the best performance/watt, Fury X or Nano? And forget your 1440p as some kind of value as well.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
What has the best performance/watt, Fury X or Nano?

The Nano, but those two are the same die with the same shaders.

Compare the R9 390X at 1440p or 4K vs the R7 270X at the same performance.

I'm sure Hawaii will have the higher perf/watt.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The Nano, but those two are the same die with the same shaders.

Compare the R9 390X at 1440p or 4K vs the R7 270X at the same performance.

I'm sure Hawaii will have the higher perf/watt.

So we can agree that performance and power consumption don't scale linearly. :)

What has the best performance/watt, GTX 980 Ti or GTX 980?

If you wish to compare at the same performance, feel free to show some numbers, including 1080p, because the 1-2% who game above 1080p aren't that interesting.

The result for the bigger die part is... most likely, but only maybe.

If you take a 750 Ti and set it to run a game that fits it at 1080p, for example, I wouldn't be surprised if a Fury X, Nano, GTX 980 and GTX 980 Ti all lose to it at the same performance. The (3D) idle load of those cards is almost half of the 750 Ti's peak, if not all of it.
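
A toy illustration of that idle-floor point, with made-up wattages rather than measured figures:

Code:
target_fps = 60

small_card_watts    = 60   # hypothetical 750 Ti-class part running flat out
big_card_baseline_w = 50   # hypothetical near-idle floor of a big card under light 3D load
big_card_extra_w    = 30   # hypothetical extra draw needed to hold the capped 60 fps

small_eff = target_fps / small_card_watts
big_eff   = target_fps / (big_card_baseline_w + big_card_extra_w)
print(f"small card: {small_eff:.2f} fps/W, big card: {big_eff:.2f} fps/W")
# 1.00 vs 0.75 fps/W here: the baseline draw dominates when the big card is barely loaded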

And that returns us to Polaris 10. What did AMD showcase? That's right, performance/watt, because it's what matters :)
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
So we can agree that performance and power consumption don't scale linearly. :)

What has the best performance/watt, GTX 980 Ti or GTX 980?

If you wish to compare at the same performance, feel free to show some numbers, including 1080p, because the 1-2% who game above 1080p aren't that interesting.

I'll tell you this:

Perf/watt is only valid and meaningful between products in the same performance and price segment, for example Fury X vs GTX 980 Ti or R9 390 vs GTX 970.

Comparing the perf/watt of a GTX 750 Ti vs a GTX 980 Ti at 1440p is irrelevant because the 750 Ti cannot game at that resolution.

Also, according to the TPU graphs the GTX 980 has higher perf/watt than the Fury X at 4K. But nobody will choose the GTX 980 over the Fury X for 4K gaming just because the GTX 980 has higher perf/watt.

So we really have to know the context when we are talking about perf/watt.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
If you change the parameters and set up special cases, then I can understand your confusion.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
And that returns us to Polaris 10. What did AMD showcase? That's right, performance/watt, because it's what matters :)

Would it blow your mind if I told you that performance/watt can be multiplied by a target wattage (for the sake of example, let's take the totally arbitrary value of 300W) and used to generate a really rough expectation of the performance of a part sized to use that much wattage? So if people have some unaccountable desire for a 300W part, they can get themselves hyped up over a demo that doesn't require a 300W part at all, and can be done with the smallest, best-yielding chip.
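
A sketch of that arithmetic; the 300W figure is the post's hypothetical, and the demo numbers are placeholders:

Code:
demo_fps       = 60    # hypothetical fps shown by the small demo part
demo_power_w   = 110   # hypothetical board power of the small demo part
target_power_w = 300   # the post's arbitrary big-part power budget

perf_per_watt = demo_fps / demo_power_w
rough_fps     = perf_per_watt * target_power_w
print(f"roughly {rough_fps:.0f} fps-equivalent at {target_power_w} W, ignoring scaling losses")
# real parts scale worse than linearly (clocks, memory, uncore), so treat this as an upper bound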

Anyway, on the desktop processor front there's a pretty good reason why Intel's stagnating. Mobile is served just fine by performance/watt. Servers are parallel enough that performance/watt covers them too: a core fits in a smaller power budget and cores can be added for more performance, or alternatively the chip can go full Xeon D and cash that in on good performance that sips power and costs way less in the long run.

Mobile processors are continually getting a whole bunch of good enough for less and less wattage. Servers are continually getting more performance per watt no matter where on the curve they fall. So we know Intel isn't actually stagnating. So what gives? Why is the desktop not gaining like that? How much does the desktop benefit from extra cores outside serious enthusiast use cases? Not much at all. The problem is that the desktop is in the position where single thread performance really matters but it's got more capacity to dissipate wattage than can really meaningfully be used by current processor designs. So there's two options. First is clocking them up, but that's butting into an inability to clock the things much higher. Next up is trying to raise performance by using more transistors per core. That's all good and well but how wide can a core really meaningfully be? We're still trying to raise single threaded performance so we can't go full Power and just add more and more thread support to a core, so we're going to be chasing tiny shreds of performance with wads of transistors and those transistors use power. Well, that's a non-starter for mobile and server so congratulations you've just fragmented off a separate desktop uarch. Clean out your desk.

The nice thing is that increasingly the real power hungry enthusiasts can use more cores. So, for them you could in theory make a different platform that uses most of the work done for the server market where stuffing the thing full of cores is a good use of time and resources and generate a nice platform for higher core counts, and focus core count increases on that platform over time.

God I should've bought Haswell-E instead of Devil's Canyon.
 

maddie

Diamond Member
Jul 18, 2010
5,147
5,523
136
Yes, until you reach the limit. Again, performance doesn't scale linearly with power usage.

And performance/watt is king. The one that dominates this wins everything.

Until the new gen is released, I expect every single video card recommendation by you to be the GTX 750Ti.

Your reasoning is too simplistic.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
People like new things. The 970 was new for a long time.
When the 390 came out, AMD gained share, because people like new things. If AMD had countered with the 390 sooner, maybe they wouldn't have lost so much market share. In the end, AMD needs an ACTUAL new product in the midrange to start gaining sales again, and one that doesn't have a disaster launch attached to it.

Being first to market would be HUGE as well.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
People like new things. The 970 was new for a long time.
When the 390 came out, AMD gained share, because people like new things. If AMD had countered with the 390 sooner, maybe they wouldn't have lost so much market share. In the end, AMD needs an ACTUAL new product in the midrange to start gaining sales again, and one that doesn't have a disaster launch attached to it.

Being first to market would be HUGE as well.

Truth. With the black eye the 290/X had at launch, anything they could have done to revive interest would have helped.

EDIT:

LOL! Performance/watt only became "king" once Nvidia finally moved ahead of AMD in that particular category.

Outside of forums where we bicker, the mainstream has been moving to eco-green nonsense for a while. NV saw it and capitalized on it. AMD somehow assumed everyone still wanted power-hungry CPUs and GPUs. And it cost them.

I wouldn't say Perf/Watt became king once Nvidia moved ahead of AMD, but it was becoming king and NV just read the market better (not surprising, considering some of AMD's other blunders).

Pretty sure when the Northeast went from 8.x cents per kWh to 16.x cents per kWh, they turned to perf/watt for just about everything electrical.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
LOL! Performance/watt only became "king" once Nvidia finally moved ahead of AMD in that particular category.

Performance/watt already mattered back in the K8/Pentium M days and got solid from Core 2 onward. GPUs were late to get there, but the mobile front finally took them there. And the consumers wanted it.

Until the new gen is released, I expect every single video card recommendation by you to be the GTX 750Ti.

Your reasoning is too simplistic.

Obviously you are not serious.

Would it blow your mind if I told you that performance/watt can be multiplied by a target wattage (for the sake of example, let's take the totally arbitrary value of 300W) and used to generate a really rough expectation of the performance of a part sized to use that much wattage? So if people have some unaccountable desire for a 300W part, they can get themselves hyped up over a demo that doesn't require a 300W part at all, and can be done with the smallest, best-yielding chip.

Anyway, on the desktop processor front there's a pretty good reason why Intel's stagnating. Mobile is served just fine by performance/watt. Servers are parallel enough that performance/watt covers them too: a core fits in a smaller power budget and cores can be added for more performance, or alternatively the chip can go full Xeon D and cash that in on good performance that sips power and costs way less in the long run.

Mobile processors are continually getting a whole bunch of good enough for less and less wattage. Servers are continually getting more performance per watt no matter where on the curve they fall. So we know Intel isn't actually stagnating. So what gives? Why is the desktop not gaining like that? How much does the desktop benefit from extra cores outside serious enthusiast use cases? Not much at all. The problem is that the desktop is in the position where single thread performance really matters but it's got more capacity to dissipate wattage than can really meaningfully be used by current processor designs. So there's two options. First is clocking them up, but that's butting into an inability to clock the things much higher. Next up is trying to raise performance by using more transistors per core. That's all good and well but how wide can a core really meaningfully be? We're still trying to raise single threaded performance so we can't go full Power and just add more and more thread support to a core, so we're going to be chasing tiny shreds of performance with wads of transistors and those transistors use power. Well, that's a non-starter for mobile and server so congratulations you've just fragmented off a separate desktop uarch. Clean out your desk.

The nice thing is that increasingly the real power hungry enthusiasts can use more cores. So, for them you could in theory make a different platform that uses most of the work done for the server market where stuffing the thing full of cores is a good use of time and resources and generate a nice platform for higher core counts, and focus core count increases on that platform over time.

God I should've bought Haswell-E instead of Devil's Canyon.

Welcome to the dinosaur category :)

The desktop is stagnant/shrinking no matter what you do, because the mass of consumers don't want it. And even if they want a desktop, they go for something small: mini-ITX and NUC types.

Currently the GPU market lives on due to gamers willing to pay $300+. But for how long remains to be seen.

For me, my next GPU will hopefully not use above 125W, or 150W if I have to stretch it.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
People like new things. The 970 was new for a long time.
When the 390 came out, AMD gained share, because people like new things. If AMD had countered with the 390 sooner, maybe they wouldn't have lost so much market share. In the end, AMD needs an ACTUAL new product in the midrange to start gaining sales again, and one that doesn't have a disaster launch attached to it.

Being first to market would be HUGE as well.

I think the right play after the 290 reviews would have been to spec out a 290 equivalent to the 7970 GHz Edition, require a cooler capable of keeping it from throttling under a given sound level that aftermarket coolers could hit, and seed review sites with them. That would've kept the 290 much stronger, probably made the OEMs happy since they'd have had a way to increase the value of their product, and put them in a much better position to withstand the 970.
 

Abwx

Lifer
Apr 2, 2011
11,837
4,790
136
Again, performance doesn't scale linearly with power usage.

It scales as a power law: one must increase power by roughly 21% to increase frequency by 10%. But SP count allows increasing perf as a linear function of power, and that's why the GTX 980 being ahead of the 980 Ti is bogus unless the latter was heavily overclocked and the former was not.
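
A quick numeric check of that figure: 21% more power for 10% more frequency implies power rising roughly with the square of frequency (1.10² ≈ 1.21), while adding SPs at a fixed clock scales power roughly linearly with performance:

Code:
freq_gain = 1.10

power_cost_via_clocks  = freq_gain ** 2   # ~1.21, matching the quoted 21%
power_cost_via_shaders = freq_gain        # ~1.10 for the same 10% perf, idealized shader scaling

print(f"+10% perf via clocks:  ~{(power_cost_via_clocks - 1) * 100:.0f}% more power")
print(f"+10% perf via shaders: ~{(power_cost_via_shaders - 1) * 100:.0f}% more power")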