Performance per Watt: What chance does Polaris have?

ultima_trev

Member
Nov 4, 2015
148
66
66
While the transition to finfets should help with AMD/RTG's performance per watt, is this the only thing Polaris has going for it or will the architecture be designed with efficiency as a higher priority than the previous generation?

To put it roughly, based on most benchmarks, AMD's performance per watt seems to be roughly 70-80% of nVidia's. To put that in perspective with my own findings: my R9 390 runs at power limit -30% / vcore -30mV at all times, giving it roughly GTX 980 power consumption figures but only about 75-80% of the performance. nVidia has come a long way since Fermi; Fermi to Kepler was akin to Pentium 4 to Conroe, and Maxwell somehow repeated that feat on the same process node.

If nVidia can manage another such feat through architectural optimization alone, independent of the benefits of the smaller fab process, is there any way for AMD to catch up?
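The "70-80% of nVidia's perf/watt" estimate above can be sanity-checked with quick arithmetic. A minimal sketch, assuming (as in the post) that the tuned 390 and a GTX 980 draw roughly the same power, so the performance gap becomes the perf/watt gap directly:

```python
def perf_per_watt_ratio(amd_perf, nv_perf, amd_power, nv_power):
    """Return AMD perf/watt as a fraction of nVidia perf/watt."""
    return (amd_perf / amd_power) / (nv_perf / nv_power)

# Tuned 390 at roughly GTX 980 power, delivering 75-80% of its performance
# (both normalized to 1.0 for the 980; the equal-power figure is the
# post's own observation, not a measured number):
low = perf_per_watt_ratio(0.75, 1.0, 1.0, 1.0)
high = perf_per_watt_ratio(0.80, 1.0, 1.0, 1.0)
print(f"AMD perf/watt is {low:.0%}-{high:.0%} of nVidia's at equal power")
```

At equal power the ratio collapses to the raw performance ratio, which is why the 75-80% performance figure maps straight onto the 70-80% perf/watt claim.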
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
While the transition to finfets should help with AMD/RTG's performance per watt, is this the only thing Polaris has going for it or will the architecture be designed with efficiency as a higher priority than the previous generation?

To put it roughly, based on most benchmarks, AMD's performance per watt seems to be roughly 70-80% of nVidia's. To put that in perspective with my own findings: my R9 390 runs at power limit -30% / vcore -30mV at all times, giving it roughly GTX 980 power consumption figures but only about 75-80% of the performance. nVidia has come a long way since Fermi; Fermi to Kepler was akin to Pentium 4 to Conroe, and Maxwell somehow repeated that feat on the same process node.

If nVidia can manage another such feat through architectural optimization alone, independent of the benefits of the smaller fab process, is there any way for AMD to catch up?


About 2 to 2.5 times better perf/watt than a 28nm card.
http://www.extremetech.com/gaming/2...s-2-5x-performance-per-watt-may-utilize-gddr5

They claim 30% of the perf/watt gain will come from arch improvements and the rest from the new process.

Have you tried running your 390 against a 980 in an FP64 benchmark? Your card is like 10 times faster ;)
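One way to read the "30% from architecture" claim is multiplicatively: an architectural factor times a process factor equals the total 2.5x. A rough sketch under that assumption (the 1.3x arch factor is an interpretation of the claim, not a published AMD figure):

```python
# Treat the gains as multiplicative and back out the process factor
# needed to reach the claimed 2.5x total perf/watt improvement.
TOTAL_GAIN = 2.5   # claimed Polaris perf/watt vs 28nm parts
ARCH_GAIN = 1.3    # assumed ~30% architectural contribution

process_gain = TOTAL_GAIN / ARCH_GAIN
print(f"implied process contribution: {process_gain:.2f}x")
```

Under this reading the new process would need to deliver roughly a 1.9x perf/watt improvement on its own, which shows how much of the claim rides on FinFETs rather than on the architecture.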
 
Last edited:

Adored

Senior member
Mar 24, 2016
256
1
16
A lot depends on just how hard the silicon is pushed in order to get an overall performance win. Hawaii is the way it is because that's exactly what AMD did.

Nvidia won on 28nm because they really tried with Maxwell. It's clear that AMD didn't try on anything like the same level. The question then is, if they didn't try on the same level on 28nm, what were they doing instead?

Polaris will beat Pascal in perf/Watt but not by the kind of margins we see today with Maxwell vs GCN. I expect to see a correction in the market to nearer classic levels, 60-40 in Nvidia's favour.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
They could do like nVidia and make chips that are more narrowly focused, stripping out non-gaming functionality that uses power. Hawaii is going to be at a disadvantage because it's such a strong compute-oriented design relative to the 980.
 
Mar 10, 2006
11,715
2,012
126
Polaris will beat Pascal in perf/Watt but not by the kind of margins we see today with Maxwell vs GCN. I expect to see a correction in the market to nearer classic levels, 60-40 in Nvidia's favour.

Do you have any proof that NVIDIA isn't "trying" with Pascal? :rolleyes:
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
To put things into perspective:
You will be able to buy a 390 performance card that will look like 7790 - little single fan card with around 100W TDP
 
Mar 10, 2006
11,715
2,012
126
I don't recall making that suggestion?

It's implied. You are saying that Maxwell > GCN 1.3 because NVIDIA gave a hoot and AMD didn't. You jumped from this to your statement that AMD will for sure have a perf/watt lead.

The implication is that AMD will not only make up the lost ground relative to NVIDIA but it will move so far ahead of NVIDIA that it will actually deliver perf/watt leadership. This means that you think NV will twiddle its thumbs while AMD makes an epic leap forward.
 

Adored

Senior member
Mar 24, 2016
256
1
16
It's implied. You are saying that Maxwell > GCN 1.3 because NVIDIA gave a hoot and AMD didn't. You jumped from this to your statement that AMD will for sure have a perf/watt lead.

The implication is that AMD will not only make up the lost ground relative to NVIDIA but it will move so far ahead of NVIDIA that it will actually deliver perf/watt leadership. This means that you think NV will twiddle its thumbs while AMD makes an epic leap forward.

Do you even realise just how far ahead AMD was in perf/Watt at 40nm? I do expect AMD to make an epic leap forward (over 28nm). Given that they barely tried for the last 2 years, anything less would be an abject failure.

Nvidia has less room to improve. Just take both of their numbers - 2x for Nvidia and 2.5x for AMD - reiterated recently by each company.
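Taking both vendors' own numbers at face value, the arithmetic behind the "correction in the market" argument works out as follows. A quick sketch applying the claimed 2x (Nvidia) and 2.5x (AMD) factors to the 70-80% deficit from the opening post; this is pure arithmetic on marketing claims, not a prediction about real cards:

```python
NV_GAIN = 2.0    # Nvidia's claimed Pascal perf/watt gain
AMD_GAIN = 2.5   # AMD's claimed Polaris perf/watt gain

# Current AMD perf/watt as a fraction of Nvidia's (70-80% per the OP):
for current in (0.70, 0.80):
    projected = current * AMD_GAIN / NV_GAIN
    print(f"current {current:.0%} of Nvidia -> projected {projected:.0%}")
```

If both claims held, AMD would land somewhere between ~88% of Nvidia's perf/watt and outright parity, i.e. closing most of the Maxwell-era gap rather than leaping far ahead.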
 

ultima_trev

Member
Nov 4, 2015
148
66
66
About 2 to 2.5 times better perf/watt than a 28nm card.
http://www.extremetech.com/gaming/2...s-2-5x-performance-per-watt-may-utilize-gddr5

They claim 30% of the perf/watt gain will come from arch improvements and the rest from the new process.

Have you tried running your 390 against a 980 in an FP64 benchmark? Your card is like 10 times faster ;)

So 30% improvement from their design and 70% from the fab process?

As for benchies, I haven't tried anything compute oriented, mostly stick to Firestrike, Unigine Valley and Steam VR.
 

Adored

Senior member
Mar 24, 2016
256
1
16
Just in case anyone was wondering what it was like back in the old days of 40nm...

[Image: perfwatt_1920.gif - perf/Watt comparison chart at 1920x1080]


That's even more of a lead than what we have with Maxwell vs GCN.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
At the same performance in DX12 games, GCN 1.1 and 1.2 will be very close to Maxwell, if not better, in perf/watt.
 

master_shake_

Diamond Member
May 22, 2012
6,425
292
121
i'm trying really hard to care about performance per watt here.

i'm coming up blank on reasons.

can anyone give me a reason to care?

afaik perf/dollar is a better metric.

what you get vs what you pay for.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Recall that the Maxwell architecture came from Nvidia's mobile development efforts. That's likely the reason it's so power efficient.

But there were obvious trade-offs that had to be made that are now hurting performance with new game developments.

The Fury Nano proves GCN tech can match the performance per watt of Maxwell. It's just that most GCN releases have been overvolted / overclocked to compete with Nvidia's offerings, so they fall way outside of their efficiency curve. Luckily most 390s, 390Xs and Furys can be undervolted with little to no penalty on performance, and once tuned they are much closer to Maxwell.

Of course consumers shouldn't have to do this but it is what it is...
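The reason undervolting pays off so disproportionately is that dynamic power in CMOS scales roughly with voltage squared times frequency, so dropping voltage at a fixed clock cuts power quadratically with no performance loss. A sketch of that relation (the specific voltages are illustrative assumptions, not 390 datasheet values):

```python
def dynamic_power_scale(v_new, v_old, f_new=1.0, f_old=1.0):
    """Relative dynamic power under P ~ C * V^2 * f (capacitance cancels)."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# e.g. a -75mV undervolt on a nominal 1.200V part, clocks unchanged:
scale = dynamic_power_scale(1.125, 1.200)
print(f"dynamic power drops to {scale:.1%} of stock at identical clocks")
```

A ~6% voltage cut takes roughly 12% off the dynamic power at the same performance, which is why cards shipped above their efficiency sweet spot respond so well to tuning.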
 
Last edited:

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
AMD had slightly less efficient hardware per watt and more efficient hardware per dollar last gen.

Both are the result of AMD's hardware design being optimized for the future and nVidia's hardware being optimized for the time that it was released.

Whichever is the right move I will leave up to others; economically NV's decision was better, for consumer value AMD's was.

This generation NV's hardware will follow AMD's design and, as a result, will lose some of that inherent efficiency lead by adding compute hardware for async compute and other DX12 capabilities. AMD, on the other hand, already took that efficiency hit with Tahiti and Hawaii.

Of course, there's a lot more at play that decides FPS/watt than just those DX12 specific bits but I believe that is what will be changing the most Maxwell to Pascal.

At the end of the day I, along with the majority of consumers, am concerned more with FPS/$. Give me an architecture that is 10x more efficient than today and I still want a design that consumes 250 watts, because that's about the most a single air-cooled card can handle while staying quiet. I completely understand there is a niche market for cards that max performance under 100 watts or with no power connectors, but if my degree in marketing and years of grad school research and industry experience have taught me anything, it's that the majority of consumers want the biggest bang for their hard-earned buck. AMD supplied that last gen with its 7970 and 290X, cards that can reach 50% better value than their NV counterparts looking at present-day performance.

I like to think of video card efficiency as I would a race car's efficiency: the combination of engine, transmission, weight and aerodynamics. And not road-car efficiency, but circuit race car efficiency. If you care most about average lap time (game performance), you want the best combination of speed (frames per second) and the fewest fuel stops (energy consumption). If the race car consumes less, you can increase the air-fuel mixture and compression to raise horsepower, thereby improving lap time.

A video card is no different, give me the best possible game performance at present and in the future at my personally preferred price point ($300-$400) and I'll buy!
 

Adored

Senior member
Mar 24, 2016
256
1
16
i'm trying really hard to care about performance per watt here.

i'm coming up blank on reasons.

can anyone give me a reason to care?

afaik perf/dollar is a better metric.

what you get vs what you pay for.

It's all linked. There are plenty of guys like me who don't or rarely buy high-end GPUs either due to noise, power or whatever reasons.

Buy a 390 instead of a 970 and you might also need a new PSU. I would have (though I ended up buying a new PSU anyway), so for me the 970, although slower, would probably have won out in the end.

Heat and noise are much more important to me than absolute performance. In the end I chose to basically skip the past 2-3 years when I normally upgraded every year instead. Anyone who bought a Pitcairn-class GPU way back in 2012 probably felt the same way, or bought the 970 last year.

This wouldn't be a factor had AMD been close in perf/Watt to Maxwell, but they just haven't been for too long.
 
Last edited:

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
They could do like nVidia and make chips that are more narrowly focused, stripping out non-gaming functionality that uses power. Hawaii is going to be at a disadvantage because it's such a strong compute-oriented design relative to the 980.


Yeah, let's gut the cards, give consumers less for more money, and follow their competitor's lead on perf/W!
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
If they want to recover market share, I think their only option is to go heavy on efficiency, especially for the low-end/mid-range and laptops.

I don't think power efficiency is such a huge deal for the $400+ market, but for the cheaper stuff I think it is, and they also need something to get back to being competitive in laptops.

I would say the fact that they started by showing Polaris on laptops and smaller chips is a good sign for that, but...
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
It's all linked. There are plenty of guys like me who don't or rarely buy high-end GPUs either due to noise, power or whatever reasons.

Buy a 390 instead of a 970 and you might also need a new PSU. I would have (though I ended up buying a new PSU anyway), so for me the 970, although slower, would probably have won out in the end.

Heat and noise are much more important to me than absolute performance. In the end I chose to basically skip the past 2-3 years when I normally upgraded every year instead. Anyone who bought a Pitcairn-class GPU way back in 2012 probably felt the same way, or bought the 970 last year.

This wouldn't be a factor had AMD been close in perf/Watt to Maxwell, but they just haven't been for too long.

The reason Maxwell has better performance per watt is all the stuff nVidia ripped out. AMD left all its compute hardware in there, which is proving to have been a very good idea. nVidia removed a lot of their compute hardware to focus only on "game features", only it turns out more modern games need that compute hardware.

So like always, it comes down to you wanting low power consumption, or better performance.
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
The reason Maxwell has better performance per watt is all the stuff nVidia ripped out. AMD left all its compute hardware in there, which is proving to have been a very good idea. nVidia removed a lot of their compute hardware to focus only on "game features", only it turns out more modern games need that compute hardware.

So like always, it comes down to you wanting low power consumption, or better performance.


The irony...

I wonder if ppl realize they are asking for their gpus to be gutted?
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
163
106
The reason Maxwell has better performance per watt is all the stuff nVidia ripped out. AMD left all its compute hardware in there, which is proving to have been a very good idea. nVidia removed a lot of their compute hardware to focus only on "game features", only it turns out more modern games need that compute hardware.

So like always, it comes down to you wanting low power consumption, or better performance.
Except this time, with DX12, more compute is needed for better performance, and therefore for lower power consumption. This is what you see with Pascal and how they've basically copied GCN in a number of different ways, minus proper async compute I think :hmm:
 

Adored

Senior member
Mar 24, 2016
256
1
16
The reason Maxwell has better performance per watt is all the stuff nVidia ripped out. AMD left all its compute hardware in there, which is proving to have been a very good idea. nVidia removed a lot of their compute hardware to focus only on "game features", only it turns out more modern games need that compute hardware.

So like always, it comes down to you wanting low power consumption, or better performance.

Or in the case of Maxwell, better performance for longer when it mattered and better power consumption. GCN might be proven a better architecture next year, but by then it'll likely be too little too late for e.g. most Fury X buyers compared to 980 Ti buyers.

Sure Hawaii owners got a great card in the long run, ditto Tahiti. Christ even Pitcairn aged well.

Nvidia made the right choices with Maxwell though. Had AMD done the same thing they wouldn't be in this mess.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
^ Yes, the answer will differ based on who you ask. For consumers, GCN will be the favored answer. For a company that wants to sell you more products, Kepler and Maxwell are the new standard in planned obsolescence.
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
Or in the case of Maxwell, better performance for longer when it mattered and better power consumption. GCN might be proven a better architecture next year, but by then it'll likely be too little too late for e.g. most Fury X buyers compared to 980 Ti buyers.

Sure Hawaii owners got a great card in the long run, ditto Tahiti. Christ even Pitcairn aged well.

Nvidia made the right choices for them as a company with Maxwell though. Had AMD done the same thing they wouldn't be in this mess.

Simply brilliant. Well boy! Someone should email them your resume because had it been as simple as just asking you all along they could have just hired you as CEO!
 

Snarf Snarf

Senior member
Feb 19, 2015
399
327
136
i'm trying really hard to care about performance per watt here.

i'm coming up blank on reasons.

can anyone give me a reason to care?

afaik perf/dollar is a better metric.

what you get vs what you pay for.

The importance of efficiency matters most in terms of scale. 300W seems to be the highest TDP people are willing to deal with for a single card. If your base design is efficient from the start, then as you scale it up by adding more shaders, you end up with more performance inside the same 300W maximum TDP.

As you increase core clock, power consumption scales pretty fast, and GCN doesn't clock anywhere near as high as Maxwell does. A 980 Ti @ 1450MHz (which is almost guaranteed in normal cases) smokes the Fury X because of the choices Nvidia made in regards to efficiency.

Going forward, as nodes become more difficult to build on, efficiency will begin to matter even more.
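The "wide and slow beats narrow and fast" argument above can be sketched numerically. Assuming performance scales roughly linearly with shader count and clock, while dynamic power scales with V² × f and voltage must rise with clock, a wide low-clocked chip wins on perf/power (all figures below are made-up illustrative values, not real card specs):

```python
def power(shaders, clock_ghz, volts):
    """Relative dynamic power under P ~ shaders * V^2 * f (arbitrary units)."""
    return shaders * volts ** 2 * clock_ghz

def perf(shaders, clock_ghz):
    """Relative throughput ~ shaders * clock (arbitrary units)."""
    return shaders * clock_ghz

# A wide chip at low clock/voltage vs a narrower one pushed to a high
# clock, which needs extra voltage to get there:
configs = {
    "wide":   dict(shaders=4096, clock_ghz=1.00, volts=1.00),
    "narrow": dict(shaders=2816, clock_ghz=1.45, volts=1.18),
}
for name, cfg in configs.items():
    p = power(**cfg)
    f = perf(cfg["shaders"], cfg["clock_ghz"])
    print(f"{name}: power={p:.0f}, perf={f:.0f}, perf/power={f / p:.3f}")
```

Both configurations land at nearly the same throughput, but the narrow high-clocked one burns substantially more power to get there because the required voltage bump is squared, which is the core of the efficiency-at-scale argument.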