[Tom's] Undervolting the R9 Fury

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
http://www.tomshardware.com/reviews/msi-afterburner-undervolt-radeon-r9-fury,4425.html

Basically,

[Charts from the article: power consumption across voltage steps, power consumption in gaming, gaming performance, and gaming efficiency]


That's some undervolting headroom! It reminds me of how you can usually shave 0.1-0.15 V off any recent AMD CPU and have it pass any stress test at stock clocks. It's interesting to see Fiji become almost as efficient as a 100 MHz OC'd GM204. Of course, you could then further overclock the crap out of GM204 if you wanted to, while Fiji is already at its practical limit.


All the PowerTune and perf/W improvements in GCN 1.2 do seem to shine here once you supply the chip with the voltage it actually requires. It certainly bodes well for Polaris if they keep improving in this regard.
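For a rough sense of why shaving tenths of a volt matters so much: dynamic power scales roughly with frequency times voltage squared. A back-of-the-envelope sketch (the wattage and voltage figures below are hypothetical placeholders, not numbers from the Tom's article):

```python
# Back-of-the-envelope estimate: dynamic power scales roughly with f * V^2.
# All figures below are hypothetical examples, not measurements from the review.

def scaled_power(p_old_w, v_old, v_new, f_old_mhz=None, f_new_mhz=None):
    """Estimate new dynamic power after a voltage (and optional clock) change."""
    f_ratio = 1.0 if f_old_mhz is None else f_new_mhz / f_old_mhz
    return p_old_w * f_ratio * (v_new / v_old) ** 2

# Example: a card drawing ~280 W at 1.20 V, undervolted by 0.1 V at unchanged clocks.
print(round(scaled_power(280, 1.20, 1.10)))  # ~235 W, i.e. roughly 45 W saved
```

Real cards won't see the full V-squared benefit, since only part of board power is core dynamic power (memory, VRM losses and leakage scale differently), but it shows why even 50-100 mV of slack is worth chasing.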
 
Last edited:

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
The Asus Fury is factory undervolted, so at least one manufacturer noticed this early on.

My Tri-X can hardly overclock (at least it partially unlocked), so maybe I could try knocking off some voltage.
 
Feb 19, 2009
10,457
10
76
Yup, Asus Fury vs 980 per [H] review was ~11% more total system power but ~15% faster overall.

http://hardocp.com/article/2015/07/10/asus_strix_r9_fury_dc3_video_card_review/8#.VrlREfl96Uk

[Performance and power charts from the [H] review]


The funny thing? [H] had a brain fail in their conclusion:

There are still factors, other than raw performance, that people judge video cards by. You cannot deny the efficiency of the GeForce GTX 980 over the new Radeon R9 Fury. The GeForce GTX 980 is able to deliver more performance per watt. The overall system wattage usage is a lot less on GTX 980 versus R9 Fury.

LOL?
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
GCN as a whole is more efficient than people think. The problem is that AMD, for whatever reason, has always overvolted their products from the factory (increased yields?). Worse, once Nvidia started pushing the performance envelope, AMD responded by cranking up clocks (especially on the memory controllers) and running GCN out of its comfort zone, thus causing perf/watt to go down even more.

Hawaii has a reputation for being a hot, loud, and power-hungry chip. In fact, it was probably AMD's most efficient architecture up to that time. But for marketing reasons, it had to be able to match GK110 (and, later, GM204) in terms of raw performance, and that meant real-world efficiency went down the tubes. If Hawaii had been run at an 800-900 MHz core clock like it should have been, its power consumption would have been >100W lower. Evidence of this can be seen in this FirePro W8100 review by Tom's Hardware, which shows the professional Hawaii card (clock rate: 824 MHz) topping out at just 188W even under a 100% GPGPU load.

It will be interesting to see how FinFET changes the score. In addition to whatever architectural improvements AMD can throw in, we should also see the ability to produce higher clock speeds without increasing TDP. Apple saw a clock speed increase from 1.3 to 1.85 GHz on the CPU side of their iPhone SoC by moving from planar to FinFET. That's a massive 42% boost. If we saw a similar increase in clock speed on GCN, the sweet spot would move all the way up to 1125-1300 MHz, and speeds of 1400 MHz and above would be easily attainable at some sacrifice of efficiency. Of course, Nvidia will receive some of the same benefits from FinFET when they get there, so that will make for some interesting competition.
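Just to sanity-check that arithmetic (no new data, only the figures quoted above, and assuming clock headroom scales linearly from one chip and process to another, which is a big assumption):

```python
# Check the clock-scaling arithmetic using only the figures quoted above.
planar_ghz, finfet_ghz = 1.3, 1.85
gain = finfet_ghz / planar_ghz - 1
print(f"Apple planar -> FinFET clock gain: {gain:.1%}")  # ~42.3%

# Apply the same factor to GCN's ~800-900 MHz sweet spot.
low_mhz, high_mhz = 800, 900
print(f"Scaled sweet spot: {low_mhz * (1 + gain):.0f}-{high_mhz * (1 + gain):.0f} MHz")  # ~1138-1281 MHz
```

That lands right in the 1125-1300 MHz ballpark mentioned above.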
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
I'm gonna try -48 mV for a while to see if it's stable (with the unlock and overclock in my sig). So far so good. That seems to knock around 30 watts off the card according to Tom's, so I can't complain. Hopefully it holds.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,595
6,067
136
I've done this since the 290 launch. I managed to do anywhere from -30mV to -100mV on my cards for free. You can do even more if you lower the core clock to 850 +/- 50 (sweet spot).
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
The funny thing? [H] had a brain fail in their conclusion:

LOL?
I swear they wrote that before any tests were ever done. If not, I don't know how to explain how a tester could write a conclusion that is directly contradicted by their own results. o_O

Anyway, AMD is very conservative when it comes to voltage, and yes, conservative is the right word: they don't want to risk any chance of crashes from running the voltage too low. Nvidia in my experience is more crash-prone (the "driver stopped responding" stuff), but I don't know if that's their more aggressive power gating or something else.
 

CropDuster

Senior member
Jan 2, 2014
375
60
91
I've done this since the 290 launch. I managed to do anywhere from -30mV to -100mV on my cards for free. You can do even more if you lower the core clock to 850 +/- 50 (sweet spot).
Same. My 290 will run 1000 MHz with -75 mV all day long.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
It's not surprising, and it's also how Nvidia is able to get a GTX 980 @ 125 watts and a GTX 750 Ti @ 45 watts into laptops. Undervolt, slightly drop the clocks, and lose only ~5-10% in performance.

Some people say AMD overvolts their chips. Perhaps the voltages are set where they are because, under certain kinds of loads, the chips would otherwise fail at lower voltages while running the current shipping clock speeds.
 

Goatsecks

Senior member
May 7, 2012
210
7
76
Yup, Asus Fury vs 980 per [H] review was ~11% more total system power but ~15% faster overall.

The funny thing? [H] had a brain fail in their conclusion



LOL?

He calculated that the Fury is, on average, 11.4% faster. To compute this, take the average percentage difference from the apples-to-apples comparison. The Fury manages this while using 14.6% more power.
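For anyone wanting to reproduce that kind of figure from a review's per-game charts, here's a minimal sketch of the calculation (the FPS and wattage values below are invented placeholders, not [H]'s data):

```python
# Sketch of the "average percentage difference" calculation from an
# apples-to-apples comparison. All numbers below are invented placeholders.
fury_fps = {"Game A": 62.0, "Game B": 48.5, "Game C": 71.2}
gtx980_fps = {"Game A": 55.0, "Game B": 44.0, "Game C": 63.0}

per_game_gain = [fury_fps[g] / gtx980_fps[g] - 1 for g in fury_fps]
avg_gain = sum(per_game_gain) / len(per_game_gain)

# Hypothetical total system power under gaming load.
fury_watts, gtx980_watts = 380.0, 330.0
power_delta = fury_watts / gtx980_watts - 1

print(f"Average performance advantage: {avg_gain:.1%}")
print(f"Extra system power: {power_delta:.1%}")
print(f"Relative system-level perf/W: {(1 + avg_gain) / (1 + power_delta):.2f}x")
```

Whether that last ratio comes out above or below 1.0 is exactly what the disagreement with [H]'s conclusion is about; measuring at the wall (total system) rather than the card alone also softens the difference.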
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
It's not surprising, and it's also how Nvidia is able to get a GTX 980 @ 125 watts and a GTX 750 Ti @ 45 watts into laptops. Undervolt, slightly drop the clocks, and lose only ~5-10% in performance.

Some people say AMD overvolts their chips. Perhaps the voltages are set where they are because, under certain kinds of loads, the chips would otherwise fail at lower voltages while running the current shipping clock speeds.

Could you imagine if Fury X had launched undervolted (or at least at a lower voltage than it currently uses)? At least Fury was facing the 980.

Fury X would get slaughtered even more in performance. With the water cooler, I don't think anyone was really concerned about heat. Power consumption would have still been higher, but the performance delta would have been wider.


I remember a while back reading an article/forum post about how AMD used automated design tools instead of hand-crafted/custom layout for Bulldozer, and I think some of that was also used for GCN Tahiti. Wasn't that the initial reason why AMD had to juice everything from the start? I'll have to dig around and see if I can find the info.

EDIT:
AH here we go:
http://www.xbitlabs.com/news/cpu/di...x_AMD_Engineer_Explains_Bulldozer_Fiasco.html

The management decided there should be such cross-engineering [between AMD and ATI teams within the company], which meant we had to stop hand-crafting our CPU designs and switch to an SoC design style. This results in giving up a lot of performance, chip area, and efficiency.

I was under the impression GCN was just too "fat" from the start. Of course, as Mantle came around, AMD started to explain that the hardware just wasn't being properly utilized.

NV ran with a leaner design. Like Bulldozer, AMD's GPUs can run really lean too at the expense of some performance, but that isn't going to make you look too appealing when your competitor would probably still outperform you while using less power. AMD went for performance first, efficiency be damned.
 
Last edited:

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
Tom's tried undervolting the Nvidia card too, but they failed; nV's voltage/clock curve is already optimized. Given these results, the comparison is fair.

If you're aggravated and want to see the same gains from Nvidia's GPU, get ready for disappointment. Lowering the GPU's voltage just isn't in the cards because it's achieved by decreasing the internal power target. Since GPU Boost is a very fragile system, every little drop has a negative impact on clock rate. In turn, this results in a massive performance hit. It's not something to complain about, per se. Nvidia simply has its mechanism optimally balanced, so there's practically no room for improvement. Consequently, MSI Afterburner doesn't even offer the option to lower the voltage. It can only be increased.
nV's system automatically adjusts the voltage when you overclock. This is why GM204/GM200 can overclock like monsters; the voltage is automatically managed for you:

GTX 980: [clock vs. voltage chart]

GTX 980 Ti: [clock vs. voltage chart]



AMD's chips require you to adjust the voltage yourself if needed to overclock, and also allow this kind of undervolting while leaving clocks alone. My 290 also enjoys a nice UV and some significant power savings when set at 1 GHz and 1.1 V or less; the guys at OCN are doing wonders with Hawaii and its BIOS.

AMD probably overvolts all their chips this way to maximize yields (GPUs, APUs, CPUs, whatever: they're all fed unreasonably high voltage). We can take advantage of that to cut power consumption quite a lot with a little trial and error until artifacts appear.
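To put the difference between the two approaches into one toy snippet: Boost essentially walks a factory-fused voltage/frequency table and picks the voltage for whatever clock it targets, whereas on GCN (pre-Polaris) you pin the clock and offset the voltage yourself. The table values below are invented for illustration, not real fuse data:

```python
# Toy comparison of the two voltage-management styles. All values are invented.

# Nvidia-style: each clock bin has a fused voltage; the driver picks the lowest
# entry that covers the target clock.
vf_table = [(1050, 0.950), (1164, 1.000), (1266, 1.050), (1367, 1.125), (1455, 1.212)]  # (MHz, V)

def boost_voltage_for(target_mhz):
    for mhz, volts in vf_table:
        if mhz >= target_mhz:
            return volts
    return vf_table[-1][1]  # clamp at the top bin

print(boost_voltage_for(1300))  # -> 1.125, chosen automatically

# GCN-style: the clock stays put and you apply a manual offset, then test for
# artifacts after each step.
stock_v, offset_mv = 1.200, -48
print(round(stock_v + offset_mv / 1000, 3))  # -> 1.152 V at unchanged clocks
```

It's a caricature of both mechanisms, but it lines up with why Afterburner only lets you raise the voltage on Maxwell while GCN accepts offsets in both directions.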
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I don't think that's exactly fair, because can't the GTX 980 also be tweaked for a good 10% boost without even messing with its voltage?

I don't think it's intended to be a head-to-head sort of comparison, more like a proof of concept with respect to Fury undervolting, with a 980 as a benchmarking point since it's a known quantity. More scientific in nature than forum-thread-battle in nature.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
[quoted clock vs. voltage chart]


Woof, that's terrible. I'm keeping my card at 1450 MHz and the voltage never goes over 1.184 V.

At 1.220 V I can hit 1514 MHz, and I think I still have more room, but the power limit kicks me in the face.

I'm starting to think I got a unicorn 980 Ti.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Tom's tried undervolting the Nvidia card too, but they failed; nV's voltage/clock curve is already optimized. Given these results, the comparison is fair.

nV's system automatically adjusts the voltage when you overclock. This is why GM204/GM200 can overclock like monsters; the voltage is automatically managed for you:

Tom's is wrong about undervolting Nvidia cards, though. I had a GTX 780 with a custom BIOS and was undervolting + overclocking to save on power. I'm sure there are Maxwell custom BIOSes out there that allow under/overvolting as well. Maxwell probably utilizes voltage steps better than GCN, but both have to ensure 100% stability in the most stringent tests, which is why there's wiggle room to undervolt at stock or even overclocked speeds while gaming.
 

thilanliyan

Lifer
Jun 21, 2005
12,039
2,251
126
Both my 290s can undervolt by 50 mV at stock 950/1250 clocks. The two 7950s I had previously could undervolt by a massive 200 mV (0.2 V) at stock clocks... I really don't know why those cards had a stock voltage of 1.15 V.
 

Geforce man

Golden Member
Oct 12, 2004
1,737
11
81
I just dug an old 7870 Myst (Tahiti LE) out of storage; it was my brother's old card, I think. I may repurpose it as a light gaming / HTPC card and edit the BIOS to 800 MHz core and 0.95 V. That should make it absolutely sip power.