
AMD Radeon RX Vega 56 and Vega 64 in the undervolting test

That threw me for a second. For the card it's obviously much more than 18%. You're using system power to calculate that %.

Yeah, you're right, which makes it even worse. So according to their review of the card, idle system use is 136 W and load is 393 W -> 257 W more. Undervolted, the at-load power of the card alone is reduced by 73 W, or 28%, while gaining mostly >15% in performance. That works out to roughly 60% higher performance/watt (1.15 / 0.72 ≈ 1.6; the gains multiply, they don't add).
 
Yeah, you're right, which makes it even worse. So according to their review of the card, idle system use is 136 W and load is 393 W -> 257 W more. Undervolted, the at-load power of the card alone is reduced by 73 W, or 28%, while gaining mostly >15% in performance. That works out to roughly 60% higher performance/watt (1.15 / 0.72 ≈ 1.6; the gains multiply, they don't add).

It is the whole system, not the card, drawing that much (it is clearly labelled "Gesamtsystem", which means "entire system" in German). So under load you also have to factor in the CPU, RAM, and so on.

Edit: Ah, OK, you made the calculation based on the difference between idle and load. This, however, misses the card's own idle power offset. In any case, yes, the difference in perf/W is huge.
 
It is the whole system, not the card, drawing that much (it is clearly labelled "Gesamtsystem", which means "entire system" in German). So under load you also have to factor in the CPU, RAM, and so on.

True. That's what I'm trying to do. 257 W is the difference between idle and load, i.e. what the card uses when fully loaded. 73 W of that is 28%. So with their tweak it uses 28% less power at full load.
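The arithmetic above, as a quick Python sanity check (system-power figures from the review; note that a performance gain and a power saving combine multiplicatively into the perf/W figure, they don't simply add):

```python
# Perf/W estimate from the review's system-power numbers.
idle_w = 136.0   # total system power at idle (W)
load_w = 393.0   # total system power under load, stock (W)
saved_w = 73.0   # load-power reduction after the undervolt (W)

card_delta = load_w - idle_w                       # ~257 W attributed to the card
power_ratio = (card_delta - saved_w) / card_delta  # ~0.72, i.e. ~28% less power
perf_ratio = 1.15                                  # ~15% more performance

perf_per_watt_gain = perf_ratio / power_ratio - 1.0
print(f"Power saved:  {saved_w / card_delta:.0%}")   # ~28%
print(f"Perf/W gain:  {perf_per_watt_gain:.0%}")     # ~61%
```

This still attributes the whole idle-to-load delta to the card, so as noted below, the card-only gain is, if anything, larger.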
 
I see total system power as a better indicator all around: it's what the end user can expect to see at the wall. When you improve GPU performance in a GPU-bound situation, the CPU will also draw more power, because it gets closer to operating at 100% rather than, say, 80%. In reality, Vega 56 is saving at least 73 W, because the CPU is now consuming more power to deliver more frames within that 321 W of total system power. Their test system is an Intel Core i7-3960X @ 3.9 GHz, which is not pushed very hard for SNB-E, but I would very much like to see the exact GPU power consumption to know for sure.

It's possible that this is silicon in the 99th percentile, but based upon past AMD GPU undervolting performance, it's probably more like 60th.
 
We don't know to what degree the AVFS or the Wattman display is buggy. It still is, according to AMD, and in my setup that also seems to be the case for voltage regulation.

As for the binning: perhaps they got more variability from GF than expected. And then, staying hell-bent on simple bean-counter goals like yield and frequency, they launch with +15% voltage to boot.
The smart solution is obviously just to create another SKU, like an "RX 53": an RX 56 with a good deal lower frequency. All problems solved, with the added benefit of having a SKU closer to the RX 580. Look at Ryzen: binning and SKUs done right.

They can probably fix the AVFS and driver issues, but no way is the binning and 1.2 V for the RX 64 going away. A facelift will come soon. The drivers will probably also change a good deal.
 
Or they could just be leaving the voltage tuning for the Nano card. From what I recall of the Fury Nano, it didn't seem particularly well binned; they just put effort into managing the clocks and power, which suggests software alone can manage the level the Nano offered. That would be stupid, as it hurts them overall given how bad it makes Vega look (although my guess is they think people don't care that much on desktop performance-focused cards). But it gives them an extra thing to tout on that card.

My actual guess, and perhaps this is one of the things that was allegedly causing issues between the CPU and GPU teams at AMD, is that the GPU team is bad at implementing power features in hardware. They've had features that should help with that for a while (and have been adding more and more), but it seems the only things actually benefiting power use are the software features (which just limit the GPU workload and let people tweak power use on their own). If you look at the gains made in the APUs, it's clear that the CPU team had better power management (even when they were still pushing voltages higher than they should have, the APUs still got better in their default states).

My guess is that the voltage issue would be a non-issue if they had the hardware power-management features working correctly, but they just haven't been. We've seen multiple times now (Vega, Polaris, Fiji, you could argue the 290, frankly probably all of GCN, even though I don't know that AMD was touting efficiency back then) that the cards are absolutely capable of the efficiency AMD states; they just aren't shipped that way in their default state and require you to tune them yourself (and AMD does nothing to alert reviewers, which is obviously stupid, doubly so since they're leaving performance on the table as well).
 
...From what I recall on the Fury Nano, it didn't seem particularly well binned, just that they put effort into managing the clocks and power which seems software alone can manage the level that the Nano offered...
They actually didn't bin the Nano chips separately at all - they used the same ones as the Fury X.

Anecdotally, I run my Fury Tri-X undervolted from the default 1250mV down to 1185mV, which shaves off about 30W, making an already quiet card even quieter.
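That ~30 W figure is roughly what a simple physics sketch predicts, assuming dynamic power scales with V² at a fixed clock and assuming a card draw of around 300 W (the 300 W is my assumption for illustration, not a figure from the post):

```python
# Rough V^2 scaling check for a 1250 mV -> 1185 mV undervolt at fixed clocks.
v_stock = 1.250                 # stock core voltage (V)
v_uv = 1.185                    # undervolted core voltage (V)
assumed_card_power_w = 300.0    # ASSUMED stock draw; not from the post

scale = (v_uv / v_stock) ** 2   # dynamic power scales ~ V^2 -> about 0.90
saved_w = assumed_card_power_w * (1 - scale)
print(f"Expected saving: ~{saved_w:.0f} W ({1 - scale:.0%})")
```

So a ~10% power cut, i.e. on the order of 30 W for a card in that class, which lines up with the anecdote.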
 
It's not like NV isn't sloppy with voltages either, running ridiculous volts that most chips don't need. But there must be reasons why they continue to do so.
 
They use these shoddy chips because they're broke. Look at the times when AMD's GPU division had better performance, power consumption, and price; AMD either lost money, or just barely broke even. They're on razor thin margins, and are trying to squeeze every penny so they don't go bust.
 
Thought I would share my results so far with the PowerColor Vega 64.

Power: +50%
Core: +5%
Core volt: 1050 mV
Mem: 1100 MHz
Fan: 75%
Temp: 67°C

Core holds steady at 1712 MHz and doesn't drop at all; memory steady at 1100 MHz.
Result is that Dagger hashing is up from 38 MH/s stock to 44 MH/s.

So I lucked out in the silicon lottery by the looks of it.
Using the beta blockchain compute drivers.
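For reference, that 38 -> 44 MH/s change works out to roughly a 16% hashrate uplift:

```python
# Hashrate uplift from the posted Dagger (Ethash) numbers.
stock_mhs = 38.0   # stock hashrate (MH/s)
tuned_mhs = 44.0   # tuned hashrate (MH/s)

gain = tuned_mhs / stock_mhs - 1.0
print(f"Hashrate uplift: {gain:.1%}")   # 15.8%
```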
 
...So lucked out on the silicon lotto by the looks of it. Using the beta blockchain compute drivers...

Interesting, with the blockchain drivers voltage control doesn't work for me. It will change in Wattman but it doesn't actually take effect.
 
Interesting, with the blockchain drivers voltage control doesn't work for me. It will change in Wattman but it doesn't actually take effect.

What are you using to see your voltages then?

Also, updated results:
Claymore v10
-30% core (do any tools let me set it lower and work with Vega? I'm still not affecting the hash rate with it set this low)
Core: 1050 mV
Mem: 1100 MHz
Watercooled at <55°C

44.2 MH/s on ETH only.
 
Interesting, with the blockchain drivers voltage control doesn't work for me. It will change in Wattman but it doesn't actually take effect.
Folks over at OcUK found that voltage control does not work unless you set the GPU voltage to the same value as the HBM voltage, or higher.
Setting P6 and P7 volts lower than the HBM setting has zero effect; the HBM mV must match the P6/P7 mV for lower voltages to work.
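The rule the OcUK folks describe can be sketched as a small validation helper (the function and its names are hypothetical, purely to restate the rule; the behaviour itself, that P6/P7 core mV set below the HBM mV is silently ignored, is as reported):

```python
# Hypothetical helper restating the reported Wattman behaviour:
# core-state voltages only take effect when they are >= the HBM voltage.
def undervolt_takes_effect(p6_mv: int, p7_mv: int, hbm_mv: int) -> bool:
    """Return True if the requested P6/P7 core voltages will actually apply."""
    return p6_mv >= hbm_mv and p7_mv >= hbm_mv

# Core states below the HBM setting: the driver ignores the undervolt.
print(undervolt_takes_effect(1000, 1050, 1100))  # False -> lower HBM mV first
# Drop HBM mV along with the core states and the undervolt takes.
print(undervolt_takes_effect(950, 950, 950))     # True
```

In practice this means: to undervolt the core, lower the HBM voltage setting to (at most) the same value first.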
 
Folks over at OcUK found that voltage control does not work unless you set the GPU voltage to the same value as the HBM voltage, or higher.

Ahh, I think you just solved a puzzle for me. Will test when I get home. I can get my V64 WC edition to use roughly 200 W at decent settings. My card does 1100 MHz memory at 950 mV, so if I set P6/P7 to 950 mV as well, perhaps the card will use even less.

I picked up a couple of Vega 56s yesterday to play with, so I'll be able to compare them as well. Availability is finally decent for these cards.
 
What are you using to see your voltages then?...

Folks over at OcUK found that voltage control does not work unless you set the GPU voltage to the same value as the HBM voltage, or higher.

Sorry, I should have updated. My voltage control issue was from a bad driver install I guess. I did a clean install of the crypto driver and had voltage control after that.

ElFenix, you are correct, Polaris worked the same way.
 
I can't seem to undervolt the core; it's stuck at 1.25 V, even if the HBM voltage is lower.

EDIT: I'm confused now, lol... I updated HWiNFO64 to the latest beta and the voltage values seem to have been flipped (now it's 975 mV core, 1.25 V mem). But that would make sense, I suppose, since stock memory voltage on the V56 is 1.25 V?
 
So a Vega 56 can be as fast as, if not faster than, the 1080 if you just undervolt? Why can't this be readily available at MSRP?! I would totally buy one, and also save money on a FreeSync monitor at the same time.
 
What I don't understand is why brands don't come up with their own undervolted models...
Meh. Not only is the new rendering pipeline not activated and the HBM2 pseudo-channel mode not working or activated, it seems the entire new AVFS is not properly implemented in the drivers either.
No need to hurry with that when supplies of both dies and HBM are probably scarce.
One can say AMD is a bit lucky here. Lol.
 
Meh. Not only is the new rendering pipeline not activated and the HBM2 pseudo-channel mode not working or activated, it seems the entire new AVFS is not properly implemented in the drivers either.
No need to hurry with that when supplies of both dies and HBM are probably scarce.
One can say AMD is a bit lucky here. Lol.
The real issue is, are these features properly implemented in the hardware? We'll see by end of year. I'm optimistic however.
 
The real issue is, are these features properly implemented in the hardware? We'll see by end of year. I'm optimistic however.
Probably, but I think we underestimate the importance of software here. It's a massive undertaking.

E.g., this week my Denon receiver got a firmware upgrade adding HLG and Dolby Vision, and now my Vega has intermittent signal loss to the receiver over HDMI 🙁.
For the driver team, it's back to correcting the issue that was supposed to be fixed in mid-August for some HDR devices.
And that is fixing on top of an AVFS that is probably not implemented properly. Perhaps they even need to fix the new shader just to get the AVFS right. DOA...

If we get all the tech working in Vega by the end of this year, we are lucky. I kind of doubt it, but Zlatan guesses so.
But hey, Zlatan is the developer and I am the man who pays the developers, and my experience says it will drag on.
Developers are very optimistic people.
I understand that; otherwise they wouldn't undertake such crazy tasks.
Let's do as they do: let's hope!
 