When is under-volting recommended?

CuriousMike

Diamond Member
Feb 22, 2001
3,044
543
136
I just purchased a shiny new Sapphire Nitro+ RX 580.
Reading reviews and various user comments, one might get the idea that under-volting is almost necessary on RX-580's.

As strictly a gamer, I think the main reason one might under-volt a card is to lower power usage, which would lower heat, which would mean the card maintains its base clock speed longer?

If the card I purchased has a "good enough cooling system", might that mitigate the need to under-volt?
 
May 11, 2008
19,542
1,191
126
It never hurts to undervolt when possible.
The card you have is factory overclocked, I believe, and Sapphire might have bumped the voltages as well. Perhaps it is possible to undervolt and keep the clocks.
With respect to the cooling system, the fans will probably run at high RPM at stock voltage. The cooler the GPU runs, the lower the RPM and the less noise.
 

CuriousMike

Diamond Member
Feb 22, 2001
3,044
543
136
It's been a while since I've done this - is MSI Afterburner still my best bet for doing the under-volt and verifying the GPU keeps base clocks?
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,414
8,356
126
It's been a while since I've done this - is MSI Afterburner still my best bet for doing the under-volt and verifying the GPU keeps base clocks?
WattMan gives access to all of the P-states, which Afterburner doesn't. The only problem is you have to load the profile on each reboot (hopefully automatic profile loading happens soon, or maybe I need to update).
 

maddogmcgee

Senior member
Apr 20, 2015
384
303
136
If it were me, I would lower the clocks slightly if that let me undervolt a fair bit. I think AMD got the clocks right on the 480. Having a silent card is rather nice if you don't want to use headphones all the time, plus it will save a fair bit of money long term, at least if you're paying 32 c a kilowatt-hour like me.
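On the cost point, here's a quick sketch of the long-term savings at that 32 c/kWh rate. The 30 W reduction and 4 hours/day of gaming are hypothetical numbers for illustration, not measurements from any particular card.

```python
# Rough yearly electricity savings from undervolting.
# watts_saved and hours_per_day are illustrative assumptions.

def yearly_savings(watts_saved, hours_per_day, rate_per_kwh):
    """Return yearly cost savings, in the same currency unit as the rate."""
    kwh_per_year = watts_saved / 1000 * hours_per_day * 365
    return kwh_per_year * rate_per_kwh

# 30 W less draw, 4 hours of gaming a day, at 0.32/kWh:
print(round(yearly_savings(30, 4, 0.32), 2))  # 14.02
```

Not life-changing money on its own, but over a card's multi-year lifespan (and with the quieter fans thrown in) it adds up.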
 

Flapdrol1337

Golden Member
May 21, 2014
1,677
93
91
With a 580 you can probably trade a little performance for a big reduction in power consumption.

I changed my R9 290 from ~950 MHz @ 1.2 V to 860 MHz @ 1.0 V. A quick and dirty estimate, (950*1.2^2)/(860*1^2) ≈ 1.6, says it used to use 60% more power than it does now. In practice it didn't, since 290s by default will not run their cooler over 48% fan speed and then throttle. In benchmarks with the card warmed up it is now 3% slower, but with a lot less noise, since the blower can now cool the card well below 48% fan speed.
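That back-of-envelope number comes from the usual rule of thumb that dynamic power scales roughly with frequency times voltage squared (P ∝ f·V²). A quick sketch with the values from the post:

```python
# Relative dynamic power estimate: P is proportional to f * V^2.
# This ignores static/leakage power, so it's an upper-bound style estimate.

def relative_power(freq_mhz, volts):
    return freq_mhz * volts ** 2

stock = relative_power(950, 1.2)  # ~950 MHz @ 1.2 V
uv = relative_power(860, 1.0)     # ~860 MHz @ 1.0 V

print(round(stock / uv, 2))  # 1.59 -> stock drew ~60% more dynamic power
```

As the post notes, the real-world difference is smaller, since not all board power is dynamic core power and the stock card was throttling anyway.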
 
May 11, 2008
19,542
1,191
126
WattMan gives access to all of the P-states, which Afterburner doesn't. The only problem is you have to load the profile on each reboot (hopefully automatic profile loading happens soon, or maybe I need to update).

That is strange; the profile I created has always loaded automatically.
Only when the GPU driver crashes while gaming will WattMan revert to the default settings.
Otherwise, the created and saved profile is used.

I had a few crashes with BioShock 2 Remastered last month, but I think I figured out the magic to get it to run, because it has been stable since.
 

ZGR

Platinum Member
Oct 26, 2012
2,052
656
136
I do a -100mv undervolt on my R9 290. Big drop in temps.
 
Mar 11, 2004
23,074
5,557
146
With AMD's cards, undervolting actually tends to bring increased performance, especially sustained performance, because their stock voltage settings are so high that the heat and power draw keep the card from holding its turbo clocks for long, if at all. For figuring out what you can run, there are probably guides, and probably even tables people have made so you can start with a pretty good guess at the voltages your card/chip's ASIC quality can handle; otherwise you can spend a lot of time testing for stability. I just looked at what others were doing and went a little more conservative, and got much lower heat, much less noise, and lower power use, with all the performance I was getting before (I OC'ed the memory a bit and upped some of the clock speeds at the higher end of the P-states).

Also remember to increase the power limit by the max (+50%). I forget exactly why, but for some reason that helps AMD cards maintain max performance; it isn't about the actual power use or efficiency, or what the name seems to imply. It's not telling the card to disregard TDP and go for broke; combined with the lower voltages, it's basically saying "hey, you've got headroom, use it".
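To make the per-P-state idea concrete, here is a rough sketch of how a flat undervolt changes the estimated dynamic power at each state, using the same P ∝ f·V² rule of thumb as earlier in the thread. The clock/voltage pairs and the -75 mV offset are made-up illustrative numbers, not real RX 580 values; a real tune would test each state for stability.

```python
# Hypothetical top P-states as (clock MHz, core voltage V).
stock = {
    "P5": (1145, 1.075),
    "P6": (1257, 1.125),
    "P7": (1340, 1.150),
}

OFFSET_V = -0.075  # flat -75 mV undervolt, purely illustrative

for state, (mhz, volts) in stock.items():
    before = mhz * volts ** 2                # P proportional to f * V^2
    after = mhz * (volts + OFFSET_V) ** 2    # same clock, lower voltage
    saving = 100 * (1 - after / before)
    print(f"{state}: {saving:.1f}% less dynamic power at the same clock")
```

Because the clocks are unchanged, any saving here is "free" performance-wise; in practice it often shows up as *more* performance, since the card stays under its power/thermal limits and holds the top P-states longer.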

I also set the FPS limit to 60 (same as my monitor) and disable Chill (since Chill causes some wonky behavior, dips and stuff, in certain games). Oh, and disable the power efficiency setting, as that was just for managing the card overdrawing the PCIe slot, IIRC, and I'm guessing your card has more robust on-card power delivery so it shouldn't be doing that anyway.

WattMan gives access to all of the P-states, which Afterburner doesn't. The only problem is you have to load the profile on each reboot (hopefully automatic profile loading happens soon, or maybe I need to update).

Mine holds unless the driver crashes, and then it resets. I can tell because the card will be putting out more heat and making audible noise, although that usually takes a bit for me to notice. (Windows used to show a notification when that happened, but I think I disabled it, since I don't get it any more and had changed my notification settings.)

If it were me, I would lower the clocks slightly if that let me undervolt a fair bit. I think AMD got the clocks right on the 480. Having a silent card is rather nice if you don't want to use headphones all the time, plus it will save a fair bit of money long term, at least if you're paying 32 c a kilowatt-hour like me.

I don't recall what the stock voltages are, but yeah, I dropped the voltage by quite a bit on the upper end (I played it safe and didn't change the lower P-states too crazily), dropped the max clock speed by like 5 MHz, and upped the three P-states below the top one. I disagree that they got the clocks right, at least for the voltages they were using: I seem to recall that some people with stock reference cards actually had trouble hitting the max clock because the voltage was causing so much throttling (granted, it wasn't dropping to 1 GHz, it was only throttling a bit, like 1270 instead of 1280 MHz).

While they're decent cards, the stock settings for the 480s were not very good: the PCIe overdraw issue, voltages far higher than necessary (which actually hurt performance and clock speeds, not to mention really hurt efficiency; I recall people OCing up to like 1350 MHz with lower power use than stock), and just typical AMD, taking a decent, good-value card and making it worse by putting in no effort to optimize it. It was bad enough that Microsoft focused a lot of Xbox One X development on resolving exactly that (the touted "Hovis method"). Hopefully AMD's revenue boost will go towards some manner of binning, as their stuff has for years been more efficient than it appears in reviews because the stock voltages were just out of whack. And when they have a dud like the Vega cards, it makes it even worse; with Vega, lower voltages and optimized clock speeds have a pretty substantial impact. It still won't touch Nvidia's efficiency, and unfortunately you're not going to get much if any extra performance beyond perhaps sustained clocks if the card was throttling before, so it's more about making the card tolerable.

That is strange; the profile I created has always loaded automatically.
Only when the GPU driver crashes while gaming will WattMan revert to the default settings.
Otherwise, the created and saved profile is used.

I had a few crashes with BioShock 2 Remastered last month, but I think I figured out the magic to get it to run, because it has been stable since.

Yeah, mine loads automatically. I haven't had it crash in games; rather, it'll very occasionally crash while playing back video overnight (I tend to play a movie or a TV-show playlist when I go to bed). I'm not sure what causes that, and it's been happening through several driver versions and updates of the player software; it happened with MPC-HC and also with MPC-BE. It doesn't happen consistently, though, and I just reload my saved profile, so it's not a huge hassle and I haven't tried to troubleshoot further. I'll probably jinx myself, but it hasn't happened for a while, so maybe it got resolved somehow.
 
May 11, 2008
19,542
1,191
126
Yeah, mine loads automatically. I haven't had it crash in games; rather, it'll very occasionally crash while playing back video overnight (I tend to play a movie or a TV-show playlist when I go to bed). I'm not sure what causes that, and it's been happening through several driver versions and updates of the player software; it happened with MPC-HC and also with MPC-BE. It doesn't happen consistently, though, and I just reload my saved profile, so it's not a huge hassle and I haven't tried to troubleshoot further. I'll probably jinx myself, but it hasn't happened for a while, so maybe it got resolved somehow.

To be honest, the frequent BioShock 2 Remastered crashes have nothing to do with the Radeon GPU driver and everything to do with the remastered game itself. There are pages on Steam filled with people complaining that BS2 Remastered crashes during busy scenes on video cards from both AMD and Nvidia. After reading around and experimenting, I found I had to change some settings:

UseMultithreadedRendering = True
TextureStreamingMemoryLimit=6144.000000 (I have an 8 GB card; the default setting is a mere 512 MB)
TextureStreamingDistanceLimit=50000.000000 (This setting must be increased as well, or the increase in video RAM will not help, I suspect. It is often listed as 30,000 with advice not to change it, but I noticed a higher number helped a lot.)
UseMultithreadedRendering=1

Multithreaded rendering defaults to False and 0, so I set it to True and 1.

Of course, this is with every graphics setting set to maximum.
And it has been running fine ever since.
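For anyone who tweaks these often, the edits above can also be applied from a script. This is just a sketch: the config file path is an assumption (it lives somewhere under the user's Documents/AppData folder depending on the install), and it assumes the plain key=value text format shown above.

```python
# Patch a key=value style game config file in place.
# CONFIG_PATH is a placeholder; point it at the real BS2R settings file.

import re
from pathlib import Path

SETTINGS = {
    "UseMultithreadedRendering": "True",
    "TextureStreamingMemoryLimit": "6144.000000",
    "TextureStreamingDistanceLimit": "50000.000000",
}

def patch_config(path):
    text = Path(path).read_text()
    for key, value in SETTINGS.items():
        # Replace each matching "Key=..." line, preserving everything else.
        text = re.sub(rf"^{key}\s*=.*$", f"{key}={value}", text, flags=re.M)
    Path(path).write_text(text)

# Usage (path is hypothetical):
# patch_config(r"C:\Users\me\Documents\BioShock 2 Remastered\settings.ini")
```

Back up the original file first; a verbatim copy makes it easy to revert if the game dislikes a value.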
 
Mar 11, 2004
23,074
5,557
146
Oh, and as for the question of when you should undervolt: any time you can get away with it. As long as it doesn't cause instability, go for it. Sometimes you can both overclock and undervolt. And like I said before, with AMD you practically have to undervolt to get the max out of your video card, unless you don't care about power use or noise and have the power supply and cooling to handle it on the GPU; but even then the performance benefits seem pretty marginal, and you can cut heat and power draw significantly by just dialing it back some and undervolting.

You hear fewer people talk about it with Nvidia (and Intel in the CPU space) because they seem to do a better job of testing chips and ship them at better voltage settings to start with, so people are less compelled to tinker. I know some people still do it; I'm not sure how much benefit they see, though I think with one of the Nvidia cards someone saw increased sustained performance because it lowered power and heat enough to keep boost clocks up longer. Not unlike AMD, just nowhere near as drastic.

To be honest, the frequent BioShock 2 Remastered crashes have nothing to do with the Radeon GPU driver and everything to do with the remastered game itself. There are pages on Steam filled with people complaining that BS2 Remastered crashes during busy scenes on video cards from both AMD and Nvidia. After reading around and experimenting, I found I had to change some settings:

UseMultithreadedRendering = True
TextureStreamingMemoryLimit=6144.000000 (I have an 8 GB card; the default setting is a mere 512 MB)
TextureStreamingDistanceLimit=50000.000000 (This setting must be increased as well, or the increase in video RAM will not help, I suspect. It is often listed as 30,000 with advice not to change it, but I noticed a higher number helped a lot.)
UseMultithreadedRendering=1

Multithreaded rendering defaults to False and 0, so I set it to True and 1.

Of course, this is with every graphics setting set to maximum.
And it has been running fine ever since.

I don't play too many games (and the ones I do play don't tend to be recent, especially demanding ones). Yeah, it seems like a fair number of games need polish and updates because of ridiculous bugs these days. Figures they'd release a "remastered" game whose default settings would only have made sense when the original came out, what, 10 years ago?

Oh, and with regard to driver crashes in general, that's been a huge improvement in Windows 10: a GPU crash now just restarts the driver instead of causing a system hang and reboot. At least it does now; I feel like around when Windows 10 first came out it still would hang sometimes, but it hasn't done that for a long while. That's definitely nice.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
I have run my Nitro+ RX 480 at -100 mV (at peak clocks) since day one; it's been rock solid and WAY quieter.
 
May 11, 2008
19,542
1,191
126
Oh, and with regard to driver crashes in general, that's been a huge improvement in Windows 10: a GPU crash now just restarts the driver instead of causing a system hang and reboot. At least it does now; I feel like around when Windows 10 first came out it still would hang sometimes, but it hasn't done that for a long while. That's definitely nice.

Indeed. In the beginning, when I had just undervolted my card, I sometimes noticed the GPU fans making more noise, and then found out there had been a driver crash and the Radeon settings program had loaded the default profile. The driver restart is so smooth now.
 
Mar 11, 2004
23,074
5,557
146
Looking things up... I actually don't know what to recommend for AMD tweaking any more. I see people say that pushing the power limit all the way up when undervolting isn't necessary now (you still can, but undervolting while keeping the same clocks and the default power limit no longer reduces performance like it used to; then again, others say that without raising it the card will sit in P-states 3-5 more and go into P6 and P7 less, so you'll still see reduced performance).

I also see people having different experiences with the Power Efficiency, Chill, and Frame Rate Target Control toggles: some have one of them cause hitching and stutters in games but not the others, while for other people it's a different one causing that. If I remember right, Chill was giving me stutters but FRTC wasn't (I set it to my monitor's 60 Hz refresh); I don't think I even tried Power Efficiency. It kind of seems like these features have evolved and might now work more like they were intended to from the start (Power Efficiency, for example, I seem to recall being the thing that kept Polaris cards within PCIe slot spec, but now it seems to be a more general setting that aggressively runs lower clocks), yet they still work for some people and not for others.

Some people resorted to BIOS and registry hacks to drop Vega voltage (some got it way down, to 850 or even 825 mV), since WattMan only allowed down to about 1050 mV (although if you shifted the memory voltage down too, you could get to maybe 900 mV, as the HBM voltage set the base voltage or something). Others used Afterburner with a voltage offset, but that pushed the lower P-state voltages too low and actually caused some instability at idle. And some claim you can drop to around 900 mV and sustain stock clocks, or even overclock and get better performance, because keeping the HBM2 cooler makes a big difference. I think I saw one person say that just turning on Power Efficiency for Vega gets you most of the benefits of undervolting while losing little performance, without needing to put in the tweaking work.
 
May 11, 2008
19,542
1,191
126
I vaguely recall that the whole PCIe power issue only affected the earliest RX 480 cards that looked like this:
[image: AMD reference RX 480]

After that, the third-party board designers distributed the power across the phases of the VRM SMPS differently, in the sense that the GPU could no longer draw significant power from the PCIe slot. Most Polaris-based cards will not even start when the external PCIe power plugs are not connected.

As for why there is so much difference in results, it can be binning of the GPU.
I mean, I recently read an article about the power consumption of the custom APU inside the Xbox One X. Some chips require more power than others to function, and all the variables, like the fan profile, voltage profile, and power states, are tested and programmed at the factory for every Xbox One X individually. This is called the Hovis method.
https://www.tested.com/tech/825498-xbox-one-x-and-hovis-method/
It makes sense that, in order to sell as many GPUs as possible, the binning is liberal and ships with more voltage than required.
Then there is the whole story about ASIC quality.

And of course, some of the weird issues people have may be caused by other factors, like not taking ESD properly into account, or Windows corruption from too much CPU/memory overclocking.
It is not realistic to look at all the cases and blame the GPU alone.
 

FreshBross

Member
Jul 30, 2018
50
1
6
For the 500 series I find that undervolting has little effect, but on all the 400-series cards I have used, I undervolt -100 mV, or the max it lets me, like -93 or -96 mV, with the latest Afterburner.

Better performance, less heat: no brainer. I have seen some 500-series cards pull as much as 140 W at full load; you can cap those to 110 W in the BIOS and get the same compute from them.
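As a rough sanity check on those numbers: assuming a hypothetical 1.15 V stock voltage, a -100 mV undervolt at unchanged clocks scales dynamic power by (1.05/1.15)², which lands in the same ballpark as the 140 W to 110 W claim (the real saving would be a bit smaller, since not all board power is dynamic core power).

```python
# Estimate power after an undervolt at constant clocks, using P ~ V^2.
# The 1.15 V stock voltage here is an assumption, not a measured value.

def undervolted_power(stock_watts, stock_v, new_v):
    return stock_watts * (new_v / stock_v) ** 2

print(round(undervolted_power(140, 1.15, 1.05)))  # 117
```

So the simple voltage-squared model alone accounts for most of that gap; a BIOS power cap just enforces the rest.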
 
Mar 11, 2004
23,074
5,557
146
I vaguely recall that the whole PCIe power issue only affected the earliest RX 480 cards that looked like this:
[image: AMD reference RX 480]

After that, the third-party board designers distributed the power across the phases of the VRM SMPS differently, in the sense that the GPU could no longer draw significant power from the PCIe slot. Most Polaris-based cards will not even start when the external PCIe power plugs are not connected.

As for why there is so much difference in results, it can be binning of the GPU.
I mean, I recently read an article about the power consumption of the custom APU inside the Xbox One X. Some chips require more power than others to function, and all the variables, like the fan profile, voltage profile, and power states, are tested and programmed at the factory for every Xbox One X individually. This is called the Hovis method.
https://www.tested.com/tech/825498-xbox-one-x-and-hovis-method/
It makes sense that, in order to sell as many GPUs as possible, the binning is liberal and ships with more voltage than required.
Then there is the whole story about ASIC quality.

And of course, some of the weird issues people have may be caused by other factors, like not taking ESD properly into account, or Windows corruption from too much CPU/memory overclocking.
It is not realistic to look at all the cases and blame the GPU alone.

Yeah, I think it was just the launch reference RX 480s that had the issue.

Even just with Vega (I can't recall whether that option is available for older cards; I know some of the features are, but I'm not sure which ones for which cards), the Power Efficiency setting has almost certainly expanded in what it does. Really, I think their way of keeping the card in spec was basically to limit its performance, so you traded a bit of performance and the setting actually did what the name implies; for people having issues, it meant they could still use the card in that system. Definitely a bunch of those were older or cheaper boards that were already not the most stable, so a card drawing out of spec would show the issue.

Yeah. That's why I hope AMD works with their partners to do some of that themselves, although I think they just need to be able to afford to bin their chips better. Some of that may be due to GlobalFoundries issues too, but AMD needs to figure something out. It's a shame they didn't bin more and keep the subpar (but still fully capable) chips ready, so that when mining took off they'd have them ready to throw on miner cards (or let their partners figure out special mining rigs with the GPUs on a bunch of x1 PCIe boards).

I would be really curious about the binning and production differences between Nvidia and AMD. I have a hunch that Nvidia could have similar issues but bins more; they might also put more work into the engineering (maybe do more respins or whatever to get optimal chips into production), so they start from better chip quality to begin with. And then they optimize more via software on top of that.

If I were AMD, I'd be begging Microsoft to sell the One X chip to OEMs so they could make little SFF Windows 10 PCs. It'd make for an awesome Steam Box, that's for sure.
 
May 11, 2008
19,542
1,191
126
Yeah, I think it was just the launch reference RX 480s that had the issue.

Even just with Vega (I can't recall whether that option is available for older cards; I know some of the features are, but I'm not sure which ones for which cards), the Power Efficiency setting has almost certainly expanded in what it does. Really, I think their way of keeping the card in spec was basically to limit its performance, so you traded a bit of performance and the setting actually did what the name implies; for people having issues, it meant they could still use the card in that system. Definitely a bunch of those were older or cheaper boards that were already not the most stable, so a card drawing out of spec would show the issue.

Yeah. That's why I hope AMD works with their partners to do some of that themselves, although I think they just need to be able to afford to bin their chips better. Some of that may be due to GlobalFoundries issues too, but AMD needs to figure something out. It's a shame they didn't bin more and keep the subpar (but still fully capable) chips ready, so that when mining took off they'd have them ready to throw on miner cards (or let their partners figure out special mining rigs with the GPUs on a bunch of x1 PCIe boards).

I would be really curious about the binning and production differences between Nvidia and AMD. I have a hunch that Nvidia could have similar issues but bins more; they might also put more work into the engineering (maybe do more respins or whatever to get optimal chips into production), so they start from better chip quality to begin with. And then they optimize more via software on top of that.

If I were AMD, I'd be begging Microsoft to sell the One X chip to OEMs so they could make little SFF Windows 10 PCs. It'd make for an awesome Steam Box, that's for sure.

I agree. Indeed, Nvidia, having more financial freedom, went to considerable lengths to create a very optimized architecture, and it shows.
They also went to those lengths because very energy-efficient GPUs are a great selling point for data centers that use lots of them.
And it allows for scaling to more cores without an insane TDP.
Makes me wonder if undervolting is a useful option for Nvidia GPUs at all, since they are so well optimized.

I hope that AMD will have a lot of fat years. They innovate with minimal resources and still manage to stay within close vicinity of the competition. Imagine what AMD could do with the financial resources of Intel or Nvidia. Although, the larger a company, the more non-tech-savvy people work there and make the decisions, and the more money-grabbing managers you get with unrealistic visions.