How To: UNDERvolt a GTX 1070 for PERFORMANCE, yeah, weird, right?

Aug 2, 2003
#1
What if you could get the same/better performance, at a much reduced wattage?

Ok, so I am a noob at this, but the DC 24/7 community needs to try this for sure.

I presume this will work similarly for most recent Nvidia cards, not just the 1070. Sources:
LinusTechTips
Reddit
ROG.Asus

I do apologize, this particular technique is for Windows/MSI Afterburner. This is merely intended to open the door for conversation here in the DC forum, as the other forums are less polite than we are. :) PLEASE contribute your knowledge in the comments, especially for Linux.

***********************************

Not all GPUs of the same model are created equal. Many are far better than the baseline, close-to-failure cards, but they are all programmed to run the same, because binning each card individually would be an enormous task at mass-production volumes. Typically, your card can perform just the same, maybe even better, at less power than its TDP. Better? How? Mostly because of thermal limits: at lower power there is less heat, so the automated boost clock can hold higher frequencies without the constant fluctuations normally seen. In this post, however, I am only looking to reduce power consumption, not boost my clocks. These newer cards are incredible at automatically balancing the many factors that produce the boost clock, and undervolting takes advantage of that.

First, you will need to be running Windows and the latest non-beta version of MSI's Afterburner for this method. Other programs are surely capable of the same thing, but this covers only Afterburner's approach.

Before you start, I would recommend saving the stock settings as profile 1, in case something goes wrong. Also, temporarily set the fans to 100% and make sure the card is below 39 °C before proceeding.

With Afterburner running, press CTRL+F to bring up the voltage/frequency curve. Note the maximum frequency your card boosts to. Pick the voltage you want the card to max out at (900 mV for me, a VERY conservative setting), then use the core clock slider to bring the 900 mV point up to that max clock. Then bring DOWN all the voltage/frequency points to the right of it, so the card never goes past 900 mV (or whatever you've chosen).

Save that as your second profile. Done. I personally have seen about 30 watts saved per 1070. Could I do better? Sure, but I'm lazy, complacent, and happy with 30 watts x 8 cards. It's like running 2 cards for free. :)

Additionally, for the 1070 at least, toss in a +400 MHz memory overclock for some extra performance.
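For a rough sense of what those 30 watts per card add up to on a 24/7 DC rig, here is a quick shell calculation. The $0.12/kWh electricity rate is my own placeholder figure, not from this post; plug in your local rate:

```shell
#!/bin/sh
# Back-of-envelope savings for a 24/7 rig: 30 W saved per card, 8 cards.
watts_saved=$((30 * 8))                                                # 240 W total
kwh_per_year=$(awk "BEGIN { print $watts_saved * 24 * 365 / 1000 }")   # kWh per year
cost_per_year=$(awk "BEGIN { printf \"%.2f\", $kwh_per_year * 0.12 }") # at $0.12/kWh
echo "$watts_saved W saved = $kwh_per_year kWh/yr = \$$cost_per_year/yr at \$0.12/kWh"
```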
 

lane42

Diamond Member
Sep 3, 2000
#2
There's not much you can do in Linux, that I know of, to overclock or underclock
graphics cards. Nvidia X Server will raise clocks, but it depends on load and temps.
About the only thing you can do is adjust fan speed. I have the clock and memory
adjustments in X Server but they don't work :(
 

pututu

Junior Member
Jul 1, 2017
#3
Not sure if you will find what you want in this post here.

Other than challenges, I tend to limit my GPU to 55% to 60% power limit most of the time.
 
Jan 20, 2019
#4
Does it also help overclocking, or mostly give more boost time at stock clocks due to the lower heat?
 

StefanR5R

Platinum Member
Dec 10, 2016
#5
Not sure if you will find what you want in this post here.

Other than challenges, I tend to limit my GPU to 55% to 60% power limit most of the time.
Very interesting link, thank you.
I see that you tested not only points per GPU power, but also points per host power, and found practically the same optimum GPU power limit for both. Do I understand this correctly?

You tested (a) with an Nvidia Maxwell GPU, (b) with two applications which utilize your GPU very well and have fixed credits per task regardless of return speed: PrimeGrid GFN-18 and PrimeGrid PPS-Sieve.

I did something similar a few months ago but (a) with Nvidia Pascal, (b) with an application which does not utilize the GPU fully and has a quick return bonus: Folding@Home. I remember that I was disappointed but not very surprised by the result. I searched and found my post about this test, and re-reading it now, I too found that a reduced GPU power limit increases PPD per host power somewhat. But the increase was less than 5 %.

With Afterburner running, press CTRL+F to bring up the voltage/frequency curve. Note the maximum frequency your card boosts to. Pick the voltage you want the card to max out at (900 mV for me, a VERY conservative setting), then use the core clock slider to bring the 900 mV point up to that max clock. Then bring DOWN all the voltage/frequency points to the right of it, so the card never goes past 900 mV (or whatever you've chosen).
I would very much like to be able to do this on Linux.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
#6
I suppose I should share the script I use on Linux. It can lower the power limit and overclock - at the same time! - but I haven't investigated voltage changes yet.

Oh, researching this I remembered you have to set up "Coolbits" in your X configuration file first. I don't think I've set mine up for voltage control.

Code:
# First, get the GPU index by name.  This is for a 1060:
dev6=$(nvidia-smi | grep "GTX 106" | sed -e 's/^| *//;s/ .*$//')

# These let you run the nvidia-settings commands remotely, but a monitor
# does need to be hooked to the system.
export DISPLAY=:0
export XAUTHORITY=/var/run/lightdm/root/:0

# Manual fan speed setting:
nvidia-settings -a [gpu:$dev6]/GPUFanControlState=1
nvidia-settings -a [fan:$dev6]/GPUTargetFanSpeed=77
# Power limit, in Watts:
sudo nvidia-smi -i $dev6 -pl 100
# Overclocking (the [3] selects the highest performance level):
nvidia-settings -a [gpu:$dev6]/GPUGraphicsClockOffset[3]=60
nvidia-settings -a [gpu:$dev6]/GPUMemoryTransferRateOffset[3]=100
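For anyone puzzling over the sed in the first line: it strips an nvidia-smi table row down to the leading GPU index. A dry run on a mocked-up sample row (the row text below is an approximation from memory, not real nvidia-smi output):

```shell
#!/bin/sh
# A mocked-up nvidia-smi table row for GPU index 0:
row='|   0  GeForce GTX 1060 6GB     Off  | 00000000:01:00.0  On |'
# Same sed as the script: drop the leading "| ", then everything after the index.
idx=$(echo "$row" | sed -e 's/^| *//;s/ .*$//')
echo "$idx"   # 0
```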
 

lane42

Diamond Member
Sep 3, 2000
#7
I run this in a terminal and reboot: sudo nvidia-xconfig --thermal-configuration-check --cool-bits=28 --enable-all-gpus
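If memory serves (worth double-checking against the README for your driver version), that 28 is a bitmask combining three Coolbits flags:

```shell
#!/bin/sh
# Coolbits is a bitmask; 28 combines (per my reading of the driver README):
fan_control=4     # manual fan control (GPUFanControlState)
clock_offsets=8   # GPUGraphicsClockOffset / GPUMemoryTransferRateOffset
overvoltage=16    # voltage control
coolbits=$((fan_control + clock_offsets + overvoltage))
echo "$coolbits"  # 28
```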
 

pututu

Junior Member
Jul 1, 2017
#8
Very interesting link, thank you.
I see that you tested not only for points for GPU power, but also for points per host power, and found practically the same optimum GPU power limit for both. Do I understand this correctly?

You tested (a) with an Nvidia Maxwell GPU, (b) with two applications which utilized your GPU very well and have fixed credits per task without regard for return speed: PrimeGrid GFN-18 and PrimeGrid PPS-Sieve.

I did something similar a few months ago but (a) with Nvidia Pascal, (b) with an application which does not utilize the GPU fully and has a quick return bonus: Folding@Home. I remember that I was disappointed but not very surprised by the result. I searched and found my post of this test, and re-reading it now, I too found that a reduced GPU power limit increases PPD per host power somewhat. But the increase was less than 5 %.


I would like it a lot to be able to do this on Linux.
To your first question, that's correct. PPD/(PC watt), with power measured at the outlet (Kill A Watt), and PPD/(GPU watt), as reported by nvidia-smi, share the same optimum GPU power limit. The first table of data in my first post has all the numbers. One can also calculate the power delta between the PC (host) and the GPU alone. I live in California, where power costs are very high, hence I need to optimize for PPD/watt, except during project challenges, where I normally run at stock speed. Overclocking the GPU doesn't seem to gain much relative to the extra power consumed, with the exception of projects with QRB scoring (like Folding). I recall seeing quite a number of sites where crypto miners set their GPUs well below the 100% power limit to improve efficiency. We did test Pascal cards and saw a similar trend on the Moo project.
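Finding that optimum power limit is just a matter of dividing PPD by watts at each tested limit and taking the maximum. A minimal sketch of that sweep; the PPD numbers below are made-up placeholders, not pututu's measured data:

```shell
#!/bin/sh
# Each entry is "power_limit_watts:PPD" (placeholder values for illustration).
best_pl=""; best_eff=0
for pair in "170:510000" "130:495000" "100:470000" "85:430000" "70:330000"; do
    pl=${pair%%:*}
    ppd=${pair##*:}
    eff=$((ppd / pl))      # integer PPD per GPU watt
    if [ "$eff" -gt "$best_eff" ]; then
        best_eff=$eff
        best_pl=$pl
    fi
done
echo "best power limit: ${best_pl} W at ${best_eff} PPD/W"
```

With real measurements, efficiency typically rises as the limit drops, then falls off a cliff once the card can no longer hold useful clocks, which is why the thread's 55-60% sweet spot shows up as an interior maximum rather than the lowest tested limit.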
 
