Indeed, the take-home message is the same whether it comes from the old IDC data-set cubic fit or the theoretical quadratic relationship: voltage is the largest power-modulating knob.
In practice it actually tends to be a 50/50 sort of deal for us enthusiasts who are willing to invest in decent 3rd party cooling (or more extravagant lapping and delidding methods); allow me to show what I mean by way of answering ViRGE's question below.
(btw, always a pleasure to see you posting, you wouldn't know this but I am a longtime fan of yours from your posts over on XS :wub:, I never post there though, just a lurker for years now)
So let me ask you this IDC: why is it that power consumption always spikes above 4GHz? What's so special about that frequency that Intel and co can hold power consumption relatively low in the 3GHz range, but that gives way at 4GHz? BD/PD is a power pig, Intel's processors only touch 4GHz with one core active (implying it's consuming around as much power as 3-4 cores at 3.4GHz or so), and overclocking SNB/IVB past 4GHz comes with a major power penalty.
This can't just be a design issue, can it?
There is a short answer, a long practical answer, and the verrrrry long academic answer 😉 I'll only bore you with the former two and spare you the latter 😀
The short answer is that power consumption markedly spikes upwards once you get above 4GHz because the CPU's operating temperature is also increasing in a way that is insidiously problematic.
The temperature increases, which increases leakage, so power goes up for that alone...but the increase in leakage raises the temperature, which raises the power consumption even more, which raises the temperature even more...this is referred to as the "thermal runaway effect".
Eventually this foot-race between the rising heat output and our cooler's ability to remove it reaches a steady state that depends entirely on our cooling efficacy: the TIM, the HSF, the ambient temps, etc.
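To make that steady-state idea concrete, here is a minimal Python sketch of the feedback loop; every constant in it (the thermal resistance, the leakage model, the dynamic power) is a made-up illustration value, not a measurement from my 3770k:

```python
# Hypothetical sketch of the thermal-runaway feedback settling to a fixed point.
# All constants are illustration values, NOT real 3770k measurements.

def steady_state_temp(p_dynamic=100.0,    # dynamic power in watts (assumed)
                      t_ambient=25.0,     # ambient temperature in degC (assumed)
                      r_thermal=0.4,      # cooler thermal resistance, degC/W (assumed)
                      leak_ref=10.0,      # leakage watts at ambient temp (assumed)
                      leak_growth=0.01):  # fractional leakage growth per degC (assumed)
    """Iterate temperature -> leakage -> power -> temperature to a fixed point."""
    t = t_ambient
    for _ in range(1000):
        p_leak = leak_ref * (1 + leak_growth) ** (t - t_ambient)  # leakage rises with temp
        t_new = t_ambient + r_thermal * (p_dynamic + p_leak)      # hotter with more power
        if abs(t_new - t) < 1e-6:  # converged: the foot-race has settled
            break
        t = t_new
    return t_new, p_dynamic + p_leak

t, p = steady_state_temp()
# A better cooler (lower thermal resistance) lands the fixed point lower:
t_good, p_good = steady_state_temp(r_thermal=0.25)
```

The point of the sketch is that the loop does settle, and where it settles is set entirely by the cooling term (`r_thermal` here) — exactly the "depends on our cooling efficacy" observation above.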
But that is only half of it; the reason I labeled it "insidiously problematic" is that as the operating temperature increases, so too does the minimum voltage necessary for the CPU's circuits to operate in a stable and reliable fashion.
I am now getting into the longer answer I alluded to above, sans the more boring academic stuff I promise 😉, and so I will bring some data into the conversation to speak to:
These data were generated with my 3770k, and what they show is that the minimum voltage needed to stabilize the circuits (signal-to-noise) such that the CPU can reliably pass the rigors of LinX rises with temperature (which is expected, but the data give numbers to speak to).
In this test the CPU is held steady at 4.6GHz, and if we use a fancy expensive H100 that is lapped and so forth then we can clock this 3770k at 4.6GHz with a mere 1.214V and the peak operating temperature is 65°C. (the green line, which is the CPU voltage, goes with the left-side axis)
As the temperature rises, which would happen if I had a lower quality cooling solution or if my ambient temperature rose, that 1.214V is no longer sufficient to keep the CPU stable.
So I must raise the CPU's voltage, solely because of temperature, and in doing so the voltage increase results in an increase in power consumption. (the red line, which is the CPU's power consumption, goes with the right-side axis)
The increase in temperature results in an increase in leakage current, which increases power usage, which increases temperature, which causes a required increase in the minimum voltage necessary for stability, which increases power, which increases temperature, etc etc.
That makes the higher clockspeeds particularly problematic because the clockspeed is what drives the voltage requirement to first order, but voltage drives the dynamic power consumption as a quadratic (P ≈ C·V²·f), the power consumption feeds back and drives the temperature upwards (radiative dissipation scales with the difference of the fourth powers of the absolute temperatures, though conduction through the TIM and heatsink, which dominates here, is roughly linear in the temperature delta), and the temperature feeds back into a roughly exponential leakage function that drives the power consumption even higher.
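Those couplings can be sketched numerically too. The effective capacitance and leakage constants below are pure assumptions for illustration, but the quadratic voltage term alone shows why a 5.2% voltage bump costs roughly 10.7% more dynamic power before any thermal feedback even kicks in:

```python
import math

# Illustrative sketch of the couplings described above; the constants are
# assumptions, not fitted to any real CPU.

def dynamic_power(c_eff, v, f):
    """Dynamic (switching) power: quadratic in voltage, linear in frequency."""
    return c_eff * v ** 2 * f

def leakage_power(v, t, i0=2.0, t_ref=65.0, t_scale=25.0):
    """Static (leakage) power: grows roughly exponentially with temperature."""
    return i0 * v * math.exp((t - t_ref) / t_scale)

# 1.214V -> 1.277V at a fixed 4.6GHz (c_eff = 5e-9 is an assumed constant):
p_lo = dynamic_power(5e-9, 1.214, 4.6e9)
p_hi = dynamic_power(5e-9, 1.277, 4.6e9)
ratio = p_hi / p_lo  # quadratic: (1.277/1.214)**2, roughly a 10.7% increase

# Leakage growth from 65C to TJMax at fixed voltage (t_scale is assumed):
leak_ratio = leakage_power(1.277, 105.0) / leakage_power(1.277, 65.0)
```

The exact leakage multiplier depends entirely on the assumed `t_scale`, but the shape of the story — quadratic in V, exponential-ish in T — is what matters.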
In this case, a real-world test case, my 3770k requires 1.214V (measured with a voltmeter) to be stable at 4.6GHz if the temperature is kept below 65°C, but as the temperature rises up to the point of TJMax (105°C), the voltage required to maintain stable operation increases from 1.214V to 1.277V, an increase of 5.2% in the voltage alone.
And the resulting increase in the CPU's power consumption (isolated from the rest of the system) rises from 121W to 152W (31.6W w/o the rounding) as the temperature climbs from 65°C to 105°C.
That is a 26% increase in power usage (31.6/121) just because we had to deal with the rising temperatures causing an increase in static leakage as well as requiring us to increase the operating voltage for stability purposes.
Now let's look at this from a different perspective, Intel's perspective, and touch on JumpingJack's comment a bit in the process.
Intel must ensure their CPUs are set to have enough voltage that they can reliably operate all the way up to TJMax at any given clockspeed within the spec. So while we enthusiast OC'ers have within our control the ability to optimize the Vcc to a value lower than that required for TJMax stability, Intel does not have that luxury.
So which pays more dividends in practice, lowering the temperature or lowering the voltage? (naturally, and ideally, we'd invest our time and dollars in optimally lowering both)
Well that is what the graph above analyzes. If we took the Intel approach and determined the max voltage necessary for stable operation at, say, 4.6GHz when the chip hits TJMax (which it will and does if you are using the stock HSF) then you need 1.277V for my 3770k. Intel would of course set the VID value to a number much higher than this, but let's say for the sake of argument they use the minimum allowed value as I have done.
Now as enthusiasts we spend money buying 3rd party HSF coolers to lower our temperatures, increase CPU longevity, lower power usage, etc. So what happens if we do that but we don't bother to re-optimize the Vcc?
In the case of my 3770k the temperature decreases from 105°C to 65°C and the power consumption falls from 152.5W to 134W, a decrease of nearly 19W or 12.2% less power usage. This is a decrease in power usage solely due to less static power loss from lower leakage current because we invested in better cooling and reduced our temperatures without touching the voltage parameter. (the purple to olive-green line in the graph above)
If we further optimize the system to lower the voltage needed for stable operation then we can lower the power usage even more, from 134W to 121W at 65°C by reducing the voltage from 1.277V to 1.214V...this is an additional 8.5% reduction in power consumption simply from being able to lower the voltage because the signal-to-noise ratio has improved because the temperature has been reduced.
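And here is the arithmetic for that two-step reduction, using the wattages quoted above; note the "additional 8.5%" works out as a fraction of the original 152.5W baseline, so the two percentages can be compared on equal footing:

```python
# The two savings steps from the graph, both relative to the 152.5W baseline.
p_stock  = 152.5   # W at TJMax with the worst-case 1.277V
p_cooled = 134.0   # W at 65C, voltage left at 1.277V (better cooling alone)
p_tuned  = 121.0   # W at 65C after re-optimizing Vcc down to 1.214V

cooling_savings = (p_stock - p_cooled) / p_stock * 100   # ~12% from cooling alone
voltage_savings = (p_cooled - p_tuned) / p_stock * 100   # ~8.5% more from voltage
```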
Of course lowering the Vcc results in a lower temperature which results in a lower power consumption which results in a lower temperature which results in a lower required Vcc, etc, as the thermal runaway effect goes into reverse and the feedback results in a substantially lower power consumption.
And so we see that it is roughly 50/50 for us enthusiasts in terms of controlling power-consumption by way of controlling the voltage necessary for stable operation at a given clockspeed versus controlling the operating temperature by way of improving the cooling of the CPU.
Ideally we'd spend our time and money doing both, but if we had to choose one to go after it would not really make much of a practical difference at the outlet whether we had simply endeavored to lower the operating temperature with better cooling or if we had endeavored to lower the CPU's voltage (while remaining stable).
But in the end, the reason we see both Intel's and AMD's chips spiral upwards in power consumption near the high end of their clockspeed band is that the clockspeed drives the voltage requirement, which drives the temperature upwards, which in turn drives the leakage upwards as well as the required voltage all the more, and we end up operating on the hairy edge of a thermal runaway situation.