Maxwell Power Consumption from Tom's Hardware


know of fence

Senior member
May 28, 2009
555
2
71
Thanks for sharing this. I don't like TH, but this time they deserve props for not just running benchmark scripts and regurgitating marketing slides, unlike the other review sites.
They at least made an effort to measure things with amp clamps and oscilloscopes, and stumbled upon a clue to what kind of "magic" was used to achieve lower power this time around. It's somewhat similar to Intel's micro sleep states.

Using an ever finer resolution for the downclocking steps, particularly on the time scale, is a good way to save power, and clearly an evolutionary step. More importantly, it contradicts the explanations that architectural changes were responsible for these savings. It's reassuring to know that there isn't all that much that can be done with GPU processing aside from precisely throttling it down when it's not needed.
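To put rough, made-up numbers on that idea (this is only an illustration of finer time-resolution power management, not NVIDIA's actual algorithm or anything Tom's measured): if the GPU is genuinely busy only half of every millisecond, a governor that can only re-evaluate its clock/voltage state every ~10 ms never gets a chance to drop out of the busy state, while one that reacts every ~0.1 ms can idle inside each gap.

```python
# Toy comparison with assumed figures (165 W busy, 40 W idle, 50% duty cycle).
busy_w, idle_w, duty = 165.0, 40.0, 0.5

# Coarse management: the idle gaps are shorter than the decision window,
# so the GPU never leaves its busy power state.
coarse_avg_w = busy_w

# Fine-grained management: the GPU falls back to the idle state inside each gap.
fine_avg_w = duty * busy_w + (1.0 - duty) * idle_w

print(f"coarse: {coarse_avg_w:.0f} W   fine: {fine_avg_w:.0f} W")
# -> coarse: 165 W   fine: 102 W, for the same amount of work done
```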
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
There is something about Tom's Hardware I still really don't like; I just can't put my finger on it.

It's the constant history of taking money for reviews and puff pieces. That feeling of being misled and lied to by a paid advertiser never goes away. A reputation once tainted in that way never recovers, and it's behind the decline in Tom's popularity.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
This finding can lead us to another one, probably more interesting:

- Is the Maxwell uarch achieving this steep power gating by selecting different voltages for different instructions (a la Intel with +0.1V on AVX), or does the power gating only kick in when the GPU finds there is no more performance to gain (i.e. it is being bottlenecked by something else, first of all the CPU), so that it reduces clocks and voltages to match that bottleneck? The latter would suggest that when we check GPU usage it will always read 100% in gaming regardless of bottlenecks, because the GPU is probably downclocking to stay on par with your CPU. It would also mean that Maxwell perf/watt would be extremely inconsistent between games and test platforms: to use an extreme example, if Skyrim is benched with an FX chip, the perf/watt of the GPU would look phenomenal because it is heavily bottlenecked by an x87-heavy, really single-threaded game.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
This seems like a refinement of the boost function. It's great that they can do this without hurting absolute performance. It does raise the question, though, of whether these average readings hold true for actual PSU requirements, or whether they just look good in benchmarks.
 

crashtech

Lifer
Jan 4, 2013
10,682
2,280
146
Maybe this means PSUs for big GPUs will need bigger caps across the outputs to ensure smooth power delivery.
 

KompuKare

Golden Member
Jul 28, 2009
1,224
1,582
136
Maybe this means PSUs for big GPUs will need bigger caps across the outputs to ensure smooth power delivery.

(Disclaimer: I don't know much about circuit design especially analogue power circuits.)

Could the caps not be on the GPU side instead? This would probably mean a higher BoM cost for the graphics card PCB and possibly slightly worse power consumption figures (because those peak-surge capacitors would have to be charged), but far less chance of wearing out your PSU. In fact, properly designed on-card surge capacitors working in sync with GPU Boost (that is, with Boost being aware of the capacitors' charging cycle) might be a better solution.

I was going to speculate that maybe this whole effect was something that applied only to the reference model, but the guy doing the testing, Igor Wallossek, did test the Gigabyte 970 model too. I was surprised not to find any discussion about the tests on Tom's, though (I looked on .de, .co.uk and .com).

Anyway, kudos to Tom's for such thorough testing (their equipment does not look cheap); we just have to wait for some electrical engineers to answer whether any of this would cause actual problems or not. Certainly after the changes Intel wanted for Haswell power supplies, I don't think consumers would be happy to have to get another power supply so soon.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
Could the caps not be on the GPU side instead? This would probably mean a higher BoM cost for the graphics card PCB and possibly slightly worse power consumption figures (because those peak-surge capacitors would have to be charged), but far less chance of wearing out your PSU. In fact, properly designed on-card surge capacitors working in sync with GPU Boost (that is, with Boost being aware of the capacitors' charging cycle) might be a better solution.
Wouldn't that have a very negative effect on power usage on the GPU?
 

crashtech

Lifer
Jan 4, 2013
10,682
2,280
146
Large caps use a lot of real estate, something that is in short supply on graphics cards. My surmise was that power spikes of the magnitude described would take physically rather large caps to suppress. But I am no expert, either.

Capacitors don't consume appreciable amounts of power; a perfect theoretical one would consume none at all. In this case it would act very much like a battery, discharging and recharging very rapidly in sync with power demands.
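As a back-of-envelope check of how large "large" would be (the numbers are my assumptions, not from the article): suppose on-card caps had to supply one of the reported ~300 W spikes, roughly 130 W above the average, for about 1 ms from the 12 V rail while letting it sag no more than 5%.

```python
# Rough sizing sketch under the assumptions above; real VRM design is more involved.
peak_w, avg_w, spike_s = 300.0, 170.0, 1e-3     # assumed spike height and duration
v_start, v_end = 12.0, 12.0 * 0.95              # allow the rail to droop to 11.4 V

excess_energy_j = (peak_w - avg_w) * spike_s                 # energy the caps must deliver
cap_f = 2.0 * excess_energy_j / (v_start**2 - v_end**2)      # from E = 1/2 * C * (V1^2 - V2^2)

print(f"excess energy: {excess_energy_j * 1000:.0f} mJ")     # 130 mJ
print(f"capacitance needed: {cap_f * 1e6:.0f} uF")           # ~18,500 uF -- physically big
```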
 
Last edited:

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
Could the caps not be on the GPU side instead? This would probably mean a higher BoM cost for the graphics card PCB and possibly slightly worse power consumption figures (because those peak-surge capacitors would have to be charged), but far less chance of wearing out your PSU. In fact, properly designed on-card surge capacitors working in sync with GPU Boost (that is, with Boost being aware of the capacitors' charging cycle) might be a better solution.

Anyway, kudos to Tom's for such thorough testing (their equipment does not look cheap); we just have to wait for some electrical engineers to answer whether any of this would cause actual problems or not. Certainly after the changes Intel wanted for Haswell power supplies, I don't think consumers would be happy to have to get another power supply so soon.

They are measuring it before the GPU. I think it's very relevant to figure out how Maxwell compares with some past GPUs and AMD GPUs, and to determine first whether the spiking usage is abnormal or new, and then what the effect on a PSU is.

It could be that handling extremely fast fluctuations is already built into PSU capabilities, but perhaps the PSU should have some headroom regardless.

I wish they would explore it further. I'm not one bit worried about it, other than questions like: should you really have a quality PSU with headroom, or not?

I still have to give them props for stepping up their analysis. It does look like some high end equipment.
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
Certainly after the changes Intel wanted for Haswell power supplies, I don't think consumers would be happy to have to get another power supply so soon.

If you read the description of power gating on Haswell, it sounds very similar in concept. I'd be surprised if an oscilloscope check of Haswell CPU power usage didn't look similar to these Maxwell graphs.

IMO, it was borderline dishonest to post such graphs for Maxwell only without other modern GPUs and CPUs as a reference point or comparison.
 

KompuKare

Golden Member
Jul 28, 2009
1,224
1,582
136
If you read the description of power gating on Haswell, it sounds very similar in concept. I'd be surprised if an oscilloscope check of Haswell CPU power usage didn't look similar to these Maxwell graphs.

IMO, it was borderline dishonest to post such graphs for Maxwell only without other modern GPUs and CPUs as a reference point or comparison.

I imagine what happened is that they ran the tests they had been doing for a while now, noticed that these Maxwell cards gave strange results, and then investigated further (getting the second oscilloscope and so on, as they describe in the article). Which is fair enough: notice some new behaviour, investigate it and report it.

However, you are right they now should go back and investigate the behaviour of older GPUs and CPUs too.
 

lyssword

Diamond Member
Dec 15, 2005
5,630
25
91
Yeah, like the other guys have mentioned, the scientific method calls for a control in an experiment, not just reporting new findings.
 

Kallogan

Senior member
Aug 2, 2010
340
5
76
The 970 is a Maxwell also?

What strikes me is how the lower-tier 970 can draw as much power as the 980.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I don't know if I'm reading TH right...
How much of the perf/watt increase is due to architecture improvements, and how much is trickery that only shows up in crude power consumption tests?
 

Abwx

Lifer
Apr 2, 2011
11,837
4,790
136
I don't know if I'm reading TH right...
How much of the perf/watt increase is due to architecture improvements, and how much is trickery that only shows up in crude power consumption tests?

The lower the GPU load, the better the efficiency; at high load the efficiency converges towards the values of the previous generation. I used 200W as the base TDP in the example below, and indeed the 970/980 are not 150-180W TDP cards, they are genuine 250W TDP cards: the TDP claimed by Nvidia is the average power consumption in games, but push the cards with intensive computations and they will consume 240-280W.


To explain roughly how the optimisation works as a function of load, let's take seven throughput values from 0 to 100%. The first column below is the consumption without the optimisation, as if it were a regular 780, and the second is with the optimised power management (see the quick perf/watt calculation after the table).

Load   Without optimisation   With optimisation
  0%            0W                    0W
 20%           40W                   10W
 40%           80W                   40W
 60%          120W                   80W
 80%          160W                  120W
 90%          180W                  160W
100%          200W                  200W
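Reading those two (purely illustrative) columns as a perf/watt ratio makes the convergence explicit:

```python
# Ratios derived from the hypothetical table above, not from measurements.
loads   = [20, 40, 60, 80, 90, 100]        # % throughput
plain_w = [40, 80, 120, 160, 180, 200]     # consumption without the optimisation
opt_w   = [10, 40, 80, 120, 160, 200]      # with the optimised power management

for load, p, o in zip(loads, plain_w, opt_w):
    print(f"{load:3d}%: {p / o:.2f}x better perf/watt")
# 20%: 4.00x ... 100%: 1.00x -- the advantage disappears at full load
```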
 
Last edited:

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
The lower the GPU load, the better the efficiency; at high load the efficiency converges towards the values of the previous generation. I used 200W as the base TDP in the example below, and indeed the 970/980 are not 150-180W TDP cards, they are genuine 250W TDP cards: the TDP claimed by Nvidia is the average power consumption in games, but push the cards with intensive computations and they will consume 240-280W.

I don't think you know how power and amps work. At all.
Spikes in amps and power are very well known in the world of electronics, be it motors, capacitors, power supplies, whatever. Starting (inrush) current, for example, is well known: a 1500W motor may "consume" 3000W for a brief moment.

This can of course be visualised with the tools Tom's Hardware uses. It takes power measurement to the extreme, but fortunately they also plot the average power consumption. On a longer-window graph you will clearly see that the GTX 980 consumes 150-180W, and the small spikes every now and then (Tom's saw one spike to 300W for a fraction of a second, <1ms) do not last long enough to interfere with the 170W TDP.
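The arithmetic behind that (using the figures quoted above as assumptions) is simple: a sub-millisecond 300 W spike barely moves a one-second average.

```python
# Assumed figures from the discussion: ~170 W baseline, one 300 W spike lasting <1 ms.
base_w, spike_w, spike_s, window_s = 170.0, 300.0, 1e-3, 1.0

avg_w = (base_w * (window_s - spike_s) + spike_w * spike_s) / window_s
print(f"average over the 1 s window: {avg_w:.2f} W")   # ~170.13 W
```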
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The bogus claims and FUD being thrown around to find some kind of excuse to dismiss Maxwell are amazing.

It's Bulldozer all over again.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
It's simple, really. Tom's is not measuring power properly. Maxwell may be spiking to those levels for extremely short amounts of time, and Tom's is reporting the maximum over a period instead of the average, or something like that.

The fact of the matter is that the GPU power use under compute loads isn't the ~240W that they claim.

[charts: GPU power consumption comparison and Litecoin mining efficiency]
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
It's simple, really. Tom's is not measuring power properly. Maxwell may be spiking to those levels for extremely short amounts of time, and Tom's is reporting the maximum over a period instead of the average, or something like that.

The fact of the matter is that the GPU power use under compute loads isn't the ~240W that they claim.

Yep, the GTX 980 clearly consumes 80W less than GTX 780 Ti.
Techpowerup found the same with both Average (Gaming) and Maximum (Furmark).

Tom's Hardware mentioned PSUs alongside their findings and which PSU people should use, which is fine, but they should also have said that any PSU has a maximum rating above the wattage it is marked with. That's because PSUs have to deal with these short spikes now and then. There is nothing magical about it; it is not specific to Maxwell, it is typical of a wide range of electronic products, including other graphics cards.
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,837
4,790
136
It's simple, really. Tom's is not measuring power properly. Maxwell may be spiking to those levels for extremely short amounts of time, and Tom's is reporting the maximum over a period instead of the average, or something like that.

The fact of the matter is that the GPU power use under compute loads isn't the ~240W that they claim.


It does use 240W for sure; the Nvidia driver detects Furmark and will indeed cap the power usage, and it looks like THG got around this trick. If they had made such a gross measurement mistake, it would also show up in their other consumption graphs.

Besides, did you pay attention to the throughput/watt in Litecoin mining?

1.74 kH/s per watt for the 970, 1.925 for the 980 and 2 for the 290X; what happened to the 2x perf/watt improvement? Or was it relative to the GTX 680, which scores about 1 kH/s per watt?

And what would happen in a task where its throughput was maxed out, at 290X levels?
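Put as plain ratios (just arithmetic on the figures quoted above, nothing more):

```python
# kH/s per watt as quoted above; the 680 figure (~1 kH/s/W) is approximate.
khs_per_w = {"970": 1.74, "980": 1.925, "290X": 2.0, "680": 1.0}

print(f"980 vs 680:  {khs_per_w['980'] / khs_per_w['680']:.2f}x")    # ~1.9x, not 2x
print(f"980 vs 290X: {khs_per_w['980'] / khs_per_w['290X']:.2f}x")   # ~0.96x, slightly behind
```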

Edit: in the last graph the 980 consumes less because it has reached its max temperature; the GPU is surely throttling.



That was really using a bench out of context...
 
Last edited:

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I think it is a pretty clever idea, if it is indeed doing some kind of real-time voltage-on-demand function. I wouldn't be surprised if that is helping with some of the pretty nice overclocks I'm hearing about. It isn't all perfect, I guess: some scenes/games/settings may make the power savings look a little less stellar, and people may find they're still better off with a PSU slightly on the big side (if it spikes and drains the caps on a lower-end PSU, voltages could drop and cause instability). But on the whole, averaged out over time, and especially given how solidly these cards perform for the low entry price (at least the GTX 970), they've got a real winner here, I think.

Of course, we'd also have to see other cards tested in the same manner; maybe this behaviour isn't all that unique anyway.