
Maxwell Power Consumption from Tom's Hardware

Page 2 - AnandTech community discussion.
Thanks for sharing this. I don't like TH, but this time they deserve props for not just running benchmark scripts and regurgitating marketing slides like the other review sites.
They at least made an effort to measure things with current clamps and oscilloscopes, and stumbled upon a clue to what kind of "magic" was used to achieve lower power this time around. It's somewhat similar to Intel's micro sleep states.

Using ever finer resolution for the downclocking steps, particularly on the time scale, is a good way to save power, and clearly an evolutionary step. More importantly, it contradicts the explanations that architectural changes were responsible for these savings. It's reassuring to know that there isn't all that much that can be done with GPU processing itself aside from precisely throttling it down when it isn't needed.
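As a rough sketch of why finer time resolution matters, here is a toy model (my own illustrative numbers, not NVIDIA's actual algorithm): a GPU whose load alternates between short busy bursts and idle gaps. A power manager that can drop to a low-power state within a fraction of a millisecond captures far more of the idle time than one that needs 100 ms to react.

```python
P_BUSY, P_IDLE = 200.0, 20.0   # watts, assumed for illustration

def avg_power(burst_ms, gap_ms, react_ms):
    """Average power over one busy/idle cycle for a manager that keeps
    running at full power for react_ms after the load ends."""
    period = burst_ms + gap_ms
    high = burst_ms + min(react_ms, gap_ms)   # time spent at P_BUSY
    low = period - high                       # time spent at P_IDLE
    return (high * P_BUSY + low * P_IDLE) / period

# 5 ms busy bursts separated by 5 ms of idle:
coarse = avg_power(5, 5, react_ms=100)  # slow manager never sees the gaps
fine = avg_power(5, 5, react_ms=0.5)    # fast manager gates almost all of them

print(f"coarse: {coarse:.0f} W, fine: {fine:.0f} W")  # coarse: 200 W, fine: 119 W
```

Same silicon, same work done; the only difference is how quickly the power state tracks the load.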
 
There is something about Tom's Hardware I still really don't like; I just can't put my finger on it.

It's the constant history of taking money for reviews and puff pieces. That feeling of being misled and lied to by a paid advertiser never leaves. A reputation tainted that way never recovers, and it's behind Tom's decline in popularity.
 
This finding can lead us to another one, probably more interesting:

- Is the Maxwell uarch achieving this steep power gating by selecting different voltages for different instructions (a la Intel's +0.1 V for AVX), or does the power gating only happen when the GPU finds there is no more performance to gain (i.e. it is bottlenecked by something else first, such as the CPU), so it reduces clocks and voltages to accommodate that bottleneck? The latter would suggest that when we check GPU usage, it will always read 100% in games regardless of bottlenecks, since the GPU is probably downclocking to stay on par with the CPU. It would also mean that Maxwell perf/watt would be extremely inconsistent between games and test platforms. To take an extreme example, if Skyrim is benched with an FX chip, the perf/watt would look phenomenal because the GPU is heavily bottlenecked by an x87-heavy, really single-threaded game.
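The second scenario can be sketched numerically (all figures here are illustrative assumptions, not measurements). If the GPU downclocks just enough to keep pace with a CPU cap, and dynamic power scales roughly as f * V^2 with voltage roughly proportional to frequency (so power ~ f^3), a heavily CPU-bound run makes perf/watt look far better than an unbottlenecked one:

```python
F_MAX = 1.2    # GHz, assumed max boost clock
P_MAX = 200.0  # W at full clock/voltage, assumed

def gpu_power(f):
    # P ~ f * V^2 with V proportional to f  =>  P ~ f^3 (normalized to P_MAX)
    return P_MAX * (f / F_MAX) ** 3

def perf_per_watt(fps_cpu_cap, fps_gpu_max):
    """Frames per second delivered per watt when the GPU clocks down
    to exactly match the CPU's frame-rate cap."""
    fps = min(fps_cpu_cap, fps_gpu_max)
    f = F_MAX * fps / fps_gpu_max        # clock needed for that frame rate
    return fps / gpu_power(f)

# Unbottlenecked: GPU runs flat out.  Heavily CPU-bound (the Skyrim+FX
# example above): GPU only needs half its clock, and perf/watt soars.
print(perf_per_watt(120, 120))   # 0.6 fps/W
print(perf_per_watt(60, 120))    # 2.4 fps/W
```

A 2x CPU bottleneck yields a 4x perf/watt "improvement" in this toy model, which is exactly why perf/watt figures would be so platform-dependent under that hypothesis.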
 
This seems like a refinement of the boost function. It's great that they can do this without hurting absolute performance. It does raise the question of whether these average readings hold true for actual PSU requirements, or whether they just bench well.
 
Maybe this means PSUs for big GPUs will need bigger caps across the outputs to ensure smooth power delivery.

(Disclaimer: I don't know much about circuit design especially analogue power circuits.)

Could the caps not be on the GPU side instead? That would probably mean a higher BoM for the graphics card PCB and possibly somewhat worse power consumption figures (because those peak-surge capacitors would have to be charged), but far less chance of wearing out your PSU. In fact, properly designed on-card surge capacitors working in sync with GPU Boost (i.e. boost logic aware of the capacitors' charging cycle) might be a better solution.

I was going to speculate that this whole effect was limited to the reference model, but the tester, Igor Wallossek, did test the Gigabyte 970 model too. I was surprised not to find any discussion of the tests on Tom's, though (I looked on .de, .co.uk and .com).

Anyway, kudos to Tom's for such thorough testing (their equipment does not look cheap); we just have to wait for some electrical engineers to say whether any of this would cause actual problems or not. Certainly after the changes Intel wanted for Haswell power supplies, I don't think consumers would be happy to have to buy another power supply so soon.
 
Wouldn't that have a very negative effect on power usage on the GPU?
 
Large caps use a lot of real estate, something that is in short supply on graphics cards. My surmise was that suppressing power spikes of the magnitude described would take rather physically large caps. But I am no expert either.

Capacitors don't consume appreciable amounts of power; a perfect theoretical one would consume none at all. In this case it would act very much like a battery, discharging and recharging very rapidly in sync with power demands.
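A back-of-envelope sizing calculation supports the "physically large" point (the spike numbers here are hypothetical, loosely based on the magnitudes discussed in the thread). A capacitor supplying extra current I for time dt while its voltage is allowed to droop by dV needs C = I * dt / dV:

```python
def cap_needed(extra_amps, duration_s, allowed_droop_v):
    """Capacitance (farads) to ride out a current spike: C = I*dt/dV."""
    return extra_amps * duration_s / allowed_droop_v

# e.g. a ~130 W spike above baseline on the 12 V rail (~11 A extra)
# lasting 1 ms, with 0.5 V of allowed voltage droop:
C = cap_needed(11, 1e-3, 0.5)
print(f"{C * 1e6:.0f} uF")   # 22000 uF: physically large electrolytics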
 

They are measuring it before the GPU. I think it's very relevant to figure out how Maxwell relates to past Nvidia and AMD GPUs, and first to determine whether the spiking usage is abnormal or new, and then what the effect on a PSU is.

It could be built into PSU capabilities (handling extremely fast fluctuations), but perhaps the PSU should simply have enough leeway.

I wish they would explore it further. I'm not one bit worried about it, other than questions like: should you really have a quality PSU with leeway, or not?

I still have to give them props for stepping up their analysis. It does look like some high-end equipment.
 

If you read the description of power gating on Haswell, it sounds very similar in concept. I'd be surprised if an oscilloscope check of Haswell CPU power usage didn't look similar to these Maxwell graphs.

IMO, it was borderline dishonest to post such graphs for Maxwell only without other modern GPUs and CPUs as a reference point or comparison.
 

I imagine what happened is that they ran the tests they had been doing for a while, noticed that these Maxwell cards gave strange results, and then investigated further (getting the second oscilloscope and so on, as they describe in the article). Which is fair enough: notice some new behaviour, investigate it, report it.

However, you are right that they should now go back and investigate the behaviour of older GPUs and CPUs too.
 
Yeah, like the other guys have mentioned, the scientific method is to include a control in an experiment, not just report new findings.
 
I don't know if I'm reading TH right...
How much of the perf/watt increase is due to architectural improvements, and how much is trickery that only shows up well on crude power consumption tests?
 

The lower the GPU load, the better the efficiency; at high load, efficiency converges to the previous gen's values. I used 200W as the base TDP in the example below, and indeed the 970/980 are not 150-180W TDP cards; they are genuine 250W TDP cards. The TDP claimed by Nvidia is the average power consumption in games, but push the cards into intensive computations and they will consume 240-280W.


Let me roughly illustrate the kind of optimisation as a function of load. Take seven throughput values from 0 to 100%; I list first the consumption without optimisation, as if it were a regular 780, and then with the optimised power management.

Load - Unoptimised - Optimised
0% - 0W - 0W
20% - 40W - 10W
40% - 80W - 40W
60% - 120W - 80W
80% - 160W - 120W
90% - 180W - 160W
100% - 200W - 200W
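The shape of those (illustrative) numbers can be made explicit by computing the percentage saved at each load level; the saving shrinks steadily toward zero at full throughput, which is the point being made about efficiency converging to the previous generation:

```python
# The poster's illustrative table: load fraction -> (watts without
# optimisation, watts with the finer-grained power management).
table = {
    0.0: (0, 0), 0.2: (40, 10), 0.4: (80, 40), 0.6: (120, 80),
    0.8: (160, 120), 0.9: (180, 160), 1.0: (200, 200),
}

savings = {}
for load, (old_w, new_w) in table.items():
    pct = 0 if old_w == 0 else 100 * (old_w - new_w) / old_w
    savings[load] = pct
    print(f"{load:>4.0%} load: {old_w:>3}W -> {new_w:>3}W ({pct:.0f}% saved)")
# 75% saved at 20% load, tapering to 0% saved at full load.
```

So under this model the headline perf/watt gain depends heavily on how often the benchmark leaves the GPU below full load.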
 

I don't think you know how power and amps work. At all.
Spikes in amps and power are very well known in the world of electronics, be it motors, capacitors, power supplies, whatever. Starting currents, for example, are well known: a 1500W motor may "consume" 3000W for a brief moment.

This can of course be visualized with the tooling Tom's Hardware uses. It's taking power measurement to the extreme, but luckily they also plot average power consumption. With a long wave graph you will clearly see that the GTX 980 consumes 150-180W, and the small spikes every now and then (Tom's saw one spike to 300W for a fraction of a second, <1ms) don't last long enough to interfere with the 170W TDP.
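The arithmetic behind that claim is easy to sketch (using the figures quoted above: a 170 W baseline with one 300 W spike lasting 1 ms inside a 1 s averaging window):

```python
def window_average(base_w, spike_w, spike_s, window_s):
    """Average power over a window containing one rectangular spike."""
    return (base_w * (window_s - spike_s) + spike_w * spike_s) / window_s

avg = window_average(170, 300, 1e-3, 1.0)
print(f"{avg:.2f} W")   # 170.13 W: the spike adds ~0.1 W to the average
```

A millisecond-scale spike simply has too little energy in it to move an average, or to meaningfully heat anything, even though it looks dramatic on an oscilloscope.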
 
It's simple, really. Tom's is not measuring power properly. Maxwell may be spiking to those levels for extremely small amounts of time, and Tom's is reporting the max over a period instead of the average, or something like that.

The fact of the matter is that the GPU's power use under compute loads isn't the ~240W that they claim.

[Chart: 67755.png]

[Chart: Litecoin.png]

[Chart: pic_disp.php]
 

Yep, the GTX 980 clearly consumes 80W less than the GTX 780 Ti.
TechPowerUp found the same with both average (gaming) and maximum (Furmark) measurements.

Tom's Hardware mentioned PSUs alongside their findings and what PSU people should use, which is fine, but they should also have said that any PSU has a maximum rating above the wattage it is marked with. That's because PSUs have to deal with these short spikes now and then. There's nothing magical about it; it's not specific to Maxwell, it's typical of a wide range of electronic products, including other graphics cards.
 

It uses 240W for sure. The Nvidia driver detects Furmark and will indeed cap the power usage; it looks like THG got around this trick. If they had made such a gross measurement mistake, it would also show in their other consumption graphs.

Besides, did you pay attention to the throughput/watt in litecoin mining?

1.74 KH/s/W for the 970, 1.925 for the 980, and 2 for the 290X. What happened to the 2x perf/watt improvement? Or was it relative to the GTX 680, which scores about 1 KH/s/W?

And what would happen in a task where its throughput was maxed out, at 290X levels?
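Running the quoted numbers makes the point concrete: the "2x perf/watt" claim only holds against Kepler's roughly 1 KH/s/W, not against the 290X (figures below are the ones cited in this post):

```python
# KH/s per watt in litecoin mining, as quoted above.
khs_per_watt = {
    "GTX 970": 1.74,
    "GTX 980": 1.925,
    "R9 290X": 2.0,
    "GTX 680": 1.0,
}

kepler = khs_per_watt["GTX 680"]
for card, ppw in khs_per_watt.items():
    print(f"{card}: {ppw / kepler:.2f}x vs GTX 680")
# The GTX 980 comes out ~1.93x over the 680, but still below the 290X
# in this compute-bound workload.
```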

Edit: in the last graph the 980 consumes less because it has reached its max temp; the GPU is surely throttling:

[Chart: pic_disp.php]


That was really using a bench out of context...
 
I think it is a pretty clever idea, if it is indeed doing some kind of real-time voltage-on-demand function. I wouldn't be surprised if that is helping with some of the pretty nice overclocks I'm hearing about. I guess it isn't all perfect: some scenes/games/settings may make the power savings look a little less stellar, and people may find they're still better off with a PSU slightly on the big side (if it spikes and drains the caps on a lower-end PSU, voltages could drop, which could mean instability). But on the whole, averaged out over time, and especially given how well they perform for the low entry price (at least the GTX 970), they've got a real winner here, I think.

Of course, we'd also have to see other cards tested in the same manner; maybe this behaviour isn't that unique to some degree anyway.
 