Maxwell Power Consumption from Tom's Hardware


wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
Why not simply ask for more information to clarify what's going on? It would be interesting to know what's going on and if it's exclusive to Maxwell.

I don't get the love or hate for the observation made by Tom's. It might not mean anything, but it would be interesting to know if it's software based and whether it has an effect on PSUs etc.

Well, a point could be that you were silent then and not now. Is that statement incorrect?

What about gaming? How does it do there according to your findings? I gotta say Abwx, this is really strange.

Even if what you're saying is accurate, and I'm not saying that it is by any stretch of the imagination, why would this be important for GPUs made primarily for gaming? Why are you focusing so strongly on this really strange aspect of a gaming card? Can you just tell us what your ultimate point is?

I think the thread is about Tom's power consumption test, or did I bump into some PMs? :D
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
So you believe that there is no way software could accomplish it?

What I am saying is that there is always a surge when you power up any circuit. There is a rush of current when you wake up sleeping transistors. This rush can be many times the normal current. It is a huge problem that can only be managed, not avoided.

The measurements Tom's is getting are what you would expect to be reading. It's actually not the slightest bit strange. When you look at the circuit on that kind of time scale, you can actually catch the gating in action.

If it is shutting down and restarting as needed to save power, the oscilloscope should read the surging. This is exactly the expected result.
Tom's doesn't understand what he is reading. The chance of software being able to do that is hard to imagine, and it doesn't even make sense.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
What I am saying is that there is always a surge when you power up any circuit. There is a rush of current when you wake up sleeping transistors. This rush can be many times the normal current. It is a huge problem that can only be managed, not avoided.

The measurements Tom's is getting are what you would expect to be reading. It's actually not the slightest bit strange. When you look at the circuit on that kind of time scale, you can actually catch the gating in action.

If it is shutting down and restarting as needed to save power, the oscilloscope should read the surging. This is exactly the expected result.
Tom's doesn't understand what he is reading. The chance of software being able to do that is hard to imagine, and it doesn't even make sense.

You don't have to repeat yourself. I read what you posted. The bold part answered my question.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
Why not simply ask for more information to clarify what's going on? It would be interesting to know what's going on and if it's exclusive to Maxwell.

I don't get the love or hate for the observation made by Tom's. It might not mean anything, but it would be interesting to know if it's software based and whether it has an effect on PSUs etc.

I think the thread is about Tom's power consumption test, or did I bump into some PMs? :D

What are you talking about?
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
The more I read on this subject, the more pointless it appears. We don't have any previous benchmarks against which to compare. But some are going to go ahead and claim this is something new and potentially dangerous.

I suspect that if Tom's went through previous generations using the same measuring interval, we would see something similar in every GPU. But the average-over-time TDP would be much higher. And that ultimately is the point of measuring power consumption.

A car analogy would be accelerating and decelerating a car. At any given point your mileage can range from 5 to 99+ MPG. But what is the average over time? Why should I care about the peaks and valleys if my average hits where I want it?
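
A back-of-the-envelope sketch of that averaging point in Python (all numbers are invented for illustration, not measurements): rare, brief spikes barely move the average even when the peak reading looks dramatic.

import random

samples = []
for _ in range(1_000_000):                   # pretend 1 MHz sampling over one second
    base = 165.0                             # assumed steady-state draw in watts
    spike = 120.0 if random.random() < 0.001 else 0.0   # occasional brief spike
    samples.append(base + spike)

print(f"peak:    {max(samples):.0f} W")               # ~285 W
print(f"average: {sum(samples)/len(samples):.1f} W")  # ~165.1 W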
 

Cloudfire777

Golden Member
Mar 24, 2013
1,787
95
91
The currents over the wires during the very short time periods that Tom's is measuring are there because the output caps of the PSU and the input caps of the GPU are connected by a very, very low resistance wire. If you are complaining about picosecond power surges between two capacitor banks over 16 or 18 AWG wire, then you are just showing that you don't know how capacitors work.

The MOSFETs, which are actually the parts that are liable to blow, aren't being measured. Do you know how a 12-phase CPU VRM circuit works? Each phase is run way above its rated power for a fraction of a millisecond and then turns off, at which point the next phase turns on. Running components like this is not a problem, because it takes the heatsinks many milliseconds to reach heat saturation, and heat death is the most common failure mode.

The card's average power over a second or so is actually what matters, unless its peak is basically a short, which is not the case here.

The crazy thing about Tom's picosecond measurements is that the larger the output caps on the PSU, the lower the ESR those caps have, and the lower the gauge of the PSU wires, the higher the instantaneous current will be. The peak current looks worse the better the PSU is.
Well said, finally someone who understands electricity and electronics.

Just give it up. Some people are incapable or just don't want to understand.
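
A rough sketch of the quoted argument (component values assumed, not measured): the instantaneous current between two capacitor banks is limited mainly by ESR plus wire resistance, so a "better" PSU, with lower-ESR caps and thicker wire, actually shows a larger spike.

delta_v = 0.10   # momentary voltage difference between the banks, volts (assumed)
esr_psu = 0.010  # PSU output-cap ESR, ohms (assumed)
esr_gpu = 0.005  # GPU input-cap ESR, ohms (assumed)
r_wire  = 0.008  # short run of 18 AWG wire, ohms (assumed)

i_peak = delta_v / (esr_psu + esr_gpu + r_wire)
print(f"instantaneous peak: {i_peak:.1f} A")   # ~4.3 A from a 100 mV imbalance

Halve the ESRs and wire resistance and the same 100 mV imbalance pushes roughly twice the peak current, which is the "looks worse the better the PSU is" effect.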
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
We don't have any previous benchmarks against which to compare. But some are going to go ahead and claim this is something new and potentially dangerous.

I suspect that if Tom's went through previous generations using the same measuring interval, we would see something similar in every GPU. But the average-over-time TDP would be much higher. And that ultimately is the point of measuring power consumption.
I agree with this sentiment. Also if Tom's is claiming the 980 is getting its power efficiency through clever power management alone, that doesn't explain why it still uses less power under full constant loads.
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
I agree with this sentiment. Also if Tom's is claiming the 980 is getting its power efficiency through clever power management alone, that doesn't explain why it still uses less power under full constant loads.

If it's Furmark, there's driver detection of the bench; this was implemented on the 750 Ti, since not capping it would have resulted in the destruction of the PCIe power layout if ever someone tried to do some intensive computing.

You can notice that Furmark uses less power than a regular game (on the 750 Ti), which points to said capping.
 

96Firebird

Diamond Member
Nov 8, 2010
5,742
340
126
Why are you bringing up the 750Ti?

I can't wait to hear about all the Maxwell cards that go up in flames during intense computing. Judging by posts here, it is inevitable. I'm sure we'll hear about video cards setting houses on fire, killing a family of 5 and their beloved dog.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
If it's Furmark, there's driver detection of the bench; this was implemented on the 750 Ti, since not capping it would have resulted in the destruction of the PCIe power layout if ever someone tried to do some intensive computing.
Both vendors (nVidia and AMD) have been throttling Furmark since long before the 750 Ti arrived. This is nothing new or exciting.
 

PPB

Golden Member
Jul 5, 2013
1,118
168
106
To be fair, this is a mid-range Maxwell clearly aimed at gaming, so to say it has no improvement in perf/W for HPC work isn't a revelation. For that, we have to see big Maxwell in action.

The uarch is the same for both SKUs in the end, and it clearly shows how optimized for gaming power it is. This can even lead us to believe that NV, in order to cater to the HPC crowd with GM200, would incur less gaming perf/watt than GM204 can offer, unless they make two totally different dies: one with a much higher native DP FP rate, the other more trimmed down like GM204 currently is.

There is a reason Nvidia debuted Maxwell with its lowest-performing SKU and followed with their mid-range die: lower-performing parts tend to be more gaming oriented with Nvidia, and lower-power parts tend to be the most power efficient. I can't see Nvidia claiming the same dubious perf/watt gains if they were debuting Maxwell with GM200.
 
Feb 19, 2009
10,457
10
76
Well, I do see in some benches that there's a massive increase in SP performance at a similar TDP to the 680, so indeed it could be claimed that efficiency has doubled too, *for some tasks.

I think it's definitely a mix of hardware & software, else we would get major efficiency improvements for Kepler via a driver update, and NV isn't talking about that at all.
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
Both vendors (nVidia and AMD) have been throttling Furmark since long before the 750 Ti arrived.

Not true. AMD's throttling is due to the chip trying to limit temperature, but not power if the temp is low enough; Nvidia has a hard power limit that can be seen in the Furmark score, with a 970 getting the same score as a 280X.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Anytime power gating is used, there is a surge in current when the gated circuits become active again. In other terms, anytime transistors are put to sleep, there is a huge surge in current when they wake up.

This is not speculation, it is absolute fact.

Look at this paper:
http://ieeexplore.ieee.org/xpl/logi...re.ieee.org/xpls/abs_all.jsp?arnumber=1656794

Power-gating is an effective approach for reducing both dynamic and static power dissipation in power management and test scheduling. This paper formulates the power-gating spike problem, derives a reduced power dissipation model as heuristics, proposes a vector control technique for post-gating circuits, and develops a sleep-transistor allocation scheme for power-on/off current spikes reduction of pre-gating systems. From experimental results, a justified controlling vector can reduce the on/off peak power up to 55%. For a pre-gating system, more than 83% of the power-gating spike can be reduced. From our preliminary simulations using HSPICE so far, this heuristics has been proved to reduce the supply-gating current spike

or pages 71 and 72 in this book:

http://books.google.com/books?id=v4...age&q=power gating and current spikes&f=false

This has been a major problem that everyone has been trying to solve. Turning things off right when they aren't needed and back on when they are can save massive amounts of power, but there is one catch: the energy overhead (spikes) that occurs when waking back up.
Maxwell would be using ultra-fine-grained power gating,
and what Tom's is capturing on the oscilloscope is the surging in action.

This is a case where the functional units shut down immediately after their use. If you can achieve this in a power-gating system with low overhead, then you will find amazing power consumption reductions.
This is not something easily achieved, and it's not something you can achieve with some power-tuning algorithm. It's extremely difficult and comes from a lot of clever engineering.

I think Nvidia was in a special position to accomplish such improvements on their GPUs from working with Tegra for so long.
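
A minimal sketch of the wake-up spike being described, modeling the gated block as a capacitance charged through the sleep-transistor network (all values assumed for illustration, not Maxwell's actual figures):

import math

vdd   = 1.0    # supply voltage, volts (assumed)
c_blk = 5e-9   # parasitic capacitance of the gated block, farads (assumed)
r_on  = 0.5    # on-resistance of the sleep-transistor network, ohms (assumed)

# i(t) = (Vdd / R) * exp(-t / RC): a large initial spike that decays within nanoseconds
for t_ns in (0, 1, 5, 10):
    t = t_ns * 1e-9
    i = (vdd / r_on) * math.exp(-t / (r_on * c_blk))
    print(f"t = {t_ns:2d} ns   i = {i:5.2f} A")

The 2 A instant at t = 0 versus tens of milliamps a few RC constants later is the kind of surge an oscilloscope on the supply rails would catch.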
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
The MOSFETs, which are actually the parts that are liable to blow, aren't being measured. Do you know how a 12-phase CPU VRM circuit works? Each phase is run way above its rated power for a fraction of a millisecond and then turns off, at which point the next phase turns on.

You sure about that?
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Both vendors (nVidia and AMD) have been throttling Furmark since long before the 750 Ti arrived. This is nothing new or exciting.
And if they're doing it right, they're throttling at the hardware level based on the TDP alone, which is what all these tests point to for all the modern architectures (Kepler, Maxwell, and GCN 1.1+).
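
A toy model of that kind of hardware TDP throttling (not NVIDIA's or AMD's actual algorithm; the limit and step size are assumed): track a moving average of board power and step the clock down whenever the average exceeds the limit.

def throttle_step(avg_power_w, clock_mhz, tdp_w=165.0, step_mhz=13):
    """Drop one boost bin when over the power limit, otherwise boost."""
    if avg_power_w > tdp_w:
        return clock_mhz - step_mhz
    return clock_mhz + step_mhz

clock = 1126
for power in (150, 160, 172, 180, 168, 158):   # made-up moving-average samples
    clock = throttle_step(power, clock)
    print(f"{power} W -> {clock} MHz")

Because the controller reacts to an average rather than to microsecond spikes, the sub-millisecond surging Tom's captured never trips it.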
 

Subyman

Moderator, VC&G Forum
Mar 18, 2005
7,876
32
86
What I found most interesting was this test highlighting how a program like Furmark works. It eliminates all the cycling of power and sustains peak draw, which the PCIe connectors and heatsinks aren't quite rated for. Hence the need for throttling and a power limiter built into the cards. When the power limiter and tighter throttling were introduced, Nvidia started chasing dynamic clocking for power and heat savings. That aligns quite well with the 680 launch.
 

know of fence

Senior member
May 28, 2009
555
2
71
Another thing TH's testing touched upon is the idle consumption of the 980 vs. the 970; it's curious that even in the down-clocked state the 970 managed to consume almost twice as much. This means that rather than being this amazing value proposition, you get exactly what you are paying for: a rather high voltage bin.
Considering that Nvidia doesn't have a long-idle mode, and multiplied by your PSU's inefficiency at low loads, browsing the AT forums becomes a rather power-consuming proposition.

I'd like to know if having 2 GB of GDDR5 vs. a 1 GB card makes a big difference when it comes to idle power with, say, the 750 Ti.
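
Quick arithmetic on that point, using the idle figures quoted later in the thread and an assumed low-load PSU efficiency: the difference at the wall socket ends up larger than the difference on the DC side.

def wall_watts(dc_watts, efficiency):
    """Power drawn from the mains for a given DC load."""
    return dc_watts / efficiency

idle_970 = 19.3       # W, 970 idle figure cited below
idle_980 = 9.2        # W, 980 idle figure cited below
eff_low_load = 0.70   # assumed PSU efficiency at a few percent load

print(f"970 at the wall: {wall_watts(idle_970, eff_low_load):.1f} W")  # ~27.6 W
print(f"980 at the wall: {wall_watts(idle_980, eff_low_load):.1f} W")  # ~13.1 W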
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Another thing TH's testing touched upon is the idle consumption of the 980 vs. the 970; it's curious that even in the down-clocked state the 970 managed to consume almost twice as much. This means that rather than being this amazing value proposition, you get exactly what you are paying for: a rather high voltage bin.
Meanwhile, other sites show them being similar. So, who's right? Or, is there just that much variation?
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
Meanwhile, other sites show them being similar. So, who's right? Or, is there just that much variation?

Someone probably hit a spike with their measuring tool when noting idle consumption.

Someone should test Maxwell using an analog electricity meter so there is no low-latency boosting mumbo jumbo.
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
You sure about that?

Quoting articles doesn't always give credibility, quite the contrary...

If we look at the schematic provided as an example, we can see that it's a marginal case: the author doesn't specify the amplitude of the eventual peaks in GPUs, and he uses a corner case where a single MOSFET is used as the switch and hence can have a limited current capability that can't accommodate the peaks. This is very easily solved by using a cohort of paralleled devices. The peaks are nothing to worry about, as the charging currents of the parasitic capacitances are no more demanding than when they are charged and discharged at high speed while computing. There is more concern with the logical states within the units: when switching on, you must prevent latch-ups (devices and parasitic devices entering self-conduction) within the gated units. Perhaps someone can bring some precision about whether this is a big concern for designers or not.
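
A small sketch of the paralleling argument (device ratings assumed, purely illustrative): splitting a hypothetical inrush across N paralleled switch FETs keeps each device within its pulsed-current rating.

i_peak  = 40.0   # hypothetical wake-up spike, amps (assumed)
i_rated = 15.0   # pulsed-current rating of one sleep FET, amps (assumed)

n = 1
while i_peak / n > i_rated:
    n += 1
print(f"{n} paralleled devices -> {i_peak / n:.1f} A each")   # 3 devices -> 13.3 A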
 

know of fence

Senior member
May 28, 2009
555
2
71
Meanwhile, other sites show them being similar. So, who's right? Or, is there just that much variation?

Techreport.com confirms the 980 being below the 970 in two separate measurements, with the 970 consuming more than all the other tested Nvidia cards, which is also curious; again, I suspect the 4 GB of GDDR5 being part of the problem. PCPer.com also has a 2 W difference favoring the 980. So even with their worthless, lazy method, they can pick up on the difference.

The 9.2 W and 15.8 W for the 980s vs. the 19.3 W for the 970, granted the first one appears to be an outlier. Still it's interesting because it's counterintuitive.