3DVagabond
Lifer
- Aug 10, 2009
But it's not. It's happening so fast there is no way.
Please take the time to read my previous post.
So you believe that there is no way software could accomplish it?
But it's not. It's happening so fast there is no way.
Please take the time to read my previous post.
http://forum.beyond3d.com/showpost.php?p=1875574&postcount=2284
For what it's worth, none of my compute benchmarks break TDP containment according to NVIDIA's drivers. You can definitely light up enough CUDA cores and push the card into throttling itself, but it's not violating TDP as far as I can tell. Power at the wall is consistent with FurMark.
Well, a point could be that you were silent then and not now. Is that statement incorrect?
What about gaming? How does it do there according to your findings? I gotta say Abwx, this is really strange.
Even if what you're saying is accurate, and I'm not saying that it is by any stretch of the imagination, why would this be important for GPUs made primarily for gaming? Why are you focusing so strongly on this really strange aspect of a gaming card? Can you just tell us what your ultimate point is?
So you believe that there is no way software could accomplish it?
What I am saying is that there is always a surge when you power up any circuit. There is a rush of current when you wake up sleeping transistors. This rush can be many times the normal current. It is a huge problem that can only be managed, not avoided.
The measurements Tom's is getting are what you would expect to be reading. It's actually not the slightest bit strange. When you look at the circuit on that kind of time scale, you can actually catch the gating in action.
If the hardware is shutting down and restarting as needed to save power, the oscilloscope should read the surging. This is exactly the expected result.
Tom's doesn't understand what he is reading. The idea of software being able to do that is hard to imagine and doesn't even make sense.
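The inrush described above can be sketched with a simple RC charging model: when the sleep transistors open, the block's local decoupling capacitance charges through a small series resistance, and the initial current dwarfs the block's steady-state draw. All component values below are made up for illustration; real GPU power-gating networks are far more complex.

```python
import math

# Illustrative inrush model: a power-gated block's local decoupling
# capacitance charges through a small series resistance the instant
# the gate opens. Numbers are invented, not real GPU values.
V = 1.0          # supply voltage (V)
R = 0.05         # effective series resistance (ohms)
C = 1e-6         # local decoupling capacitance (farads)
I_steady = 2.0   # assumed steady-state current of the awake block (A)

def inrush_current(t):
    """RC charging current at time t seconds after power-on."""
    return (V / R) * math.exp(-t / (R * C))

peak = inrush_current(0.0)   # current at the instant the gate opens
ratio = peak / I_steady      # how many times the normal current that is
print(f"peak inrush: {peak:.1f} A, {ratio:.0f}x steady-state")
```

With these toy values the instant of power-on draws ten times the block's normal current, which is the kind of surge a fast oscilloscope would catch.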
Why not simply ask for more information to clarify what's going on? It would be interesting to know what's going on and if it's exclusive to Maxwell.
I don't get the love or hate for the observation made by toms. It might not mean anything, but it would be interesting to know if it's software based and whether it has an effect on PSUs etc.
I think the thread is about Tom's power consumption test, or did I bump into some PMs?
Well said, finally someone who understands electricity and electronics.
The current over the wires during the very short time periods that Tom's is measuring is there because the output caps of the PSU and the input caps of the GPU are connected by a very, very low-resistance wire. If you are complaining about picosecond power surges between two capacitor banks over 16 or 18 AWG wire, then you are just showing that you don't know how capacitors work.
The MOSFETs, which are actually the parts that are subject to blow, aren't being measured. Do you know how a 12-phase CPU VRM circuit works? Each phase is run way above its rated power for a fraction of a millisecond and then turns off, at which point the next phase turns on. Running components like this is not a problem because it takes the heatsinks many milliseconds to reach heat saturation, and heat death is the most common failure mode.
The card's average power over a second or so is actually what matters, unless its peak is basically a short, which is not the case here.
The crazy thing about Tom's picosecond measurements is that the larger the output caps on the PSU, the lower the ESR those caps have, and the lower the gauge of the PSU wires, the higher the instantaneous current will be. The peak current looks worse the better the PSU is.
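The relationship the post describes can be put into numbers. A minimal sketch, with illustrative values rather than measurements: the transient current between the two capacitor banks is roughly the momentary voltage difference divided by the total series resistance, so lower-ESR caps and thicker (lower-gauge) wire produce a *larger* spike.

```python
# Rough peak-current estimate between the PSU output caps and the GPU
# input caps: the transient is limited mainly by the cap ESRs and the
# wire resistance. All values are illustrative assumptions.
def peak_current(delta_v, esr_psu, r_wire, esr_gpu):
    """I ~= dV / (total series resistance) for the initial transient."""
    return delta_v / (esr_psu + r_wire + esr_gpu)

delta_v = 0.2  # momentary voltage difference between the banks (V)

# A mediocre PSU with thinner (higher-gauge) wiring...
modest = peak_current(delta_v, esr_psu=0.02, r_wire=0.02, esr_gpu=0.01)
# ...versus a high-end PSU with low-ESR caps and thick wire.
highend = peak_current(delta_v, esr_psu=0.005, r_wire=0.01, esr_gpu=0.01)

# The better PSU shows the larger instantaneous spike.
print(f"{modest:.1f} A vs {highend:.1f} A")
```

Same voltage difference, half the series resistance, twice the instantaneous current: a scary-looking scope trace can simply mean a good PSU.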
I agree with this sentiment. Also, if Tom's is claiming the 980 is getting its power efficiency through clever power management alone, that doesn't explain why it still uses less power under full constant loads.
We don't have any previous benchmarks with which to compare. But some are going to go ahead and claim this is something new and potentially dangerous.
I suspect that if Tom went through previous generations using the same measuring interval, we would see something similar in every GPU. But the average-over-time TDP would be much higher. And that, ultimately, is the point of measuring power consumption.
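A quick synthetic example of why the measuring interval matters: the very same power trace shows a large spike at microsecond resolution, while the one-second average, which is what TDP speaks to, is barely moved. The numbers below are invented, not Tom's data.

```python
# Synthetic power trace: a steady baseline with a brief spike every
# millisecond. One second of 1-microsecond samples. Values are made up.
baseline_w = 150.0
spike_w = 400.0

samples = []
for _ in range(1000):                      # 1000 milliseconds
    samples.extend([spike_w] * 2)          # 2 us spike...
    samples.extend([baseline_w] * 998)     # ...then 998 us at baseline

peak = max(samples)                 # what a fast oscilloscope reports
avg = sum(samples) / len(samples)   # what a 1-second TDP average reports
print(f"microsecond peak: {peak:.0f} W, 1-second average: {avg:.1f} W")
```

The microsecond view screams 400 W while the one-second average sits near the baseline: both readings are correct, they just answer different questions.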
I agree with this sentiment. Also, if Tom's is claiming the 980 is getting its power efficiency through clever power management alone, that doesn't explain why it still uses less power under full constant loads.
Both vendors (nVidia and AMD) have been throttling FurMark long before the 750 Ti arrived. This is nothing new or exciting.
If it's FurMark, there's driver detection of the benchmark; this was implemented on the 750 Ti, since not capping it would have resulted in the destruction of the PCIe power layout if anyone ever tried to do some intensive computing.
To be fair, this is a mid-range Maxwell clearly aimed at gaming, so to say it has no improvement in perf/W for HPC work isn't a revelation. For that, we have to see big Maxwell in action.
Both vendors (nVidia and AMD) have been throttling Furmark long before the 750 Ti arrived. This is nothing new or exciting.
Power-gating is an effective approach for reducing both dynamic and static power dissipation in power management and test scheduling. This paper formulates the power-gating spike problem, derives a reduced power dissipation model as a heuristic, proposes a vector control technique for post-gating circuits, and develops a sleep-transistor allocation scheme for reducing the power-on/off current spikes of pre-gating systems. From experimental results, a justified controlling vector can reduce the on/off peak power by up to 55%. For a pre-gating system, more than 83% of the power-gating spike can be reduced. Our preliminary HSPICE simulations so far show that this heuristic reduces the supply-gating current spike.
The MOSFETs, which are actually the parts that are subject to blow, aren't being measured. Do you know how a 12-phase CPU VRM circuit works? Each phase is run way above its rated power for a fraction of a millisecond and then turns off, at which point the next phase turns on.
And if they're doing it right, they're throttling at the hardware level based on the TDP alone, which is what all these tests point to for all the modern architectures (Kepler, Maxwell, and GCN 1.1+).
Both vendors (nVidia and AMD) have been throttling FurMark long before the 750 Ti arrived. This is nothing new or exciting.
Meanwhile, other sites show them being similar. So, who's right? Or is there just that much variation?
Another thing TH testing touched upon is the idle consumption of the 980 vs. the 970; it's curious that even in the down-clocked state the 970 managed to consume almost twice as much. This means that rather than being this amazing value proposition, you get exactly what you are paying for: a rather high voltage bin.
Meanwhile, other sites show them being similar. So, who's right? Or, is there just that much variation?
You're sure about that?
Meanwhile, other sites show them being similar. So, who's right? Or, is there just that much variation?