How is TDP measured by a video card's drivers?

alcoholbob

Diamond Member
May 24, 2005
6,369
437
126
I noticed in the Kraken G10 review that water cooling can reduce power consumption (presumably it's more efficient than a blower fan), and it cut the stock 290X's power consumption by 44W.

Does this mean that in theory the card would have 44W more overclocking headroom? (i.e., the sensors limit TDP based on PCI-E power draw measurements). Or does this have nothing to do with how much TDP headroom a card can use for overclocking? (i.e., the drivers only sense how much power the GPU core itself is pulling, so the fan could draw 60W for all they care and it wouldn't affect TDP throttling).
 

Gloomy

Golden Member
Oct 12, 2010
1,469
21
81
The VRMs know how much they're drawing, and how much they're outputting.

I assume Nvidia and AMD also employ some predictive algorithms, based on chip utilization ("if X load utilizes Y amount of resources, bad times, please throttle"... or something).
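To make the predictive idea concrete, here's a toy sketch (an assumption about how such a scheme could work, not a documented AMD or NVIDIA algorithm): estimate power from which functional units a workload lights up, using hypothetical per-unit wattage costs, and throttle when the estimate exceeds the budget rather than waiting on a physical sensor.

```python
# Hypothetical per-unit activity costs in watts at full utilization.
# All names and numbers here are made up for illustration.
UNIT_COST_W = {"shaders": 150.0, "memory": 60.0, "rops": 40.0}
POWER_BUDGET_W = 220.0

def estimated_power(utilization):
    """utilization: dict mapping unit name -> 0.0..1.0 activity factor."""
    return sum(UNIT_COST_W[u] * utilization.get(u, 0.0) for u in UNIT_COST_W)

def should_throttle(utilization):
    # Throttle purely on the model's estimate, no sensor involved.
    return estimated_power(utilization) > POWER_BUDGET_W

heavy = {"shaders": 1.0, "memory": 0.9, "rops": 0.8}   # estimates to 236 W
light = {"shaders": 0.5, "memory": 0.4, "rops": 0.2}   # estimates to 107 W
print(should_throttle(heavy), should_throttle(light))  # → True False
```

The appeal of this approach is speed: an activity-based estimate updates every cycle, while a physical sensor reading is comparatively slow.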
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Gloomy said:
"The VRMs know how much they're drawing, and how much they're outputting. I assume Nvidia and AMD also employ some predictive algorithms, based on chip utilization."
NV doesn't even try to guess. They just measure the power usage directly. The power controllers they use are tapped into the 12V lines, know exactly how much power is flowing across them, and can trigger a downclock as necessary to bring power consumption back down.
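A measured-power limiter like the one described boils down to a simple control loop. The sketch below is a toy illustration, not NVIDIA's actual firmware; the rail-power reading is simulated with a placeholder function, whereas on real hardware the power controller on the 12V inputs would supply it.

```python
# Toy power limiter: read measured board power each tick, step the clock
# down when over the limit, step back up when there's clear headroom.
POWER_LIMIT_W = 240
CLOCK_STEPS_MHZ = [700, 800, 900, 1000]  # available clock states

def simulated_rail_power(clock_mhz):
    # Crude stand-in for a real 12V-rail power sensor reading.
    return 0.25 * clock_mhz

def next_clock(current_mhz, measured_w):
    i = CLOCK_STEPS_MHZ.index(current_mhz)
    if measured_w > POWER_LIMIT_W and i > 0:
        return CLOCK_STEPS_MHZ[i - 1]      # over limit: downclock
    if measured_w < POWER_LIMIT_W * 0.9 and i < len(CLOCK_STEPS_MHZ) - 1:
        return CLOCK_STEPS_MHZ[i + 1]      # headroom: upclock
    return current_mhz

clock = 1000
for _ in range(5):
    clock = next_clock(clock, simulated_rail_power(clock))
print(clock)  # → 900 (settles one step below the 240W limit)
```

This is also why better cooling can translate into headroom on boards that limit on measured power: lower fan draw and lower leakage at lower temperatures both reduce what the controller sees on the 12V lines.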