Why are modern videocards so power hungry?


Throckmorton

Lifer
Aug 23, 2007
All my recent reviews use card-only power measurements (measured at the graphics card itself, not at the wall).

My numbers are always actual measurements, not quoted TDPs.

Check my 9800 GT power numbers from a recent review. 9800 GT == 8800 GT.


Found it

[Attached charts: average and peak power consumption (power_average.gif, power_peak.gif)]
 

tannat

Member
Jun 5, 2010
In the thread about bang-for-the-buck cards, someone said that for 1920x1080 and up the GTX470 is the best. Since I run 1920x1200, I started thinking about getting one of those instead of a GTX460. But then I checked the power usage. Why is it so much higher than the 460?

It's based on a different chip which is extremely power-hungry compared to other modern GPUs. The GTX470 is in any case much less power-hungry than its sibling, the GTX480.
 

ChrisAttebery

Member
Nov 10, 2003
According to AT's review, the GTX 580 puts out 16% more frames while using 8% less power (Crysis). That works out to about a 26% gain in perf per watt.
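As a quick sanity check on that 26% figure, here's a minimal sketch in Python, taking the review's 16% and 8% numbers as given:

```python
# Relative perf/W of the GTX 580 vs. the GTX 480, per AT's Crysis figures:
# 16% more frames at 8% less power.
perf_ratio = 1.16    # relative performance
power_ratio = 0.92   # relative power draw

gain = perf_ratio / power_ratio - 1.0
print(f"perf/W gain: {gain:.1%}")  # ~26.1%
```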

I think the manufacturers have found the peak power that video cards will be "allowed" by consumers, just as CPUs found their peak back in the P4 days.

As processes get smaller and computers get more mobile, perf per watt will become king, not all-out performance.

Is there some physical reason that videocards can't become more powerful without requiring more and more power, or is it just a cost tradeoff?
 

Idontcare

Elite Member
Oct 10, 1999
Beefier IMC too, right? Or does HT account for all of the extra ~35W?

The IMC might be all of 2-3W of the difference; it's really overrated how much of an impact it can make. But HT, yes: it basically works to ensure those cores are truly loaded when under load, and power consumption goes up because of the extra computations being done.
 

Ben90

Platinum Member
Jun 14, 2009
The IMC might be all of 2-3W of the difference; it's really overrated how much of an impact it can make. But HT, yes: it basically works to ensure those cores are truly loaded when under load, and power consumption goes up because of the extra computations being done.

What's interesting is that HTT can actually lower power consumption when the pipeline is already fully utilized, such as in Linpack.
 

Blitzvogel

Platinum Member
Oct 17, 2010
Power usage per unit of capability has fallen significantly. For example:

Radeon HD 2900 XT - 475 GFLOPS - 215W TDP (512 bit GDDR4)
Radeon HD 3870 - 496 GFLOPS - 106W TDP (256 bit GDDR4)
Radeon HD 4670 - 480 GFLOPS - 59W TDP (128 bit DDR3)
Radeon HD 5570 - 520 GFLOPS - 39W TDP (128 bit DDR3)
Radeon HD 5670 - 620 GFLOPS - 64W TDP (128 bit GDDR5)

In comparison, for the high-end AMD chips:

Radeon HD 2900 XT - 475 GFLOPS - 215W TDP (512 bit GDDR4)
Radeon HD 3870 - 496 GFLOPS - 106W TDP (256 bit GDDR4)
Radeon HD 4870 - 1200 GFLOPS - 150W TDP (256 bit GDDR5)
Radeon HD 5870 - 2720 GFLOPS - 188W TDP (256 bit GDDR5)

Oh, and the GTX 460 uses GF104, not the original GF100 found in the GTX 480 and GTX 470 (the 470 being a cut-down GF100 with some shaders disabled). That's a big reason why the GTX 460 1 GB is faster than the GTX 465, which is a GF100 with even more shaders disabled than the 470. GF104 doesn't suffer from the same power leakage, so it can be clocked much higher; even with fewer shaders it beats the 465 while consuming much less power, because even with SM arrays disabled you still get leakage into those areas. The GTX 580 uses GF110, essentially a reworked GF100 built on a more mature, more reliable process that has the kinks worked out.
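To put that trend in one number, here's a minimal sketch computing GFLOPS per watt from the figures quoted above (the listed specs taken as given, not independently verified):

```python
# (GFLOPS, TDP in watts) as listed above; GFLOPS/W shows the efficiency trend.
cards = [
    ("Radeon HD 2900 XT", 475, 215),
    ("Radeon HD 3870",    496, 106),
    ("Radeon HD 4670",    480,  59),
    ("Radeon HD 5570",    520,  39),
    ("Radeon HD 5670",    620,  64),
    ("Radeon HD 4870",   1200, 150),
    ("Radeon HD 5870",   2720, 188),
]
for name, gflops, tdp in cards:
    print(f"{name}: {gflops / tdp:5.1f} GFLOPS/W")
```

By these numbers, efficiency improved roughly sixfold in two generations: about 2.2 GFLOPS/W for the 2900 XT versus about 13.3 for the 5570 and 14.5 for the 5870.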
 

Throckmorton

Lifer
Aug 23, 2007

Interesting. No wonder the 470 and 480 are power hogs
 

Idontcare

Elite Member
Oct 10, 1999
What's interesting is that HTT can actually lower power consumption when the pipeline is already fully utilized, such as in Linpack.

Yeah, that's because in that app the effect of HTT is basically the same as halting the cores every so many milliseconds (a silly version of throttling, if you will). The decrease in performance in that particular app with HTT enabled shows this.