
Are the power requirements for video cards ever going to fall?

AmberClad

Diamond Member
Basically, are we ever going to see the video card version of a Conroe from Nvidia (or Ati)? Something that outperforms the previous generation, and on less power. What would that require?
 
If you believe the AMD video interview posted a week ago, they claim that R700 will offer significant performance increases while decreasing the power draw / heat levels.
 
Originally posted by: yacoub
If you believe the AMD video interview posted a week ago, they claim that R700 will offer significant performance increases while decreasing the power draw / heat levels.

So I guess the R600 cards were just a "warm-up" for when they start designing the R700 huh :laugh: ? But all jokes aside, if AMD makes good on that claim, I'm all for it (at least, if I can get one that isn't bright red w/flames on the heatsink...).
 
I'd say it would be difficult to both raise performance and lower power consumption, at least on the high-end cards. The more pixels/effects a card has to render, the more transistors and memory it requires. And the more transistors/memory you have, the more power you need.

A die shrink can reduce power draw, but not enough to bring overall levels down to that of the previous generation flagship card.
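To put rough numbers on that reasoning, here's a back-of-the-envelope sketch using the classic CMOS dynamic-power model (P ≈ C·V²·f). Every scaling factor below is an illustrative assumption, not a measured spec for any real card:

```python
# Back-of-the-envelope dynamic-power model: P ~ C * V^2 * f.
# All scaling factors below are illustrative assumptions, not measured values.

def dynamic_power(capacitance, voltage, frequency):
    """Classic CMOS dynamic-power estimate (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

# Previous-generation flagship, normalized to 1.0 in every dimension.
old_flagship = dynamic_power(capacitance=1.0, voltage=1.0, frequency=1.0)

# Hypothetical next-gen flagship on a full-node shrink:
new_flagship = dynamic_power(
    capacitance=0.5 * 2.0,  # ~half the capacitance per transistor, but ~2x transistors
    voltage=0.85,           # voltage scales down, though less than ideal Dennard scaling
    frequency=1.5,          # clocks pushed up for the generational performance jump
)

print(new_flagship / old_flagship)  # ~1.08: power rises despite the shrink
```

The point is that the shrink's per-transistor savings get spent (and then some) on the doubled transistor count and higher clocks, which is why flagship power keeps climbing.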
 
Originally posted by: AmberClad
Basically, are we ever going to see the video card version of a Conroe from Nvidia (or Ati)? Something that outperforms the previous generation, and on less power. What would that require?

That typically is hard to achieve when so much of the transistor budget is core logic. When you're dealing with 1.5x - 2.0x performance jumps each generation, power jumps by that much as well, all things held equal.

A shrink without additional logic and holding the performance factor constant would indeed yield power reductions.

The general trend, though, is up, as enthusiast video cards need to expand their power envelopes to keep up with these performance increases.
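The "shrink without additional logic" case really does cut power, as the post says, and a rough sketch with the standard dynamic-power model P ≈ C·V²·f shows why (all factors below are assumptions for illustration):

```python
# Dynamic power P ~ C * V^2 * f; scaling factors are illustrative assumptions.

def dynamic_power(capacitance, voltage, frequency):
    """Classic CMOS dynamic-power estimate (arbitrary units)."""
    return capacitance * voltage ** 2 * frequency

baseline = dynamic_power(capacitance=1.0, voltage=1.0, frequency=1.0)

# Same chip ported to the smaller node: less capacitance, lower voltage,
# same logic and same clocks -> performance held constant.
pure_shrink = dynamic_power(capacitance=0.6, voltage=0.85, frequency=1.0)

print(pure_shrink / baseline)  # well under 1.0: a real power reduction
```

That's why a mid-cycle respin on a smaller node (the 7800 GTX to 7950 GT sort of move) can drop TDP, while a new flagship that spends the shrink on more logic and clocks cannot.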
 
Yes, they are going to fall, at least in the near future. It's not like a new card with lower power consumption and significantly higher performance has never happened before... the X800XT consumed less/equal power to the 9800XT and was much, much faster. The 7800GTX was around the speed of 2 6800 Ultras and I believe it used around the same power (or a little bit less) as well.

The next-gen AMD card will be 65nm, which will enable AMD to improve performance AND significantly reduce power consumption.
 
Originally posted by: Extelleron
Yes, they are going to fall, at least in the near future. It's not like a new card with lower power consumption and significantly higher performance has never happened before... the X800XT consumed less/equal power to the 9800XT and was much, much faster. The 7800GTX was around the speed of 2 6800 Ultras and I believe it used around the same power (or a little bit less) as well.

The next-gen AMD card will be 65nm, which will enable AMD to improve performance AND significantly reduce power consumption.

The X800 XT PE used all of its process advantage to increase performance; its power envelope and the 9800 XT's are basically equal at ~60 watts.

It's the same with the 6800 Ultra vs the 7800 GTX: the power envelope is roughly maintained in exchange for higher performance.

But the trend for the high end has always been to either maintain or increase the TDP envelope with higher performance. It is rare to see any significant reduction in flagship TDPs.

Obviously, if you have balanced cards you can have a bit of both: 9800 XT to X800 Pro represents a drop in TDP, as do 6800 Ultra to 7800 GT and 7800 GTX to 7950 GT.

It depends on how hard you want to push the performance envelope or whether you want to decrease power consumption. A 65nm R650, if not pushed too hard, will likely see some reduction in TDP.
 
So why are the video card companies behind the processor companies in terms of adopting die shrinks? Isn't Intel already talking about moving to a 45 nm process? Why can't Intel adapt that technology to video cards instead of pumping resources into developing integrated chipsets?

Thanks for the replies so far.
 
Originally posted by: AmberClad
So why are the video card companies behind the processor companies in terms of adopting die shrinks? Isn't Intel already talking about moving to a 45 nm process? Why can't Intel adapt that technology to video cards instead of pumping resources into developing integrated chipsets?

Thanks for the replies so far.

Intel and AMD both have their own fab facilities.

Both AMD and Nvidia outsource graphics production to TSMC and UMC, both of which simply can't afford "bleeding edge" process technology in their fabs.
 
I think power consumption will decrease once Intel releases Larrabee. Its architecture is completely different from what's out there now, and it's supposed to be 65nm, right?

Also, now that AMD has taken over ATI, power consumption should go down (since anything else would run against AMD's performance-per-watt initiative).
 