GeForce GTX 680 Classified power consumption with overvoltage revealed, shocking.


tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
HD7970 @ 1165 mhz trades blows with GTX680 @ 1290mhz (Xbitlabs review).

HD7970 @ 1150mhz 1.175V consumes 237W. OK, now find evidence that a GTX680 @ 1290mhz consumes less than 200W of power after I showed you that the Asus GTX670 TOP OCed consumes 201W of power. :D

Not to change gears here, but when GK104 first came out, everyone was abuzz over how great Nvidia's perf per mm^2 managed to be: a smaller die and a faster card than the vanilla HD7970. But now that overclocking numbers have been firmly established, and the HD7970 GE is out, it's apparent that GK104 is at a wall due to its memory bandwidth. Refreshing this card for higher performance is going to be very, very difficult, and will almost certainly not net much of a performance increase unless Nvidia bites the bullet and goes with 7GHz VRAM.
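To put rough numbers on that bandwidth wall, here is a quick back-of-the-envelope sketch; the 256-bit bus and 6008MHz effective memory clock are the GTX680's published specs, and the 7GHz figure is just the hypothetical bump mentioned above:

```python
# Peak GDDR5 bandwidth: bus width (bits) / 8 bytes per transfer * effective data rate (MHz).
def mem_bandwidth_gbs(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in GB/s (decimal GB, matching spec sheets)."""
    return bus_width_bits / 8 * effective_mhz / 1000

stock = mem_bandwidth_gbs(256, 6008)  # ~192 GB/s, the published GTX680 figure
fast  = mem_bandwidth_gbs(256, 7000)  # ~224 GB/s with hypothetical 7GHz VRAM
print(f"stock: {stock:.1f} GB/s, 7GHz: {fast:.1f} GB/s (+{(fast / stock - 1) * 100:.0f}%)")
```

So even the 7GHz option only buys roughly 16-17% more bandwidth on the same 256-bit bus.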

I guess if worst comes to worst, GK104 can simply be refreshed to consume slightly less power (due to process maturity), and GK110 can fill enough slots to make the tiers of performance in Nvidia's GTX 700 lineup make sense.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126

Yes, exactly. But be careful when inferring power consumption from this review and comparing it directly to the other review I linked. According to them, the HD7970 at 1.175V and 1165mhz is drawing 280W of power, while their GTX680 OC is drawing 220W at load, even though it's only rated at 195W TDP (typical board power is closer to 170W). This is because the power consumption they are quoting in that review is measured at the wall, without applying a power supply efficiency correction:

"We measured the power consumption of computer systems with different graphics cards using a multifunctional panel Zalman ZM-MFC3 which can report how much power a computer (the monitor not included) draws from a wall socket."

The PSU they are using has an efficiency factor of around 0.85 (85%). That means their GTX680 OC is really drawing 187W and their MSI HD7970 @ 1165mhz is using 238W. So at Xbitlabs it looks like the difference is 51W of actual GPU usage.
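As a minimal sketch of that correction (the 0.85 efficiency is my rough estimate for their PSU, not a measured value):

```python
# Estimate component-side (DC) draw from a wall-socket reading by removing PSU losses.
PSU_EFFICIENCY = 0.85  # assumed efficiency of the review rig's PSU

def dc_draw(wall_watts, efficiency=PSU_EFFICIENCY):
    """Approximate power actually delivered to the components, in watts."""
    return wall_watts * efficiency

gtx680_oc = dc_draw(220)  # ~187W
hd7970_oc = dc_draw(280)  # ~238W
print(f"GTX680 OC: {gtx680_oc:.0f}W, HD7970 @ 1165mhz: {hd7970_oc:.0f}W, "
      f"difference: {hd7970_oc - gtx680_oc:.0f}W")
```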

The numbers I quoted as a reference earlier were from the review where GTX670 TOP OC drew 201W, GTX680 stock 169W and HD7970 Asus at 1150mhz drew 237W.

Refreshing this card for higher performance is going to be very, very difficult, and will almost certainly not net much of a performance increase unless Nvidia bites the bullet and goes with 7GHz VRAM.

Their die size is only 294mm^2 though. NV has made 320-/384-bit SKUs in the past. I don't think there is a technical limitation as to why they cannot make a card with > 2000 SPs and a wider memory bus. When they can sell a sub-300mm^2 chip for $400-500, why bother ;)? I think it's AMD that's got a monumental task for the HD8000 series. Kepler is a lean and mean gaming machine. They have 60-70mm^2 of die space to play with before getting to Tahiti XT's size, and that's not even getting into the fact that NV has made 500mm^2 chips before. NV has room to improve performance (a 40-50% faster chip), but it might take 220-230W of power draw to do it.
 
Last edited:

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Based on X-bit's findings it is around 72W -- which some may define as a significant difference.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Based on X-bit's findings it is around 72W -- which some may define as a significant difference.

I just explained in detail above why it's not 72W. When comparing GPU power consumption, you should isolate the PSU efficiency factor when looking at wall power consumption to get actual GPU power use. The actual GPU power consumption difference is 51W. The other reviews found 37-40W, which is pretty consistent with these findings.

Either way, you are so stuck on this 40-50W power consumption that you aren't even taking into consideration that in the context of the overall system, it's more like 320W vs. 370W (which is not significant for anyone who is an enthusiast gamer spending $400+ on GPUs). Also, the HD7970 costs significantly less, comes with 3 free games that can be resold, and can make money on the side. Those things are somehow not mentioned. To each his own I guess. :biggrin:

I am just saying it's funny that such a low amount of power became 'significant' on an enthusiast forum, with people running overclocked CPUs and spending $400-500 on GPUs. But if you think it's better to spend $60-80+ more on a card with similar performance to save 50W of power, by all means.
 
Last edited:

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
I guess no one has kids and uses a laundry dryer (1800-5000W) 2-3x a week or no one has a wife who uses a blow dryer every morning (1200–1875W)?

Only 56W of power separates GTX670/680/HD6970/7970/7970GE/GTX560Ti/GTX570/580 on a typical modern system that already uses about 310-320W of power.

In other words, there is an 18% power consumption difference between a GTX670 system and a GTX580 system (the most power hungry single-GPU in that chart). Seriously, that's a big deal in 2012? BTW, there is only a 35-40W difference between the 7970 GE (with its 1.256V overvolt BIOS that we already discussed as irrelevant) and a stock GTX680. :D
Not to drag this too far off topic, but since you mentioned a clothes dryer...

If my computer were set up to exhaust hot air outside my house like my dryer is, that would be wonderful. But until then a high end computer is indistinguishable from a space heater, so trying to find a good balance between performance and heat/noise is very much a necessity if I don't want to have the noise and heat of a clothes dryer 2ft from me.:p

50W of power consumption is 50W less heat being created, 50W less heat needing to be moved, and at times 50W+ less A/C needing to be applied to balance out the heat. It does make a difference, which is why overvolting a video card for a few more MHz has always struck me as not being worth the trouble.
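Just to put that 50W in perspective (a rough sketch; the 4-hour session and the A/C's COP of ~3 are assumptions, not measurements):

```python
# Rough sketch of what 50W of extra GPU draw means as room heat over a gaming session.
EXTRA_WATTS = 50          # the difference discussed above
HOURS_PER_SESSION = 4     # assumed gaming session length
AC_COP = 3.0              # assumed air-conditioner coefficient of performance

heat_kwh = EXTRA_WATTS * HOURS_PER_SESSION / 1000  # all of it ends up as heat in the room
ac_kwh = heat_kwh / AC_COP                         # extra electricity the A/C needs to pump it out
print(f"extra heat: {heat_kwh:.2f} kWh per session, extra A/C energy: {ac_kwh:.3f} kWh")
```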
 
Last edited:

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Not to drag this too far off topic, but since you mentioned a clothes dryer...

If my computer were set up to exhaust hot air outside my house like my dryer is, that would be wonderful. But until then a high end computer is indistinguishable from a space heater, so trying to find a good balance between performance and heat/noise is very much a necessity if I don't want to have the noise and heat of a clothes dryer 2ft from me.:p

50W of power consumption is 50W less heat being created, 50W less heat needing to be moved, and at times 50W+ less A/C needing to be applied to balance out the heat. It does make a difference, which is why overvolting a video card for a few more MHz has always struck me as not being worth the trouble.

I agree with this entirely. I used to use a GTX480, which used to make the system pull ~350W at full load. Now that I've replaced it with a GTX680, that figure has dropped to 260W max. Although not scientific, the room the PC is located in gets a lot less warm due to less heat being dumped into it.

It's a significant enough difference that I actually use a standalone heater now, whereas before I would just use my PC (with the GTX480) as a space heater during winter :D
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I just explained in detail above why it's not 72W. When comparing GPU power consumption, you should isolate the PSU efficiency factor when looking at wall power consumption to get actual GPU power use. The actual GPU power consumption difference is 51W. The other reviews found 37-40W, which is pretty consistent with these findings.

Either way, you are so stuck on this 40-50W power consumption that you aren't even taking into consideration that in the context of the overall system, it's more like 320W vs. 370W (which is not significant for anyone who is an enthusiast gamer spending $400+ on GPUs). Also, the HD7970 costs significantly less, comes with 3 free games that can be resold, and can make money on the side. Those things are somehow not mentioned. To each his own I guess. :biggrin:

I am just saying it's funny that such a low amount of power became 'significant' on an enthusiast forum, with people running overclocked CPUs and spending $400-500 on GPUs. But if you think it's better to spend $60-80+ more on a card with similar performance to save 50W of power, by all means.

So you claim.

For me, power efficiency on an enthusiast platform is closer to the bottom of the importance totem pole. But even a 50 watt difference on the same node is significant to some, and not low.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Their die size is only 294mm^2 though. NV has made 320-/384-bit SKUs in the past. I don't think there is a technical limitation as to why they cannot make a card with > 2000 SPs and a wider memory bus. When they can sell a sub-300mm^2 chip for $400-500, why bother ;)? I think it's AMD that's got a monumental task for the HD8000 series. Kepler is a lean and mean gaming machine. They have 60-70mm^2 of die space to play with before getting to Tahiti XT's size, and that's not even getting into the fact that NV has made 500mm^2 chips before. NV has room to improve performance (a 40-50% faster chip), but it might take 220-230W of power draw to do it.

I do not believe that adding an additional memory controller to a chip is trivial. I know of no existing architecture or chip where that kind of overhaul has taken place. It sounds like an easy way to get 20% more performance out of GK104 while adding only a scant few more mm^2 (memory controllers take up a relatively small amount of die space on a 300mm^2 or bigger chip), but based on the block diagram of GK104 (here: http://www.techpowerup.com/img/12-03-16/180e.jpg ) it looks like Nvidia would have to do some reorganization and re-engineering, enough so that it would not be a small undertaking for a chip refresh.

Not to say they can't or won't do it, but I would be highly, highly surprised. I think the most likely scenario is that we'll see a GK104 refresh with slightly higher mem clocks (6200mhz), a boost up to 1150mhz or so, and 10 watts less power consumption. A less likely scenario is that they go with 7000mhz vram, and boost up to 1150-1200mhz. The least likely scenario is re-engineering GK104 with a 320-bit bus.

I think we'll see GK114 with +5-8% performance increase over the gtx680, with GK110 filling 3 slots above it with 40-50%, 25-30%, and 10-15% faster than GK114.
 
Last edited:

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
So you claim.

For me, power efficiency on an enthusiast platform is closer to the bottom of the importance totem pole. But even a 50 watt difference on the same node is significant to some, and not low.
I think you mean significant to your argument because it makes Nvidia look better in your eyes. There is no free lunch here: AMD has more memory on a wider bus and has a significant advantage in compute loads. Those factors chew up the transistor budget and translate directly into higher power consumption.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
But if you think it's better to spend $60-80+ more on a card with similar performance to save 50W of power, by all means.

For the record: I did purchase an MSI Power Edition GTX 670! It's nice to have good power efficiency, but it wasn't at the top of the importance totem pole. I purchased an nVidia product based on their pro-active nature, specifically their efforts to improve immersion and the gaming experience.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I think you mean significant to your argument because it makes Nvidia look better in your eyes. There is no free lunch here: AMD has more memory on a wider bus and has a significant advantage in compute loads. Those factors chew up the transistor budget and translate directly into higher power consumption.

I don't offer more watts -- AMD did, while offering less performance with the HD 7970. nVidia improved efficiency, and based on AMD's engineering prowess, one may imagine they will as well.

Edit: minor corrections.
 
Last edited:

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Yes. A 1100Mhz base clock would be as high as most GTX 680s boost to today. So a card with a 1100MHz base would be faster than today's GTX 680s, not 15% slower.

My bad. I've read somewhere that review samples were boosting past 1250MHz.

@RussianSensation

That pie chart was compiled pre-GTX 460 launch from Steam data, and my guess is that it deals with volume, not value.
Margins... Nvidia was making $60 on each 550Ti sold, but $270 on GTX 580.
Also GTX 470/480 were not huge sellers, but 670/680 are topping Steam charts right now.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
I think you mean significant to your argument because it makes Nvidia look better in your eyes. There is no free lunch here: AMD has more memory on a wider bus and has a significant advantage in compute loads. Those factors chew up the transistor budget and translate directly into higher power consumption.

Like the GTX 580 in its time. I didn't want that either :)

Some of us have been consistent in wanting a gaming card that games, not something that wastes 50-100 watts to support bitcoin mining or scientific computing.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
My bad. I've read somewhere that review samples were boosting past 1250MHz.

@RussianSensation

That pie chart was compiled pre-GTX 460 launch from Steam data, and my guess is that it deals with volume, not value.
Margins... Nvidia was making $60 on each 550Ti sold, but $270 on GTX 580.
Also GTX 470/480 were not huge sellers, but 670/680 are topping Steam charts right now.

If by topping Steam charts you mean accounting for 1% or less of total users, then sure, but otherwise you're completely wrong! This is consistent with what RS stated and he's correct: very few people buy $299+ GPUs, and the 680 definitely isn't topping any Steam chart. It has definitely sold well since release for a discrete card, but most users do not buy cards at that price point. The chart toppers would be the 560 Ti, HD 4000/3000, HD 5770, GT 540, GTX 460, etc., i.e. the cheap crap.

Yes, the 670/680 have sold well--and they are great cards don't get me wrong--but I don't know if your last statement was a typo or what.
 
Last edited:

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
For the record: I did purchase an MSI Power Edition GTX 670! It's nice to have good power efficiency, but it wasn't at the top of the importance totem pole. I purchased an nVidia product based on their pro-active nature, specifically their efforts to improve immersion and the gaming experience.

I wonder if nvidia is sticking with efficient designs or if they'll go with the big die strategy in the future? Should be interesting to see how things pan out. GK110 should be a great piece of hardware if they stick to their old methodology, I hope they don't make a cut down consumer version. Give me big, give me performance :D
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Like the GTX 580 in its time. I didn't want that either :)

Some of us have been consistent in wanting a gaming card that games, not something that wastes 50-100 watts to support bitcoin mining or scientific computing.

I don't consider GPU processing a waste at all; it's very important moving forward. nVidia may have an efficiency advantage, but AMD certainly offers a compelling choice as well. Personally, I would like to see much stronger compute abilities in nVidia's enthusiast class products.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I wonder if nvidia is sticking with efficient designs or if they'll go with the big die strategy in the future?

It would be a sad day for gamers' choice to see the monolith designs only be used for the professional brand names, imho!
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
I don't consider GPU processing a waste at all; it's very important moving forward. nVidia may have an efficiency advantage, but AMD certainly offers a compelling choice as well. Personally, I would like to see much stronger compute abilities in nVidia's enthusiast class products.
If you considered compute very important, you would be running an AMD card in your system, or an older generation Nvidia GPU, no?
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
I think it's clear that nVIDIA is drawing a more clear-cut line between its gaming-focused cards (GK104) and its compute-focused cards (GK110). And I think they are right to go ahead and do this, because most of us won't really take advantage of the compute features being there. Whereas research groups and people who really need fast hardware for their intensive algorithms will benefit, since the compute cards will now be specifically focused on compute (and vice versa for gamers, although GK110 would be pretty good if there were a gaming card version of it).

Bitcoin, F@H, OpenCL-accelerated this and that, all the stuff that everyday users consider "compute", isn't really "compute" imho, since it barely uses the resources available at hand (it'd be like driving a fast car to grab your groceries 2 blocks down). I can tell you from first-hand experience that it's very challenging to take advantage of the available hardware when programming general-purpose tasks, because the tools, documentation and the interface between the software and hardware are scarce/rudimentary.

It's a lot better now (from what I've heard from colleagues), but this only applies to nVIDIA, thanks to their huge investment into CUDA and the support being there. They have their platform, that interface between the hardware and software, which makes life for programmers that much easier. Not only that, but it's caught on at universities globally. AMD on the other hand... I don't know why they have spent so much transistor budget on compute capabilities when they lack all of the above. Sure, it looks good on paper, but I've yet to see anything take advantage of it. It's a shame; they could have carried on with what they were doing best.

There are always trade-offs, and I like nVIDIA's approach (separate game-focused and compute-focused products) better, since I'd rather have a product that's a master at something than a jack of all trades while being mad with power.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
As soon as nVidia can make enough BigGK's to supply the consumer market, they will release a GeForce card based on it, IMO. Here's the reason I say that. When making bulk purchases, the cost/price is not a gradual linear reduction with volume. Up to X units the cost is Y. One more unit beyond that number and the price drops to Y minus Z%. That reduction is applied as a rebate across the board, so you get the savings on all of your prior purchases as well. These breakpoints are determined not so much as a reward for buying more as a penalty for not buying enough. The supplier wants you to reach those rebates: they have planned, budgeted, purchased materials, hired staff, etc. based on those volumes.

The company I work for had an extreme situation like this occur a couple of years ago. It was nearing the end of the quarter and we were ~475 units short of meeting the rebate point with the supplier. These units cost ~$600 each. The rebate for hitting that level was over $1 mil., though, so we actually came out ~$700K ahead by buying them. We needed to buy them to get our costs where they belonged, and the supplier needed to sell them because they had purchased materials, allocated factory time and had employees who needed the work. The rep for the supplier told me that had we not bought them, they would have had to go to a 4-day work week at one of the factories for a period of time, or lay off workers, because of it. It's a lot more complicated than, "If you buy more they get cheaper."
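To make the breakpoint math concrete (a toy sketch using the round numbers from the story above; the exact rebate amount was only given as "over $1 mil."):

```python
# Toy illustration of a volume-rebate breakpoint: the last few units cost money up front
# but unlock a rebate worth far more than they cost.
units_short = 475
unit_cost = 600            # ~$600 each
rebate = 1_000_000         # "over $1 mil." once the breakpoint is hit

extra_spend = units_short * unit_cost   # $285,000 to reach the breakpoint
net_benefit = rebate - extra_spend      # ~$715,000, i.e. roughly the "~$700K" in the post
print(f"extra spend: ${extra_spend:,}, rebate: ${rebate:,}, net benefit: ${net_benefit:,}")
```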
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
If you considered compute very important, you would be running an AMD card in your system, or an older generation Nvidia GPU, no?

It may depend on what aspects are important to each individual. I investigated GPU physics performance first, based on how strongly I feel about the potential of GPU physics, and was underwhelmed by the performance considering it's a new generation: it offered more performance, but in a more evolutionary, incremental way.

What exactly does AMD offer here?

Then there are games that are compute heavy, and in those games the GTX 600 series competed very well.

Personally, I wouldn't buy a gaming card based on double precision, but GPU processing is still very important to me for DirectCompute and CUDA, which may encompass ambient occlusion features, cinematic features, physics and more dynamics. GPU compute is important to me now and moving forward.
 

MrMuppet

Senior member
Jun 26, 2012
474
0
0
It might barely fit, but the top card will have almost no room to get fresh air. Look how massive the TTII cooler is! If you are going for a quiet design, might as well spend extra on the Asus Direct CUII 670 with premium components too. How much is the DCUII vs. the Palit vs. the Windforce?
(Converting to USD would be pointless because of different pricing, deals and a 25% VAT.)

The Palit 670 was 3140. Add a Twin Turbo II for another 329 and you're at 3469.

The 7970 Dual-X was 3290 (then add 100-200 for the cost of return shipping for the Palit).

A Gigabyte 670 Windforce would be 3410 after rebate (same), but PCI-E 2.0 x8 SLI would hamper performance (if PCI-E 3.0 indeed doesn't work).

The cheapest ASUS 670 DCUII I could find was over 3800 (same again), then I'd rather go with any of the above tbh (including the 7970 DX). I'm not even sure how much I'll actually game (so far I've only played for about an hour (maybe two) on this system and that was mostly my friend wanting to try it...).



However, it appears a certain person may have made a blunder and inadvertently brought my receipt for the 7970 Dual-X with him on vacation, so I may not even be able to return it in time, forcing the 7970 DX on me.

The 7970 is not a bad card and appears better all-round (at least with overclocking), so it may offer a higher future resale value. However, there would be no SLI in the future, and I'm worried I may have to settle for Very High in BF3 and Skyrim.
 
Last edited: