GeForce GTX 680 Classified power consumption with overvoltage revealed, shocking.


DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
No inconsistency here, I run my i5-2500 at stock along with my 680 at stock.

I've always preferred extra stability to bragging rights. Although I'm interested in the technology for its own sake, the reason I build gaming systems is to play games.

I don't care if your 680 or 7970 gets a few more FPS than mine, or your 2500 is 800 MHz faster either. Good for you, but I want the lowest noise, heat and power draw that will get the job done of making my games run well.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Well, from the chart for the GTX580, I see now how you got that $270. Total Manufacturer Price - Total Cost = $482 - $210 = $272.

However, NV sells the $120 GPU and that's it.

Man... that's the die cost. They bought it off TSMC for $120.

Your own data supports that it's tiny. Using your numbers:

Total desktop GPUs for that quarter = 6,550 ATI + 9,520 NV = 16,070 (000s)
$300+ desktop GPUs for that quarter = 213 ATI + 578 NV = 791 (000s)

So desktop GPUs for that quarter $300+ are just 791 / 16,070 = 4.9% of the entire desktop discrete GPU market.

Since we are talking about $400-$500 GTX670/680/7970 cards, the real share is even smaller than that, because it would exclude GPUs in the $300-399 range. So really we are back to that 3.5-4%. That's rather small.


Using only Nvidia GPUs from that chart, and averaging everything out (unit counts in thousands, so the dollar totals are in thousands as well):

$0-100 (5 SKUs, avg. gross margin* of $25)
6,312 * $25 = $158,000

$100-200
2,025 * $58 = $117,000

$200-300
605 * $139 = $84,000

$300+ (the GTX 570 and 580 are close in volume according to Steam, so I'm not weighting them; simple averaging will do)
578 * $220 = $127,000

*To be exact, the margin here is the difference between the GPU selling price and the bill of materials.

You decide what amount of money goes to NV, and how much goes to AIBs.

Not that tiny anymore! Aight :)
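A quick sketch of the arithmetic above, for anyone who wants to rerun it (assuming the unit counts are in thousands, as in the chart, and using the rounded average margins from the post):

```python
# Rough re-check of the market-share and gross-margin math above.
# Unit volumes come from the chart cited in the post and are in thousands,
# so the margin totals come out in thousands of dollars as well.

total_desktop_gpus = 6550 + 9520   # ATI + NV desktop GPUs for the quarter (000s)
premium_gpus = 213 + 578           # $300+ desktop GPUs for the quarter (000s)
print(f"$300+ share of the desktop market: {premium_gpus / total_desktop_gpus:.1%}")  # ~4.9%

# price tier -> (NV units in 000s, avg. gross margin per GPU from the post)
nv_tiers = {
    "$0-100":   (6312, 25),
    "$100-200": (2025, 58),
    "$200-300": (605, 139),
    "$300+":    (578, 220),
}

for price_range, (units, margin) in nv_tiers.items():
    print(f"{price_range}: {units} * ${margin} = ${units * margin:,}k gross margin")
```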
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
234
106
No inconsistency here, I run my i5-2500 at stock along with my 680 at stock.

I've always preferred extra stability to bragging rights. Although I'm interested in the technology for its own sake, the reason I build gaming systems is to play games.

I don't care if your 680 or 7970 gets a few more FPS than mine, or your 2500 is 800 MHz faster either. Good for you, but I want the lowest noise, heat and power draw that will get the job done of making my games run well.
Too bad you can't undervolt your 680 to reduce the power draw further. I reckon you could save another 30-50W without affecting stability.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
I am going to read that article in detail, but there is a MUCH simpler calculator available.

http://bitcoinx.com/profit/

For example: an HD7970 @ 1150MHz gives a 680 MHash/sec rate; let's assume 250W of power (it's actually less, since the memory can be downclocked because it isn't used for this task).

Electricity cost: $0.15 per kWh

Revenue per month: $94.83
Cost of hardware: $0 from bitcoins generated by the previous AMD card + resale value of the old card
Less Electricity cost: $27.39
Net profit per 1 month: $67.44

Let's say you game 4 hours a day ==> $67 * 20/24 = $55. In 6 months = $330 towards your next GPU upgrade + resale value of your current AMD card. That's a free upgrade towards HD8970*, etc.
*Assuming Butterfly Labs new ASIC products don't destroy GPU mining.
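For reference, a minimal sketch of that profitability arithmetic (the $94.83 monthly revenue is the bitcoinx.com calculator's figure from the post; the ~730.5-hour month is an assumption that reproduces the $27.39 electricity number):

```python
# Rough bitcoin-mining profitability math from the post above.
# Only the monthly revenue comes from the calculator; the rest is simple arithmetic.

hours_per_month = 365 * 24 / 12   # ~730.5 h (assumption that matches the $27.39 figure)
power_kw = 0.250                  # assumed card draw while mining
rate_per_kwh = 0.15

revenue = 94.83                                            # calculator output for ~680 MHash/s
electricity = power_kw * hours_per_month * rate_per_kwh    # ~$27.39
net = revenue - electricity                                # ~$67.44

gaming_hours_per_day = 4                                   # hours not spent mining
net_while_gaming = net * (24 - gaming_hours_per_day) / 24  # ~$56/month
print(f"Electricity: ${electricity:.2f}, net: ${net:.2f}, "
      f"net when mining {24 - gaming_hours_per_day} h/day: ${net_while_gaming:.2f}")
```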

What would be the total wattage used by the system?

AMD has a tremendous advantage here overall --- another example of a compelling use for GPU processing.

The key for GPU compute, for me, is what it can do to improve the gaming experience: improve realism and immersion, enhance image quality and fidelity -- things of this nature.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
Or I could sell [image: sp_0501_04_m4.jpg] for $1.8/day and use my PC for whatever the heck I want :)

@Russian

Nvidia builds some of its GPUs itself. But that, or how the money flows between AIBs and NV, was not my point (not that I know it anyway).
What I wanted to say is that the discrete high-end market is not as tiny as usually perceived.
High-end margins are HUGE, and the lower you go, the slimmer they get.

Have a look: [chart: q2msa.gif]

I would like to see the costs involved for GK104!
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Good point. At least in my mind, the gains from overclocking a high-end CPU are far better than overclocking a high-end GPU. A 3.3GHz SNB can regularly hit 4.5GHz with a bit more voltage and better cooling. Even after factoring out Turbo, that's a huge performance increase. And yet because TDP was only 95W in the first place, power consumption hasn't necessarily hit the roof and it's still practical to move that much heat quietly.

This is as compared to high-end GPUs, where they regularly have TDPs in the 200-300W range. GPUs (specifically those that aren't 2nd tier like the 670/7950) don't overclock nearly as well as a CPU, and if you overvolt it's easy to add a lot of power consumption in the process. The gains aren't nearly as great for the extra effort incurred. I'll chase more performance, but the heat generated means I do have a limit.

Indeed. I'm only paying something like $0.07/kWh, which is incredibly cheap. So at least here the problem with power consumption isn't the cost of the electricity (or even running the A/C), but rather the overall heat and acoustics. Even though I have central A/C I still have to run a secondary A/C in my office on hot days because of my equipment. Portable A/Cs are neither quiet nor cheap, so while the latter is a sunk cost, if I can keep heat generation down and avoid listening to the A/C I'm all for it.:p

It's the difference between graphics and serial workloads. Rendering is embarrassingly parallel - add more functional units (shaders, ROPs, etc) and performance will increase in a fairly linear fashion. So every node shrink should bring with it around a 50% performance improvement for the same die/power.

This is as compared to serial workloads, which can't be distributed across additional functional units like that. The only solution is higher IPC and higher clockspeeds, and we've reached a point where both are difficult to significantly increase right now.
Virge, I couldn't agree more with the above comments. I decided that running 2 GTX 670s in SLI at stock in my multi-monitor rig makes sense. I overclocked the GPUs but didn't notice as much improvement as when I OC'd my CPU, a 2500K, which by the way Intel markets for overclocking (hence the "K"). Also, the OC'd 2500K doesn't draw much more power than stock, unlike OCing the GPUs.

I also agree with posters who mention that we OCers who "blindly focus" on power consumption shouldn't be OCing at all. Myself included, we all want the "fastest" machine. It comes at a price. Suffice it to say that when I decided to go with 2 video cards to power the 3-monitor setup, power consumption did play a larger role than it would have with a single card, due to my PSU (which btw is fine with this rig). I spent the $$$ and bought a Kill A Watt meter to give me an estimate of the power draw and decided that even with an OC'd 2500K I was fine with the Antec Gamer 750.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I also agree with posters who mention that we OCers who "blindly focus" on power consumption shouldn't be OCing at all. Myself included, we all want the "fastest" machine. It comes at a price. Suffice it to say that when I decided to go with 2 video cards to power the 3-monitor setup, power consumption did play a larger role than it would have with a single card, due to my PSU (which btw is fine with this rig). I spent the $$$ and bought a Kill A Watt meter to give me an estimate of the power draw and decided that even with an OC'd 2500K I was fine with the Antec Gamer 750.

No one said that people who care about power consumption shouldn't overclock. What I said is that those people shouldn't be going out and overclocking their Core i5/i7 CPUs and then complaining about an extra 40-50W of power consumption between a 1.15GHz HD7970 and a GTX670/680, especially since it takes a 1270-1290MHz GTX680 to even try to match a 1.15-1.16GHz 7970. So in comparison to the 670, the OCed 7970 is even faster. It's not even a case of using 40-50W of extra power for "nothing", since the card that consumes more power is actually faster.

Also, if you really cared about power consumption that much, why didn't you just get a GTX690? It's way more power efficient than 2x EVGA GTX670s and is faster, since it's basically 2 680s. Again, there's no consistency in your view if you really cared about power consumption but chose the less power efficient NV option. If the main reasoning was that you wanted to find the "perfect" balance of power vs. price vs. performance, then you would have purchased the HD7950 instead and overclocked it. At 1060MHz it would beat a GTX670 and consume just 185W of power, while costing much less (which I presume is the reason you didn't get the GTX690 despite it being far more power efficient than a GTX670 SLI setup).

Finally, in the context of your overall system (2x GTX670 @ ~150-155W each + an overclocked i5 + 3 monitors (!)), the extra 74W of power is nothing; that's the difference between 2x HD7970s and 2x stock GTX670s. And since a stock HD7970 = GTX670 in performance, it isn't even 50W of difference per card, since a stock 670 draws just 37W less using TPU's peak numbers (that's how I arrived at 74W).

Put it this way: if you played games where the HD7970 smokes the 670 (Crysis 1, Metro 2033, Anno 2070, Dirt Showdown, Alan Wake), you would not try to save 75W of power by getting a much slower setup after spending $800-900 (!) on 2 GPUs. Chances are the main reason you got the 670s was that they were as fast or faster in the games you play, with lower power consumption mainly a bonus. So in the end, imo it comes down to performance and price, with power consumption being the least important of those 3 metrics.

Your PSU wouldn't be an issue either, since it takes an 1150MHz HD7970 to consume 240W of power. So it could still easily power 2 of them heavily overclocked + an i5.

Either way, NV doesn't even give this option anymore. If the end user is perfectly fine with 40-50W of extra power consumption, why not give him/her a choice? Intel doesn't block voltage control on its CPUs and in fact encourages overclocking by offering warranty replacements should a user kill the CPU from overvolting. It's odd indeed that NV suddenly decided to block GPU overvoltage and at the same time built the GTX670 on such a cheap reference PCB with bare-minimum VRM circuitry. Coincidence?
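For clarity, the 74W figure above is just the per-card delta doubled; a one-line check (the 37W per-card difference is the TPU peak number cited in the post):

```python
# How the 74W figure is derived from the TPU peak numbers quoted above.
delta_per_card_w = 37   # stock HD7970 peak draw minus stock GTX670 peak draw (per the post)
cards = 2               # comparing a pair of 7970s to a pair of 670s
print(f"Extra power for 2x stock HD7970 over 2x stock GTX670: {cards * delta_per_card_w}W")
```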
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Exactly what RS said. If you don't want to overclock, fantastic, good on you guskline. You can opt out, no problem. However, nvidia implementing these features across their entire product line is a step backwards. Fermi was highly regarded for its overclocking scaling and headroom, and many users enjoy pushing it.

Again, if you want to opt out, that's great, good for you. I should point out that this limited voltage and limited OC headroom is precisely why the overclocked 7970 does so well against the 680 and beats it in so many tests [while overclocked]. If the 680 had similar OC ability, what do you think would happen? We all know the 680 clearly wins over the 7970 at stock, so it's confuzzling why the 680 has so many features designed to prevent overclocking success. I personally don't mind having a 'power mad' video card that OCs really high :D
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
However, nvidia implementing these features across their entire product line is a step backwards. Fermi was highly regarded for its overclocking scaling and headroom, and many users enjoy pushing it.

Even if NV is afraid that their reference cards would have a much higher RMA rate due to overvoltage, the AIBs can provide a card with much more robust power circuitry/VRMs, caps, PCB layer enhancements and cooling. NV is dead serious about blocking GPU voltage on the MSI Lightning even though that card is probably built to easily handle 1.25V without problems (the Lightning PCB/VRM design handles 1.25-1.30V on the 7970 Lightning version, so surely it should be a piece of cake on the more efficient 680).

This is pure speculation, but maybe the 28nm process on which Kepler GPUs are made isn't able to handle GPU overvoltage long-term as well as the 40nm Fermi generation of cards did. If the card can handle the overvoltage but long-term the GPU itself cannot, that may be a possible explanation, but I don't know. It could also be that when Kepler cards are boosting to almost 1300MHz, the GPU's voltage is increased beyond the stock GTX670/680 voltage but normal software such as GPU-Z can't detect this internal fluctuation. So in other words, if NV allowed these cards another +100MHz, it wouldn't be like staying at 1.175V but maybe more like going to 1.3V (but that's just me guessing ;)).
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Even if NV is afraid that their reference cards would have a much higher RMA rate due to overvoltage, the AIBs can provide a card with much more robust power circuitry/VRMs, caps, PCB layer enhancements and cooling. NV is dead serious about blocking GPU voltage on the MSI Lightning even though that card is probably built to easily handle 1.25V without problems (the Lightning PCB/VRM design handles 1.25-1.30V on the 7970 Lightning version, so surely it should be a piece of cake on the more efficient 680).

This is pure speculation, but maybe the 28nm process on which Kepler GPUs are made isn't able to handle GPU overvoltage long-term as well as the 40nm Fermi generation of cards did. If the card can handle the overvoltage but long-term the GPU itself cannot, that may be a possible explanation, but I don't know. It could also be that when Kepler cards are boosting to almost 1300MHz, the GPU's voltage is increased beyond the stock GTX670/680 voltage but normal software such as GPU-Z can't detect this internal fluctuation. So in other words, if NV allowed these cards another +100MHz, it wouldn't be like staying at 1.175V but maybe more like going to 1.3V (but that's just me guessing ;)).


There are only 3 reasons I can think of for them blocking it.

1. Since they are only responsible for the GPU, they fear it won't handle it and no amount of overbuilding on the rest of the card will prevent that.

2. They don't want MSI to gain some type of performance advantage over other AIBs.

3. They want to ensure differentiation between cards and don't want people overclocking a lower model to match the performance of the higher model. The 680 seems particularly susceptible to this from the 670. This sounds the most "nVidiaish" to me. :)
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
I think if it's not some sort of policy shift to lock their cards down from now on (which I hope it is not, as that reeks of trying to milk every last dollar possible), then it is that GK104 is already pushed to its limits.

There is a good chance of that, as there is not much overclocking headroom on these cards, and they ship with what is a pretty high clockspeed after boosting, generally 1100MHz. There is a fair chance nvidia is redlining these cards to be performance-competitive against AMD.

It is possible once again the big die (GK110) was not ready/feasible to manufacture on time and they saw they could clock GK104 to the moon and manage performance parity, so they went for it. If that's the case, a GK104 running with an additional 0.1V could end up failing. I assume they test voltage thresholds and tolerances and all sorts of stuff before shipping these things out.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
2. They don't want MSI to gain some type of performance advantage over other AIBs.

#2 is an invalid point because EVGA is charging for their EV Bot addon. So you're telling me that overvoltage is okay with a $99 addon, but good lord, if you do it through software it's the spawn of Satan. Give me a break, that's totally inconsistent. I've said it before, but if nvidia truly didn't want overvoltage they would ban the EV Bot.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
#2 is an invalid point because EVGA is charging for their EV Bot addon. So you're telling me that overvoltage is okay with a $99 addon, but good lord, if you do it through software it's the spawn of Satan. Give me a break, that's totally inconsistent. I've said it before, but if nvidia truly didn't want overvoltage they would ban the EV Bot.

I believe #3 is the most likely.

EVGA is a small company that, from what I understand, enjoys favorite partner status with nVidia. Stranger things have happened in business.

Spawn of Satan? :D
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
What does manually adding volts do to the complexity and stability of GPU boost?

As long as you have your stable overclock set, the card will handle the boost speeds based on power usage. At least it should. I don't think it should mess with stability, and it will handle the overvolt condition automatically, so the complexity is something we don't need to worry about ;)

Example:

1. An overclock of boost max 1250 MHz, at stock voltage. 100% power limit set. At full load, you may never see your card dip from 1250 MHz, unless your power usage hits your 100% limit. Then the card will throttle clock speed, but still boost to the max available to it without going over your power limit. If you raise the power limit, this clock throttle won't happen unless you pass 70C.

2. An overclock of boost max 1250 MHz, but you needed an extra 0.1 V to achieve it. 100% power limit set. You will still see 1250 MHz at full load, until you hit your power limit, which will obviously be a lot more frequent with higher voltage, and thus higher temperature and power usage. The card will still adjust for the higher usage. It will just throttle the card to whatever speed keeps it under 100% power usage.
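A toy sketch of the throttle behavior described in those two examples; the boost-bin size, the power falloff per bin and the 70C rule are rough assumptions for illustration, not NVIDIA's actual GPU Boost algorithm:

```python
# Toy model of the GPU Boost behavior described above -- illustrative only.
# The card holds the requested boost clock until the power limit (or the ~70C
# threshold) is exceeded, then steps down boost bins until it fits again.

def settled_clock(target_mhz, power_pct, temp_c, power_limit_pct=100,
                  base_mhz=1006, step_mhz=13):
    """Clock the card settles at under the given load conditions."""
    clock = target_mhz
    while (power_pct > power_limit_pct or temp_c > 70) and clock > base_mhz:
        clock -= step_mhz    # drop one boost bin
        power_pct *= 0.97    # crude assumption: lower clock, slightly lower draw
        temp_c -= 1          # crude assumption: and it runs slightly cooler
    return clock

# Example 1: stock voltage, draw stays under the 100% power limit -> holds 1250 MHz.
print(settled_clock(1250, power_pct=95, temp_c=65))
# Example 2: +0.1V pushes draw past the limit -> the card steps down until it fits.
print(settled_clock(1250, power_pct=112, temp_c=68))
```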
 

jacktesterson

Diamond Member
Sep 28, 2001
5,493
3
81
Which is why I bought a stock 680, to run it at stock. Power draw, heat, noise are all very good when run at stock and it's fast enough that I have no reason to OC.

For this generation you need to OC the Radeons to match the performance of the nvidias, and power usage is bad even at stock.

My Radeon 6850 was a great card, but this generation the Radeons are as power-mad as the first-generation GTX 5xx cards. Hopefully there will be refinements to improve on that, like nvidia managed for the GTX 560.

Power usage is not bad, even at stock.

Not as good as Nvidia's, but c'mon.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
RussianSensation: Have you tested 670s in SLI vs. a 690 for power draw? BTW, 2 670s is ~$800 and a 690 is ~$1,000.

Perhaps you missed my point. If you overclock, you most likely use more power, so power shouldn't be the primary concern. On that I agree with you.

Also, why did I go with 2 670s in SLI? So I could run rig 1 with 3 monitors smoothly at 5760x1080 even in BF3, although I really like ROF, a flight sim.

Though the GTX 690 uses less power than 2 670s in SLI, it is $200 more, so I made a money decision. I'm sure in your line of work you understand.

Cheers.
 
Feb 19, 2009
10,457
10
76
I don't get why they are playing hardball on this; it's an MSI Lightning edition with the best PCB and components, and it will handle extra current and voltage easily. It's an insult to enthusiasts to expect them to pay a huge price premium and get a dud that can't be pushed when overclocking due to a stupid limitation. To even release a new BIOS to lock out voltage tweaking is extreme.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Exactly. An aftermarket card can handle the additional power and heat; a reference card cannot. Applying reference-card parameters to something like the Lightning / DC2 / Classified is beyond stupid.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
It is possible once again the big die (GK110) was not ready/feasible to manufacture on time and they saw they could clock GK104 to the moon and manage performance parity, so they went for it. If that's the case, a GK104 running with an additional 0.1V could end up failing. I assume they test voltage thresholds and tolerances and all sorts of stuff before shipping these things out.

I find this view the most believable. Rumours kept popping up that NV was very nervous the HD7900 would beat them, and then they found it "underwhelming", yet their GTX680 beats it by 10%. I don't see how a 10% lead and "underwhelming" go together in this case, unless GK104 wasn't NV's flagship Kepler. :D Also, the lack of a memory bandwidth increase, the GK104 codename, completely gutted compute performance (after NV spent millions of dollars promoting GPGPU with the G80, GT200 and GF100/110 series), and just a 35% speed increase over the 580 seem rather strange for the Fermi flagship's successor. If the GTX780 is a full-blown GK110 Kepler chip with a 384-bit memory bus, dynamic scheduling and 30-40% higher performance, this theory will be proven in 2013. If the GTX780 is a mild 10-15% performance bump over the 680, then maybe the GTX680 was the real flagship after all.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
What does manually adding volts do to the complexity and stability of GPU boost?

I can't find the article now, but on Guru3D they were saying that GPU Boost was complicating things. AB (Afterburner) had to keep reapplying the boost settings (updated in 0.5-second intervals, IIRC) because the card wants to change the settings on the fly and kept reverting to stock. Also, AB's settings couldn't be applied at boot. Not sure if they've gotten this second part fixed or not; they were working on it.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I find this view the most believable. Rumours kept popping up that NV was very nervous the HD7900 would beat them, and then they found it "underwhelming", yet their GTX680 beats it by 10%. I don't see how a 10% lead and "underwhelming" go together in this case, unless GK104 wasn't NV's flagship Kepler. :D Also, the lack of a memory bandwidth increase, the GK104 codename, completely gutted compute performance (after NV spent millions of dollars promoting GPGPU with the G80, GT200 and GF100/110 series), and just a 35% speed increase over the 580 seem rather strange for the Fermi flagship's successor. If the GTX780 is a full-blown GK110 Kepler chip with a 384-bit memory bus, dynamic scheduling and 30-40% higher performance, this theory will be proven in 2013. If the GTX780 is a mild 10-15% performance bump over the 680, then maybe the GTX680 was the real flagship after all.

Oh, cool! I've been looking forward to the discussion ending up down this road again. /sarc. :p
 

njdevilsfan87

Platinum Member
Apr 19, 2007
2,348
268
126
After nvidia saw the performance of Tahiti, they just decided to milk the cow. And why shouldn't they, since their main objective is profit? Selling GK104 as a premium GeForce and saving GK100 for the 4-digit-priced HPC cards (and giving themselves more time to revise it into GK110) is the smartest move the company could have made.

But they had major yield issues pushing GK104 to the clocks they did. I don't think of it as a smart move at all, but more a move out of necessity since GK100 flopped. Otherwise we wouldn't have had major supply issues on GK104 for months.

My card unlocks via the LN2 BIOS, but I am a little concerned about the long-term effects of running 1.35V through the GPU.