GeForce GTX 680 Classified power consumption with overvoltage revealed, shocking.


3DVagabond

Lifer
Aug 10, 2009
What does manually adding volts do to the complexity and stability of GPU Boost?

Not sure if this is what you are asking, but this explains the difficulty in making it work.

To bypass the limitations, an AIC partner can pretty much do two things: hardwire/mod the card physically and apply extra voltage. An expensive alternative is to add extra core logic onto the card and completely bypass NVIDIA's restrictions. Both options seem simple enough, but with NVIDIA's new Boost/Turbo feature the voltages on these cards jump up and down constantly, changing the offset voltage at a way too regular interval. Very simply put, we can change the voltage offset, but as soon as the dynamic clock frequency changes, the voltage will change to NVIDIA's specification as well, resetting the voltage offset.

Here is the article it's from @ Guru3D
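
In pseudo-code terms, the behaviour they describe looks roughly like this (a toy sketch with made-up clock bins and voltages, not NVIDIA's actual firmware logic): the boost algorithm re-applies its own voltage for whatever clock bin it lands in, so a user offset only survives until the next bin change.

Code:
# Toy model of why a manual voltage offset doesn't stick under GPU Boost.
# All clock bins and voltages below are invented for illustration only.
BOOST_TABLE = {  # clock bin (MHz) -> voltage (V) the boost algorithm applies
    1006: 1.075,
    1058: 1.112,
    1110: 1.150,
}

def apply_offset(current_voltage, offset_v):
    """A user tool bumps whatever voltage is currently applied."""
    return current_voltage + offset_v

def boost_step(new_clock_mhz):
    """Boost moves to a new bin and re-applies ITS voltage, wiping the old offset."""
    return BOOST_TABLE[new_clock_mhz]

v = boost_step(1058)        # card sits in the 1058 MHz bin at 1.112 V
v = apply_offset(v, 0.050)  # user adds +50 mV -> 1.162 V
print(round(v, 3))          # 1.162
v = boost_step(1110)        # load changes, boost picks the 1110 MHz bin...
print(round(v, 3))          # 1.15 -> the +50 mV offset is gone again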
 

tviceman

Diamond Member
Mar 25, 2008
But they had major yield issues pushing GK104 to the clocks they did.

This is a load of crap unless backed up with real proof. Here is some observational proof: GK104 consumes less power at these "super high" clock speeds than the HD 7970. Logic dictates that if a chip is being pushed extremely hard, to the point of making large sacrifices in yields and upper bins, then the chip is going to go far beyond the sweet spot of performance per watt. In the case of GK104, that couldn't be further from the truth.
 

3DVagabond

Lifer
Aug 10, 2009
This is a load of crap unless backed up with real proof. Here is some observational proof: GK104 consumes less power at these "super high" clock speeds than the HD 7970. Logic dictates that if a chip is being pushed extremely hard, to the point of making large sacrifices in yields and upper bins, then the chip is going to go far beyond the sweet spot of performance per watt. In the case of GK104, that couldn't be further from the truth.

There were no yield issues. It's just for a couple of months they didn't want to make enough to supply the demand. :D
 

tviceman

Diamond Member
Mar 25, 2008
There were no yield issues. It's just for a couple of months they didn't want to make enough to supply the demand. :D

OR the chips sold as well as Nvidia claimed http://pcper.com/news/Graphics-Cards/NVIDIA-claims-GTX-680-sales-outpace-GTX-580 , and coupled with the supply issues and apparent shutdown TSMC may have had just prior to the 680 launch, things suddenly kinda make sense. Normal inventories less than 3 months after launch do not make sense if yields are horrible. Things do not improve that much in that short a time without significant revisions.

Conspiracy theories are more believable when there are facts to back them up.
 

blackened23

Diamond Member
Jul 26, 2011
There were no yield issues. It's just for a couple of months they didn't want to make enough to supply the demand. :D

Well that's funny, since execs at Nvidia stated there were yield issues....

Obviously TSMC ramped up production and those issues apparently have been fixed... stock was nonexistent at release for the 680; that is not the case now.
 

njdevilsfan87

Platinum Member
Apr 19, 2007
OR the chips sold as well as Nvidia claimed http://pcper.com/news/Graphics-Cards/NVIDIA-claims-GTX-680-sales-outpace-GTX-580 , and coupled with the supply issues and apparent shutdown TSMC may have had just prior to the 680 launch, things suddenly kinda make sense. Normal inventories less than 3 months after launch do not make sense if yields are horrible. Things do not improve that much in that short a time without significant revisions.

Conspiracy theories are more believable when there are facts to back them up.

The 680 selling better than the 580 is hardly proof. The 580 was nothing more than a 480 with 32 more CUDA cores (~7%), a 70MHz higher stock clock (10%), a 5% reduction in power, and a much better vapor chamber cooler. It was better, but none of those figures are significant. The 680, on the other hand, despite being the so-called mid-range flagship, is an entirely new architecture that brought us +30% performance over the 580 with -30% power consumption. Now you tell me what logic dictates here.
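
If anyone wants to check the 480-to-580 numbers, here's the quick back-of-the-envelope version using the commonly listed reference specs (480 cores @ 700MHz vs. 512 cores @ 772MHz); treat it as a rough sketch, not a benchmark:

Code:
# Sanity check of the GTX 480 -> GTX 580 spec bump (reference specs).
cores_480, cores_580 = 480, 512
clock_480, clock_580 = 700, 772  # core clock in MHz

core_gain = (cores_580 - cores_480) / cores_480 * 100
clock_gain = (clock_580 - clock_480) / clock_480 * 100
print(f"cores: +{core_gain:.1f}%, clock: +{clock_gain:.1f}%")
# cores: +6.7%, clock: +10.3% -- a refresh of the same chip, not a new architecture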
 

f1sherman

Platinum Member
Apr 5, 2011
Well that's funny, since execs at Nvidia stated there were yield issues....

Yeah, and Morris Chang himself said the exact opposite - capacity/supply problems and no yield issues whatsoever.

Fixing a die issue in 3 months seems a little far-fetched.
Also, Qualcomm complained at least as much as Nvidia about capacity issues.

Matter of fact, among all 28nm customers, only AMD met the demand.
But... they are also the only one to drop prices twice.
 

SirPauly

Diamond Member
Apr 28, 2009
Well that's funny, since execs at Nvidia stated there were yield issues....

Obviously TSMC ramped up production and those issues apparently have been fixed... stock was nonexistent at release for the 680; that is not the case now.

Jen-Hsun Huang said:
The gross margin decline is contributed almost entirely to the yields of 28-nanometer being lower than expected.

What some did was take this and somehow translate it into poor or crappy yields. Conspiracy theories abound!
 

tviceman

Diamond Member
Mar 25, 2008
The 680 selling better than the 580 is hardly proof. The 580 was nothing more than a 480 with 32 more CUDA cores (~7%), a 70MHz higher stock clock (10%), a 5% reduction in power, and a much better vapor chamber cooler. It was better, but none of those figures are significant. The 680, on the other hand, despite being the so-called mid-range flagship, is an entirely new architecture that brought us +30% performance over the 580 with -30% power consumption. Now you tell me what logic dictates here.

Your personal review and opinion of the 580 or its relation to the 680 has no bearing on the 680's imaginary bad yields. It was sold out for the first 8 weeks, but Nvidia said it's selling 60% faster than the 580; Steam's hardware survey shows 680 users already outnumbering 7970 users; 3 months with no chip revisions doesn't fix bad yields; TSMC supposedly stopped 28nm production for a short time just prior to the 680's launch. People who say the 680 is pushing GK104 to the max are full of crap, because pushing the chip to its maximum performance limit would push TDP well beyond its sweet spot, and if that were true of the GTX 680 it would not be more efficient than Tahiti (and it is)... Anything Nvidia said regarding yields at 28nm was back before the 680 was out, and did not indicate whether the yields were "bad" or just weren't up to Emperor Huang's expectations. Two entirely different target points.

The 680 isn't suffering horrible yields. Let us know if your logic circuits fail to compute all this.
 

MrMuppet

Senior member
Jun 26, 2012
PCIe 2.0 vs. 3.0. The performance difference is very minor even with dual-GPUs. It'll be there, but won't be enough to actually impact settings in games.

HardOCP just investigated this in detail.
Interesting, thanks for the article! At 1080p the difference does appear negligible and PCIe 2.0 seems "good enough".

Apparently, a GTX 670 Windforce can now be had for 3051 SEK (after a 15% rebate, that is almost 100 SEK cheaper than my reference 670 was). So if I can live with PCI-E 2.0, that does seem like an attractive option (for future SLI).

(Apparently the 7970 Dual-X can also be had for that same price too.)

edit: Still, I was a bit shocked to find out that even this "super-cheap" 3051 SEK is more than 444 USD (albeit that's including VAT; without VAT it's ~355 USD).
 

The_Golden_Man

Senior member
Apr 7, 2012
The problem with the HardOCP PCI-E 2.0 vs. 3.0 article is they're using ASUS WS Revolution boards. These can run SLI and CrossFire in x16 + x16 mode.

Most users have more common boards that run PCI-E x8 + x8 when utilizing SLI or CrossFire, making PCI-E 2.0 vs. 3.0 more important, since PCI-E 3.0 x8 + x8 equals PCI-E 2.0 x16 + x16 in bandwidth.

So in my opinion the test HardOCP did is pretty pointless. Everyone knows PCI-E 2.0 x16 + x16 is plenty for the latest-gen graphics cards. As I've said, most users with more common Sandy Bridge/Ivy Bridge mobos will run x8 + x8 when in SLI/CrossFire.
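
For anyone who wants the raw numbers behind that bandwidth claim, here's a rough sketch using the standard theoretical per-lane rates (one direction, only accounting for line encoding); real-world throughput will be lower:

Code:
# Why PCI-E 3.0 x8 + x8 is roughly equal to PCI-E 2.0 x16 + x16 in raw bandwidth.
# Per lane: 2.0 = 5 GT/s with 8b/10b encoding, 3.0 = 8 GT/s with 128b/130b.
GT_PER_S = {"2.0": 5.0, "3.0": 8.0}
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130}

def slot_gbps(gen, lanes):
    """Theoretical GB/s (one direction) for a slot of a given generation and width."""
    return GT_PER_S[gen] * ENCODING[gen] * lanes / 8  # GT/s -> GB/s after encoding

print(f"PCI-E 2.0 x16 + x16: {2 * slot_gbps('2.0', 16):.1f} GB/s")  # ~16.0
print(f"PCI-E 3.0  x8 +  x8: {2 * slot_gbps('3.0', 8):.1f} GB/s")   # ~15.8
print(f"PCI-E 2.0  x8 +  x8: {2 * slot_gbps('2.0', 8):.1f} GB/s")   # ~8.0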
 

Homeles

Platinum Member
Dec 9, 2011
PCIe 2.0 vs. 3.0. The performance difference is very minor even with dual-GPUs. It'll be there, but won't be enough to actually impact settings in games.

HardOCP just investigated this in detail.
That "investigation" is complete garbage. You can't use a 3770K and a 2600K and pretend like there isn't going to be a difference. Regardless, that just makes your point stronger -- PCI-E 3.0 is largely useless for gaming.
 

RussianSensation

Elite Member
Sep 5, 2003
Most users have more common boards that run PCI-E x8 + x8 when utilizing SLI or CrossFire, making PCI-E 2.0 vs. 3.0 more important, since PCI-E 3.0 x8 + x8 equals PCI-E 2.0 x16 + x16 in bandwidth. As I've said, most users with more common Sandy Bridge/Ivy Bridge mobos will run x8 + x8 when in SLI/CrossFire.

Contrary to what keeps being repeated, real world testing shows that there is almost no difference between PCIe 2.0 x8 and PCIe 3.0 x16. You won't be able to tell the difference in gaming without measuring exact numbers in benchmarks. Even PCIe 2.0 x4 is not as bad as many people think.

This has been tested many times by other websites, not just HardOCP. It's already been shown by Legit Reviews during P55 vs. P67 chipset comparisons, and by TechPowerUp, that modern GPUs are not bottlenecked by more than 3-4% even with PCIe 3.0 x4 = PCIe 2.0 x8. For all intents and purposes, PCIe 2.0 x8 and PCIe 3.0 x16 will show a 1-3% difference, which is impossible to detect in the real world. That means for those running the latest Z77 chipset + i5/i7 3xxx series, the difference between PCIe 3.0 x8/x8 and PCIe 3.0 x16/x16 (X79 platform) will be practically nil. As you'll see below, PCIe 2.0 x8 is only 2% slower than PCIe 3.0 x16, which is again immaterial.

[Charts: relative performance at 1920x1200 and 2560x1600 across PCIe configurations]

Source

PCIe scaling has been discussed since the HD 5870/480/5970 days and it's been shown over and over that it doesn't really matter. Maybe it will when we get to the HD 9000/GTX 800 series.

PCIe 1.1 x16 = PCIe 3.0 x4 = PCIe 2.0 x8 = PCIe 3.0 x16 is indistinguishable in real world gaming with modern GPUs. It can be measured in benchmarks within 2-4%, but that's about it. We can probably throw out PCIe 1.1 x16 and PCIe 3.0 x4 since it's unlikely anyone would be running a $1000 GPU setup with those. Thus, at minimum, someone with a modern GTX 670/680/7970-class GPU will be using PCIe 2.0 x8/x8. In the real world, CPU clock speed, SSD speed, GPU SLI/CF scaling, server latency, and GPU speed will each outweigh PCIe lane differences. The only exception has been running PCIe 2.0 x4 or slower, where a bottleneck starts to show.
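
For reference, here's the theoretical (one-direction) bandwidth behind that list, using the standard per-lane figures; it's only a rough yardstick, since real transfers never hit these numbers:

Code:
# Theoretical bandwidth of the configurations mentioned above (per direction).
# Standard per-lane rates: PCIe 1.1 = 250 MB/s, 2.0 = 500 MB/s, 3.0 ~= 985 MB/s.
LANE_GBPS = {"1.1": 0.25, "2.0": 0.50, "3.0": 0.985}

for gen, lanes in [("1.1", 16), ("3.0", 4), ("2.0", 8), ("2.0", 16), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes:<2}: {LANE_GBPS[gen] * lanes:5.1f} GB/s")
# PCIe 1.1 x16, 3.0 x4 and 2.0 x8 all land around 4 GB/s, yet the charts
# above can barely separate them from PCIe 3.0 x16 at ~15.8 GB/s.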

The difference between PCIe 2.0 x8 and PCIe 3.0 x16 would be less than going from DDR3-1600 to DDR3-2400 for games.

This is using an Intel Core i5-3570K, overclocked to 4.5 GHz, with a GTX 680:
[Benchmark charts: Batman, Crysis 2, DiRT, Far Cry 2]

Source

Most people don't spend extra on DDR3-2400 memory for that extra 3-4% performance, so there's no reason for gamers to panic even if they are using PCIe 2.0 x8, and especially not on PCIe 2.0 x16 or 3.0 x8/x8 on Z77 :)
 

The_Golden_Man

Senior member
Apr 7, 2012
RussianSensation

I've seen that test before. It's missing SLI/CrossFire. Latest-gen high-end cards are actually beginning to show a bottleneck when using SLI/CrossFire at PCI-E 2.0 x8 + x8.

However, it's not severe enough to ruin gaming. Look at my system in my signature. I'm scoring a bit low in 3DMark 11 Performance when compared to those utilizing PCI-E 3.0. This does not show up much in gaming, but clearly the bandwidth limitations are beginning to show.
 

SirPauly

Diamond Member
Apr 28, 2009
Where one might see some solid differences is with GPU physics on 3.0 vs. 2.0.
 

njdevilsfan87

Platinum Member
Apr 19, 2007
Your personal review and opinion of the 580 or its relation to the 680 has no bearing on the 680's imaginary bad yields. It was sold out for the first 8 weeks, but Nvidia said it's selling 60% faster than the 580; Steam's hardware survey shows 680 users already outnumbering 7970 users; 3 months with no chip revisions doesn't fix bad yields; TSMC supposedly stopped 28nm production for a short time just prior to the 680's launch. People who say the 680 is pushing GK104 to the max are full of crap, because pushing the chip to its maximum performance limit would push TDP well beyond its sweet spot, and if that were true of the GTX 680 it would not be more efficient than Tahiti (and it is)... Anything Nvidia said regarding yields at 28nm was back before the 680 was out, and did not indicate whether the yields were "bad" or just weren't up to Emperor Huang's expectations. Two entirely different target points.

The 680 isn't suffering horrible yields. Let us know if your logic circuits fail to compute all this.

Do we know what the original TDP target was intended to be? And we have no idea how this card scales downward, as we don't have the ability to undervolt. Who is to say the power draw won't drop a significant amount with a bit of undervolting? And I also doubt the crippling of GPGPU has anything to do with GK104's low power draw. The 7850, AMD's mid-range part, can also overclock to ~1200MHz without significantly increasing its power draw. The only difference between AMD and Nvidia is that AMD didn't clock their 7850s at 1000-1100 stock. So why didn't AMD clock their cards that high if the TDP requirement on them still would have been very reasonable?

And what about fixing bad yields? How about using the same chip for the 680, 670, and the soon-to-be-released 660 Ti and vanilla 660? Maybe we'll even see a GK104-based GTX 650 Ti!

And as far as 580 vs. 7970 vs. 680 goes - did you ever stop to think for a minute that maybe the 7970 (especially after Nvidia called it disappointing, thus hyping their 680), or even more so the 580 (because it was barely any better in performance than a 480), didn't have anywhere near as much demand as the 680?

Edit : And yes, it looks like the 680 is no longer suffering horrible yields. But let me know when your reading comprehension circuits turn on since I never said that it currently is.
 

SirPauly

Diamond Member
Apr 28, 2009
My conjecture premise was this:

nVidia may have had obstacles trying to bring GK100 to market early on 28nm.

nVidia may have speculated that they would compete with the HD 7970 with the GK104, which may have come back stronger than they expected. nVidia may also have speculated that AMD would continue with similar sweet-spot strategies, and there may have been a GTX 670 Ti name in the works.

When nVidia did see the performance and the 50 percent rise in price from AMD, which set 28nm pricing, well, at nVidia headquarters there may have been a lot of high fives and dancing in the halls.
 

tviceman

Diamond Member
Mar 25, 2008
Do we know what the original TDP target was intended to be? And we have no idea how this card scales downward, as we don't have the ability to undervolt. Who is to say the power draw won't drop a significant amount with a bit of undervolting?

The GTX 680 adjusts voltage dynamically based on the load. Turn the power target down, turn the offset down, and people would get whatever power consumption it is you are wrongly suggesting GK104 was intended to have. Compare GF114 to both Cypress and Cayman: GF114 lost in power consumption. Compare GK104 to Tahiti: GK104 is more efficient, yet here you are claiming that Nvidia is pushing the chip beyond its limits, and amazingly it's still performing more efficiently than Tahiti. WEIRD.

And I also doubt the crippling of GPGPU has anything to do with GK104's low power draw. The 7850, AMD's mid-range part, can also overclock to ~1200MHz without significantly increasing its power draw. The only difference between AMD and Nvidia is that AMD didn't clock their 7850s at 1000-1100 stock. So why didn't AMD clock their cards that high if the TDP requirement on them still would have been very reasonable?

Because AMD is full of geniuses. They released low-clocked Tahiti parts, and therefore couldn't release higher-clocked Pitcairn parts, because a 7870 with 150MHz more clock would be so close to HD 7950 performance that it would be licking the 7950's taint. Chalk another one up to AMD's ability to screw up their own product lineup.

And what about fixing bad yields? How about using the same chip for the 680, 670, and the soon-to-be-released 660 Ti and vanilla 660? Maybe we'll even see a GK104-based GTX 650 Ti!

How about GT200 being the GTX 280, GTX 260-216, and GTX 260-192? How about GT200B being the GTX 285, GTX 275, and GTX 260? How about GF100 being the GTX 480, 470, and 465? How about GF110 being the GTX 580, GTX 570, and GTX 560 Ti-448? How about GF104 being the GTX 460, GTX 460-768, and GTX 460 SE? How about GF114 being the GTX 560 Ti, GTX 560, GTX 560 SE, and GTX 550 Ti OEM? How about GK107 going into 5 different products? How about Tegra 3 being T33-4, T30L-4, and T30-4? HOW ABOUT IT? Are yields just so bad for every single Nvidia product that they can't get any fully functional parts in decent numbers and therefore rely on so many other variants of the same chip? Or is this just Nvidia taking advantage of EVERY SINGLE die possible? What do Nvidia's financials tell you? That they have suffered horrible yields for 5 years straight and can't make money? HOW ABOUT IT?

And as far as 580 vs. 7970 vs. 680 goes - did you ever stop to think for a minute that maybe the 7970 (especially after Nvidia called it disappointing, thus hyping their 680), or even more so the 580 (because it was barely any better in performance than a 480), didn't have anywhere near as much demand as the 680?

The performance lead of the GTX 680 over the HD 7970 was smaller than the lead the HD 7970 had over the GTX 580, AND also smaller than the GTX 580's lead over the GTX 480. The HD 7970 released in January to no competition. The GTX 680 came out at the end of March. The HD 7970 had a 10-week head start, and was basically passed up after the 20-week mark, despite having 10 weeks of uncontested performance, and despite only being marginally slower than the GTX 680. Nice.

Edit : And yes, it looks like the 680 is no longer suffering horrible yields. But let me know when your reading comprehension circuits turn on since I never said that it currently is.

It's magic! Yields fixed themselves overnight!
 

blackened23

Diamond Member
Jul 26, 2011
It's hard to take anything there seriously because you will never have an objective view of AMD and have stated as much...
 

RussianSensation

Elite Member
Sep 5, 2003
I've seen that test before. It's missing SLI/CrossFire. Latest-gen high-end cards are actually beginning to show a bottleneck when using SLI/CrossFire at PCI-E 2.0 x8 + x8.

3DMark 11 is not a game though :). In fact, 3DMark 11 is one of the worst representations of real world gaming performance. NV has a huge lead in that benchmark and yet loses to the HD 7970 GE in real world games? How does that even make sense? That's because 3DMark 11 is synthetic ****.

Also, modern Z77 boards have an x8/x8 configuration; each PCIe slot delivers that. So if you run one card in each of those slots, it should make no difference at all that those TPU benchmarks use one GPU, since both GPUs will get PCIe 3.0 x8, which is pretty much identical to PCIe 3.0 x16 per the benchmarks. Even on older Z68 boards, that'll still be PCIe 2.0 x8 per GPU, and that amounts to approximately a 2-3% difference, nothing worth talking about.

PCIe 2.0 x8 vs. PCIe 3.0 x16 hasn't been shown to be an issue in any credible review. Most benchmarks that show a PCIe bottleneck use 3 GPUs, where the 3rd may be running at x4 speed on some Z68/Z77 boards. For 2 GPUs though, it's a wash. Yes, you can measure the difference in benchmarks, but it isn't a big deal, and I would wager graphics cards would need to get 4-5x faster than a single GTX 680 before PCIe 2.0 x8 is a major bottleneck (i.e. a 15%+ performance hit). Someone who has a GTX 690 should bench it on PCIe 2.0 x8; there probably won't be a huge difference, so even GPUs 2x faster than a GTX 680 are already fine on a 2.0 x8 interface.

Where one might see some solid differences is with GPU physics on 3.0 vs. 2.0.

Maybe far in the future. Those PCIe 2.0 vs. 3.0 differences today pale in comparison to the 2-3x GPU performance hit a modern NV GPU will incur once PhysX is enabled. GPU physics in general is such an intensive feature that modern GPUs would become unplayable having to simultaneously deal with next-generation game engines/textures/shaders + GPU physics. Even without physics, the HD 7970 and GTX 680 won't have a chance of maxing out games released in 2014-2015 when next-generation console titles launch. By that point, anyone buying $400-500 enthusiast GPUs will have moved to Haswell/Broadwell or whatever CPU AMD has, etc. I can't see many people with $500+ GPUs in 3-4 years still using a 2011 SB architecture.
 