OFFICIAL KEPLER "GTX680" Reviews

Page 37

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I wouldn't either.

I'm sure they've seen posts over the last few years of lesser cards overclocking past the performance of their flagship cards.

It's not good for profits when people like me post scores that outclass $1100 580 SLI setups for less than half the cost.

Everyone knows a 570 will overclock past stock 580 performance, how much in sales do you think Nvidia lost to overclocking enthusiasts who wanted 580 performance, but didn't want to spend $500+ for it?

Instead of having to change the fab or cut good dies to purposefully gimp them, they could simply hard-lock the TDP limit and release the exact same cards, without any additional effort, with drastically different performance both stock and overclocked.


As an overclocker I think it sucks, but I can see its merit from a business standpoint.



I'm sorry, if this is a 560 Ti replacement but branded as a 680, does that mean NVIDIA could've unleashed way more but didn't simply because they didn't have to? And, what exactly does it *mean* for this to be a 560 Ti replacement when it's branded as a 680? I don't see how you can call one part a replacement for another when it costs twice as much. As a consumer I look at price brackets...whatever costs the same as the 560 Ti but from the next generation of products is the true 560 replacement to me.

In the end, are you confirming or denying that the 7800 series is a horrible buy right now? Should a shopper in the $250 range do anything except wait for the 660 Ti at this point?

It's not rebranded though....

They could, and still can.

GK104 is a small die, and it's probably easier to make than GK100 too; it's not nearly as complex without all the GPGPU hardware.

What do we know about 28nm? It's costly. There aren't a lot of wafers.

What do we know about GK104 vs. GK100? GK104 is considerably smaller, which means more dies per wafer, and when you're wafer-starved and have no competing products you have to go with what makes the most sense first, which is clearly GK104. Nvidia is also paying per wafer, not per good die. GK100 is more complex than GK104, meaning fewer good dies per attempt.
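(A rough back-of-the-envelope sketch of that dies-per-wafer point, in Python. The ~294 mm² GK104 area is the figure from the reviews; the ~550 mm² big-die area, the 0.5 defects/cm² rate and the simple yield model are purely illustrative assumptions, not TSMC numbers.)

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough candidate-die count (ignores scribe lines and edge exclusion)."""
    r = wafer_diameter_mm / 2
    return math.pi * r**2 / die_area_mm2 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)

def good_dies(die_area_mm2, defects_per_cm2=0.5):
    """Apply a simple Poisson yield model: yield = exp(-defect_density * die_area)."""
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    return dies_per_wafer(die_area_mm2) * yield_fraction

# GK104 is ~294 mm^2; the big-Kepler area below is just a placeholder guess.
for name, area in [("GK104 (~294 mm^2)", 294), ("big Kepler (assumed ~550 mm^2)", 550)]:
    print(f"{name}: ~{dies_per_wafer(area):.0f} candidates, ~{good_dies(area):.0f} good per wafer")
```

Under those made-up numbers the small die gets you roughly twice the candidates and several times the good dies per wafer, which is the whole argument.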

I think overall 28nm is a giant disappointment thus far, coming from mid/high end 40nm performance. However if you need a card now it's still the best option in most cases.
 
Last edited:

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
Lol, for them to purposely gimp TDP (even more than this boost part-time overclocking crap) just to release the same card at a lower spec would be a supremely douchebag move. Wouldn't put it past 'em, though.

They would have to be pretty stupid to do this. If they did, I would probably lose all respect for Nvidia as a company, and I have been an Nvidia guy at heart since 2006.

Most people who buy a $450+ GPU don't care half the time about the TDP limit; most of the time it's going to be pushed to its limits under water or extreme air cooling. Why gimp the TDP other than to cash in on possibly faster products?
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
Yep, good point. It's why I bought a 6950, and 570s for SLI before that; no need for flagships if a lesser model overclocks and surpasses them.

It'd be nice if they leaked a BIOS or modified drivers or something so we could use the card how we wish :D
 

antef

Senior member
Dec 29, 2010
337
0
71
Well regardless of branding it seems a mid-range shopper can do nothing but wait at this point. The 7800s are disappointing and overpriced. The 560 Ti is too old. The 680 is too expensive, and the 660 is a ways off.

If I didn't want to wait and wanted to buy soon in the $250 range, what would the best option be? 7850 or an old 560 Ti or something else?
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Lol, for them to purposely gimp TDP (even more than this boost part-time overclocking crap) just to release the same card at a lower spec would be a supremely douchebag move. Wouldn't put it past 'em, though.

It's called binning, and Intel does it with one processor that ends up as an entire family of SKUs. If a GK104 chip can't function with the entire chip enabled, and/or can't stay within the TDP under the load conditions that Nvidia has designated for the gtx680, then Nvidia does not consider the chip good enough to run at gtx680 speeds. From there, they can set a NEW TDP specification and decide whether to disable parts and run it at the same or slower speeds than the gtx680, or, if they have enough fully functioning chips whose only flaw is drawing significantly more power per MHz than the gtx680, they can release those as fully functioning chips with a lower operating frequency.

Rather than throwing away cores that are MOSTLY functional, they put them into lesser products. It's called smart business.
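A toy sketch of that sorting logic in Python; the 8 SMX count and the 195 W rating match GK104 / gtx680, but the thresholds and SKU names are made up for illustration, not Nvidia's actual binning criteria.

```python
from dataclasses import dataclass

@dataclass
class Chip:
    working_smx: int        # functional shader clusters out of GK104's 8
    board_power_w: float    # measured power at the gtx680 target clocks

def bin_chip(chip: Chip) -> str:
    """Hypothetical sort: thresholds are illustrative, not Nvidia's real ones."""
    if chip.working_smx == 8 and chip.board_power_w <= 195:
        return "GTX 680"             # fully enabled and within the 195 W TDP
    if chip.working_smx == 8:
        return "lower-clocked SKU"   # fully enabled but draws too much per MHz
    if chip.working_smx >= 7:
        return "cut-down SKU"        # disable the bad cluster, sell it cheaper
    return "scrap / salvage"

print(bin_chip(Chip(8, 188)))  # GTX 680
print(bin_chip(Chip(8, 230)))  # lower-clocked SKU
print(bin_chip(Chip(7, 180)))  # cut-down SKU
```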
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
It's called binning, and Intel does it with one processor that ends up as an entire family of SKUs. If a GK104 chip can't function with the entire chip enabled, and/or can't stay within the TDP under the load conditions that Nvidia has designated for the gtx680, then Nvidia does not consider the chip good enough to run at gtx680 speeds. From there, they can set a NEW TDP specification and decide whether to disable parts and run it at the same or slower speeds than the gtx680, or, if they have enough fully functioning chips whose only flaw is drawing significantly more power per MHz than the gtx680, they can release those as fully functioning chips with a lower operating frequency.

Rather than throwing away cores that are MOSTLY functional, they put them into lesser products. It's called smart business.

Oh, thanks for the in-depth explanation of binning, I have never heard that word before.

/sarcasm

We've already seen them, in one way or another, gimp their 680 from what it's probably really capable of by introducing this boost crap and marketing it like it's special.

It's not, unless you don't know what you're doing with overclocking, or don't care.

Ungimped, we'd be able to clock it where we wanted and volt it where we wanted. Call it binning, and it very well might be, but it's a purposeful gimp nonetheless. In my opinion :)
 

Termie

Diamond Member
Aug 17, 2005
7,949
48
91
www.techbuyersguru.com
Well regardless of branding it seems a mid-range shopper can do nothing but wait at this point. The 7800s are disappointing and overpriced. The 560 Ti is too old. The 680 is too expensive, and the 660 is a ways off.

If I didn't want to wait and wanted to buy soon in the $250 range, what would the best option be? 7850 or an old 560 Ti or something else?

Honestly, it's not a great time to be in the $250 market, but I know some people are considering the GTX570: http://www.newegg.com/Product/Produc...CE&PageSize=20

It is faster than the 7850 for the same price, but it uses much more power. Which way to go on that is up to you.

The problem I think so many people have with the 7850 is that it's basically no faster than a 6950 but priced at about the same level - if you'd wanted that price/performance, you could already have gotten it.

And you say the 560Ti is "too old," but at $190AR for a nice version, it's actually a pretty good deal: http://www.newegg.com/Product/Produc...82E16814127565

You could always split the difference on this EVGA 560ti-448 FTW for $240AR: http://www.newegg.com/Product/Produc...82E16814130738. It would probably beat a stock 570 in performance.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I'm sorry, if this is a 560 Ti replacement but branded as a 680, does that mean NVIDIA could've unleashed way more but didn't simply because they didn't have to? And, what exactly does it *mean* for this to be a 560 Ti replacement when it's branded as a 680? I don't see how you can call one part a replacement for another when it costs twice as much. As a consumer I look at price brackets...whatever costs the same as the 560 Ti but from the next generation of products is the true 560 replacement to me.

In the end, are you confirming or denying that the 7800 series is a horrible buy right now? Should a shopper in the $250 range do anything except wait for the 660 Ti at this point?

It was branded a gtx680 because it outperforms anything AMD has and the flagship kepler chip isn't ready yet. It's also thoroughly discussed in any/all of the in-depth review articles (like the very one here on anandtech) that this is the successor to Fermi's GF104/GF114 chip. Given Nvidia's history of reusing the same (but improved) dies for different sku's, I anticipate that a future (slightly improved) version of GK104 will end up being called gtx760ti or something along those lines.

The HD7870 is AMD's best-priced chip on 28nm.
 
Last edited:

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
We've already seen them, in one way or another, gimp their 680 from what it's probably really capable of by introducing this boost crap and marketing it like it's special.

The problem isn't the boost; the problem is that you can't go outside of PCIe specs with it: 6+6 = 225 watts MAX.

An 8+6 version would have bumped that up to 300 W of TDP available for boost.

8+8 would make 375 W of TDP available for boost.

The problem isn't the boost; it's more the limitation of 6+6 and being mandated to stay within its specs.
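For reference, those numbers just fall out of the PCIe power budget: 75 W from the slot, 75 W per 6-pin connector and 150 W per 8-pin. A quick sketch in Python:

```python
# In-spec board power: 75 W from the slot, 75 W per 6-pin, 150 W per 8-pin.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def max_board_power(six_pins: int = 0, eight_pins: int = 0) -> int:
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

for six, eight in [(2, 0), (1, 1), (0, 2)]:
    print(f"{six}x6-pin + {eight}x8-pin: {max_board_power(six, eight)} W in spec")
# 6+6: 225 W, 8+6: 300 W, 8+8: 375 W
```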
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Oh, thanks for the in-depth explanation of binning, I have never heard that word before.

/sarcasm

We've already seen them, in one way or another, gimp their 680 from what it's probably really capable of by introducing this boost crap and marketing it like it's special.

It's not, unless you don't know what you're doing with overclocking, or don't care.

Ungimped, we'd be able to clock it where we wanted and volt it where we wanted. Call it binning, and it very well might be, but it's a purposeful gimp nonetheless. In my opinion :)

I think it's being purposely gimped as well by the limit on its TDP overclocking that the EVGA Precision software allows you to set, but I also think it's gimped simply by the design of the PCB. I think AIB custom boards are going to go wild with GK104... hence Zotac saying they are going to release a "godly" gtx680 version.

When Nvidia castrates GK104 into lesser products, it will probably be gimped in the same manner, but it won't be "more gimped" than it already is.
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
Yes, custom PCB versions should perform great based on the reference card. I'll probably sell my 680 once they are released lol
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
The problem isn't the boost; the problem is that you can't go outside of PCIe specs with it: 6+6 = 225 watts MAX.

An 8+6 version would have bumped that up to 300 W of TDP available for boost.

8+8 would make 375 W of TDP available for boost.

The problem isn't the boost; it's more the limitation of 6+6 and being mandated to stay within its specs.

Would it have been hard for Nvidia to make it an 8+6 or 8+8 pin design? Or are we seeing a new market where it's not just performance selling for X amount of money but also the max allowed TDP, meaning we could see a GTX 685 soon, or the so-called GK110?
 

Soccerman06

Diamond Member
Jul 29, 2004
5,830
5
81
The problem isn't the boost; the problem is that you can't go outside of PCIe specs with it: 6+6 = 225 watts MAX.

An 8+6 version would have bumped that up to 300 W of TDP available for boost.

8+8 would make 375 W of TDP available for boost.

The problem isn't the boost; it's more the limitation of 6+6 and being mandated to stay within its specs.

Yes, because no card has ever gone past the 375W hard limit. Just because PCIe specs call for a max power draw of 75W doesn't mean it can't and won't draw more power; see the 6990: http://www.anandtech.com/show/4209/amds-radeon-hd-6990-the-new-single-card-king/5 If you look at the total system power draw there, it shows a change from 550W to 684W with OC. The author (Ryan Smith) goes on to say the card draws more than 500W itself. I wouldn't put too much into the PCIe slot limit of 75W.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Would it have been hard for Nvidia to make it an 8+6 or 8+8 pin design? Or are we seeing a new market where it's not just performance selling for X amount of money but also the max allowed TDP, meaning we could see a GTX 685 soon, or the so-called GK110?

No, the reference PCB that is being shipped has 6+8+6 pinouts soldered onto it.

I assume they limited it because of GK100, but there is really no way to know. "Soon" would be relative; at this point it's more a question of IF. We know they'll have to make GK100 for workstations; the "if" comes from whether it will ever come to desktops.

GF110 isn't that much less efficient than GF114, but it is less efficient. And with GK104 getting such a high TDP for a mid-range card, it doesn't leave much room for Big K unless Nvidia is willing to push past a 250 W TDP on a single-GPU card.


I think there are a lot of factors at play here, including fab/competition. GK104 sitting at $500 is the best thing that could have happened for Nvidia. A small-die GPU offers better cost/profit ratios than their previous-generation 500+mm2 dies did... They also get more dies per wafer and are supply-limited both at retail and at the fab currently; however, Nvidia still needs to refresh their workstation lineup and introduce cooler, more efficient cards for HPC.

Here's a pic off Newegg of a retail 680; look at the PCIe pinouts:

14-121-626-08.jpg


Yes, because no card has ever gone past the 375W hard limit.

This goes without saying. I'm running two 470s, 225W TDP with 6+6, at 57% overclocks; do you think I might be outside the 225W TDP limit, just a hair?

What you do and what they can do are two different things. Nvidia cannot release a card that by design exceeds TDP limits. That's a basic point of fact; what you and I do is of no concern since we're operating the cards outside of their design specs.
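(Rough sketch of why a 57% overclock is nowhere near the stock budget, using the usual dynamic-power rule of thumb P ∝ f·V². The 215 W / 607 MHz figures are the GTX 470's stock specs; the voltages are assumptions and leakage is ignored, so treat the output as illustrative only.)

```python
def scaled_power(base_power_w, base_mhz, oc_mhz, base_volt, oc_volt):
    """Dynamic-power approximation: P scales with frequency and voltage squared."""
    return base_power_w * (oc_mhz / base_mhz) * (oc_volt / base_volt) ** 2

# GTX 470 stock: ~215 W TDP at 607 MHz; a 57% overclock is roughly 950 MHz.
# The stock and overclocked voltages below are assumed values for illustration.
print(f"~{scaled_power(215, 607, 950, 1.00, 1.15):.0f} W per card")  # ~445 W, way past 225 W
```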
 
Last edited:

antef

Senior member
Dec 29, 2010
337
0
71
Honestly, it's not a great time to be in the $250 market, but I know some people are considering the GTX570: http://www.newegg.com/Product/Produc...CE&PageSize=20

It is faster than the 7850 for the same price, but it uses much more power. Which way to go on that is up to you.

The problem I think so many people have with the 7850 is that it's basically no faster than a 6950 but priced at about the same level - if you'd wanted that price/performance, you could already have gotten it.

And you say the 560Ti is "too old," but at $190AR for a nice version, it's actually a pretty good deal: http://www.newegg.com/Product/Produc...82E16814127565

You could always split the difference on this EVGA 560ti-448 FTW for $240AR: http://www.newegg.com/Product/Produc...82E16814130738. It would probably beat a stock 570 in performance.

Thanks. I do want low noise, which typically translates into low power; that's one of the reasons I'm not sure I want a last-gen card if it uses a lot more power. My main problem with the 560 Ti, even at that sub-$200 price, is the 1 GB of memory. Even though I don't play RPGs like Skyrim, seeing the performance differences between the 560 Ti and 7850 in that game, possibly mostly due to memory, makes me think I'll want 2 GB soon (since I'd probably keep the card at least 3 years). The 570 and 560-448 are a little better with 1280 MB but still not 2 GB.

Looking at some other benches besides BF3, the 7850 and 7870 aren't THAT bad, but still not the generational leap we would've liked, as you mentioned.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
No, the reference PCB that is being shipped has 6+8+6 pinouts soldered onto it.

I assume they limited it because of GK100, but there is really no way to know. "Soon" would be relative; at this point it's more a question of IF. We know they'll have to make GK100 for workstations; the "if" comes from whether it will ever come to desktops.

GF110 isn't that much less efficient than GF114, but it is less efficient. And with GK104 getting such a high TDP for a mid-range card, it doesn't leave much room for Big K unless Nvidia is willing to push past a 250 W TDP on a single-GPU card.


I think there are a lot of factors at play here, including fab/competition. GK104 sitting at $500 is the best thing that could have happened for Nvidia. A small-die GPU offers better cost/profit ratios than their previous-generation 500+mm2 dies did... They also get more dies per wafer and are supply-limited both at retail and at the fab currently; however, Nvidia still needs to refresh their workstation lineup and introduce cooler, more efficient cards for HPC.

Here's a pic off Newegg of a retail 680; look at the PCIe pinouts:

14-121-626-08.jpg




This goes without saying. I'm running two 470s, 225W TDP with 6+6, at 57% overclocks; do you think I might be outside the 225W TDP limit, just a hair?

What you do and what they can do are two different things. Nvidia cannot release a card that by design exceeds TDP limits. That's a basic point of fact; what you and I do is of no concern since we're operating the cards outside of their design specs.

Good information and yup that picture tells a story for sure.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
The most interesting part is the extra paints so I could make my card colorful.... DO WANT :sneaky:
 

Lepton87

Platinum Member
Jul 28, 2009
2,544
9
81
Any word on a dual-GK104 card? This GPU is just awesome for such a card; they could cram two of them in unchanged and still not go above the GTX 590's TDP. I can't imagine how AMD can produce anything remotely competitive with a dual GK104.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Honestly, it's like reading a nvidia marketing pamphlet. :p

That's funny, you accuse me of providing an "NV marketing post" while you are the one missing the entire picture here - a more efficient, cheaper card is faster. You can discount the review I linked if you don't like its results, but that review is not the only one which shows that in certain games the 680 leads by 20-30%. Furthermore, their card reached 1300mhz on reference cooling, and so did Xbitlabs' card. That's at least 2 samples that have achieved >1300mhz overclocks. Many reviewers were able to achieve 1180-1250mhz as well. Regardless of how "cherry-picked" it could have been, I haven't seen any HD7970 accomplish 1300mhz on air. In fact 2 out of 3 Lightning 7970s couldn't even get to 1250mhz on air.

In fairness, if we are going to compare the best 7970 card vs. a reference 680, it makes sense to wait until after-market 680s arrive. There is little doubt that the HD7970 will be left in the dust completely once the DirectCU, Windforce 3X, Lightning and Classified editions of those cards launch. The fact that we are even discussing a $600 Lightning card vs. a reference $500 card is in itself telling of how great the 680 is.

It's interesting how last generation the HD6970 was 90% as good as the GTX580 and you claimed it offered amazing bang for the buck at $370, yet now the HD7970 offers 90% of the performance of a 680 and it's OK that it's priced at $500? The 2nd-fastest card should always cost less unless it offers some unique features worth paying a premium for.

You say 3GB is a key advantage but so far no benchmarks have shown this to be true. I'd much rather recommend someone spends $500 on a card with proper working drivers and proper working features (h.264 encoding):

GTX-680-92.jpg

Gugila-GroundWiz-Alpine_DX11_Benchmark.jpg


The HD7970's 3GB didn't really help it in Skyrim, where the 680 handily won on 3 monitors.

Previously you claimed that HardOCP focusing on newer games is preferable since testing older games is irrelevant. That was before all the 680 reviews came out. Now the only way the HD7970 even manages to get tangible wins is when 2-4 year old games are used in the reviews: Crysis Warhead, Metro 2033 and AvP. Do those games matter more than BF3, Dirt 3 and Skyrim? Maybe to some gamers, but probably not to most.

I don't believe that a reference HD7970 can be justified even at $500.

If you don't like the Bjorn3D review, look at any other notable ones: Hardware Canucks, ComputerBase, AnandTech and even your #1 source to go to - HardOCP. GTX680 wins in all of them.

GTX-680-82.jpg


"We could prattle on and on extolling the GTX 680’s virtues but here’s what really matters: NVIDIA’s newest flagship card is superior to the HD 7970 in almost every way. Whether it is performance, power consumption, noise, features, price or launch day availability, it currently owns the road and won’t be looking over its shoulder for some time to come." ~ Hardware Canucks

In their review, the GTX680 was 16% faster at 1080P 4AA and 15% faster at 2560x1600 4AA. The HD7970 needs to cost at most 90% of the GTX680 -- that's $450. And that's being generous, since people always pay a premium for the fastest single GPU, especially one that's better on most other metrics too: power consumption, noise, the longer 3-year warranty most 680s carry, working H.264 encoding, a new TXAA mode that might be good (or maybe marketing), etc.
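Put as a quick calculation (Python), using the ~15% lead from the Hardware Canucks numbers above and the $500 MSRP; rounding up to 90%/$450 is, if anything, generous to the 7970:

```python
gtx680_price = 500
gtx680_lead = 1.15                      # 680 ~15% faster at 2560x1600 4AA

hd7970_relative_perf = 1 / gtx680_lead  # ~0.87 of the 680's performance
parity_price = gtx680_price * hd7970_relative_perf
print(f"perf/$ parity at about ${parity_price:.0f}")  # ~$435, i.e. under 90% of $500
```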

I find it odd that you are trying to defend 7970's pricing and yet accuse me of spitting out NV "marketing pamphlet post". Last time I checked lower prices are better for consumers and here you are advocating that HD7970 doesn't need to fall in price by more than $50. It is a rather interesting position esp. after you've promoted AMD's bang for the buck philosophy for years. Yet now you think consumers should pay the same for a card that needs to be overclocked to match guaranteed performance. I don't see that as a reasonable expectation since overclocking is always just a nice bonus, while performance out of the box is guaranteed. Actually part of the enthusiasm behind overclocking is to get a card that's cheaper and performs as fast as a higher SKU part. In this case, a reference card is faster than a 1050mhz 7970 and that performance is guaranteed for everyone.

Since you already stated you won't support NV because of their business practices, thank you for finally admitting that you prob. won't buy NV products in the first place. If you prefer AMD cards for any reason, there is nothing wrong with that. Plenty of posters buy NV cards for Folding@Home for example. However, by more or less stating that you won't support NV as a company due to their business practices, don't expect most posters to view your opinion as objective when it comes to videocard recommendations.
 
Last edited:

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
snip


It's amusing you are trying to defend the 7970's pricing and yet accuse me of spitting out NV "marketing information". Last time I checked, lower prices are better for consumers, and here you are advocating that the HD7970 shouldn't fall in price by more than $50. Interesting position you've taken after promoting bang for the buck 5850/6950/6970 for years and years. :rolleyes:

But since you already stated you won't support NV for their business practices, thanks for finally coming out admitting that you won't buy NV products in the first place. If you prefer AMD cards for any reason, that's great, but then don't expect people to view your opinion as objective.

Some comments there have been a long time coming, RS, thanks for that!
 
Last edited:
May 13, 2009
12,333
612
126
If I were Nvidia, I'd pay Russian. :biggrin: That is a very well-thought-out and cleverly worded argument that's dead on as to why the GTX 680 is an absolute no-brainer over the 7970 at current pricing.