GTX 680 really a 660 Ti?


Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Even though this is basically a 660 Ti, if it were labeled a 660 Ti, I don't think the memory clock would be 6 Gbps or the GPU clock would be as high. I also believe it would have fewer CUDA cores. I think they took the 660 Ti, beefed it up, and relabeled it the 680.

I believe they beefed up the clocks significantly, but not so much on the core side.
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
They might have beefed up the memory to make it faster than the 7970, but the core is what we're really talking about.

This is why NV just sat back and tuned the GK104 up to be stable, run cool, and still be fast enough to beat the 7970.

The real 680 is now going to cost us $799 when it comes out, though, so get ready to pay through the nose for that card, with more cores and a wider memory bus.
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
I believe they beefed up the clocks significantly, but not so much on the core side.

They had no choice, since the card is bottlenecked by its 256-bit bus.

When they first made this card it was supposed to be a mid-range runner, and once they saw that they could match or beat the 7970 by upping the memory speed, they did just that and relabeled it.

We would all do the same thing. Just imagine what this card would do with 3 GB of RAM and a 384-bit bus, lol (see the quick math below).

Edit: we will know once the real 680 hits the streets!!!
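For a rough sense of the bandwidth gap, here's a back-of-the-envelope sketch in Python (using the published launch specs; the 384-bit GK104 line is purely hypothetical):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s)
def mem_bandwidth_gbs(bus_bits: int, data_rate_gtps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gtps

print(mem_bandwidth_gbs(256, 6.0))  # GTX 680: 256-bit @ 6.0 GT/s  -> 192 GB/s
print(mem_bandwidth_gbs(384, 5.5))  # HD 7970: 384-bit @ 5.5 GT/s  -> 264 GB/s
print(mem_bandwidth_gbs(384, 6.0))  # hypothetical 384-bit GK104   -> 288 GB/s
```

Even with its faster GDDR5, the 680's 256-bit bus leaves it well short of the 7970's raw bandwidth, which is why the bottleneck argument keeps coming up.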
 

Rvenger

Elite Member
Super Moderator
Video Cards
Apr 6, 2004
6,283
5
81
OP, the short answer is yes. Anyone who doesn't think the GTX 680 was derived from the GTX 560 (aka GF114) needs to read this: http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/2 and the page after that one. It is abundantly clear, especially with the lack of HPC performance (not just disabled after the fact--the HPC performance wasn't there to begin with!).

/thread


Just because it's derived from the 560 doesn't mean it's exactly a midrange card. I mean, something about the 560's design must have been appealing for them to build on it and tweak out better-than-580 performance with much less power consumption. Just a die shrink of the 580 would still mean 200+ watts of power consumption.
 

Edrick

Golden Member
Feb 18, 2010
1,939
230
106
Bottom line: the 680 is a 680. It is now a higher-end card than the GF114 was because Nvidia wanted it to be. But it is not the highest-end Kepler GPU; that will come later as GK110.

Question answered.
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
It's hard to know for sure, but I would say the current GTX 680 is more of a spiritual successor to the GTX 560 Ti. We don't know if GK100 was having yield problems or something else that might have delayed it, but we do know that the GTX 680 is the current Nvidia flagship card and that they don't expect GK100/110 to be ready within the next few months. GK100/110 might launch in Q3 2012, from what I have been reading/hearing.

/speculation
 

kidsafe

Senior member
Jan 5, 2003
283
0
0
So I heard several rumors about this and found several people posting about it. Is it true that Nvidia released their GTX 660 Ti (or other lower-end card) as a 680 because they didn't consider ATI to offer enough competition?
I think what happened is they decided to bump up the clocks of their GK104 parts considerably when they realized "BigK" would not be ready in time. They simply did not want AMD to have the performance crown for that long. It presents an interesting situation for Nvidia now.

I imagine a GK104-based GTX 670 with 1440 shaders and maybe 112 texture units will make an appearance at between US$400 and $450. Then maybe a GTX 660 Ti with 1344 shaders and 96 texture units at $350. And beyond that, a GTX 660 with 1280 shaders, 80 texture units, and 24 ROPs around $280.

It's weird because we don't know if there's anything in between the low-end GK107 and what originally should have been the midrange GK104. I imagine there will be two GK107 parts at $150 and $200 respectively to round things out for the real Kepler parts. The absolute low end will be rebranded previous-generation GPUs.
 
Last edited:

MrTeal

Diamond Member
Dec 7, 2003
3,919
2,708
136
Even though this is basically a 660 Ti, if it were labeled a 660 Ti, I don't think the memory clock would be 6 Gbps or the GPU clock would be as high. I also believe it would have fewer CUDA cores. I think they took the 660 Ti, beefed it up, and relabeled it the 680.

I think you vastly underestimate what it takes to "beef up" a GPU with more CUDA cores.
 

kidsafe

Senior member
Jan 5, 2003
283
0
0
They had no choice, since the card is bottlenecked by its 256-bit bus.
This, I suspect, is going to be largely the case after both cards are OC'd to their fullest potential. In the LinusTechTips video, a 1125MHz/1575MHz 7970 was ever so slightly behind a 1250MHz/1500MHz GTX 680. Assuming both can attain similar max clocks on core and memory, I think the HD 7970 wins just about any benchmark that doesn't utilize FXAA.
 

sandorski

No Lifer
Oct 10, 1999
70,861
6,396
126
Perhaps Nvidia originally intended it to be a lower card, but that's been rendered moot since it is being sold as a 680. That could be for numerous reasons: perhaps AMD didn't deliver what Nvidia expected them to, or perhaps the original 680 GPU still has issues that couldn't be fixed within a reasonable time period.

Either way, constantly calling the 680 a 660 (or other such comparisons) has already grown old, and it has been <24 hours since it started. AMD and Nvidia have their cards on the table; make your choice based on what's available.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
So I heard several rumors about this and found several people posting about it. Is it true that Nvidia released their GTX 660 Ti (or other lower-end card) as a 680 because they didn't consider ATI to offer enough competition?

Yes, no, sorta. To say that they released the GTX 660 Ti "as a 680 because they didn't consider [AMD] to offer enough competition" is flamebait and pushing the issue. Yes, it's got more in common with the GTX 560 Ti than the GTX 580 (except for performance, heh), but it's also a 195W+ card (pretty high for a 28nm card, especially a supposedly "midrange" one, and about 20+W higher than the GTX 560 Ti), with ridiculously fast GDDR5 memory and high clock speeds that it probably wouldn't have gotten if Nvidia was releasing an even higher-specced card simultaneously.
------

The GTX 680 is indeed a derivative/relative of the GTX 560 Ti. Its 294mm² die size and 256-bit bus highlight its relationship to the 324mm²/256-bit GTX 560 Ti, and less so to the massive 520mm²/384-bit GTX 580.

However, the current GTX 680 is more a reflection of Nvidia addressing issues with its current GTX 570/580 (massive die size, huge TDP). Similarly, ATI went in the opposite direction from their "small-die" strategy and beefed up their 7950/7970 to be more competitive with Nvidia. Both companies had weaknesses that they shored up by essentially emulating strengths of their competitors. Nvidia has become the current performance-per-watt champion in a big way with the GTX 680; a very savvy move to make at this time.
 
Last edited:

Rvenger

Elite Member <br> Super Moderator <br> Video Cards
Apr 6, 2004
6,283
5
81
I think you vastly underestimate what it takes to "beef up" a GPU with more CUDA cores.

I may have, but how do I know? This is all just speculation; I am not making any statements.
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
Well, the rumors are that the big Kepler chip is as fast as 3 GTX 580s, and there was a guy on XtremeSystems who said last month he had a low-end Kepler and it was faster than the 7970.

They can easily beef up the GK104 core and give it 384-bit memory, so who knows what NV has waiting for the dual 7970.

From the looks of the card that just came out, I'm willing to bet a single big Kepler GPU will go head-on with two 7970s.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
Just because it's derived from the 560 doesn't mean it's exactly a midrange card. I mean, something about the 560's design must have been appealing for them to build on it and tweak out better-than-580 performance with much less power consumption. Just a die shrink of the 580 would still mean 200+ watts of power consumption.

Please just stop and read the 2 pages. Really. Read them.

The chip is physically different from what you'd expect from a 28nm GF100-style chip.

Parts are MISSING, not merely disabled. Physically. Like the GF114.
 

kidsafe

Senior member
Jan 5, 2003
283
0
0
Please just stop and read the 2 pages. Really. Read them.

The chip is physically different from what you'd expect from a 28nm GF100-style chip.

Parts are MISSING, not merely disabled. Physically. Like the GF114.
You guys are arguing in circles. It's semantics. The GTX 680 is a GTX 680. It's the fastest single GPU Nvidia offers and it beats a stock HD 7970. It's also priced at $500. "BigK" has not made an appearance and likely will not for another half year.

So you can argue that if it looks like a duck ("680" moniker), swims like a duck (gaming performance), and quacks like a duck (price), then it probably is a duck (a high-end card).

That's not to say it doesn't share underlying similarities with GF114. Obviously it doesn't have the FP64 performance, or the FP64 transistors at all. Obviously it has a smaller die; you'd expect a 560 Ti etched on a 28nm process to be even smaller than 294mm², closer to 260mm². The fact is, even if you count the 680's shaders in pairs, it has more pairs than the GTX 580 has individual shaders. It has 128 texture units like a proper high-end card, but it also has 32 ROPs like a 560 Ti.

I see no issue with calling the GTX 680 Nvidia's high-end card right now, just as I see no issue with calling "BigK" Nvidia's high-end card 6 months from now. See how that works? It doesn't matter how the GTX 680 got there...just that it's there, firmly at the top for a while yet.
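To make the "count the shaders in pairs" arithmetic explicit, here's a minimal sketch (core counts are the published specs; the 2-for-1 pairing is the rough equivalence discussed above):

```python
# Kepler dropped Fermi's hot clock, so roughly 2 Kepler CUDA cores
# do the per-clock work of 1 Fermi core. Count GK104's shaders in pairs:
GK104_CORES = 1536   # GTX 680
GF110_CORES = 512    # GTX 580

kepler_pairs = GK104_CORES // 2
print(kepler_pairs)                # 768 "Fermi-equivalent" units
print(kepler_pairs > GF110_CORES)  # True: more pairs than the 580 has cores
```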
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
You guys are arguing in circles. It's semantics. The GTX 680 is a GTX 680. It's the fastest single GPU Nvidia offers and it beats a stock HD 7970. It's also priced at $500. "BigK" has not made an appearance and likely will not for another half year.

So you can argue that if it looks like a duck ("680" moniker), swims like a duck (gaming performance), and quacks like a duck (price), then it probably is a duck (a high-end card).

That's not to say it doesn't share underlying similarities with GF114. Obviously it doesn't have the FP64 performance, or the FP64 transistors at all. Obviously it has a smaller die; you'd expect a 560 Ti etched on a 28nm process to be even smaller than 294mm², closer to 260mm². The fact is, even if you count the 680's shaders in pairs, it has more pairs than the GTX 580 has individual shaders. It has 128 texture units like a proper high-end card, but it also has 32 ROPs like a 560 Ti.

I see no issue with calling the GTX 680 Nvidia's high-end card right now, just as I see no issue with calling "BigK" Nvidia's high-end card 6 months from now. See how that works? It doesn't matter how the GTX 680 got there...just that it's there, firmly at the top for a while yet.

We agree with everything you are saying...all we are trying to say is this card was not supposed to be NV's top-end card.

What the heck are they going to call the real 680 when it comes out? If I was NV I would have called it a 660 Ti just to bust AMD's balls and to let them know they have a monster waiting.

The OP asked if it's a relabeled GPU, and it is: it's 100% not a real 680, just called one because it beats AMD's top-end GPU.

Don't forget it's on a 256-bit bus, also.
 
Last edited:

kidsafe

Senior member
Jan 5, 2003
283
0
0
Well you're arguing hypotheticals, which is pointless. At one point it was labeled a GTX 670 Ti, but from what we can tell it was never internally known as a GTX 660 [Ti].

What will Nvidia call BigK? Who knows... probably the GTX 780, but I'm secretly hoping both the Radeon and GeForce brand names get retired at some point. They've beaten those names to death over a decade. Surely they can come up with a new brand and start all over again? It might be harder for AMD, since they now even have Radeon-branded SDRAM...

Besides, I personally think it would have been a horrible idea to call it the GTX 660 Ti. The performance is there, the price is there... they may as well convince the casual buyer that it's the high-end card so they buy it. Nvidia is surely getting more dies per wafer than AMD at a similar price point, so margins will be slightly better.

It's basically the lovechild of GF114 and GF110, but it's capable of being sold as a halo product. That's a win-win situation and allows Nvidia to trickle GK104 down to GTX 670/660 Ti/660-level cards over time.
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
You guys are arguing in circles. It's semantics. The GTX 680 is a GTX 680. It's the fastest single GPU Nvidia offers and it beats a stock HD 7970. It's also priced at $500. "BigK" has not made an appearance and likely will not for another half year.

So you can argue that if it looks like a duck ("680" moniker), swims like a duck (gaming performance), and quacks like a duck (price), then it probably is a duck (a high-end card).

That's not to say it doesn't share underlying similarities with GF114. Obviously it doesn't have the FP64 performance, or the FP64 transistors at all. Obviously it has a smaller die; you'd expect a 560 Ti etched on a 28nm process to be even smaller than 294mm², closer to 260mm². The fact is, even if you count the 680's shaders in pairs, it has more pairs than the GTX 580 has individual shaders. It has 128 texture units like a proper high-end card, but it also has 32 ROPs like a 560 Ti.

I see no issue with calling the GTX 680 Nvidia's high-end card right now, just as I see no issue with calling "BigK" Nvidia's high-end card 6 months from now. See how that works? It doesn't matter how the GTX 680 got there...just that it's there, firmly at the top for a while yet.

Analogy: say Abe and Cain are King and Duke; they have some other relatives, but they are all lower-ranked. King Abe for whatever reason is having a tough time making a baby (low yield of sperm?). Duke Cain's wife pops out a son, and for several months it seems as though Duke Cain's kid is going to end up being the heir to the throne by default. During this time that kid may be treated as though he were the prince. But the moment King Abe makes a baby, that baby is going to be deemed the true heir to the throne--not Duke Cain's son, no matter how much the Duke's son looks and acts like a prince.

The GTX 680 is Duke Cain's son, and is going to be treated much like a prince by default, since King Abe's low-yield sperm is taking forever to make an heir. It may end up being the case that the King never sires an heir, and in the meantime the Duke's son becomes the de facto prince as more and more people give up hope that the King will ever sire one. But we don't know for sure that the King will never sire an heir, and until then it's disingenuous to crown the Duke's son a prince, so long as there is a possibility that the King sires a true heir.

What you are rambling on about re: die size is incorrect (again: shader clocks; read the 2 pages, specifically the portion about die space). Obviously you need to read those 2 pages as well to understand why it isn't as simple as shrinking GF114 down to 28nm. Two words: shader clock.

Yes, of course a GTX 680 is called such and is marketed by NV to be such, but it is clearly derived not from a GF110 but a GF114, so it is perfectly understandable why people feel like it's just a souped-up GF114 descendant (gussied up with more-than-2-way SLI ability and fast RAM, but note it's still only a 256-bit card and physically derived from a GF114) and not the true heir to the GF110.

The bulk of what you say I've already said a bazillion times myself, and I doubt there is much disagreement on this: the GTX 680 is marketed and priced as the top-end card. Further, it is apparently the fastest Kepler available and will continue to be so for a long time (several months minimum); even when BigK comes out, it will come out so late that it might as well be considered a refresh rather than a part of the same "generation." There is no disagreement here. You're beating a straw man.

But that doesn't change the fact that it is clearly a GF114 derivative--not GF110--and offers a much smaller performance increase over the previous top-end card, and it's obviously based on the previous midrange card and lacks HPC characteristics. Thus it invites comparisons to last-gen midrange placement.

So yes it is basically a GTX 660 Ti that's been souped up and clocked high, no doubt affecting yields badly, which is probably part of why there is such low availability of the GPU right now.

That is nothing to be ashamed of; NVDA can gloat that their souped-up midrange card trades blows with the rival top-end card. (They are about as fast when both are overclocked and the 7970 gets a little voltage to match up with the overvolting that GPU Boost does, but the GTX 680 is more power efficient and cheaper to boot.)

Things do not have to be one way or another. Both things can be true. Obviously this wannabe-princeling card is a GTX 680 in many respects but it is also obviously derived from the loins of a midrange Duke and not the King; when mapping GK104 onto the GF11x family tree, it is obviously the GF114 analog.

The end.
 
Last edited:

kidsafe

Senior member
Jan 5, 2003
283
0
0
I never did say the GK104 was based on GF110. From the looks of it, you are so mad you didn't take the time to read what I said. I mentioned that it takes 2x as many shaders in Kepler to perform the same task as in Fermi. Because GF114 only has 384 shaders, 1536 is in fact 4x the number of shaders.

2 Kepler shaders (or CUDA cores) occupy more die area than 1 Fermi shader. We also know that GK104 has 4x the total shaders that GF114 has. Somehow Nvidia managed to squeeze double the number of these larger "shader pairs" into a die area only about 50mm² larger than a 28nm GF114 would have been. That is a technical marvel, and it shows how monstrous GK104 actually is in terms of pure shader performance per area. If we were merely talking about an evolution of a GF110-sized die, it would be a 1024-shader core with more TUs and ROPs. It would be a different beast, and arguably slower.

But that's the point, isn't it? Think of GF110 as a naturally aspirated 8-liter V10 engine. It's quite powerful, but it just got replaced by a turbocharged 6-liter W12 (two merged 3L V6s using a common crankshaft) that absolutely smokes it.
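Putting rough numbers on that, here's a back-of-the-envelope sketch using approximate launch clocks (Fermi shaders run at a 2x hot clock, Kepler shaders at the core clock):

```python
def peak_fp32_gflops(cores: int, shader_clock_ghz: float) -> float:
    """Peak FP32 throughput: cores * shader clock * 2 (an FMA counts as 2 FLOPs)."""
    return cores * shader_clock_ghz * 2

# GF114 (GTX 560 Ti): 384 cores at a ~1.645 GHz hot clock (2x the ~822 MHz core clock)
print(peak_fp32_gflops(384, 1.645))   # ~1263 GFLOPS
# GK104 (GTX 680): 1536 cores at the ~1.006 GHz base clock (no hot clock)
print(peak_fp32_gflops(1536, 1.006))  # ~3090 GFLOPS
```

So 4x the shaders buys roughly 2.4x the peak FLOPS once the lost hot clock is accounted for.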
 
Last edited:

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
If I were mad, I wouldn't have cooked up a silly Duke and King story for your entertainment. I don't really care about this topic, as it's all academic now, and your statement is still wrong where you tried to talk up the GK104 by saying its die size proved it's the high-end GPU. No, sorry; read the AT review. Your Duke's son is not the prince though he be of noble blood, and if the King's low-yield sperm continues to be a problem, the Duke's son might as well be the new King anyway. (Well, either him or his Siamese-twin brothers, assuming they can fit under the 300-watt TDP limit.) Anyway, I'm out of here, as my download of Wing Commander Saga just completed (finally!). Time to put my GPU to good use. Good night.

Edited to add: for those who are interested, they made a derivative of the Wing Commander series. Free. http://www.wcsaga.com/ It just came out today.

I never did say the GK104 was based on GF110. From the looks of it, you are so mad you didn't take the time to read what I said. I mentioned that it takes 2x as many shaders in Kepler to perform the same task as in Fermi. Because GF114 only has 384 shaders, 1536 is in fact 4x the number of shaders.

2 Kepler shaders (or CUDA cores) occupy more die area than 1 Fermi shader. We also know that GK104 has 4x the total shaders that GF114 has. Somehow Nvidia managed to squeeze double the number of these larger "shader pairs" into a die area only about 50mm² larger than a 28nm GF114 would have been. That is a technical marvel, and it shows how monstrous GK104 actually is in terms of pure shader performance per area. If we were merely talking about an evolution of a GF110-sized die, it would be a 1024-shader core with more TUs and ROPs. It would be a different beast, and arguably slower.

But that's the point, isn't it? Think of GF110 as a naturally aspirated 8-liter V10 engine. It's quite powerful, but it just got replaced by a turbocharged 6-liter W12 (two merged 3L V6s using a common crankshaft) that absolutely smokes it.
 
Last edited:

kidsafe

Senior member
Jan 5, 2003
283
0
0
My point about die size is that, unlike a strict derivation of GF114, they doubled the number of equivalent functional units. The result was a fairly notable increase in die size... It could have been more, but Nvidia finally managed a supremely high transistor count per area, just ever so slightly less than Pitcairn.

It sits firmly in between GF114 and GF110 in terms of 28nm scale, and it has 50% more shaders than a Kepler analog of GF110 would. If anything it's a single-package GTX 660 Ti X2, hence the automotive analogy.
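The 50% figure works out like this (a minimal sketch, assuming the rough 2-Kepler-cores-per-Fermi-core equivalence from earlier in the thread):

```python
# A hypothetical "Kepler analog of GF110" would need 2x the cores for the
# same per-clock throughput, since Kepler shaders have no hot clock.
GF110_CORES = 512
kepler_analog_cores = GF110_CORES * 2     # 1024 Kepler cores
GK104_CORES = 1536
print(GK104_CORES / kepler_analog_cores)  # 1.5 -> 50% more shaders than the analog
```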
 

Chris_82

Junior Member
Mar 22, 2012
1
0
0
GK104 is a midrange design. It was never meant to compete in the high-end space; that's why it doesn't have any compute features, because NV always uses their best chips for Tesla cards. That's also why it only has a 256-bit memory bus. Because NV flopped with GK100 and Tahiti wasn't very impressive, they realized that with the right clocks their midrange chip could compete with Tahiti just fine, so they slapped a high-end price on a midrange design. And they can get away with it because it actually outperforms Tahiti in most cases.

Completely agree. It was just luck that this card matched (some would say bettered) AMD's top offering at the time, so they could then push it out as a top-end rather than mid-range option.

It's a pretty simple and obvious conclusion to what has happened. With all the evidence across multiple sites and such, I don't see why this keeps coming up for debate, tbh...
 

Smoblikat

Diamond Member
Nov 19, 2011
5,184
107
106
Well you're arguing hypotheticals, which is pointless. At one point it was labeled a GTX 670 Ti, but from what we can tell it was never internally known as a GTX 660 [Ti].

What will Nvidia call BigK? Who knows... probably the GTX 780, but I'm secretly hoping both the Radeon and GeForce brand names get retired at some point. They've beaten those names to death over a decade. Surely they can come up with a new brand and start all over again? It might be harder for AMD, since they now even have Radeon-branded SDRAM...

Besides, I personally think it would have been a horrible idea to call it the GTX 660 Ti. The performance is there, the price is there... they may as well convince the casual buyer that it's the high-end card so they buy it. Nvidia is surely getting more dies per wafer than AMD at a similar price point, so margins will be slightly better.

It's basically the lovechild of GF114 and GF110, but it's capable of being sold as a halo product. That's a win-win situation and allows Nvidia to trickle GK104 down to GTX 670/660 Ti/660-level cards over time.

The price is certainly not there. $500 for a presumably $350 card is hardly fair.

As a side note, imagine if the 600 series had the per-core performance of the 500/400 series. It takes the 680 triple the number of cores to beat a 580. Now THAT card would be impossible to beat.
 

kidsafe

Senior member
Jan 5, 2003
283
0
0
The price is certainly not there. $500 for a presumably $350 card is hardly fair.

As a side note, imagine if the 600 series had the per-core performance of the 500/400 series. It takes the 680 triple the number of cores to beat a 580. Now THAT card would be impossible to beat.
Again, you are presuming that this was supposed to be a $350 card...based on what, S|A, OBR, Fudzilla? Sites that make up rumors for ad revenue? I also don't get what you're trying to say about per-core performance. It's not magic...you can't just conjure up performance out of nowhere. Nvidia gave up hot clocks so that they could devote more die area to CUDA cores while shrinking the front-end and back-end units. This means that while each CUDA core does half as much, each "SM" devotes a lot more transistor space to them as well, without as much scheduling overhead.

Remember, this GPU has twice the CUDA cores because it needs to, and then it has twice that number again because Nvidia basically decided, "hey, let's double the performance." As a result, the GTX 680 is literally twice as powerful as what I would have guessed a proper GTX 660 Ti would have been like...without the trouble associated with SLI (separate banks of memory, separate memory controllers, a PCIe multiplexer, additional VRMs, etc.).
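As a rough sanity check on the per-core point, here's a sketch with approximate launch clocks (illustrative, not official figures):

```python
# Per-core throughput with vs without the hot clock (approximate launch specs).
GPUS = {
    "GTX 580": dict(cores=512,  shader_ghz=1.544),  # Fermi: shaders at 2x core clock
    "GTX 680": dict(cores=1536, shader_ghz=1.006),  # Kepler: shaders at core clock
}

for name, gpu in GPUS.items():
    gflops = gpu["cores"] * gpu["shader_ghz"] * 2   # FMA counts as 2 FLOPs
    print(f"{name}: {gflops:.0f} GFLOPS peak, {gflops / gpu['cores']:.2f} GFLOPS per core")
# 3x the cores buys about 2x the peak FLOPS: each Kepler core does half the
# per-clock work of a hot-clocked Fermi core and runs at a lower clock.
```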

If GF114 is a V6 and GF110 is a V10, then GK104 is either a V12 or a W12. It's simply more than a V10 no matter which way you cut it. It doesn't matter that it started out as a blueprint for a V6; it ended up being a V12/W12...basically two V6s in one (without the overhead of having two whole engine blocks and redundant components). As Jeremy Clarkson would say...it's simply more.

I'm not sure how you can call $500 unfair. It spanks the GTX 580, which was $500-550 just 3 months ago. It has 500M more transistors than GF110 and 1550M more than GF114. Both companies are also being gouged by TSMC along with the rest of the fabless chip designers.
 
Last edited: