
[SA] GK110 aka GTX 680 release date: Late Q3 '12

mobile (smartphone)? NV's Tegra line may not be the greatest moneymaker yet, but at least they even HAVE a mobile division, unlike AMD's which was sold off for cheap years ago.

And nVidia has lost about $280 Million in its Consumer Products Division since the 2008 fiscal year (the last fiscal year in which it made any money). Given how strong the competition was and is now, I can't see why anyone would think that exiting the mobile ARM market was anything but a good idea for AMD.
 
Nvidia took their lumps on the 480 release and are not going to rush out a high end part again and get the same treatment...

Release a mid-range card, get something on the market to talk about and get sales

...a delay till late Q3 2012 from nvidia

Agreed. I wonder if the GK104 will be fast enough to be called a 680 and compete with the 7970, then GK112 be labeled a 780 in September.
 
Well if this news is indeed true, it looks like my prediction that Nvidia would have their high end 28nm card out within 3 months of AMD's is WAY OFF. 🙁 It's a good thing silverforce didn't take that bet with me, otherwise I'd owe him a steam game of his choice. 🙂 I hope GK104 can bring some serious competition and pricing pressure to the table in the meantime, otherwise it's going to be a long year of high prices.

This entire post is ifs based on ifs based on internet rumors, so take it with a few grains of salt, but...

If GK104 arrives on time, and
If the GTX660 does indeed perform as well as a GTX580, and
If the GTX660 is priced at $320

it could help to bring down prices in both camps. Even without a GK110 part to compete at the top end, a card with GTX580 level performance for $320 would make a $450 HD7950 a complete loser. The HD7970 might still be viable for someone who absolutely needs the best single card, but even then you'd have to look long and hard at a single 7970 for $550 vs 660SLI for $640 and way more performance. If those predictions are true, prices should go down even without big K.
 
Another copy and paste from lenzfire 🙄

Rollo has contacted about 30 websites with that same link it appears. He's still fighting the good fight !


 
Well if this news is indeed true, it looks like my prediction that Nvidia would have their high end 28nm card out within 3 months of AMD's is WAY OFF. 🙁 It's a good thing silverforce didn't take that bet with me, otherwise I'd owe him a steam game of his choice. 🙂 I hope GK104 can bring some serious competition and pricing pressure to the table in the meantime, otherwise it's going to be a long year of high prices.

SA stated they've seen the die of gk104, it's around ~342mm2. Going from gtx580 -> gk104 is a full shrink as you are all aware. I'm going to estimate performance purely based on perf/mm2, which is not accurate but close enough. 550mm2 -> 275mm2 is a direct shrink, keeping things the same, nothing new added, ~similar performance (some headroom factored in vs speed increases).

gk104 being much bigger than that suggests this mid-range chip is a very good performer; it HAS to be significantly faster than a gtx580 or it's been seriously stuffed up. Now, NV engineers are not stupid, so I place my bets on gk104 being a good deal faster than gtx580. Leaks so far indicate late Q1 or early Q2 for this part, so it's not a huge delay (if it arrives in April).
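That perf/mm2 reasoning can be sketched in a few lines. All inputs below are the rumored figures from the post (SA's ~342mm2, a rough ~550mm2 for GF110), not confirmed specs, so treat the output as a back-of-envelope number only.

```python
# Back-of-envelope perf/mm2 estimate using the rumored figures from the post.
GF110_AREA = 550.0               # mm^2, GTX 580 die on 40 nm (approx., per the post)
SHRUNK_AREA = GF110_AREA / 2.0   # ~275 mm^2: same design after a full node shrink
GK104_AREA = 342.0               # mm^2, per the SemiAccurate rumor

# If a straight shrink of GF110 delivers GTX 580 performance in ~275 mm^2,
# then (by this admittedly crude area scaling) a 342 mm^2 die implies:
relative_perf = GK104_AREA / SHRUNK_AREA
print(f"Implied performance vs GTX 580: {relative_perf:.2f}x")  # ~1.24x
```

By that crude logic the extra ~67mm2 over a straight shrink would have to buy roughly 25% more performance, which is why the post expects GK104 to land clearly above the GTX 580.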
 
"Why didn't they clock them higher FROM THE BEGINNING? :awe:" is a horrid argument that falls apart the second you consider the existence of GF110. That said, there are plenty of arguments against a refresh that hold up.

You won't see TSMC go down a node for at least two more years, so burning all that power headroom so quickly would be stupid. The 7970 being 40% faster than the 6970, and the 8970 being 40% on top of that, is far better for product differentiation. Also, they seem to be aiming for HSA ASAP, so they can't afford to split resources between a refresh and Canary/Sea Islands.

Superclocked cards from AIB's yeah, but no refresh.
 
Perhaps you should be a tad more conservative with the math; it's hard to get more "theoretical" than assuming a straight halving of the mm2 for a die shrink. IMO, if it's in the low 300mm2 range then, barring edge cases, it should be orbiting the GTX580 performance level. It should clock better, but that might just offset the hit from a rumored 256-bit bus. My wallet will stay closed for 28nm if I can't get a card like that for around $300.

SA stated they've seen the die of gk104, it's around ~342mm2. Going from gtx580 -> gk104 is a full shrink as you are all aware. I'm going to estimate performance purely based on perf/mm2, which is not accurate but close enough. 550mm2 -> 275mm2 is a direct shrink, keeping things the same, nothing new added, ~similar performance (some headroom factored in vs speed increases).

gk104 being much bigger than that suggests this mid-range chip is a very good performer; it HAS to be significantly faster than a gtx580 or it's been seriously stuffed up. Now, NV engineers are not stupid, so I place my bets on gk104 being a good deal faster than gtx580. Leaks so far indicate late Q1 or early Q2 for this part, so it's not a huge delay (if it arrives in April).
 
I don't think anyone really cares about die size anymore. Nvidia having the bigger die has been the norm for what, 3 gens at least? At least since G80. So whatever it is, 290mm2 or 579mm2, it'll be ok guys.
 
Well if this news is indeed true, it looks like my prediction that Nvidia would have their high end 28nm card out within 3 months of AMD's is WAY OFF. 🙁 It's a good thing silverforce didn't take that bet with me, otherwise I'd owe him a steam game of his choice. 🙂 I hope GK104 can bring some serious competition and pricing pressure to the table in the meantime, otherwise it's going to be a long year of high prices.
Your prediction was 2 months and it expires March 9th.
Just in case you thought a bit of revisionist history would work.😉
 
I don't think anyone really cares about die size anymore. Nvidia having the bigger die has been the norm for what, 3 gens at least? At least since G80. So whatever it is, 290mm2 or 579mm2, it'll be ok guys.

Just because it's been done for the last three years/gens doesn't make it right. As a matter of fact, that makes it worse.

Three wrongs don't make a right. Huge dies are a nightmare when it comes to yields, manufacturing costs, heat dissipation/cooling, heat output, efficiency, and many other things.

But you wouldn't find that very relevant since you're, according to your sig, an "NVIDIA Focus Group member".
 
I care about die size...AMD chips are small and cute whilst NVDA chips are big ,have unwanted body hair and smell funny.()🙂

I heard ultrabooks don't get along with large power hungry dies? I heard that desktop PC shipments are down 20% compared to last year in the US and Europe? I heard dell's new PC shipments went down 35% this year? And Acer down 65%? Can anyone confirm or deny? I have no idea why AMD would want a small efficient die, someone help me understand! I really don't know why AMD or NV would want chips for ultrabooks

Sarcasm aside, maybe GK104 and below are their efficient chips. Shrug. It really does not make sense to have large power hungry die at this point in the game, unless GK110 is following a different design philosophy than the lower end GK parts.
 
Just because it's been done for the last three years/gens doesn't make it right. As a matter of fact, that makes it worse.

Three wrongs don't make a right. Huge dies are a nightmare when it comes to yields, manufacturing costs, heat dissipation/cooling, heat output, efficiency, and many other things.

But you wouldn't find that very relevant since, according to your sig, you're an "NVIDIA Focus Group member".

Who says it's wrong? It has no merit. Not with the success they have had regardless of die size. You can even see AMD dies getting bigger and bigger as gens progress. But it really just doesn't matter unless you REALLY want it to. And I believe you guys are MAKING yourselves want it to matter way more than it ever should or could. Believe it LOL. Hey, you never know. You might be secretly thanking me later for saying this. 😉
 
Who says it's wrong? It has no merit. Not with the success they have had regardless of die size. You can even see AMD dies getting bigger and bigger as gens progress. But it really just doesn't matter unless you REALLY want it to. And I believe you guys are MAKING yourselves want it to matter way more than it ever should or could. Believe it LOL. Hey, you never know. You might be secretly thanking me later for saying this. 😉

http://hexus.net/business/news/general-business/34929-pc-market-declines-20-warnings-trouble-2012/

Obviously the ultrabook market is going to be far more lucrative than desktop sales in the coming years, this is why efficiency matters. That said, I don't care about power use (within reason, I don't want a freakin GTX 480) for my desktop PC. But there are reasons why NV / AMD strive to have efficient chips.
 
http://hexus.net/business/news/general-business/34929-pc-market-declines-20-warnings-trouble-2012/

Obviously the ultrabook market is going to be far more lucrative than desktop sales in the coming years, this is why efficiency matters. That said, I don't care about power use (within reason, I don't want a freakin GTX 480) for my desktop PC. But there are reasons why NV / AMD strive to have efficient chips.

I think that is where Optimus tech comes in, and perhaps Tegra3, 4, 5, etc.
Don't get too fixated on one product though. They are but one of many.
 
I could really give two shits about die size and efficiency since I don't own either company's stock. I only care about price/performance and if Nvidia is willing to eat the costs of low yields and fat dies to give it to me, I don't really care. That's why I got my 5870, it was a great value to me at the time and it's why I'm not getting a 7970 now.
 
I could really give two shits about die size and efficiency since I don't own either company's stock. I only care about price/performance and if Nvidia is willing to eat the costs of low yields and fat dies to give it to me, I don't really care. That's why I got my 5870, it was a great value to me at the time and it's why I'm not getting a 7970 now.

I am down with this as well. I could give a rats for how big the die is or how much power it uses. I would care about how hot=loud it runs though, but I've decided to just watercool after dealing with my current setup's noise levels.

When the massive die comes at the cost of taking 6 months or more to compete with the competition leaving me with lack of choice as a buyer, I guess then I care about die size. Once it's out though, could care less, especially if some monster die can deliver obscene performance.

I think the 480 bugged so many people because it was pretty massive compared to Evergreen's die but not proportionately faster given the additional die size, power use and heat.
 
I am down with this as well. I could give a rats for how big the die is or how much power it uses. I would care about how hot=loud it runs though, but I've decided to just watercool after dealing with my current setup's noise levels.

When the massive die comes at the cost of taking 6 months or more to compete with the competition leaving me with lack of choice as a buyer, I guess then I care about die size. Once it's out though, could care less, especially if some monster die can deliver obscene performance.

I think the 480 bugged so many people because it was pretty massive compared to Evergreen's die but not proportionately faster given the additional die size, power use and heat.



I'm tempted by the water setup too. I wonder at times what kind of quiet gaming heaven would exist if I water cool the whole thing and have no fans spinning all the time.

As for the large die, if it causes all that then I agree. It affects our price/performance as consumers however I still view it all as an overall product which poor cooling solutions, substandard components etc. play just as much into the overall experience as the die size did.
 
Price/perf needs to factor in power use as well, if you live outside the USA and especially in the EU, where some countries pay >0.35 euros per kWh as opposed to the USA's ~$0.08 per kWh.

All i am saying is if gk104 is ~342mm2, its performance should be quite stellar for a mid-range chip, ie. faster than gtx580.
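To put a rough number on that rate gap, here's a quick sketch. Only the per-kWh rates come from the post above; the 100W power gap and 4 hours/day of gaming are made-up illustrative values.

```python
# Yearly running-cost difference from the electricity rates cited in the post.
def annual_cost(extra_watts, hours_per_day, price_per_kwh):
    """Cost per year of drawing `extra_watts` for `hours_per_day`, every day."""
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

GAP_WATTS = 100   # hypothetical load-power gap between two cards
HOURS = 4         # assumed gaming hours per day

print(f"EU  @ 0.35/kWh: {annual_cost(GAP_WATTS, HOURS, 0.35):.2f} EUR/yr")  # ~51.10
print(f"USA @ 0.08/kWh: {annual_cost(GAP_WATTS, HOURS, 0.08):.2f} USD/yr")  # ~11.68
```

So under these assumptions the same card costs a heavy EU gamer roughly four times as much per year to run, which is why power draw belongs in the price/perf equation there.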
 
Who says it's wrong? It has no merit. Not with the success they have had regardless of die size. You can even see AMD dies getting bigger and bigger as gens progress. But it really just doesn't matter unless you REALLY want it to. And I believe you guys are MAKING yourselves want it to matter way more than it ever should or could. Believe it LOL. Hey, you never know. You might be secretly thanking me later for saying this. 😉

Cypress=338mm^2
Cayman=389mm^2
Tahiti =352mm^2
:colbert:

Also, die sizes DO matter. Temperatures are hot enough here and in the southern states of the US. We don't need a card that has a built-in furnace as a consequence of it housing a huge chip inside.
 
Tahiti is actually 365mm^2.

And die size alone isn't indicative of its power draw/heat, because some chips have their layouts more spread out (GT200 comes to mind, compared to RV770), or their transistor density isn't as high as their competitors' even with a tightly packed layout.

The only way die size matters, off the top of my head, is on the financial side of things for the IHV, not anything that relates to having a built-in furnace.

A good example I can think of is R600 (420mm^2) vs G80 (484mm^2): the former, even with a process advantage (80nm vs 90nm for G80), was more power hungry than the latter.
 
Tahiti is actually 365mm^2.

And die size alone isn't indicative of its power draw/heat, because some chips have their layouts more spread out (GT200 comes to mind, compared to RV770), or their transistor density isn't as high as their competitors' even with a tightly packed layout.

The only way die size matters, off the top of my head, is on the financial side of things for the IHV, not anything that relates to having a built-in furnace.

A good example I can think of is R600 (420mm^2) vs G80 (484mm^2): the former, even with a process advantage (80nm vs 90nm for G80), was more power hungry than the latter.

Tahiti is 352mm^2. The reviews got the die size wrong.

The AMD Radeon™ HD 7970 Series GPU offers more than 1.54X times the compute power/mm2 when compared to the AMD Radeon™ HD 6970 Series GPU: the AMD Radeon™ HD 6970 Series GPU has been calculated at 2.703 TFLOPs of compute power with a measured die size of 389mm2, while the AMD Radeon™ HD 7970 Series GPU has been calculated at 3.789 TFLOPs of compute power with a die size of 352mm2.

http://www.amd.com/us/press-release...paign=Feed:+amdpressreleases+(Press+Releases)

Scroll to the 2nd point in small font at the bottom.

And die size IS a good indicator of how much power a card will consume. Transistors/leakage also play a big role, but that doesn't really detract from the point. A 350mm^2 die will mean less power consumption than a 550mm^2 die.

Of course, it does relate directly to manufacturing costs of the chip and how good yields will be.
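For what it's worth, the "more than 1.54X" figure in that footnote checks out if you just divide the quoted TFLOPs by the quoted die sizes:

```python
# Verify AMD's compute-density claim from the press-release footnote.
hd6970_density = 2.703 / 389.0   # TFLOPs per mm^2 (Cayman, 389 mm^2)
hd7970_density = 3.789 / 352.0   # TFLOPs per mm^2 (Tahiti, 352 mm^2)
ratio = hd7970_density / hd6970_density
print(f"HD 7970 vs HD 6970 compute/mm^2: {ratio:.2f}x")  # ~1.55x, i.e. "more than 1.54X"
```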
 
Perhaps you should be a tad more conservative with the math; it's hard to get more "theoretical" than assuming a straight halving of the mm2 for a die shrink. IMO, if it's in the low 300mm2 range then, barring edge cases, it should be orbiting the GTX580 performance level. It should clock better, but that might just offset the hit from a rumored 256-bit bus. My wallet will stay closed for 28nm if I can't get a card like that for around $300.

Actually, if that Lenzfire leak is correct, Kepler is much more transistor efficient than GF110. The GTX580 packed 3B transistors into a 520mm^2 die, or 5.77 million transistors per square mm. The leaked value for GK104 is 3.4B on 290mm^2, or 11.72 million per square mm. That would be more than double the transistor density. GK110 was leaked to be about the same. That's the same transistor density as AMD is getting with Tahiti.

Again, if.
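The density arithmetic in that post is easy to double-check. Note that the GK104 transistor count and die size are the leaked Lenzfire values, so the output is only as trustworthy as the rumor:

```python
# Transistor density implied by the leaked numbers vs the known GF110 figures.
gf110_density = 3.0e9 / 520.0   # GTX 580: 3B transistors in ~520 mm^2
gk104_density = 3.4e9 / 290.0   # leaked GK104: 3.4B transistors in ~290 mm^2

print(f"GF110: {gf110_density / 1e6:.2f} M transistors/mm^2")  # ~5.77
print(f"GK104: {gk104_density / 1e6:.2f} M transistors/mm^2")  # ~11.72
print(f"Ratio: {gk104_density / gf110_density:.2f}x")          # ~2.03x
```

So the post's 5.77 and 11.72 million/mm^2 figures are consistent with its own inputs, and the implied density roughly doubles, in line with what a 40nm-to-28nm shrink could plausibly deliver.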
 