
[SA] GK110 aka GTX 680 release date: Late Q3 '12

Price/perf needs to factor in power use as well, especially if you live outside the USA. Some EU countries pay more than €0.35 per kWh, as opposed to the USA's roughly $0.08 per kWh.
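To put rough numbers on that, here's a quick sketch of the annual electricity cost gap. The 250 W draw and 4 hours/day of gaming are hypothetical figures for illustration, not measured values:

```python
# Rough annual electricity cost for a GPU at a given average draw and usage.
def annual_cost(watts, hours_per_day, price_per_kwh):
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh

eu = annual_cost(250, 4, 0.35)   # EU at 0.35 EUR/kWh
us = annual_cost(250, 4, 0.08)   # US at 0.08 USD/kWh
print(f"EU: {eu:.2f}, US: {us:.2f}")  # EU: 127.75, US: 29.20
```

At those rates the EU gamer pays over 4x as much per year for the same card, which is why perf/W weighs heavier there.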

All I am saying is that if GK104 is ~342mm2, its performance should be quite stellar for a mid-range chip, i.e. faster than a GTX 580.

Too hard to say. A full node shrink doesn't necessarily mean 2x performance, and in fact it almost never means that. History shows as much, e.g., the HD 5870 wasn't 2x the performance of the HD 4890 or even the HD 4870. But a node shrink + a vastly better arch might get close. I would expect GK104 to be about as fast as a GTX 570 but with more overclocking headroom. That's a pretty conservative guess, so I would not be surprised if it were more like a GTX 580.

P.S. I'm really pissed at AMD right now for holding prices high. I know why they're doing it, market monopoly power, yadda yadda, but I want to upgrade NOW, not in 2+ months when NV issues the GK104. NV, please hurry the hell up, we need more competition. And if you misprice your cards, too, then hell with this entire generation of GPUs, I'll just wait for the end of year refresh. NV, we all know you can cut prices for gamers and shift the costs to HPC/pro graphics buyers. DO IT DO IT NOW!
 
Here's yet another rumor:

TL;DR Big Kepler to have 4,096 shaders. Major WTF rumor. I have no idea if the guy is trolling, but he doesn't seem to be. If there is any shred of truth to this rumor, then Kepler is not just a simple evolution of Fermi, and the complexity of individual cores will have been completely overhauled (and simplified).

The rumors are flying now. This is when it gets fun(ny).
 
There's no way nVidia would wait till Q3 to launch their flagship high end 680. We'll see it in April.

If GK104 performs as well as the current round of rumors have it, then there certainly is a strong possibility it won't launch until Q3. This is the price Nvidia pays for having a big chip at their high end to serve dual-purpose roles.
 
If GK104 performs as well as the current round of rumors have it, then there certainly is a strong possibility it won't launch until Q3. This is the price Nvidia pays for having a big chip at their high end to serve dual-purpose roles.

My GTX 295 is finally showing its age and I only hope that I can get a nice shiny new GTX 680 for Max Payne 3. DX11 wouldn't be a bad idea either...
 
Here's yet another rumor:

TL;DR Big Kepler to have 4,096 shaders. Major WTF rumor. I have no idea if the guy is trolling, but he doesn't seem to be. If there is any shred of truth to this rumor, then Kepler is not just a simple evolution of Fermi, and the complexity of individual cores will have been completely overhauled (and simplified).

The rumors are flying now. This is when it gets fun(ny).
Yes...I particularly liked the "Sinle" Precision TFLOPS.
It adds a certain authenticity to it...^_^
 
Actually, if that Lenzfire leak is correct, Kepler is much more transistor efficient than GF110. The GTX580 packed 3B transistors into a 520mm^2 die, or 5.77 million transistors per square mm. The leaked value for GK104 is 3.4B on 290mm^2, or 11.72 million per square mm. That would be more than double the transistor density. GK110 was leaked to be about the same. That's the same transistor density as AMD is getting with Tahiti.
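A quick sanity check of that density arithmetic (keeping in mind the GK104 transistor count and die size are leaked/rumored values, not confirmed):

```python
# Transistor density in millions of transistors per mm^2.
def density_m_per_mm2(transistors, die_area_mm2):
    return transistors / die_area_mm2 / 1e6

gf110 = density_m_per_mm2(3.0e9, 520)   # GTX 580: 3B transistors, 520mm^2
gk104 = density_m_per_mm2(3.4e9, 290)   # leaked GK104: 3.4B, 290mm^2
print(round(gf110, 2), round(gk104, 2))  # 5.77 11.72
```

So yes, the leaked numbers imply slightly more than double GF110's density, which is exactly why the "if" matters.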

Again, if.

The GK100 transistor density of that leak/rumor is what makes me think the specs are not correct.

Usually NV chips have lower transistor density than AMD's. That doesn't mean it will be the same case with Kepler on 28nm HKMG, but it is not impossible.
 
There's a rumor that NV is pulling an AMD on Kepler: trying to design their cores similar to the VLIW4/VLIW5 Radeons, for better efficiency and perf/W, perf/mm2. But it's too unbelievable to consider.
 
Tahiti is 352mm^2. The reviews got the die size wrong.

Scroll to the 2nd point in small font at the bottom.

And die size IS a good indicator of the amount of power a card will consume. Transistors/leakage also play a big role, but that doesn't really detract from the point. A 350mm^2 die will generally mean less power consumption than a 550mm^2 die.

Of course, it does relate directly to manufacturing costs of the chip and how good yields will be.

Ah, my bad. So GK104 should be similar in size to Tahiti.

Anyway, tell me how die size is indicative of power draw. The R600 was clearly smaller than G80 from a die size perspective, yet it consumed more power, ignoring other things that are much more indicative of power draw than die size. I could give you more examples, such as the GTX 260 (576mm^2) vs the HD 4870 (256mm^2), where the former has slightly lower/similar power draw compared to the latter even with the latter's half-node advantage. Sure, the GTX 280 (the SKU based on the full-fledged GT200) consumed more power, but the point is that with performance being equal (for comparison's sake), the GTX 260, even with almost double the die size, consumed similar/less power than the HD 4870.

What you're saying is something along the lines of claiming that a bigger PCB (with components spread out) = higher power consumption than a smaller PCB with similar components.

The process tech, its maturity, the architecture, maybe the layout, and a bunch of other things are probably more indicative of power draw than die size. You know what, until you have an actual product/working prototype it would be kind of hard to approximate power draw, since things could be way off (à la Pentium 4).
 
My hunch is that Nvidia is going to release the GTX 680 (GK104) card in March/April.

And then when August/September rolls around they will release the GTX 780 (GK110).
 
Actually all else being equal from a consumer perspective the bigger the die the better because it's easier to cool. More contact with a heatsink means better cooling.
 
The consumer doesn't benefit from a bigger or smaller die. The consumer benefits from the higher performance/price and performance/watt ratio (at the same price point), no matter the die size. 😉
 
Cypress=338mm^2
Cayman=389mm^2
Tahiti =352mm^2
:colbert:

Also, die sizes DO matter. Temperatures are hot enough here and in the southern states of the US. We don't need a card that has a built-in furnace as a consequence of it housing a huge chip inside.

Ohhh my god you got me!!! 😛

Actually, would you mind placing the manufacturing process next to each item in your list? TYIA.

Cypress=338mm^2 40nm
Cayman=389mm^2 40nm getting bigger on same process
Tahiti =365mm^2 28nm (27mm2 bigger than Cypress on 40nm)

So, are you going to say it's getting smaller? Because ya can't. And remember, I said it really isn't as important as you make it out to be. As somebody else above said, pretty wisely, unless you own stock, nobody should really have any interest in die size. Performance and price are at the top of the heap in importance, followed by power consumption, heat, and noise.

"With so many stream processors coupled with a 384bit GDDR5 memory bus, it’s no surprise that Tahiti has the highest transistor count of any GPU yet: 4.31B transistors. Fabricated on TSMC’s new 28nm High-K process, this gives it a die size of 365mm2, making it only slightly smaller than AMD’s 40nm Cayman GPU at 389mm2." -Anandtech Review of 7970

Even if it was 352mm2, it's still getting bigger.
 
I’d be very happy for a single-GPU GTX580 killer in April, but sadly I don’t think it’ll be here that early.
 
My hunch is that Nvidia is going to release the GTX 680 (GK104) card in March/April.

And then when August/September rolls around they will release the GTX 780 (GK110).

And if this turns out true, I doubt those echoing "but the poor early adopters who got shafted" will make a peep.

That's the name of the game!

WTB GTX 660 >= GTX 580!
 
Q3?? Bah if this is true....

 
Ah, my bad. So GK104 should be similar in size to Tahiti.

Anyway, tell me how die size is indicative of power draw. The R600 was clearly smaller than G80 from a die size perspective, yet it consumed more power, ignoring other things that are much more indicative of power draw than die size. I could give you more examples, such as the GTX 260 (576mm^2) vs the HD 4870 (256mm^2), where the former has slightly lower/similar power draw compared to the latter even with the latter's half-node advantage. Sure, the GTX 280 (the SKU based on the full-fledged GT200) consumed more power, but the point is that with performance being equal (for comparison's sake), the GTX 260, even with almost double the die size, consumed similar/less power than the HD 4870.

What you're saying is something along the lines of claiming that a bigger PCB (with components spread out) = higher power consumption than a smaller PCB with similar components.

The process tech, its maturity, the architecture, maybe the layout, and a bunch of other things are probably more indicative of power draw than die size. You know what, until you have an actual product/working prototype it would be kind of hard to approximate power draw, since things could be way off (à la Pentium 4).

Given that AMD and NVIDIA have similar transistor densities now, it's a good indicator.
 
Ohhh my god you got me!!! 😛

Actually, would you mind placing the manufacturing process next to each item in your list? TYIA.

Cypress=338mm^2 40nm
Cayman=389mm^2 40nm getting bigger on same process
Tahiti =365mm^2 28nm (27mm2 bigger than Cypress on 40nm)

So, are you going to say it's getting smaller? Because ya can't. And remember, I said it really isn't as important as you make it out to be. As somebody else above said, pretty wisely, unless you own stock, nobody should really have any interest in die size. Performance and price are at the top of the heap in importance, followed by power consumption, heat, and noise.

"With so many stream processors coupled with a 384bit GDDR5 memory bus, it’s no surprise that Tahiti has the highest transistor count of any GPU yet: 4.31B transistors. Fabricated on TSMC’s new 28nm High-K process, this gives it a die size of 365mm2, making it only slightly smaller than AMD’s 40nm Cayman GPU at 389mm2." -Anandtech Review of 7970

Even if it was 352mm2, it's still getting bigger.

You didn't bother to read the link from AMD I posted before. Anandtech got the die size wrong and didn't correct it. It's 352mm^2.

The AMD Radeon™ HD 7970 Series GPU offers more than 1.54X the compute power/mm2 when compared to the AMD Radeon™ HD 6970 Series GPU: the AMD Radeon™ HD 6970 Series GPU has been calculated at 2.703 TFLOPs of compute power with a measured die size of 389mm2, while the AMD Radeon™ HD 7970 Series GPU has been calculated at 3.789 TFLOPs of compute power with a die size of 352mm2.

And maybe I'm crazy, but that's smaller than the 389mm^2 Cayman. So no, dies are not getting bigger.
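For what it's worth, AMD's 1.54X claim does check out from the numbers they give:

```python
# Compute-density (TFLOPs per mm^2) ratio from AMD's published figures.
hd7970 = 3.789 / 352   # TFLOPs / die area, HD 7970 (Tahiti)
hd6970 = 2.703 / 389   # TFLOPs / die area, HD 6970 (Cayman)
print(round(hd7970 / hd6970, 2))  # 1.55
```

So "more than 1.54X" is accurate, but only if you accept AMD's 352mm^2 number for Tahiti rather than Anandtech's 365mm^2.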
 
The consumer doesn't benefit from a bigger or smaller die. The consumer benefits from the higher performance/price and performance/watt ratio (at the same price point), no matter the die size. 😉

This keeps coming up, and again, the only customers that don't care about power use are enthusiasts. You and I don't care about power use, that is true. But NV's incompetence in creating efficient parts will yet again relegate them to the discrete market (where they do very well) while they are shunned in the ultra-mobile sector.

Judging by sales, it's obvious where most sales are headed. Discrete sales were down 20% year over year in 2011, IIRC. New PC sales were down by 35% at Dell, and 60% at Acer. That's why efficiency matters, FYI.
 
There's no way nVidia would wait till Q3 to launch their flagship high end 680. We'll see it in April.

Yeah, agreed. It's not like it just taped out and isn't even at engineering-sample status yet. Hell, let's just skip the ES and validation phase, who needs it!
 
Q3?? Bah if this is true....


About the sum of things, if this is on point. Nvidia has nothing to beat the 7970 until the end of the year, but will take care of the mid-range before then. Nothing exciting, but a good choice, because more mid-range cards sell than top-tier ones. Maybe they make more money this way, so I bet they are excited if so. :colbert:
 
This keeps coming up, and again, the only customers that don't care about power use are enthusiasts. You and I don't care about power use, that is true. But NV's incompetence in creating efficient parts will yet again relegate them to the discrete market (where they do very well) while they are shunned in the ultra-mobile sector.

Judging by sales, it's obvious where most sales are headed. Discrete sales were down 20% year over year in 2011, IIRC. New PC sales were down by 35% at Dell, and 60% at Acer. That's why efficiency matters, FYI.


The consumer benefits from the higher performance/price and performance/watt ratio (at the same price point), no matter the die size.

The above is true for all consumers, enthusiasts or not 😉

Mobile Kepler GPUs will launch in April alongside Intel's Ivy Bridge mobile CPUs.

Desktop and mobile sales plummeted in 2011 because of the hard disk shortages.
 