NVIDIA GeForce GTX 780 To Be Based on GK114 GPU


raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Those specs are on the low end of the 2880 CUDA core, 240 TMU, 15 SMX GK110 die that many expected in K20. The 705MHz clock speed is what I estimated a while back based on the 1.2 TFLOPS DP figure in the whitepaper, and it looks to have been correct. At the same time, NV may be more conservative with the clocks of its Tesla cards to increase yields and keep power consumption in check in the server environment. K10 is only clocked at 745MHz to stay at 225W TDP.

2496 CUDA cores @ 705MHz is just 8% faster than a 1058MHz GTX680. Further evidence NV had no chance to launch a consumer GeForce GK100/110 at reasonable clocks this year and make it worthwhile over the leaner 294mm^2 GK104 chip.
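
The back-of-the-envelope behind both numbers, as a quick sketch (the core counts and clocks are the rumoured ones above, and the 1:3 DP:SP rate for GK110 is an assumption here, not a confirmed spec):

    # Shader-throughput comparison from the rumoured specs
    # (assumes performance scales with cores * clock, which real games don't quite do).
    gk110_sp = 2496 * 2 * 705e6       # cores * 2 FLOPs/clock * clock (Hz)
    gtx680_sp = 1536 * 2 * 1058e6
    print(gk110_sp / gtx680_sp)       # ~1.08, i.e. the ~8% figure

    # The 705MHz estimate falls out of the ~1.2 TFLOPS DP whitepaper figure,
    # assuming the 1:3 DP:SP rate:
    print(2496 * 2 * 705e6 / 3 / 1e12)   # ~1.17 TFLOPS DP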

Of course 5-6 more months from today can make a lot of difference in the maturity of the 28nm node. If NV can get those clocks to 1GHz at 2496 CUDA cores, this chip will be fast.

I wonder what the TDP is on that 2496 CUDA core, 705MHz K20 chip?

My guess is Nvidia wanted to stay within the 225W TDP restrictions on the K20. Nvidia designs its high-end Tesla SKUs with HPC server design restrictions in mind. I expected a 384-bit memory controller with 2 SMX disabled, but this is even worse.

Even with a higher TDP of 250W for a desktop GeForce we can expect clocks around 775-800MHz. With watercooling this chip could be driven to 1GHz, but power consumption would definitely cross 300W. It's also not known how badly leakage power affects overclocking headroom on such a massive chip.
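
As a sanity check on those numbers, a minimal sketch assuming power scales roughly linearly with clock at fixed voltage (a simplification that ignores leakage and any voltage bumps, so real numbers would be worse, not better), starting from the rumoured 225W @ 705MHz K20 figures:

    # Crude clock-vs-power estimate: P ~ f at fixed voltage.
    base_watts, base_mhz = 225.0, 705.0   # rumoured K20 numbers from above

    def est_power(target_mhz):
        return base_watts * target_mhz / base_mhz

    print(est_power(800))    # ~255W -- roughly the 250W desktop budget
    print(est_power(1000))   # ~319W -- comfortably past 300W, as argued above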

In H2 2013 a fully enabled GK110 chip at 800-825MHz could be possible, but currently it's not looking so good. If the HD 8970 can launch in Jan 2013 with 25% higher performance, I think AMD would be in a better situation than Nvidia.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
I for one do not want another GTX480 monstrosity, and am quite happy with the small-die derivatives. Actually I hope they strip it down even more, like removing DP.
 
Feb 19, 2009
10,457
10
76
I for one do not want another GTX480 monstrosity, and am quite happy with the small-die derivatives. Actually I hope they strip it down even more, like removing DP.

No way, not with the way DX11 is going with compute features heavily used in games. Not with OpenCL getting more software support.

The basic assumption we have to go on is that Tesla dies are the best-binned parts, which lets them use the least power. For K20 to be so crippled really tells us TSMC is indeed failing hard.

I'm not so sure whatever could not bin as a Tesla will be saved for a consumer GeForce; would it be cranked up in clocks to a 300W TDP? Even then, 13 SMX @ 1GHz isn't a huge leap from the GTX680... so I don't think a 20-25% perf increase at a huge power draw is "worth it". There may NOT be a GK110-based GeForce for a long time (not until 28nm is really mature). The harvested dies are probably going to be downclocked further and squeezed into lesser Tesla variants, where they still fetch thousands of dollars and their DP/compute perf vs GK104 is "worth it".
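
For scale, the raw throughput math on that 13 SMX @ 1GHz scenario (cores times clock only; the 20-25% figure above assumes real games scale far worse than raw FLOPS):

    # Raw shader throughput: a 13 SMX part at 1GHz vs a 1058MHz GTX680.
    gk110_13smx = 13 * 192 * 1000    # 13 SMX * 192 cores/SMX * MHz
    gtx680 = 1536 * 1058
    print(gk110_13smx / gtx680)      # ~1.54 raw; real-world gains land well below this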
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
2496 CUDA cores @ 705MHz is just 8% faster than a 1058MHz GTX680. Further evidence NV had no chance to launch a consumer GeForce GK100/110 at reasonable clocks this year and make it worthwhile over the leaner 294mm^2 GK104 chip.

lol wow, how did you come up with that?

I for one do not want another GTX480 monstrosity, and am quite happy with the small-die derivatives. Actually I hope they strip it down even more, like removing DP.

480 was an amazing card, many people enjoyed it.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
brandon888, I am still disappointed in these "15% faster only" rumours, but I am not buying them yet. More than 12-15 months to squeeze out just 15% more sounds like it's not even worth the effort and R&D, especially for NV, since just adding more memory bandwidth and expanding the CUDA cores from 1536 to 2048 would already pass 15% easily.

Maybe AMD stated the 30% increase when they didn't have the GE edition.
Now, ~15% over the GE edition is not bad for a refresh.
 

AdamK47

Lifer
Oct 9, 1999
15,782
3,604
136
I note that this statement is carefully parsed.
He is referring to a dual-GPU (one PCB) 690 as the "fastest card" while alluding to a follow-up to the GTX680 (single GPU), aka the GTX780, as being its worthy successor. o_O

He said March, which is the month the 680 was released. That makes his statement totally incorrect.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
nVidia only has so much TDP left, unless they really want to take it to 300W.

That said, the biggest factor in whether GK110 is going to be worth it is what kind of OC potential these cards have.

If they're limited to a 250W TDP they aren't going to blow the 680 out of the water, outside of refinements (690 vs 680 SLI power consumption) and the other benefits of more cores clocked lower.

Probably 30% faster than the 680 IMO, with a 245W TDP. Of course it will probably be clocked well below its actual potential, unlike the 680, which will provide increased value to those of us willing to risk hardware through overclocking.

I wouldn't be surprised if GK110 is very similar to GF100: low clocks for TDP, huge OC potential. My 470s can grab 45% or more performance over stock with an OC. If GK110 can deliver 30% over the stock 680 and another 30-45% from an OC, these cards could approach 690 levels.
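
For what it's worth, the compounding works out (all hypothetical, rumour-based uplifts, just illustrating the claim):

    # Compounding the claimed uplifts.
    stock_uplift = 1.30            # claimed GK110 vs stock GTX680
    oc_lo, oc_hi = 1.30, 1.45      # claimed GF100-style OC headroom

    print(stock_uplift * oc_lo)    # ~1.69x a stock 680
    print(stock_uplift * oc_hi)    # ~1.89x a stock 680
    # A GTX690 lands very roughly around 1.7-1.8x a single 680 where SLI
    # scales, so "approaching 690 levels" is at least arithmetically plausible.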
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
GTX480 was much better than the 5800 Ultra. It's not even close. The 5800 was a shit DX9 card; the GTX480 redefined how a DX11 card should be.
 

Rvenger

Elite Member
Super Moderator
Video Cards
Apr 6, 2004
6,283
5
81
Yeah, amazing that it didn't cook itself. 5800 Ultra, version 2 (tm).

I owned two of them, and when you SLI'd them they were unbelievably hot. I mean, come on, they had a caution sticker on the backside of the PCB, and installing any other cards near them wasn't recommended.
 

Rvenger

Elite Member
Super Moderator
Video Cards
Apr 6, 2004
6,283
5
81
GTX480 was much better than the 5800 Ultra. It's not even close. The 5800 was a shit DX9 card; the GTX480 redefined how a DX11 card should be.


The 5800 Ultra underperformed and was part of the era when Nvidia cheated with their 3DMark03 drivers. It was not that bad of a card. They are two completely different generations of cards and you really can't compare them, other than that both ran hot. I actually owned an FX5800 Ultra Nbox from MSI.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
GTX480 was much better than the 5800 Ultra.
Not as far as noise levels go, which is the point of the comparison. I had one and it was literally like sticking a hairdryer inside your case. I could hear the wail even with my headphones at high volume.

That's what happens when you put a bunch of transistors that gamers don't need onto a card, pushing current manufacturing to its limits. The leaner GTX680 design is a far better chip for gaming, and I hope the next version is even more stripped down and efficient.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
The 5800 Ultra underperformed and was part of the era when Nvidia cheated with their 3DMark03 drivers. It was not that bad of a card.

The architecture was shit. It used the same DX8 registers as the previous generation and had the worst possible DX9 implementation. It was nearly useless for shaders needing FP accuracy. It's like AMD's Cypress chip and tessellation. The only game with FP shaders in which NV30's performance didn't drop like a stone was Halo, and it used FP16...


Not as far as noise levels go, which is the point of the comparison. I had one and it was literally like sticking a hairdryer inside your case. I could hear the wail even with my headphones at high volume.

Yes, the cooler. So? 7 months later nVidia released a card with the same power consumption but with a much better cooler. :whistle:
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Not as far as noise levels go, which is the point of the comparison. I had one and it was literally like sticking a hairdryer inside your case. I could hear the wail even with my headphones at high volume.

That's what happens when you put a bunch of transistors that gamers don't need onto a card, pushing current manufacturing to its limits. The leaner GTX680 design is a far better chip for gaming, and I hope the next version is even more stripped down and efficient.

Maybe you just had really horrible case airflow?

Besides, that's a silly point; TDP and cooling solution make up that equation, not some mythical "compute cost" that doesn't really exist.

Transistors that are cut are meaningless and draw no power... What's the difference between the GTX 470 and the C2075?

The leaner 680 is the highest-end Kepler product out; the ideology you're trumpeting is that the GTX 560 Ti was the crowning achievement of the 40nm generation and by far the best gaming chip Nvidia made that generation. You were willing to pay $500 for that chip.

While it's OK that you don't understand the relationship between performance, TDP, cooling solutions, and compute vs workstation parts, please don't champion misinformation at the cost of reasonable market prices. I wouldn't have thought a GTX 560 Ti overclocked to a 195W TDP was a $500 card, and I don't think many others would have either.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
The 320-bit memory bus for the high-end Tesla K20 SKU is a clear indication of yield problems. This kind of crippling is unheard of on the flagship Tesla SKU. This does not bode well at all for desktop GK110. Nvidia might clock their desktop GK110 at 775-800MHz, but the extent of the chip being crippled is definitely going to affect perf and perf/watt.


Correct me if I am wrong, but I recall the first-generation Fermi Tesla cards having GTX470 specs, meaning they also had a 320-bit bus. The subsequent GTX480 came with all 384 memory bits enabled.

Sounds like Nvidia is having large-die complications like they did with GF100, or these K20s are hot-lot chips and hopefully a short run before more functional chips ramp up with yields.
 

The_Golden_Man

Senior member
Apr 7, 2012
816
1
0
Not as far as noise levels go, which is the point of the comparison. I had one and it was literally like sticking a hairdryer inside your case. I could hear the wail over headphones even at a high volume.

That's what happens if you put a bunch of transistors that gamers don't need onto a card, pushing the limits of current manufacturing to the edge. The leaner GTX680 design is a far better chip for gaming, and I hope the next version is even more stripped down and efficient.

I agree. When making cards for gaming, they don't need to put all kinds of stuff that games never use on the GPU die.

I have 2x ASUS GTX 670 DC II running comfortably, cool and very silent in my rig. Kepler is a very good GPU for gaming, and a step in the right direction.

I've heard AMD owners talk about this compute stuff in discussions comparing Nvidia Kepler vs the latest AMD cards. As I've understood it, these functions are not needed for gaming and just add extra power and heat to a gaming card.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Correct me if I am wrong, but I recall the first-generation Fermi Tesla cards having GTX470 specs, meaning they also had a 320-bit bus. The subsequent GTX480 came with all 384 memory bits enabled.

I didn't think they made a GF100 Tesla with 480 specs; I thought they went to the 580?

I could be totally wrong though.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
If acoustics bother someone so strongly, AIBs offer improvements over reference designs.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Correct me if I am wrong, but I recall the first-generation Fermi Tesla cards having GTX470 specs, meaning they also had a 320-bit bus. The subsequent GTX480 came with all 384 memory bits enabled.

The first Fermi Tesla cards were the M2050 and M2070:
448 SPs @ 515 (1030) MHz with a 384-bit bus and 3/6 GB of memory.

Sounds like Nvidia is having large-die complications like they did with GF100, or these K20s are hot-lot chips and hopefully a short run before more functional chips ramp up with yields.
Sounds normal to me. AMD is only selling a 3.2/0.8 TFLOPS card in this market.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,476
136
Correct me if I am wrong, but I recall the first-generation Fermi Tesla cards having GTX470 specs, meaning they also had a 320-bit bus. The subsequent GTX480 came with all 384 memory bits enabled.

Sounds like Nvidia is having large-die complications like they did with GF100, or these K20s are hot-lot chips and hopefully a short run before more functional chips ramp up with yields.

No, you are wrong. The first Fermi-based C2050 Teslas were 448 SP at 515MHz with a 384-bit memory controller.

http://www.anandtech.com/show/3693/...their-gpu-servers-to-include-fermilevel-tesla

TSMC started 28nm production in early Q4 2011. Even after a year of TSMC 28nm production, such a crippled top-end Tesla SKU confirms all the yield problems Nvidia has been facing.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
So a 480's bus with the 470's SP count, at almost 100MHz less core clock and nearly 200MHz slower shader speeds than the reference 470.


Seems pretty chopped as well.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Sure nVidia has yield problems. What's different from the last 4 years?!

BTW, if nVidia has problems, why is AMD only selling a 28-SIMD chip at 900MHz in the server market?!