GK110--when do you think it will be released?

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Base your answer on reasoning, knowledge, and history. I've heard the end of this year or next March.

I'd really like one because GK100 isn't future proof at all... it's held back by slow double precision performance (double precision will certainly be used for games in the future), a small memory bus, and I've heard that integer operations are slower. I think they really should've gone with the GTX 460's design, then added bindless texture support and DX11.1 compliance, made the IMC 512 bits wide (at a lower clock speed than 6 GHz of course, maybe 3.6-4 GHz, which would be 1/5-1/3 more bandwidth), and increased the number of CUDA cores to maybe 768 with 1/3 DP performance, as well as single-cycle (rather than 1/2-speed) RGBA16 and D32 (maybe the shaders at 1.2 GHz and everything else at 700 MHz). Although that would've used more power and probably would've reduced their short-term profits, it would've been a lot faster than GK100... and I wouldn't need to be waiting to upgrade.
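For what it's worth, the bandwidth arithmetic checks out. Here's a quick back-of-the-envelope sketch in Python (the 256-bit @ 6 GHz baseline is my assumption for the stock configuration being compared against):

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) * effective clock in GHz.
def bandwidth_gbps(bus_width_bits, effective_clock_ghz):
    return bus_width_bits / 8 * effective_clock_ghz

baseline = bandwidth_gbps(256, 6.0)  # 192.0 GB/s (assumed stock configuration)
low = bandwidth_gbps(512, 3.6)       # 230.4 GB/s -> 1.20x, i.e. 1/5 more
high = bandwidth_gbps(512, 4.0)      # 256.0 GB/s -> 1.33x, i.e. 1/3 more

for label, bw in [("256-bit @ 6.0 GHz", baseline),
                  ("512-bit @ 3.6 GHz", low),
                  ("512-bit @ 4.0 GHz", high)]:
    print(f"{label}: {bw:.1f} GB/s ({bw / baseline:.2f}x baseline)")
```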
 

Crap Daddy

Senior member
May 6, 2011
610
0
0
I'd really like one because GK100 isn't future proof at all... it's held back by slow double precision performance (double precision will certainly be used for games in the future), a small memory bus, and I've heard that integer operations are slower. I think they really should've gone with the GTX 460's design, then added bindless texture support and DX11.1 compliance, made the IMC 512 bits wide (at a lower clock speed than 6 GHz of course, maybe 3.6-4 GHz, which would be 1/5-1/3 more bandwidth), and increased the number of CUDA cores to maybe 768 with 1/3 DP performance, as well as single-cycle (rather than 1/2-speed) RGBA16 and D32 (maybe the shaders at 1.2 GHz and everything else at 700 MHz). Although that would've used more power and probably would've reduced their short-term profits, it would've been a lot faster than GK100... and I wouldn't need to be waiting to upgrade.

Please report to work tomorrow. Your experience and vision of the future will certainly help us clean the mess our engineers have managed to create while designing the new Kepler architecture. With your precious help we might be able to launch the GK110 by the end of the week.

Yours sincerely,
JHH
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
626
126
Please report to work tomorrow. Your experience and vision of the future will certainly help us clean the mess our engineers have managed to create while designing the new Kepler architecture. With your precious help we might be able to launch the GK110 by the end of the week.

Yours sincerely,
JHH
Ooookay.

Anyway, I think we will see the Big K (we should always call it this) make a showing near the end of the year, with actual availability 3 months after. I don't think they should go to a 512-bit bus; that is excessive to me. It could end up being a really exciting part if Nvidia is able to parlay the efficiency of Kepler into a more well-rounded GPU.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
I do not think GK110 will be released for the desktop/consumer market. It's extremely probable, however, that it will be released for the professional market because of its much higher compute performance.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I do not think GK110 will be released for the desktop/consumer market. It's extremely probable, however, that it will be released for the professional market because of its much higher compute performance.

LMFAO! I can see it now. Nvidia engineers: "I know we've spent $150+ million in developing this chip, but let's keep it to the HPC market, where we'll only sell 75,000 total units. I realize we won't make any significant money on it, but JHH's salary is only $1 per year, so it's all good."


Seriously though. December / January as a Tesla and/or Quadro card. VR-Zone said Big K will come as a GeForce card in March, and that sounds plausible. And like Tahiti, its power draw will hold it back. Hopefully Nvidia won't lock down voltage control like they have done with GK104.
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
LMFAO! I can see it now. Nvidia engineers: "I know we've spent $150+ million in developing this chip, but let's keep it to the HPC market, where we'll only sell 75,000 total units. I realize we won't make any significant money on it, but JHH's salary is only $1 per year, so it's all good."


Seriously though. December / January as a Tesla and/or Quadro card. VR-Zone said Big K will come as a GeForce card in March, and that sounds plausible. And like Tahiti, its power draw will hold it back. Hopefully Nvidia won't lock down voltage control like they have done with GK104.

I could have sworn that the significant money in compute/HPC, and the majority market share Nvidia currently enjoys there, were what many people here cited as one of the main reasons for Nvidia's GeForce delays?
 
Feb 19, 2009
10,457
10
76
LMFAO! I can see it now. Nvidia engineers: "I know we've spent $150+ million in developing this chip, but let's keep it to the HPC market, where we'll only sell 75,000 total units. I realize we won't make any significant money on it, but JHH's salary is only $1 per year, so it's all good."


Seriously though. December / January as a Tesla and/or Quadro card. VR-Zone said Big K will come as a GeForce card in March, and that sounds plausible. And like Tahiti, its power draw will hold it back. Hopefully Nvidia won't lock down voltage control like they have done with GK104.

How about another scenario:

TSMC can't, for the life of them, make a ~600mm2 GPU on 28nm with decent yields.

Thus, NV gets far fewer GK110 dies than they want. Do they a) turn those dies into $5000+ cards, or b) make a consumer GTX for <$1000?

Note that the vast bulk of 28nm production is going to Apple and other smartphone chips, so wafer supply for GPUs is even more limited than usual.

I could see them doing b), but it would be a limited edition until TSMC gets its act together.
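To see why a ~600mm2 die is such a yield problem, here's a sketch using the classic Poisson yield model, Y = exp(-D*A). The defect density below is purely illustrative, not a real TSMC number:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Poisson yield model: fraction of defect-free dies, Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D = 0.5  # illustrative defect density in defects/cm^2, NOT a real TSMC figure
for area in (300, 600):  # roughly GK104-class vs. GK110-class die sizes
    print(f"{area} mm^2 die: ~{poisson_yield(area, D):.0%} estimated yield")
# ~22% for the 300 mm^2 die vs. ~5% for the 600 mm^2 die at the same defect density.
```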
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Guessing next year, Q1~Q2? nVIDIA is probably prioritising the HPC market first (rumours point to a Q4 release for Teslas followed by Quadros), especially since Intel finally managed to get their Larrabee concept going in the form of the 22nm Knights Corner.
 

hyrule4927

Senior member
Feb 9, 2012
359
1
76
LMFAO! I can see it now. Nvidia engineers: "I know we've spent $150+ million in developing this chip, but let's keep it to the HPC market, where we'll only sell 75,000 total units. I realize we won't make any significant money on it, but JHH's salary is only $1 per year, so it's all good."

Do you realize how enormous the profit margin is in the professional GPU market? Taking the chips they have and selling them for $5000 instead of $500 just might be a little more profitable.

This is pretty much explained in Ryan's article about the new FirePro lineup and can be applied to Nvidia too:
http://www.anandtech.com/show/6137/the-amd-firepro-w9000-w8000-review-part-1/3
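A toy comparison using only the numbers tossed around in this thread (none of them are real financials) makes the point:

```python
# Hypothetical numbers from this thread; not real financials.
dev_cost = 150e6                      # "$150+ million" development cost
hpc_units, hpc_price = 75_000, 5_000  # tviceman's hypothetical HPC volume/price
consumer_price = 500                  # typical high-end GeForce price

hpc_revenue = hpc_units * hpc_price   # $375M from 75,000 professional cards
units_to_match = hpc_revenue / consumer_price  # consumer cards to equal that

print(f"HPC revenue: ${hpc_revenue / 1e6:.0f}M vs. ${dev_cost / 1e6:.0f}M dev cost")
print(f"Consumer cards needed to match it: {units_to_match:,.0f}")  # 750,000
```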
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
Well, I guess the TSMC situation explains why we haven't seen them, although I think that 2880 CUDA cores is overkill... especially considering the limitations at TSMC and the fact that they're faster than GK100's cores.

In any event, I'm getting P.O.'d at this false dichotomy (i.e., compute vs. gaming performance) being thrown around. You need good compute performance and good double precision performance for good future gaming performance.
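For a sense of scale, here's what a 2880-core part with the rumoured 1/3-rate double precision would look like on paper; the 850 MHz clock below is a placeholder, not a known spec:

```python
def peak_gflops(cores, clock_ghz):
    """Theoretical peak: cores * clock * 2 (an FMA counts as two FLOPs)."""
    return cores * clock_ghz * 2

cores, clock = 2880, 0.85  # 850 MHz is a placeholder, not a known GK110 spec
sp = peak_gflops(cores, clock)
dp = sp / 3                # 1/3-rate double precision, as rumoured
print(f"SP: {sp:.0f} GFLOPS, DP: {dp:.0f} GFLOPS")  # SP: 4896, DP: 1632
```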
 

n0x1ous

Platinum Member
Sep 9, 2010
2,574
252
126
Well, I guess the TSMC situation explains why we haven't seen them, although I think that 2880 CUDA cores is overkill... especially considering the limitations at TSMC and the fact that they're faster than GK100's cores.

In any event, I'm getting P.O.'d at this false dichotomy (i.e., compute vs. gaming performance) being thrown around. You need good compute performance and good double precision performance for good future gaming performance.

What is this GK100 you keep referring to? There is GK104 (GTX 680, 670, 690, 660), and Big K is GK110.
 

nenforcer

Golden Member
Aug 26, 2008
1,773
13
81
GK110 sounds like total overkill, and I wouldn't expect it in any decent quantities anytime before 2013, sometime after the Radeon HD 8000 series launches.

And it's most likely going to cost $500 plus.
 

sandorski

No Lifer
Oct 10, 1999
70,677
6,250
126
As his first act as President, Romney will sign the Constitutional Amendment that releases GK110 immediately.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Guessing next year, Q1~Q2? nVIDIA is probably prioritising the HPC market first (rumours point to a Q4 release for Teslas followed by Quadros), especially since Intel finally managed to get their Larrabee concept going in the form of the 22nm Knights Corner.
Q4 isn't a rumor, it's confirmed. Barring any delays, Tesla K20 will ship in December of this year.

The real question is what happens after that. A GK110 Quadro card is practically assured, so even after NVIDIA gets more GK110 GPUs they're already going to be gobbled up by another professional division. Quadro will also give NVIDIA an outlet to offload defective GPUs (assuming K20 is fully enabled), so selling the rejects to consumers is by no means a given for GK110.

Assuming that NVIDIA can move enough defective GK110 GPUs through Tesla and Quadro, a consumer GK110 card is going to depend on what AMD has to offer and what GK114 is like. If NVIDIA can beat AMD with GK114, why use GK110? They're better off financially using a 300mm2 die than a 500mm2 die since the price of their high-end consumer card is more-or-less fixed at $500.
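The die-size economics are easy to sketch: at a fixed $500 selling price, gross dies per wafer scale roughly with the inverse of die area. A standard approximation (ignoring yield), used here purely for illustration:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation: wafer area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for area in (300, 500):  # the two die sizes compared above
    print(f"{area} mm^2: ~{gross_dies_per_wafer(area)} gross dies per 300 mm wafer")
# ~197 vs. ~111: the smaller die yields nearly 1.8x the candidate dies per wafer.
```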
LMFAO! I can see it now. Nvidia engineers: "I know we've spent $150+ million in developing this chip, but let's keep it to the HPC market, where we'll only sell 75,000 total units. I realize we won't make any significant money on it, but JHH's salary is only $1 per year, so it's all good."
The bulk of the development costs for a GPU are generational costs. NVIDIA had to develop the common Kepler architecture regardless of whether they actually sell GK110 in large volumes, so while GK110 was by no means cheap to develop (thanks to HyperQ and other new GK110 functionality), it's also not going to have to pay for the R&D of the entire Kepler program. It's a very Intel-like arrangement this time around, meaning that it's okay to sell relatively few chips if the margins are high enough.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
So here's my question - if there's a consumer (i.e., GeForce) part in the first half of next year, what tradeoffs will be made for having the compute stuff on die? There's little doubt that there will be tradeoffs to some extent; the question is what - will part of the core be disabled? Lowered clockspeeds? Or will compute be removed from the GeForce part? If there are tradeoffs, what performance benefit will GK110 have over the old Kepler in gaming performance (in exchange for having compute on die)?

Will they go back to power-hungry huge dies? Or will they eschew compute in a GeForce Kepler refresh? More interestingly, will AMD eschew compute in their next part? The 7970 is still more efficient than Fermi (which obviously has compute on die as well) -- but will AMD go the route of making 2 separate SKUs? If AMD removes compute and Nvidia adds it, then AMD is again the more efficient one, as in the old days, while Nvidia has monster die sizes again -- will this create a rift in the space-time continuum? Will the 8970 be the compute-free efficient part while the GTX 780 is the super-large-die monster that drinks watts by the gallon?

So many questions, so few answers. :)
 