[3dcenter] GK104 specs


Arzachel

Senior member
Apr 7, 2011
903
76
91
Yep! I was going to add that to badboy's reply. It's all ironically converging together.

Yeah, both went for the same goal (performance) with very different methods. Starting from this gen, both companies are picking up things that work from each other, so it'll be interesting to watch how they'll develop these concepts further.
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
If these numbers are fake, someone did a heck of a job.

btw, I've never seen a comprehensive set of "numbers" like this be fake before an ATI/NV launch. Yes, I've seen the bar graphs faked plenty of times.

The percentages correspond with what you're used to seeing per game, aka better in 3DMark, and not so good with AA in BF3.

If true, then the GK104 part will be 5-15% faster than a 7970 and 35-45% faster than a GTX580. That's a GIGANTIC leap forward for Nvidia if this is true, considering GK104 is the mainstream part, and also considering these numbers are from an overclocked part (950 overclocked to 1050MHz).

The revolutionary changes are the shader core count and the single clock domain. Could it be true!?!?!? *cue suspenseful music*

gk104benchmarkspcinlip2y1w.png


edit: If the rumored price is legit, then Nvidia has beaten ATI on performance/dollar. If this chip is ~345mm² (compare Tahiti at 352mm²), then Nvidia has closed the gap and maybe even taken the lead on performance/watt and performance/mm².
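As a rough sanity check of the perf/mm² claim (treating the ~345mm² die size and the 5-15% performance lead as rumors from this thread, not confirmed specs), the arithmetic looks like this:

```python
# Perf/mm^2 sanity check using the rumored numbers above.
# All inputs are rumors/estimates from this thread, not confirmed specs.
gk104_area_mm2 = 345.0     # rumored GK104 die size
tahiti_area_mm2 = 352.0    # Tahiti (HD 7970) die size
gk104_perf_vs_7970 = 1.10  # midpoint of the rumored 5-15% lead (7970 = 1.0)

# Ratio of performance-per-area: (perf/area of GK104) / (perf/area of Tahiti)
perf_per_mm2_ratio = (gk104_perf_vs_7970 / gk104_area_mm2) * tahiti_area_mm2
print(f"GK104 perf/mm^2 vs Tahiti: {perf_per_mm2_ratio:.2f}x")  # ~1.12x
```

So even at the low end of the rumored range, a slightly smaller die matching or beating Tahiti would put GK104 ahead on perf/mm², which is the point being made here.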
 
Last edited:

VulgarDisplay

Diamond Member
Apr 3, 2009
6,188
2
76
If these numbers are fake, someone did a heck of a job.

btw, I've never seen a comprehensive set of "numbers" like this be fake before an ATI/NV launch. Yes, I've seen the bar graphs faked plenty of times.

The percentages correspond with what you're used to seeing per game, aka better in 3DMark, and not so good with AA in BF3.

If true, then the GK104 part will be 5-15% faster than a 7970 and 35-45% faster than a GTX580. That's a GIGANTIC leap forward for Nvidia if this is true, considering GK104 is the mainstream part, and also considering these numbers are from an overclocked part (950 overclocked to 1050MHz).

The revolutionary changes are the shader core count and the single clock domain. Could it be true!?!?!? *cue suspenseful music*

gk104benchmarkspcinlip2y1w.png

I've seen at least 10 of these charts with comprehensive numbers that are fake in the last 2 weeks.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
I'd like to see one of the ten you've seen, aside from this one.

If you translate the entire page this chart originates from, what THEY say is that it comes from the same poster who brought all the other fake Chiphell slides -- and that Chiphell poster was banned for peddling fake slides. Therefore it is fake. Further, he registered on a new board under the same name, as a new registrant, and made one post with this chart. And like Vulgar said, many charts that were fake in the past had the same fonts and outline and such.

If you suddenly saw a newly registered user here with an AMD 8970 slide, and that poster has 1 post, what would you think?

Pretty fishy. Like I said, I'm sure some NV guys will believe it no matter what :D (just like the Lenzfire specs). But hey, if it's real, that's great too -- it will be a great card (and expensive, if its numbers are real). But the chart is fishy to say the least.
 
Last edited:

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
I'm an ATI fan, and I believe the chart. Not saying I know it's real or fake, just that it seems plausible in my opinion.

Most of the time the real leaks come from an account nobody knows. The first 5870 pics and benches out of Tweakers did. PHK and that old infamous Czech fellow have been leaking stuff under their names for a while... like the first GTX470 and 4870 pictures.

You're just as likely to prove it fake as I am to think it's legit.

These are the type of benches I'm talking about that I haven't seen faked: the legit in-house numbers that are used to create the slides in the review decks. Like this one, seen a month before the 5870:

20090914202757_5850%205870.png
 

Crap Daddy

Senior member
May 6, 2011
610
0
0
I'm an ATI fan, and I believe the chart. Not saying I know it's real or fake, just that it seems plausible in my opinion.

Most of the time the real leaks come from an account nobody knows. The first 5870 pics and benches out of Tweakers did. PHK and that old infamous Czech fellow have been leaking stuff under their names for a while... like the first GTX470 and 4870 pictures.

You're just as likely to prove it fake as I am to think it's legit.

These are the type of benches I'm talking about that I haven't seen faked: the legit in-house numbers that are used to create the slides in the review decks. Like this one, seen a month before the 5870:

20090914202757_5850%205870.png

Let's not forget the Chiphell table that everybody was bashing, which has the same look and which has now proved to be similar to the GK104 specs as presented by 3DCenter, BrightSideOfNews and many respectable sites. Hell, even Charlie commented something like "I told you" regarding no hotclocks and the shader count.

So, if it's close to reality, I'd say that what we really don't know is the TDP, and based on this I would say that what you think is an overclocked part (1050 core, 1425 mem) is in fact the stock clock. For this part to deserve the GTX680 moniker it has to be faster than AMD's fastest, and GK104 can do that with higher clocks.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
I have major issues believing that nVidia midrange chip is going to be that much faster than AMD's top end chip. As that means that the top end chip is going to be what, 100% faster than a GTX580?! I just don't see it.
 

Crap Daddy

Senior member
May 6, 2011
610
0
0
I have major issues believing that nVidia midrange chip is going to be that much faster than AMD's top end chip. As that means that the top end chip is going to be what, 100% faster than a GTX580?! I just don't see it.

No, just between 60-70%. Same as the difference between the GK104 and the GF114.
 

superjim

Senior member
Jan 3, 2012
293
3
81
(supposedly) leaked GK104 benchmark numbers:
http://videocardz.com/30564/nvidia-geforce-600-specifications-geforce-gtx680-benchmark-leaks-out

GeForce GTX 680 GK104 (GPU Clock: 1050, Memory: 1450 MHz) (overclocked)
vs
Radeon HD 7970 (stock)

=

GeForce GTX680 about ~5-10% faster, when overclocked, vs a stock 7970, in the leaked results.

200 MHz core overclock + 200 MHz memory overclock = ~5-10% faster than a stock 7970.

Not bad for a card that's probably gonna be around $399.

GK104 was being touted as the "mid-range" chip, and according to that link it's second only to the dual-GPU GK110 GTX690. That's not exactly mid-range in my book, and I call BS on the $399 price. I'll be blown away if it's anything under $499.
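One way to frame the clock question being debated here: if the leaked numbers really were taken at 1050MHz and stock really is 950MHz (both rumors), a naive linear clock scaling, which is optimistic, puts the stock card roughly even with a stock 7970:

```python
# Scale the leaked overclocked result back to the rumored stock clock.
# Assumes performance scales linearly with core clock (an upper bound);
# real scaling is worse because memory bandwidth doesn't track core clock.
oc_clock_mhz = 1050.0    # core clock reportedly used for the leaked numbers
stock_clock_mhz = 950.0  # rumored stock core clock
oc_lead_vs_7970 = 1.075  # midpoint of the quoted ~5-10% lead over a stock 7970

stock_estimate = oc_lead_vs_7970 * (stock_clock_mhz / oc_clock_mhz)
print(f"Estimated stock GK104 vs stock 7970: {stock_estimate:.2f}x")  # ~0.97x
```

Which is why the "is 1050 actually the stock clock?" question matters so much for whether this part deserves the GTX680 name.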
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Performance/watt for Kepler is supposed to be ~3x that of Fermi, according to Nvidia.


Nope, DP FLOPS/watt should be 3x that of Fermi.

Rumors have the GK100 at 2304 cores @ 950MHz, which is close to 4.37 TFLOPS SP. Let's assume DP will be half of that (2.18 TFLOPS), and you have ~3x Fermi's 0.79 TFLOPS DP ;)
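The FLOPS arithmetic behind this, using the rumored 2304 cores @ 950MHz and assuming 2 FLOPs per core per clock (FMA) with DP at half the SP rate -- all assumptions from the rumor, not confirmed specs:

```python
# Back-of-envelope peak-FLOPS math for the rumored GK100 figures above.
cores = 2304
clock_ghz = 0.95
sp_tflops = cores * 2 * clock_ghz / 1000.0  # 2 FLOPs/core/clock (FMA)
dp_tflops = sp_tflops / 2.0                 # assuming DP at 1/2 the SP rate
fermi_dp_tflops = 0.79                      # GF110 peak DP (full-rate part)

print(f"SP: {sp_tflops:.2f} TFLOPS")                       # ~4.38
print(f"DP: {dp_tflops:.2f} TFLOPS")                       # ~2.19
print(f"vs Fermi DP: {dp_tflops / fermi_dp_tflops:.1f}x")  # ~2.8x
```

At similar board power, that ~2.8x in peak DP throughput is roughly the claimed ~3x in DP FLOPS/watt.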
 

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
edit: If the rumored price is legit, then Nvidia has beaten ATI on performance/dollar. If this chip is ~345mm² (compare Tahiti at 352mm²), then Nvidia has closed the gap and maybe even taken the lead on performance/watt and performance/mm².

What makes you assume that *if* it really ~is~ as good as that graph, nvidia won't jack their prices up past AMD's cards?
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
It's funny that AMD has had the more efficient GPU's at graphics processing, yet they are the ones now confirming and moving in the direction Nvidia has been going in since the 8800GTX introduction (dual-purpose, GPGPU foundation). I think it's a strong indication that Nvidia's engineers saw that it was more efficient, from a business perspective, to sacrifice efficiency in graphics prowess to create GPU's that serve a purpose for more than just rendering graphics. And now AMD's GCN all but confirms Nvidia's vision is the better way to make money off of GPU's.

Weird how it works out, isn't it?

No, no it's not. The only reason NVIDIA went for compute performance from the beginning and pushed CUDA hard was that it put them in a favorable position for enterprise -- you'll see that NVIDIA has much higher enterprise market penetration than AMD. The problem is, NVIDIA has focused on performance at all costs; they haven't made efficient GPUs for a long time now. AMD instead focused entirely on consumer GPU/gaming performance.

AMD didn't want to compromise their efficiency, so they kept die sizes largely the same and kept transistor density high on their priority list. NVIDIA and AMD have largely similar compute performance now that GCN is here, but NVIDIA has a huge lead when it comes to corporate support and applications. AMD is still more efficient for now, of course (we'll see what happens with GK104, but I don't expect it to be more efficient). So that's the difference: AMD never sacrificed performance/watt.

perfwatt_1920.gif


Now hopefully companies start making enterprise and consumer programs exploiting things like DirectCompute.
 

Tempered81

Diamond Member
Jan 29, 2007
6,374
1
81
What makes you assume that *if* it really ~is~ as good as that graph, nvidia won't jack their prices up past AMD's cards?

Maybe they will. $299 was rumored. Theo V. says $349-$399. I think $349-$499. Who knows, maybe it'll be $559. It's a lot different now than with Fermi vs. Cypress. Cypress was much faster than a GTX460.

If they price it cheap, then Nv caught AMD like AMD caught Nv with the $649 GTX280.

If it's priced the same as a 7970/580... I have no idea what they'll price it. I'll just stick with Theo's 349-399 since I'm not sure on that part.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
I have major issues believing that nVidia midrange chip is going to be that much faster than AMD's top end chip. As that means that the top end chip is going to be what, 100% faster than a GTX580?! I just don't see it.

That could also be the reason the top end chip is apparently going straight from GF110 to GK110, without a GK100 in between. Seems to me like NV might have overextended themselves with GK100 with the end result being GK100 got cancelled, but GK104 is a beast and will compete on the high end until GK110 comes out. You also have to consider that GK110 looks like it will be launching closer to summer/fall so it will most likely be competing with AMD's 28nm refresh and not Tahiti.

...pure speculation on my part based on unconfirmed rumors, but this would make sense IMO.
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Usually a substantial and significant node and arch change redefines price/performance compared to the older arch and node. This may be the first time in GPU history that an MSRP increase was more than its performance increase. Execution sure has its rewards.

Really?
Is it really? Really really?


GTX280 says "Hi!".

First result on Google for "GTX280 MSRP" http://arstechnica.com/hardware/reviews/2008/06/nvidia-geforce-gx2-review.ars/8

(The 9800 was simply a die shrunk 8800, so while you could argue that the GTX280 was the same node as the 9800, it was the first new architecture on that new node, and was competing against an older node designed card. NV also chose to use an older process rather than transitioning to the newest one for their high end GPU, partly due to timings, probably, and also due to yield concerns, so in fact it could have been a 55nm GPU on launch vs a GPU designed at 90nm).


For all 7970's improved efficiency and massive horsepower, one simple fact nearly torpedoes its attractiveness: the card's MSRP is set for $549. Actual prices, of course, could run much higher, depending on availability. For comparison's sake, take a look at the going prices for the other cards we tested today, as listed on NewEgg on 6/16/08. I priced out an EVGA 9800 GTX, as the Palit model was not available.

Palit 9800 GX2: $507.99
EVGA 9800 GTX: $269.99
Sapphire Radeon 3870X2: $319.99


It's hard to justify buying a $549 HD7970 when it's outperformed by a GTX560 2Win that happens to cost $30 less, and I know which card I'd personally pick. Make no mistake, price is the really problematic factor here. If AMD was launching the HD7970 at an MSRP of $399, the card would offer an entirely different, and vastly improved, price/performance ratio. With an estimated die size of 576mm2, however, a $399 price point just isn't a realistic option for a full GTX 280 GPU. Transitioning to a 55nm process will undoubtedly improve the situation, but until that happens, NVIDIA is somewhat stuck.

If you're both an AMD fan and a member of this highly elite market, a set of HD7970 cards in Crossfire are the upgrade you've been waiting for, no questions asked. Almost everyone else, however, will be better off with a G92-based board, whether that's a 9800 GX2, a 9800 GTX, or even something a little easier on the wallet. Anyone currently in the market for a new video card would be well advised to hold off a little longer to see what NVIDIA drops down the pipe. Even if you aren't in the market for an NVIDIA solution, a new product refresh inevitably shakes up prices on both sides of the fence.

As for the HD7970, it doesn't really change the field. For the vast majority of customers, it's older-card "business as usual," and probably will be for at least the next few months. AMD did launch a second Tahiti part this month, the $449 HD7950, but based on my HD7970 results, I can't see how much better the 7950 could possibly fare. At $449, it's cheaper than our ~$500 GTX580 card, but it'll also run only slightly faster across the board.

Even given the HD7000's disappointing debut, AMD is still sitting pretty with a commanding performance lead in virtually every segment. NVIDIA's GTX6x0 parts may be shipping soon, but the burden of catching up to AMD is firmly on NVIDIA's shoulders. For the moment, the Radeon manufacturer may feel it can afford to wait and milk the highest end of the market.
 
Last edited:

Arzachel

Senior member
Apr 7, 2011
903
76
91
That could also be the reason the top end chip is apparently going straight from GF110 to GK110, without a GK100 in between. Seems to me like NV might have overextended themselves with GK100 with the end result being GK100 got cancelled, but GK104 is a beast and will compete on the high end until GK110 comes out. You also have to consider that GK110 looks like it will be launching closer to summer/fall so it will most likely be competing with AMD's 28nm refresh and not Tahiti.

...pure speculation on my part based on unconfirmed rumors, but this would make sense IMO.

Yeah, this is probably how it'll play out: Nvidia opting to skip GK100 entirely, instead getting GK104 out as fast as possible to counter Tahiti while working on GK110.

Lonyo said:
*edited quote*

There is one glaring difference that kinda invalidates your writeup: $30 is vastly different from $141, and quite a few people are willing to pay a $30 premium to not deal with dual GPUs.
 
Last edited:

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
No, no it's not. The only reason NVIDIA went for compute performance from the beginning and pushed CUDA hard was that it put them in a favorable position for enterprise -- you'll see that NVIDIA has much higher enterprise market penetration than AMD.

Nvidia has much higher market penetration in the pro space because they built their GPUs for it. DING DING DING. CUDA would be nothing if it wasn't for Nvidia creating a dual-purpose GPU on the high end with good, sustainable compute power.

The problem is, NVIDIA has focused on performance at all costs; they haven't made efficient GPUs for a long time now. AMD instead focused entirely on consumer GPU/gaming performance.

It's funny how the very graph you linked to shows the hd6970 and gtx580 tied for the exact same efficiency. Did AMD lose focus on their high end or did Nvidia catch up?

NVIDIA and AMD have largely similar compute performance now that GCN is here

Neat, AMD caught up with Nvidia for a few months. I'm sure that will all change here in the next few weeks, and then it will become just as lopsided as before come this fall.

AMD is still more efficient for now, of course (we'll see what happens with GK104, but I don't expect it to be more efficient). So that's the difference: AMD never sacrificed performance/watt.

You're right, we shall see with GK104. But back to my original post -- AMD confirming Nvidia's direction in GPUs. This is what happened with GCN.

it’s interesting that GCN effectively affirms most of NVIDIA’s architectural changes with Fermi. GCN is all about creating a GPU good for graphics and good for computing purposes; Unified addressing, C++ capabilities, ECC, etc were all features NVIDIA introduced with Fermi more than a year ago to bring about their own compute architecture.
http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/7

So yeah, it is ironically funny. I guarantee AMD had to give up some die space and performance per watt to add in those compute features. Nothing is free.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Nvidia has much higher market penetration in the pro space because they built their GPU's for it. DING DING DING. CUDA would be nothing if it wasn't for Nvidia creating a dual-purpose GPU on high end that has good, sustainable, compute power.

There's another big reason nVidia has so much larger a presence: they were first. Contrary to what many think, the pro graphics market doesn't change just because one company offers more performance. They take the path of least resistance. Many programs were/are developed on nVidia hardware. Optimizing for another brand, or both, is too much work for no real reason. If it works fine on nVidia hardware, there's no benefit to them in changing.
 

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
Maybe they will. $299 was rumored. Theo V. says $349-$399. I think $349-$499. Who knows, maybe it'll be $559. It's a lot different now than with Fermi vs. Cypress. Cypress was much faster than a GTX460.

If they price it cheap, then Nv caught AMD like AMD caught Nv with the $649 GTX280.

If it's priced the same as a 7970/580... I have no idea what they'll price it. I'll just stick with Theo's 349-399 since I'm not sure on that part.




I'm thinking someone affiliated with, or at least loyal to, nvidia cooked up some reasonably believable numbers and slapped a too-good-to-be-true price on it.

Seems like a pretty ingenious strategy if you think about it. Tech forums would obviously pick up on the 'leaked specs/price' and maybe, possibly, stall thousands of buyers' decisions who were about to pull the trigger on a 79xx card.
From nvidia's perspective, who cares if the leak is false? They could still 'win' from this if it plants a seed of doubt in some people's minds about buying a 79xx card.

then again if the performance numbers are accurate, I think nvidia would be silly to price their cards below the competition.

(just my take on the situation)
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
There's another big reason nVidia has so much larger a presence: they were first. Contrary to what many think, the pro graphics market doesn't change just because one company offers more performance. They take the path of least resistance. Many programs were/are developed on nVidia hardware. Optimizing for another brand, or both, is too much work for no real reason. If it works fine on nVidia hardware, there's no benefit to them in changing.

This is also true.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Sounds like you're hedging your bets already tviceman without knowing anything concrete ;)
 
Last edited:

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
That could also be the reason the top end chip is apparently going straight from GF110 to GK110, without a GK100 in between. Seems to me like NV might have overextended themselves with GK100 with the end result being GK100 got cancelled, but GK104 is a beast and will compete on the high end until GK110 comes out. You also have to consider that GK110 looks like it will be launching closer to summer/fall so it will most likely be competing with AMD's 28nm refresh and not Tahiti.

...pure speculation on my part based on unconfirmed rumors, but this would make sense IMO.


This is how I am seeing it as well. They hit the same problems getting a massive die on a new node out in a timely fashion and have pushed it back, but rather than wait till it's ready, they'll release a smaller and more easily manufactured chip sooner: GK104.

My belief is that they'll release the card as a 680 that can compete with a 7970 with a die size similar to Tahiti, charge at least $450-$500 for it, and put out a few cut down models for $250-$400.

Late this year they'll release the huge GK110 chip to compete against an AMD refresh of Tahiti, at which point they'll solidly take the performance crown, unless AMD breaks the mold and decides to make a huge die as well, which I don't see happening.

The one steaming turd is the claim that the full-fledged GK104 is going to cost $300; that is not going to happen, but I do see a cut-down model at that price point. This is nvidia: if they put out a card in the X80 model line, you will be paying a fat price for it.