Lenzfire.com: Entire Nvidia Kepler Series Specifications, Price & Release Date


Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
Wow! There is a rumor frenzy going on right now. It's really looking like a page-hit game, atm. It's the wild west out there! At this point, the only things I'll believe are demos (real ones) from NV or their AIBs. If NV is really running late, we should get an architectural update in May at the GPU Tech Conference. They'll need some bait on the hook by then.
 

ShadowOfMyself

Diamond Member
Jun 22, 2006
4,227
2
0
I actually think the 45% figure might be pretty accurate...

By the time it gets here, AMD will have its refresh available or close to it, and we all know how easy it is to get 30% more performance just from OCing the 7970, never mind additional architecture tweaks

So that would put the GTX 680 at 10-15% faster than the refresh, which is in line with the last couple of generations... Sounds about right to me
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
I actually think the 45% figure might be pretty accurate...

By the time it gets here, AMD will have its refresh available or close to it, and we all know how easy it is to get 30% more performance just from OCing the 7970, never mind additional architecture tweaks

So that would put the GTX 680 at 10-15% faster than the refresh, which is in line with the last couple of generations... Sounds about right to me

Based upon what we saw last shrink, I'll be impressed if NV beats the 7970 by more than 30%. And the longer they take, the more GCN driver improvements AMD can bring out.
 

Riek

Senior member
Dec 16, 2008
409
15
76
So Nvidia can push ~30% more performance by increasing the SP count by 16% and leaving the core clocks the same.

And about 60% with 33% more SPs?

:rolleyes:
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Every time a new generation of GPU is about to come out, the rumor mill churns out that the card will have double the SIMD units, double the memory bandwidth and double the ROPs. And just about every time, we're disappointed.

A GTX 680 with 1024 stream processors and a 512-bit bus would be a GPU of unprecedented size - in the 700-750mm^2 range. That would be unheard of; like 40% bigger than Fermi! There's also just about no way to cool a GPU of that size; at 850 MHz we're talking something like 400W power consumption for one card.
 

BD231

Lifer
Feb 26, 2001
10,568
138
106
It's definitely true that a 1000-core Fermi derivative would wipe the floor with the 79xx series; that much is certain. Hopefully we get it.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
A GTX 680 with 1024 stream processors and a 512-bit bus would be a GPU of unprecedented size - in the 700-750mm^2 range. That would be unheard of; like 40% bigger than Fermi! There's also just about no way to cool a GPU of that size; at 850 MHz we're talking something like 400W power consumption for one card.

And you know that how? Every new generation since 2006 has almost doubled the SIMD count. 128->240->480
It's mostly other things that make the chip big and hungry like all the HPC stuff they've been concentrating on since GT200.
 
Feb 19, 2009
10,457
10
76
And you know that how? Every new generation since 2006 has almost doubled the SIMD count. 128->240->480
It's mostly other things that make the chip big and hungry like all the HPC stuff they've been concentrating on since GT200.

And did you look at the die sizes when comparing them, factoring in the node?

Just doubling everything on a full shrink means the die size stays the same. Add any new features and you need more die space. Unless you believe Kepler is a straight Fermi shrink with no new features, doubling it will mean it's a bigger die.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
And did you look at the die sizes when comparing them, factoring in the node?

Just doubling everything on a full shrink means the die size stays the same. Add any new features and you need more die space. Unless you believe Kepler is a straight Fermi shrink with no new features, doubling it will mean it's a bigger die.

You can also put some effort into making parts smaller. That's aside from the fact that everything above 600mm^2 is not manufacturable anywhere on this planet - not with optical lithography, anyway.

Fermi doubled the shader count of the GTX 280/285, added new features and was smaller. You don't necessarily have to double all the units. AFAIK Tahiti's ROPs are faster than Cayman's, so they didn't need to double them.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Nvidia is keeping it secret of course, but I've seen it mentioned at least 2 dozen times. Shrug. Maybe not confirmed, I suppose.

I don't have much faith in the specs posted here. I do hope the date is correct, if not sooner. (April '12)

Ah, OK. I was wondering if I missed some announcement or something because such a change would be rather drastic and I'd like to read more about it.

Personally I don't see Kepler being a "threw the baby out with the bathwater" microarchitecture redesign over Fermi.

But if they are really going after power usage then it makes sense that they would eliminate hot-clocks since it takes a disproportionate amount of voltage to get those extra MHz.
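
As a rough illustration of why hot-clocks are expensive: dynamic power scales roughly with C * V^2 * f, and reaching the higher shader clock generally requires more voltage, so power grows much faster than the clock does. The voltage and clock pairs below are purely illustrative assumptions, not real Fermi or Kepler figures.

# Rough dynamic-power model: P ~ C * V^2 * f.
# The voltage/clock pairs are illustrative guesses, not measured Fermi/Kepler values.

def dynamic_power(cap_eff, voltage, freq_ghz):
    # Relative dynamic power for a block with effective switched capacitance cap_eff.
    return cap_eff * voltage ** 2 * freq_ghz

base = dynamic_power(cap_eff=1.0, voltage=0.95, freq_ghz=1.0)  # shaders at the base clock
hot = dynamic_power(cap_eff=1.0, voltage=1.10, freq_ghz=1.5)   # hot-clocked shaders, needing more voltage

print(f"hot-clock power vs base: {hot / base:.2f}x for 1.5x the clock")  # ~2.0x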
 

thilanliyan

Lifer
Jun 21, 2005
12,039
2,250
126
We have to wait until APRIL to get this??!! And it will be ONLY 45% faster than the 7970 AND COST MORE??!!

Well what a failure, disappointment, etc, etc. It should be 200% faster and cost only $299!!!!!! I mean nV is in the business of satisfying us enthusiasts and not making money like the evil empire AMD. Why are they charging sooo much??!!

:p
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Honestly, these are about the only specs that I would consider a worthy competitor. However, the 45% faster doesn't seem right. This thing is effectively a doubled GTX 580. I think it was established that the 7970 is around 25% faster than the 580, right?

100 x 1.25 = 125 (7970)
100 x 2 = 200 (680)

200/125 = 1.6, i.e. 60% faster.

I know there are other factors in performance and that doubling the specs does not always double the performance, but I sure as heck expect it to perform better than 45% faster if it has 1024 CUDA cores and a 512-bit bus.

That is, again, unless nVidia had to weaken the cores in order to fit more in a certain die size...
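
A minimal sketch of that back-of-the-envelope estimate, using the poster's assumed 1.25x and 2x factors and perfect scaling:

# ArchAngel777's estimate, with the GTX 580 normalized to 100.
gtx580 = 100.0
hd7970 = gtx580 * 1.25   # assumed: 7970 roughly 25% faster than a GTX 580
gtx680 = gtx580 * 2.0    # assumed: doubled specs scale perfectly to 2x performance

speedup_over_7970 = gtx680 / hd7970 - 1.0
print(f"hypothetical GTX 680 vs HD 7970: {speedup_over_7970:.0%} faster")  # -> 60%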
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
I wonder if we will be able to drive surround vision (3 x LCD) on the 1 GPU now it has enough memory?


I don't think memory was the issue, was it? 1.5GB and 3GB GTX580's should be able to pull it off (the 1.5GB may suffer with some resolutions I guess). Or you could buy an AMD 6xxx or 7xxx card and do it. :)
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
http://www.techpowerup.com/157039/NVIDIA-Kepler-To-Do-Away-with-Hotclocks.html

Since the days of NVIDIA's very first DirectX 10 GPUs, NVIDIA has been using different clock domains for the shaders and the rest of the GPU (geometry domain). Over the past few generations, the shader clock has been set 2x the geometry domain (the rest of the GPU). 3DCenter.org has learned that with the next-generation "Kepler" family of GPUs, NVIDIA will do away with this "Hotclock" principle. The heavy number-crunching parts of the GPU, the CUDA cores, will run at the same clock-speed as the rest of the GPU.

It is also learned that NVIDIA will have higher core speeds overall. The clock speed of the GK104, for example, is expected to be set "well above 1 GHz", yielding compute power "clearly over 2 TFLOPs" (3DCenter's words). It looks like NVIDIA too will have some significant architectural changes up its sleeve with Kepler.

lenzfire down for anyone else right now?
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
We have to wait until APRIL to get this??!! And it will be ONLY 45% faster than the 7970 AND COST MORE??!!

Well what a failure, disappointment, etc, etc. It should be 200% faster and cost only $299!!!!!! I mean nV is in the business of satisfying us enthusiasts and not making money like the evil empire AMD. Why are they charging sooo much??!!

:p

Trying to wade through your sarcasm here, but am I to understand that you believe the 7970's performance over the GTX 580 is, or isn't (or is spot on), underwhelming for its price? Sorry, but the screen was dripping with sarcasm. :)
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
Trying to wade through your sarcasm here, but am I to understand that you believe the 7970's performance over the GTX 580 is, or isn't (or is spot on), underwhelming for its price? Sorry, but the screen was dripping with sarcasm. :)

He is mocking the unrealistic expectations people have for Kepler. "It's gonna cost 300 bucks, have double the performance of GTX 580 and come with a pony!"

Actually, Kepler getting rid of hotclocking is a huge deal if it's true. A flagship Kepler card with 1024 CUDA cores @ 1GHz would theoretically be only 25% faster than the GTX 580's 512 cores @ 1.5GHz. 25%, otherwise known as "7970 goes here".
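
For what it's worth, the raw throughput comparison behind that claim is just cores x clock x 2 FLOPs, ignoring IPC and any architectural changes. A minimal sketch, assuming the hypothetical 1024-core part above and the GTX 580's actual 1544 MHz shader clock; under these numbers the raw gain lands closer to ~30%:

# Theoretical single-precision throughput: cores * clock * 2 FLOPs per clock (FMA).
def sp_gflops(cores, clock_mhz):
    return cores * clock_mhz * 2 / 1000.0

gtx580 = sp_gflops(512, 1544)    # GTX 580: 512 cores at the 1544 MHz shader (hot) clock
kepler = sp_gflops(1024, 1000)   # hypothetical Kepler: 1024 cores at 1 GHz, no hot-clock

print(f"GTX 580: {gtx580:.0f} GFLOPs, hypothetical Kepler: {kepler:.0f} GFLOPs")
print(f"raw gain: {kepler / gtx580 - 1:.0%}")  # ~30%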
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
And did you look at the die sizes when comparing them, factoring in the node?

Just doubling everything on a full shrink means the die size stays the same. Add any new features and you need more die space. Unless you believe Kepler is a straight Fermi shrink with no new features, doubling it will mean it's a bigger die.

You never get perfect scaling going down a full node. You might be able to fit 2x as many of the smallest transistors in the same space, but many transistors are of fixed size (like external IO transistors) and can't really be shrunk down. You could do a straight die shrink of a GTX 580, but it wouldn't end up going from 520mm^2 to 260mm^2. Also, with smaller core transistors and a fixed-size drive transistor for something like the GDDR5 bus, another buffer stage in each path might be needed to keep latency and loading under control.
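
A toy model of that effect; the split between shrinkable logic and fixed-size IO/analog area below is a made-up illustration, not actual GF110 data:

# Toy die-shrink model: only part of the die scales with the process.
logic_area_mm2 = 420.0    # hypothetical: shrinkable logic/SRAM portion of a ~520 mm^2 die
fixed_area_mm2 = 100.0    # hypothetical: IO pads, PHYs and analog that barely shrink
scale = 0.5               # ideal area scaling for a full node shrink

shrunk = logic_area_mm2 * scale + fixed_area_mm2
print(f"ideal shrink: {(logic_area_mm2 + fixed_area_mm2) * scale:.0f} mm^2, "
      f"realistic: {shrunk:.0f} mm^2")
# -> ideal 260 mm^2 vs ~310 mm^2 once the non-scaling parts are accounted for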
 

nOOky

Diamond Member
Aug 17, 2004
3,221
2,274
136
Sounds like the HD7970 will be the new mid-range if this is all true. I hope the prices drop to $250 then :p
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
He is mocking the unrealistic expectations people have for Kepler. "It's gonna cost 300 bucks, have double the performance of GTX 580 and come with a pony!"

Actually, Kepler getting rid of hotclocking is a huge deal if it's true. A flagship Kepler card with 1024 CUDA cores @ 1GHz would theoretically be only 25% faster than the GTX 580's 512 cores @ 1.5GHz. 25%, otherwise known as "7970 goes here".

Those SoB's are doing away with the pony? Screw it, kill me now, my life is pointless.

Goodbye Mister Bigglesworth, I hardly knew thee.
 

MrTeal

Diamond Member
Dec 7, 2003
3,916
2,700
136
Those SoB's are doing away with the pony? Screw it, kill me now, my life is pointless.

Goodbye Mister Bigglesworth, I hardly knew thee.

How are we supposed to get glue and stew meat now? Curses.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
If they lose the hotclock, the shader count will most likely more than double. How else would they achieve high compute power, especially for their Tesla products? They need at least 1TF DP, better yet 1.5TF. That means at least 3TF SP, which you cannot do with only 1024 shaders at 1 GHz.
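
The arithmetic behind that, assuming 2 FLOPs per core per clock for single precision and a 1:2 SP:DP ratio (the DP ratio is an assumption for illustration, not a confirmed Kepler spec):

# SP throughput at 2 FLOPs per core per clock; DP assumed to run at half the SP rate (1:2).
def sp_tflops(cores, clock_ghz):
    return cores * clock_ghz * 2 / 1000.0

for cores in (1024, 1536, 2048):
    sp = sp_tflops(cores, 1.0)
    print(f"{cores:>4} cores @ 1 GHz: {sp:.2f} TF SP, {sp / 2:.2f} TF DP")
# 1024 cores give ~2.0 TF SP, well short of the ~3 TF SP needed for 1.5 TF DP at a 1:2 ratio.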