Lenzfire.com: Entire Nvidia Kepler Series Specifications, Price & Release Date

Page 5 - AnandTech Forums

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
wtf, I don't know if you are being sarcastic or not, but if the hotclock really reduces die area, then why are Nvidia GPUs always bigger and more power hungry than AMD's in the same performance category?

Same reason they are on the whole better at GPGPU. Different architecture.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Yup, but msi lightning and asus dcu models usually get them. Ek makes custom card blocks.

EK didn't make a block for the MSI Lightning GTX 580 or 6970. They did make one for the Asus Direct CU GTX 580, but not the 6970.
Look it up on the Configurator... http://www.coolingconfigurator.com/

You have to be mindful about what you can easily obtain when it comes to water blocks, because it can get really pricey if you have to get a custom block or put together a system of universal blocks. I ran into this with my motherboard when I did water cooling a while back. There were no reasonably affordable (IMO) options for my motherboard.

Anyway, this is totally off topic. Just pointing this out so you don't end up buying a card and waiting forever for a full cover block.
 

Elfear

Diamond Member
May 30, 2004
7,163
819
126
EK didn't make a block for the MSI Lightning GTX 580 or 6970. They did make one for the Asus Direct CU GTX 580, but not the 6970.
Look it up on the Configurator... http://www.coolingconfigurator.com/

You have to be mindful about what you can easily obtain when it comes to water blocks, because it can get really pricey if you have to get a custom block or put together a system of universal blocks. I ran into this with my motherboard when I did water cooling a while back. There were no reasonably affordable (IMO) options for my motherboard.

Anyway, this is totally off topic. Just pointing this out so you don't end up buying a card and waiting forever for a full cover block.

That's the tough choice, it seems, with full-cover blocks. They're easy to find for the reference cards, but the reference cards don't have the most robust PCB design. I'm really hoping the non-reference cards will just add another power phase.



I'm probably oversimplifying what needs to be done but if the choke and pad could be added it should allow the reference blocks to fit.

Sorry for the OT. Go Kepler!
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Well, that's a pretty staggering if. If they were true, parts like the 660 would be great, but the numbers just don't make sense. Look at the 680 vs the 670: performance relative to the 7970 is claimed to be 21% higher (1.45/1.2), while the 680 is clocked the same and only has 14% more SPs, ROPs and bus width than the 670. Sure, memory is clocked a little higher, but in effect this data claims that Kepler gets greater-than-unity increases in performance from added stream processors. Now that's amazing.

Actually, at least this part does make sense assuming the 670 is bandwidth limited, since the 680 has a 26% increase in bandwidth, which lines up decently with the 21% increase in performance.

And seeing as the 670, according to these rumours, will have approximately 45% higher performance than a 580 (assuming the 7970 is 20% faster than a 580), while also having 45% higher bandwidth (280 GB/s versus 192.4 GB/s), that might actually be the case.

Although that doesn't make the rumours true of course.
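The ratios in that post can be checked with a few lines of Python. To be clear, the performance and bandwidth figures below are the rumoured/quoted ones from this thread, not confirmed specs:

```python
# Rumoured performance relative to the HD 7970 (leaked chart figures)
perf_680 = 1.45
perf_670 = 1.20
print(round(perf_680 / perf_670, 3))   # ~1.208, i.e. the claimed ~21% gap

# Rumoured GTX 670 bandwidth versus the GTX 580's actual figure
bw_670 = 280.0    # GB/s, from the leak
bw_580 = 192.4    # GB/s, GTX 580
print(round(bw_670 / bw_580, 3))       # ~1.455, i.e. ~45% more bandwidth
```

So the arithmetic at least is internally consistent, which is all the post claims.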
 

Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,329
126
EK didn't make a block for the MSI Lightening GTX 580 or 6970. They did make one for the Asus Direct CU GTX 580, but not the 6970.
Look it up on the Configurator... http://www.coolingconfigurator.com/

You have to be mindful about what you can easily obtain when it comes water blocks because if can get really pricey if you have to get custom block or have to put together a system of universal blocks. I ran into this with my motherboard when I did water cooling a while back. There were no reasonably affordable (IMO) options for my motherboard.

Anyway, this it totally off topic. Just pointing this out so you don't end up buying a card and waiting for a full cover block forever.

Bitspower does lightning blocks http://www.bitspower.com.tw/index.php?main_page=product_info&cPath=6_20&products_id=2570. You have to shop around and be willing to order from overseas sometimes.

I won't buy until the block is also available. I'm not in a big rush this gen as I can run any game out there maxxed out on my current setup. I'm upgrading to move down to two cards and get another 20% to 30% at the same time.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
http://www.anandtech.com/show/5200/...-7000m-and-nvidias-geforce-600m-mobile-gpus/2

Getting sick of these complete BS rumours. Nothing but junk, just like in the months leading up to Fermi's launch. Get your act together, nvidia: catch up on 28nm and put some cards out, or do a paper launch at the least.

It's taking forever for the MSI Lightning 7970 to come out, which is the card I want, with a full cover waterblock. So I figure I have a few months until that is available. I was hoping nvidia would release by then so I could compare their offerings, but it's starting to look really bleak with nothing but these obvious fantasy leaks.

Stay away from rumor threads then, if they affect you so much...it's your own fault...no vendor owes you anything.

Their schedule doesn't care about you...so why do you care if there are no rumors about an unannounced product?
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Understand that the thermal requirements for ultrabooks are even stricter than for notebooks. While I'm aware that GK110 will not be in an ultrabook, the underlying architecture _must_ be super efficient for it to work in an ultrabook setting. AFAIK Fermi was never put in an ultrabook; even the mobile part is not efficient enough...the architecture is not efficient enough.

Charlie has hinted that Kepler is super small and has great thermals, and I'm not sure everything he says is credible, but I'm inclined to believe that Kepler will not be a large-die solution. This has been stated numerous times on many websites: Kepler is supposed to be NV's "efficient" chip - their first one ever.

That is actually quite believable, especially in light of all the criticism that fermi 1 got at first. And if they can get great thermals and still outperform amd then they will almost get a free pass on being late IMHO.
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
That is actually quite believable, especially in light of all the criticism that fermi 1 got at first. And if they can get great thermals and still outperform amd then they will almost get a free pass on being late IMHO.

It's kind of interesting to think about. I mean, Kepler will for sure deliver on performance, but I wonder how receptive users will be to Kepler in ultrabooks? AMD has an advantage there: they make great APUs with CPU + GPU on one chip, while a producer would have to combine an Intel CPU with Kepler if they were to go the nvidia route. AMD would have a huge price advantage. It makes sense that they made Tegra, now that I think about it; they're definitely being squeezed out of the ultra-mobile x86 market in that respect. And also power usage + user experience are preferred on ultrabooks, not all-out performance. Should be interesting to see what happens.

I'm kinda sad that discrete sales are down so much. Personally I don't give two craps about power consumption, but I understand why AMD + NV are more concerned with efficiency these days. I'm eager to see if NV has finally made an efficient chip; they haven't done that in many, many years.
 
Last edited:

lifeblood

Senior member
Oct 17, 2001
999
88
91
Stay away from rumor threads then, if they affect you so much...it's your own fault...no vendor owes you anything.

Their schedule doesn't care about you...so why do you care if there are no rumors about an unannounced product?
I disagree. If they want my money, they do owe me something. And they do want my money, and yours too.
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
I disagree. If they want my money, they do owe me something. And they do want my money, and yours too.

No, just because you want their product doesn't mean that they owe you anything.
And again, in this context, there is no product.

All there is is a lot of static, combined with fanboys from both sides and people thinking they are important enough for companies to care.

This thread (read: a lot of posts in this thread) will look so silly after launch.
I recommend you do it...after launch...read the rumor threads again...you learn to ignore the loud empty drums ;)
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,956
1,268
126
I'd totally get the 660 or 660ti if those rumors are correct. 580 performance from a 660?? I'll believe it when I see it.
 

blastingcap

Diamond Member
Sep 16, 2010
6,654
5
76
No, just because you want their product doesn't mean that they owe you anything.
And again, in this context, there is no product.

All there is is a lot of static, combined with fanboys from both sides and people thinking they are important enough for companies to care.

This thread (read: a lot of posts in this thread) will look so silly after launch.
I recommend you do it...after launch...read the rumor threads again...you learn to ignore the loud empty drums ;)

Out of curiosity, do you consider yourself a fanboy?
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
Of course he doesn't....just because all his posts are hardcore NVDA love, it's no indicator...it only applies to other people.:rolleyes:

That'll be enough out of you
-ViRGE
 
Last edited by a moderator:

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
Same reason they are on the whole better at GPGPU. Different architecture.
Thanks Lonyo.

It's also because NV is less efficient when it comes to transistor (xtor) layout on the die, something AMD/ATI is better at. One would think NV would be getting better at that now that they are implementing SoCs for phones.
 

Schmide

Diamond Member
Mar 7, 2002
5,712
978
126
Let's analyze this, boys

- GTX 690 will be the flagship Kepler, on a single PCB with dual GPU - Chart has the GTX 690 using a 224 bit memory bus x 2 - check

- GTX 660 will have a (224 bit) memory bus. Not even possible - check

- GTX 680 has a 512 bit memory bus, while the dual GPU version the 690 (GTX 680x2) has a 224 bit memory bus - lolworthy

The kicker really is their GTX 690 specs. According to that chart it has a 224 bit memory bus with 1.75gb of memory times two. Hilarious.

Yep this sounds plausible.

EDIT: Also noticed the 224bit memory on some. WTF?!?

The funny part is that the dual GTX 680 (690) has a 224 bit memory bus on that chart. Whoever made it is a comedian for sure :D

Ok guys. Just to settle this issue. If nVidia is using active ECC you get a 256*(12.5%) allocation of bits which would give you a 256-32=224 bit bus.
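For what it's worth, the arithmetic in that ECC scenario checks out. This is just the in-band scheme sketched above (a slice of the bus/RAM reserved for check bits), not a claim about what Nvidia actually ships:

```python
# In-band ECC: reserve 12.5% of the bus for check bits instead of
# adding extra memory chips.
bus_bits = 256
ecc_bits = int(bus_bits * 0.125)   # 12.5% of 256 = 32 bits for ECC
data_bits = bus_bits - ecc_bits
print(data_bits)                   # 224 bits left for data
```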
 

Lonyo

Lifer
Aug 10, 2002
21,938
6
81
Ok guys. Just to settle this issue. If nVidia is using active ECC you get a 256*(12.5%) allocation of bits which would give you a 256-32=224 bit bus.

Yep, but that's never going to happen on a card like this.
It would just waste RAM and memory bandwidth in a situation where it's not useful.

If you were talking about cards for HPC/Workstation use, then you would have a point, but it's almost certainly not going to be the case for this level of card, so it's irrelevant.

EVEN IF it did support it, it wouldn't be enabled by default on this class of card, since it would basically be a "software" feature to use the extra RAM for ECC purposes, as has been done in the past.

http://www.anandtech.com/show/2977/...tx-470-6-months-late-was-it-worth-the-wait-/4


It would be more convincing to talk about the 8800 series, with its previously unheard-of 320-bit memory bus, or (even more obviously) the GTX 260 with its 448-bit memory bus.
Hey, 448-bit? What's that? Oh, it's 2x224-bit! What the hell!

Remember people, for the specs to be remotely true, with 5.8GHz GDDR5, NV will almost certainly have had to ENTIRELY REDESIGN THEIR MEMORY CONTROLLERS PRETTY MUCH FROM SCRATCH.
They are currently limited to 4GHz GDDR5; these specs claim 5.8GHz GDDR5. That tells you to forget anything else about memory buses right there.
 
Last edited:

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
www.facebook.com
EDIT: Also noticed the 224bit memory on some. WTF?!?
That means that 1/8 of the chip is binned, so it is possible. What's not possible is a 292-bit wide memory bus, since 292 isn't divisible by 8.
- Kepler does not have hotclocks - Chart has hotclocks for all kepler parts, when Kepler doesn't have hotclocks - check
- GTX 690 will be the flagship Kepler, on a single PCB with dual GPU - Chart has the GTX 690 using a 224 bit memory bus x 2 - check
- GTX 660 will have a (224 bit) memory bus. Not even possible - check
- Stream processors? Really? Cuda core counts - wrong - check
- GTX 680 has a 512 bit memory bus, while the dual GPU version the 690 (GTX 680x2) has a 224 bit memory bus - lolworthy

The kicker really is their GTX 690 specs. According to that chart it has a 224 bit memory bus with 1.75gb of memory times two. Hilarious.
You've apparently never heard of a PCB with 2 GPU dies on it, and a 224-bit bus is most definitely possible, especially if there are (7) 32-bit memory channels. That means it would have 28 or 56 ROPs.
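A quick sanity check of the bus-width argument, assuming (as the posts do) that a GDDR5 bus is assembled from 32-bit memory channels, so legal widths are just multiples of 32:

```python
def possible_widths(channel_bits=32, max_channels=16):
    """Bus widths reachable by populating 1..max_channels 32-bit channels."""
    return {channel_bits * n for n in range(1, max_channels + 1)}

widths = possible_widths()
print(224 in widths)   # True: 7 channels of 32 bits
print(292 in widths)   # False: 292 / 32 is not an integer
```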
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
You've apparently never heard of a PCB with 2 GPU dies on it and a 224 bit bus is most definitely possible especially there will be (8) 32 bit memory channels. That means it will have 28 or 56 ROPs.

I'm not saying this lends credence to these rumored specs, but a 32-bit memory controller could plausibly be the way Nvidia tackled the memory controller problems they had with Fermi. Make smaller controllers that are inherently easier to design and debug.... the only downside I see is that there would be twice as many memory controllers communicating with the chip, which could present a different set of problems on its own. Still, if Nvidia is trying to maximize bandwidth on a 256-bit bus, making smaller, more aggressive controllers is one avenue.
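The bandwidth trade-off being argued here is simple arithmetic: peak GDDR5 bandwidth is bus width times per-pin data rate, divided by 8 to get bytes. A quick sketch using the figures floating around this thread (rumoured, not confirmed):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(256, 4.0))   # 128.0 GB/s at Fermi-era memory speeds
print(peak_bandwidth_gbs(256, 5.8))   # 185.6 GB/s with the rumoured 5.8 GHz GDDR5
```

Which is why faster controllers on the same 256-bit bus would be such a big win.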
 

Will Robinson

Golden Member
Dec 19, 2009
1,408
0
0
I'm not saying this lends credence to these rumored specs, but a 32-bit memory controller could plausibly be the way Nvidia tackled the memory controller problems they had with Fermi. Make smaller controllers that are inherently easier to design and debug.... the only downside I see is that there would be twice as many memory controllers communicating with the chip, which could present a different set of problems on its own. Still, if Nvidia is trying to maximize bandwidth on a 256-bit bus, making smaller, more aggressive controllers is one avenue.
Pardon my ignorance but how does one make a more "aggressive" controller?:confused:
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
Maybe it's explained in the armchair engineer handbook, right after the more efficient 32 ROPs in the design of Tahiti? ;)
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
Wonder if Kepler will show its big head before April?

Got a birthday in April, and basically I got the green light from the gf this evening to purchase any GPU for $600 for myself, and I am hoping nvidia has a 7970 killer.

Not that I am a fanboy, but damn, nvidia has jack shit to offer right now and I wish for some competition.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Pardon my ignorance but how does one make a more "aggressive" controller?:confused:

Even though you are probably trying to be snide, I'll give you a straight-up answer. I am referring to the controller's ability to handle faster memory speeds - something that Nvidia has had a hard time achieving in recent history.