NVIDIA GeForce GTX 780 To Be Based on GK114 GPU


Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
There are numerous sources that don't back that.

I'm not talking about GPU-Z or wikipedia either, but people actually analyzing the dies in one way or another.

I researched this a lot back in the day and came away with these figures: 3.20 billion vs 3.00 billion transistors, and 529mm2 vs 520mm2, GTX480 vs GTX580 respectively. Approximate values, of course.


So I should take your word (based on pure layman guess-work) over NVIDIA's?...:whiste:
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
So he has higher demands than others; it happens. I'll be getting 3x GK110, too. I like my SSAA.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,002
126
So I should take your word (based on pure layman guess-work) over NVIDIA's?...:whiste:
Guess-work? There's no guess-work when we have pictures like these:

[Image: GeForce_GTX400_die_03.jpg]


529mm2 for the GTX480, just like I said. Back then I also found multiple credible sources stating the GTX580 is 520mm2.
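
The "ruler" method behind a number like that is straightforward: scale pixel measurements of the die against a feature of known physical size in the same photo. A minimal sketch in Python; the pixel counts and the 40 mm reference below are illustrative stand-ins, not measurements taken from that particular shot:

```python
# Estimate die area from a photo using a reference feature of known length.
def die_area_mm2(die_px_w, die_px_h, ref_px, ref_mm):
    mm_per_px = ref_mm / ref_px  # photo scale from the known reference
    return (die_px_w * mm_per_px) * (die_px_h * mm_per_px)

# e.g. a 920 x 920 px die next to a 40 mm package edge spanning 1600 px
# (illustrative numbers chosen to land on the commonly cited figure):
print(f"{die_area_mm2(920, 920, 1600, 40.0):.0f} mm^2")  # -> 529 mm^2
```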
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
NVIDIA's GeForce GTX 580: Fermi Refined

GF110 is a mix of old and new. To call it a brand-new design would be disingenuous, but to call it a fixed GF100 would be equally shortsighted. GF110 does have a lot in common with GF100, but as we’ll see when we get into the design of GF110 it is its own GPU. In terms of physical attributes it’s very close to GF100; the transistor count remains at 3 billion (with NVIDIA undoubtedly taking advantage of the low precision of that number), while the die size is at 520mm2. NVIDIA never did give us the die size for GF100, but commonly accepted values put it at around 530mm2, meaning GF110 is a hair smaller.


Thus the trick to making a good GPU is to use leaky transistors where you must, and use slower transistors elsewhere. This is exactly what NVIDIA did for GF100, where they primarily used 2 types of transistors differentiated in this manner. At a functional unit level we’re not sure which units used what, but it’s a good bet that most devices operating on the shader clock used the leakier transistors, while devices attached to the base clock could use the slower transistors. Of course GF100 ended up being power hungry – and by extension we assume leaky anyhow – so that design didn’t necessarily work out well for NVIDIA.
For GF110, NVIDIA included a 3rd type of transistor, which they describe as having “properties between the two previous ones”. Or in other words, NVIDIA began using a transistor that was leakier than a slow transistor, but not as leaky as the leakiest transistors in GF100. Again we don’t know which types of transistors were used where, but in using all 3 types NVIDIA ultimately was able to lower power consumption without needing to slow any parts of the chip down. In fact this is where virtually all of NVIDIA’s power savings come from, as NVIDIA only outright removed few if any transistors considering that GF110 retains all of GF100’s functionality.
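
The transistor-mix argument in that last paragraph can be sketched as a toy optimization: for each clock domain, pick the least-leaky transistor type that is still fast enough. All figures below are invented for illustration (NVIDIA has never published per-block leakage numbers); the point is only that adding a middle option lowers total leakage without slowing any block down:

```python
# name: (relative max switching speed, relative leakage per transistor)
TYPES = {
    "slow":   (1.0, 1.0),
    "medium": (1.5, 2.0),  # hypothetical middle type added in GF110
    "fast":   (2.0, 4.0),
}

# (functional block, relative speed its clock domain requires)
BLOCKS = [
    ("shader cores (hot clock)", 2.0),
    ("schedulers", 1.4),
    ("memory controllers (base clock)", 1.0),
]

def total_leakage(available):
    """Sum leakage when each block uses the least-leaky transistor type
    that still meets its speed requirement."""
    total = 0.0
    for _, need in BLOCKS:
        _, leak = min((TYPES[t] for t in available if TYPES[t][0] >= need),
                      key=lambda sl: sl[1])
        total += leak
    return total

print(total_leakage(["slow", "fast"]))            # GF100-style mix -> 9.0
print(total_leakage(["slow", "medium", "fast"]))  # GF110-style mix -> 7.0
```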
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
So the 780 will be fully enabled with a 384-bit bus, while the 770 would probably have one SMX disabled and a 320-bit bus?
 

brandon888

Senior member
Jun 28, 2012
537
0
0
So the 780 will be fully enabled with a 384-bit bus, while the 770 would probably have one SMX disabled and a 320-bit bus?

Dude, as I understand it... if the 770 will have 320-bit and the 780 384-bit... then the 770 will have 2.5 GB of VRAM and the 780 3 GB? :)
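
That follows from GDDR5 putting each memory chip on its own 32-bit channel, so the bus width fixes the chip count. A quick sketch of the arithmetic; the 2 Gbit (256 MB) per-chip capacity is my assumption, though it was the standard density on cards of that era:

```python
# VRAM = (bus width / 32-bit channel per chip) x per-chip capacity.
def vram_gb(bus_width_bits, chip_mbytes=256):  # 256 MB = 2 Gbit chips (assumed)
    chips = bus_width_bits // 32
    return chips * chip_mbytes / 1024

print(vram_gb(320))  # 10 chips -> 2.5 GB
print(vram_gb(384))  # 12 chips -> 3.0 GB
```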
 

Lonbjerg

Diamond Member
Dec 6, 2009
4,419
0
0
Guess-work? There's no guess-work when we have pictures like these:

[Image: GeForce_GTX400_die_03.jpg]


529mm2 for the GTX480, just like I said. Back then I also found multiple credible sources stating the GTX580 is 520mm2.

Transistor count... transistor count... :rolleyes:
I don't care that you can use a ruler...
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Wow, so next year we will get a 670 for $220? Heh... hope so... because a 15% performance gain sounds too lame ;/

That graph might have quite a bit of truth to it, but I do not see GK114 going for $300 in its top SKU, especially if the best GK110 card will be going for >=$599.
 

skipsneeky2

Diamond Member
May 21, 2011
5,035
1
71
I hope it's true. I'm ready for an upgrade. Three GK110 on an ASRock Extreme 11 with 32GB of PC3-19200 should do the trick.

LOL, you're awesome.

Half the guys in the forums would give their left nut for a 7970, you've got three of them, and here comes the next potential upgrade apparently. :biggrin:

Rock on!:cool:
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I wonder if they'll still be able to hit 6-7GHz on these bigger buses, and also what kind of OC headroom we'll see from improved cooling methods such as water cooling, or whether they'll be voltage-locked near their shipping threshold like GK104 is.
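
For reference, the bandwidth implied by those clocks is simple arithmetic (GDDR5 "GHz" figures are effective data rates; the bus widths below are the ones being speculated about in this thread, not confirmed specs):

```python
# Memory bandwidth = bus width in bytes x effective data rate.
def bandwidth_gbs(bus_width_bits, effective_gtps):
    return (bus_width_bits / 8) * effective_gtps

for bus in (256, 320, 384, 512):
    print(f"{bus}-bit @ 6 GT/s: {bandwidth_gbs(bus, 6.0):.0f} GB/s")
# 256-bit -> 192 GB/s (the GTX 680's actual figure), 320-bit -> 240 GB/s,
# 384-bit -> 288 GB/s, 512-bit -> 384 GB/s
```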
 

brandon888

Senior member
Jun 28, 2012
537
0
0
Sorry, but will my i7 3770 handle a GTX 780 if it's 40-50% faster than the 680? :D Note that I have a non-K CPU :D so the CPU bottleneck will be minor? :D
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Even today's cards are beginning to be bottlenecked at 1080p, depending on the game and scene, of course. With GK110 and the following cards, reviewers might want to drop 1080p altogether or at least use some SGSSAA or something.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
Did you check the Dirt screens? There is no difference. I wonder how AMD takes a lead? Maybe they "lock performance on Nvidia GPUs"? :whiste:

Pay attention to the shadows... they're a bit red; the shadow is "reflecting" the car ;)
Anyway, I don't doubt that AMD is locking the performance.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
I wonder if they'll still be able to hit 6-7GHz on these bigger buses, and also what kind of OC headroom we'll see from improved cooling methods such as water cooling, or whether they'll be voltage-locked near their shipping threshold like GK104 is.

What size bus? 384-bit: no problem, I would think. 512-bit: big problem, IMO (unless Nvidia has really nailed the crosstalk problems).