GeForce Titan coming end of February

Discussion in 'Video Cards and Graphics' started by Rikard, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. cmdrdredd

    cmdrdredd Lifer

    Joined:
    Dec 12, 2001
    Messages:
    25,012
    Likes Received:
    46
    The rumor is that the rumor is false. Sometimes I'd rather not read rumors, but around here it's sometimes unavoidable.
     
  2. Ferzerp

    Ferzerp Diamond Member

    Joined:
    Oct 12, 1999
    Messages:
    6,176
    Likes Received:
    7

    Would be absolutely no surprise.

    When playing at rumor mongering, one must always remember that until the product is in the hands of reviewers and the NDA is lifted, most details are probably BS.
     
  3. exar333

    exar333 Diamond Member

    Joined:
    Feb 7, 2004
    Messages:
    8,513
    Likes Received:
    3
    I may grab this instead of another 670 if the performance is better.
     
  4. Rikard

    Rikard Senior member

    Joined:
    Apr 25, 2012
    Messages:
    428
    Likes Received:
    0
    ...and more information from the same sources:
    Titan is Titan
    I found it interesting that they mention a smaller version of GK110 at a (yet to be specified) more affordable price. Give us more, will you!
     
  5. vladicaris

    vladicaris Member

    Joined:
    Jan 23, 2013
    Messages:
    36
    Likes Received:
    0
    Good news :) Maybe I will swap my ATI card for that new Nvidia card when it comes out.
     
  6. rituraj

    rituraj Member

    Joined:
    Nov 10, 2012
    Messages:
    97
    Likes Received:
    0
  7. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    It's a fake; it's an overclocked GTX 690. Someone was able to partially remove the black stuff:

    [image]
     
  8. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,458
    Likes Received:
    695
    I am amazed people actually believed that score was real. A stock GTX680 scores what, X3400-3500 points? To hit > X7000 you'd need to at least double some of GK104's resources (texture fill-rate and memory bandwidth), because Kepler does not scale linearly with CUDA cores/pixel fill-rate alone, due to texture fill-rate and memory bandwidth bottlenecks.

    GTX680 vs. GTX660Ti
    Pixel fill-rate = 47% more
    Texture fill-rate = 26% more
    Memory bandwidth = 33% more

    GTX680 is roughly 25-27% faster than a GTX660Ti.

    To double the texture fill-rate, you would need a full 15-SMX part with 240 TMUs clocked at 1130MHz against GK104's 128 TMUs clocked at 1058MHz. To double the memory bandwidth, you would need GDDR5 at 8GHz.
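
    A quick back-of-the-envelope check of those doubling requirements (a sketch in Python; the unit counts and clocks are the rumor-era figures quoted above, not confirmed specs):

    [CODE]
    # Texture fill-rate (GTexel/s) = TMUs x core clock; bandwidth (GB/s) =
    # bus width in bytes x effective GDDR5 data rate. All figures as quoted.
    def tex_fillrate(tmus, clock_mhz):
        return tmus * clock_mhz / 1000.0

    def mem_bandwidth(bus_bits, eff_ghz):
        return bus_bits / 8 * eff_ghz

    # Full 15-SMX GK110 @ 1130 MHz vs. GK104 @ 1058 MHz
    print(tex_fillrate(240, 1130) / tex_fillrate(128, 1058))  # ~2.00x
    # 384-bit GDDR5 8 GHz vs. 256-bit GDDR5 6 GHz
    print(mem_bandwidth(384, 8.0) / mem_bandwidth(256, 6.0))  # exactly 2.00x
    [/CODE]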

    The numbers do not make any sense given the laws of physics:

    GK104 (GTX680) = 1058MHz, 294mm2 die, 256-bit GDDR5 6GHz = 185-190W of power
    Tahiti XT2 (HD7970GE) = 1050MHz, 365mm2 die, 384-bit GDDR5 6GHz = 230-240W of power
    K20X (GK110 Tesla) = 732MHz, 550mm2 die, 384-bit GDDR5 5.2GHz = 235W TDP <<<< die size increases 51% over Tahiti XT, but GPU clocks drop 30% and the GDDR5 speed drops as well. To run a GPU at 732MHz you can drop the voltage a lot lower, which reduces power consumption dramatically, since dynamic power scales roughly with voltage squared times frequency >>>>

    vs.

    Titan to reach X7100 = 1130MHz, 550mm2 die, 384-bit GDDR5 8GHz = XXX W <<<< To stabilize GK110 at 1.13GHz on a 550mm2 die, you'd need to put a lot more voltage into it than a 732MHz K20X needs, which should increase power consumption dramatically. Alternatively, comparing these specs to the GTX680: die size grows 87% on the same 28nm node, a much more power-hungry 384-bit bus is added, and power consumption only grows 60-65W? >>>>

    The fastest GDDR5 is 7GHz as far as I am aware. If the 1.05GHz HD7970GE in reference form is already using 230W+ of power on 28nm with a 365mm2 die, how do people expect a >1GHz, 2880 SP, 240 TMU, 384-bit GDDR5 7GHz+ GK110 with a 550mm2 die to use only 250W of power? NV must have access to alien 28nm tech.

    Finally another comparison to put things in perspective:

    The GTX480 had a 526mm2 die and was clocked at just 700MHz, with GDDR5 3.7GHz on a 384-bit bus. That card used 270W of power in games. GK110 supposedly grows to a 550mm2 die, and it would need clocks of 1.13GHz and at least GDDR5 7GHz to double the GTX680's performance. 28nm transistors at GlobalFoundries offer a 60% performance increase over 40nm at comparable leakage. 1.13GHz on a 28nm GK110 is a 61% increase in transistor speed over the 700MHz GTX480 on 40nm. There goes your entire benefit of the 28nm node, and you still have not addressed the additional power consumption of 6GB of GDDR5 7GHz versus the 1.5GB of GDDR5 3.7GHz in the GTX480. Such a chip would use > 270W of power.
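
    Checking that clock arithmetic (a minimal sketch; the 60% node-speed figure is the GlobalFoundries claim quoted above):

    [CODE]
    # How much of the node's claimed ~60% transistor-speed headroom does the
    # 700 MHz -> 1130 MHz clock bump alone consume?
    increase = 1130 / 700 - 1
    print(f"{increase:.0%}")  # 61% -- essentially the entire 28nm benefit
    [/CODE]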
     
    #433 RussianSensation, Feb 1, 2013
    Last edited: Feb 1, 2013
  9. blackened23

    blackened23 Diamond Member

    Joined:
    Jul 26, 2011
    Messages:
    8,556
    Likes Received:
    0
    For $900, the real Titan had BETTER score that high :p
     
  10. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    Fill-rate is not so relevant anymore; what matters is compute power (and bandwidth, of course). The 680 has 26% more compute power than the 660 Ti. That fits the average performance difference quite well.

    And of course Kepler scales linearly with more CUDA cores as long as all the other factors increase too, bandwidth above all. The 680 is a doubled 650 Ti in every regard, with about 14% higher clocks (including GTX680 boost). That means the 680 should land at about 228% of the performance if the 650 Ti is 100%. And that is perfectly in line with benchmark results:

    224% in 1080p
    231% in 1600p
    http://www.techpowerup.com/reviews/AMD/Catalyst_12.11_Performance/23.html

    225% in 1080p
    http://www.computerbase.de/artikel/grafikkarten/2012/test-nvidia-geforce-gtx-650-ti/6/

    So GK110 will make perfect use of its increased unit count as long as sufficient bandwidth is provided (up to about 50% more from the 384-bit interface with 6Gbps memory).
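
    For what it's worth, that scaling arithmetic checks out (a quick sketch, assuming the doubled-unit and +14% clock figures above):

    [CODE]
    # GTX 680 as a "doubled GTX 650 Ti": 2x units at ~14% higher clocks
    expected = 2.0 * 1.14
    print(f"{expected:.0%}")  # 228% -- right between the measured 224-231%
    [/CODE]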
     
    #435 boxleitnerb, Feb 1, 2013
    Last edited: Feb 1, 2013
  11. Ferzerp

    Ferzerp Diamond Member

    Joined:
    Oct 12, 1999
    Messages:
    6,176
    Likes Received:
    7
    Redacted for a moment while I check further.

    The edited image appears to be a fake.

    That doesn't mean the original image is what it claims to be, but the edited image doesn't appear to be what it claims to be either.
     
    #436 Ferzerp, Feb 1, 2013
    Last edited: Feb 1, 2013
  12. brandon888

    brandon888 Senior member

    Joined:
    Jun 28, 2012
    Messages:
    537
    Likes Received:
    0
    Sorry guys, but who the *** needs 6GB of VRAM? Even in SLI :/ Better to make it with 3-4GB of VRAM and $150-200 cheaper...
     
  13. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,458
    Likes Received:
    695
    I already addressed this in my analysis.

    2880 SPs @ 1.13GHz is a 2-fold increase over 1536 SPs @ 1058MHz. You need at least this to go from an X3400-3500 score to X7100 in 3DMark11, assuming the benchmark scales linearly.
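
    That two-fold figure follows directly from shader count times clock (a quick check, using the numbers quoted here):

    [CODE]
    # Raw shader throughput: 2880 SPs @ 1130 MHz vs. 1536 SPs @ 1058 MHz
    ratio = (2880 * 1130) / (1536 * 1058)
    print(f"{ratio:.2f}x")  # ~2.00x
    [/CODE]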

    You haven't explained how it is physically possible to reach that GPU clock speed on 28nm with a 550mm2 die. We are right back to the discussion we all had before, where I said you cannot have a full >1GHz GK110 with 2880 SPs, 240 TMUs and a 384-bit GDDR5 7GHz bus without blowing past 250W of power and going back to the Fermi days. I see that people still believe NV can overcome the laws of physics. Or are people seriously expecting NV to design a 300W-TDP single-GPU card?

    We already see what happens when you grow the die just a bit from 294mm2, add many double-precision transistors (which GK110 has in spades over GK104) and a 384-bit bus, and maintain >1.0GHz clocks ==> the 365mm2 HD7970GE, which already uses GDDR5 6GHz, is clocked at 1.05GHz, and draws 230-240W. So people here think NV can increase the die size from 365mm2 to 550mm2, bump the GPU clock from 1.05GHz to 1.13GHz on GK110, and use just 10-20W more power on the same 28nm node the HD7970GE uses??? :biggrin:

    Never in the history of AMD/ATI or NV has anyone doubled GPU processing performance on the same node when the previous flagship was already using 180-190W of power (GTX680). You cannot double GPU performance (VRAM bottlenecks excluded) on the same node with just a 32-39% power penalty, from the GTX680's 180-190W to GK110's 250W. This is not physics; it's fanboys' wet-dream fantasies.

    GTX650Ti = 75W
    GTX680, which at least doubles everything in the GTX650Ti = 2x the performance on the same 28nm node = 186W
    http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_650_Ti_Amp_Edition/26.html

    GK110 (essentially the performance of 4x GTX650Tis) needs to double nearly everything in the GTX680 (shader performance, ROP/TMU speed, memory bandwidth) to get 2x the performance.

    Assuming the same efficiency as going from GTX650Ti to GTX680, a Titan with 2x the speed of a GTX680 would need to use 186W + (186W - 75W) = 297W.
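
    Spelling out that last line (a sketch of the assumption used here, namely that each performance doubling adds the same absolute wattage):

    [CODE]
    # Power extrapolation from the TPU max-power figures above
    gtx650ti = 75                        # W
    gtx680 = 186                         # W, ~2x the 650 Ti's performance
    doubling_cost = gtx680 - gtx650ti    # 111 W per performance doubling
    print(gtx680 + doubling_cost)        # 297 W for a 2x-GTX680 Titan
    [/CODE]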
     
    #438 RussianSensation, Feb 1, 2013
    Last edited: Feb 1, 2013
  14. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    I haven't said at all that I believe this score is possible ;)
    I just contested your claim that Kepler doesn't scale well with more units or is fill-rate (pixel/texel) bottlenecked. The TMU/ALU ratio is the same on GK104 and GK110 (16 TMUs per SMX), so even if there were a bottleneck, it would not change anything.

    My personal speculation for Titan is a 50% speed bump over the GTX680 with a TDP of 250-270W.

    No. Why do you insist on linking TPU all the time for power consumption? And always the max values at that? You should know by now that:
    1. single values like max and min are more prone to errors/fluctuations than averages
    2. more samples give a better representation; in game A things might look different than in game B, etc.

    Honestly, enough with TPU. As part of an average they are fine, but not as the only source.

    GTX 650 Ti = 72W
    GTX 680 = 169W
    http://www.3dcenter.org/artikel/ein...auchs/eine-neubetrachtung-des-grafikkarten-st

    By the way, that implies a roughly 1:1 correlation between power consumption and performance between the two.
    For GK110 that could mean 169W * 1.5 = 254W, so maybe a 270W TDP.
    Sounds doable to me.
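
    The same linear extrapolation, spelled out with the 3DCenter figures (a sketch, assuming power tracks performance roughly 1:1):

    [CODE]
    # 169W/72W ~ 2.35x power for ~2.28x performance, i.e. close to 1:1
    gk110 = 169 * 1.5      # a GK110 at 1.5x GTX 680 performance
    print(round(gk110))    # 254 W -- plausibly a 270W TDP part
    [/CODE]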
     
    #439 boxleitnerb, Feb 1, 2013
    Last edited: Feb 1, 2013
  15. RussianSensation

    RussianSensation Elite Member

    Joined:
    Sep 5, 2003
    Messages:
    19,458
    Likes Received:
    695
    I think I didn't explain it well :). At any point in time, something is bottlenecking a GPU architecture. To ensure no bottleneck exists and guarantee a doubling in performance, you have to double at least Kepler's weakest areas (texture fill-rate and memory bandwidth). You cannot double both, since GDDR5 8GHz is not available, and to double the texture fill-rate you would need a fully unlocked 15-SMX part with 240 TMUs at 1.1GHz+.

    The claim of 50-60% faster than a GTX680 at 240-250W is a much more reasonable one. I could buy an X5300-5500 score from Titan. :thumbsup:
     
    #440 RussianSensation, Feb 1, 2013
    Last edited: Feb 1, 2013
  16. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    Please explain to me why texture fill-rate is among Kepler's weakest areas.
    If anything, Kepler has too much fill-rate. The GTX680 has more than twice the texel fill-rate of the GTX580, yet the factors determining performance lie entirely elsewhere, namely compute power and bandwidth.

    I would agree that a score of X7000 is impossible on account of bandwidth. However, keep in mind that GK110 has a more sophisticated cache system than GK104, which could allow for more efficient use of the available bandwidth. But it certainly would not boost effective bandwidth that much. I won't speculate on numbers, as I just don't know.
     
    #441 boxleitnerb, Feb 1, 2013
    Last edited: Feb 1, 2013
  17. notty22

    notty22 Diamond Member

    Joined:
    Jan 1, 2010
    Messages:
    3,376
    Likes Received:
    0
    I think this is within the realm of possibility for a 900MHz Titan.
    Score of tri-GTX580 SLI @ 925MHz:
    X6183 with NVIDIA GeForce GTX 580 (3x) and Intel Core i7-960 Processor

    http://www.3dmark.com/3dm11/1206923...206923%3Fkey%3DYn2Vj5vhmpjQksQz9pQ0qenCD0e636

    It all goes back to Nvidia wanting 3x the performance of Fermi. I'm hoping GK110's unique features allow that to happen. 3DMark 11 scores may also be a best case compared to game performance. One of the mysteries we shall see resolved.
     
  18. tviceman

    tviceman Diamond Member

    Joined:
    Mar 25, 2008
    Messages:
    6,229
    Likes Received:
    103
    Just like all the Fermi refresh parts demonstrated improved perf/watt over their original counterparts, I think GK110's perf/watt efficiency is more fairly compared against Kepler's refreshed parts, since it comes out later and node improvements may have occurred since GK104's tape-out. With hot clocks gone and much of GK110's compute-focused transistors fused off (or dormant), hopefully a GeForce-based GK110 can get that 50-60% performance improvement and stay <= 250 watts. I think it's possible.

    Still, $900 is outrageous under any circumstances, IMO. An HD8970 (non-OEM) with a 15-20% speed bump over the current 7970GE should put the performance delta between the two at about the same (give or take 5%) as the HD6970 and GTX580 had between each other. Even if AMD comes out with the same $550 initial MSRP the HD7970 had, $900 would be a laughingstock at that point and would force Nvidia into another GTX280-style price cut.
     
  19. Rvenger

    Rvenger Elite Member / Super Moderator / Video Cards

    Joined:
    Apr 6, 2004
    Messages:
    6,293
    Likes Received:
    4

    People will still buy it because it's GK110: a year-old video card architecture that was supposed to be sold as the GTX 680 for ~$500-600, now offered for the low price of $899.99. I can hear Nvidia licking their chops from my desk right now.
     
  20. AdamK47

    AdamK47 Lifer

    Joined:
    Oct 9, 1999
    Messages:
    12,111
    Likes Received:
    68
  21. blackened23

    blackened23 Diamond Member

    Joined:
    Jul 26, 2011
    Messages:
    8,556
    Likes Received:
    0
  22. PowerK

    PowerK Member

    Joined:
    May 29, 2012
    Messages:
    146
    Likes Received:
    5
  23. Celeryman

    Celeryman Senior member

    Joined:
    Oct 9, 1999
    Messages:
    310
    Likes Received:
    0
    Nobody needs it, but I sure do want it. Blender Cycles could eat up 6GB pretty quickly with the right scene.
     
  24. blackened23

    blackened23 Diamond Member

    Joined:
    Jul 26, 2011
    Messages:
    8,556
    Likes Received:
    0
    What games would need 6GB? I still find 6GB hard to believe...
     
  25. boxleitnerb

    boxleitnerb Platinum Member

    Joined:
    Nov 1, 2011
    Messages:
    2,596
    Likes Received:
    1
    Well, I saw some gameplay videos in 4K with MSAA over at PCGH.de where 3GB was occasionally exceeded. A 7970 6GB was used in those tests, but the question is: would it have stuttered with only 3GB?

    In 1-2 years I can see some titles beginning to exceed 3GB at the right settings, but honestly, whoever buys a Titan also has the money to buy its successor, which will have more memory and certainly more power to use it properly.
     