GeForce Titan coming end of February


blackened23

Diamond Member
Jul 26, 2011
It made an absolutely huge difference for me. MSI Lightning 680s:

[Image: GTX-680-LIGHTNING-90.jpg]



This takes a game that is sluggish on stock 680s and makes it smooth. Absolutely worth it.

Overvoltage is a value-added feature, and nvidia absolutely should include it with the Titan. Remember: software overvoltage can be strictly controlled to be safe; I see no reason not to include it. I don't see 50mV being dangerous - that is often all it takes to increase clocks by a hefty amount. I see the "warranty" argument being thrown around a lot, but keep in mind it isn't as if you can just recklessly increase your voltage by 500mV. Software overvoltage doesn't work like that - as mentioned earlier, it is strictly controllable and can be limited to reasonable amounts (such as 100mV).

As I said earlier, a voltage lock wouldn't necessarily prevent my purchase - indeed, my interest is piqued by the Titan. However, it is added value that would sway many others.
 

Grooveriding

Diamond Member
Dec 25, 2008
It's game specific on the GTX 680 with overclocking. The 680 is not the typical nvidia flagship: 256-bit bus, not all that much faster than the GTX 580, etc. The rumoured specs for Titan are what you would have expected to see in the GTX 680, not a 'special' card. Because of that, the GTX 680 winds up starved for memory bandwidth in games where you need it, and core clock increases do nothing to give you more performance.

Battlefield 3 is not bandwidth hungry, so it scales well with a core overclock. Try Crysis/Warhead or Metro 2033 and you'll see little to no improvement from increasing the core clock speed.

All this should be out the window with Titan. With a 384-bit bus and a lot more bandwidth, it should scale with overclocking much more consistently, unlike the GTX 680.
 

ICDP

Senior member
Nov 15, 2012
Nobody is suggesting overvolting should not be an option, just highlighting that overvolting to gain an extra 50-70 MHz over an already decent overclock on a GTX 680 is not really worth it.

Of course, when taken in the context of a "stock" GTX 680 it looks great, but in the context of an already very good OC of ~1250 MHz, another 50 MHz is not a big deal.
 

Tempered81

Diamond Member
Jan 29, 2007
It made an absolutely huge difference for me. MSI Lightning 680s:

[Image: GTX-680-LIGHTNING-90.jpg]

The 7970 GHz Edition is beating the 680 in the recent reviews over at HardwareCanucks. Still, massive performance from that 680 Lightning on the LN2 BIOS. Titan will have voltage control with a hardmod for sure; can't wait to see what kind of clocks show up on HWBot.

[Image: HD7970-MATRIX-91.jpg]
 

RussianSensation

Elite Member
Sep 5, 2003
All this should be out the window with Titan. With a 384-bit bus and a lot more bandwidth, it should scale with overclocking much more consistently, unlike the GTX 680.

For sure. With a 384-bit bus, memory overclocking could be even more rewarding since the bus is 50% wider. A GDDR5 overclock to 7GHz would give the Titan 336 GB/sec of memory bandwidth, or 75% more than the GTX 680's 192 GB/sec. The performance increase at 1600p should be excellent. Groove, you better put up your 680s for sale now. You know you want 2 Titans :p
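
A quick sketch of that bandwidth arithmetic (the 384-bit bus and the 7GHz GDDR5 clock are the rumoured/hypothetical figures discussed above, not confirmed specs):

```python
# bandwidth (GB/s) = (bus width in bits / 8 bytes) * effective GDDR5 clock (GHz)
def bandwidth_gbs(bus_width_bits, effective_clock_ghz):
    return bus_width_bits / 8 * effective_clock_ghz

gtx680 = bandwidth_gbs(256, 6.0)  # stock GTX 680: 192 GB/s
titan  = bandwidth_gbs(384, 7.0)  # rumoured Titan with a 7GHz GDDR5 overclock: 336 GB/s
print(f"{gtx680:.0f} vs {titan:.0f} GB/s ({titan / gtx680 - 1:.0%} more)")
```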
 

tviceman

Diamond Member
Mar 25, 2008
For sure. With a 384-bit bus, memory overclocking could be even more rewarding since the bus is 50% wider. A GDDR5 overclock to 7GHz would give the Titan 336 GB/sec of memory bandwidth, or 75% more than the GTX 680's 192 GB/sec. The performance increase at 1600p should be excellent. Groove, you better put up your 680s for sale now. You know you want 2 Titans :p

I think overclocking will (and should) be a big draw with this product. If this chip can hit 1000 MHz with a small voltage bump, and 7 GHz on the RAM like you pointed out, I'm pretty sure that will outperform SLI'd 680s (at stock). Nvidia has to let up on the overvolt restriction first.

But regardless, $900 is still too high. A meager $100 price decrease would make this product so much more marketable (still not for me). But perhaps Nvidia is going $900 for the 6GB model, followed by $800 for the 3GB model a month or two later, followed by $700 for the first cut-down GK110 SKU, then $600 for the most cut-down SKU. That would allow GK114 to maintain GK104's initial $500 MSRP. The only way this gets disrupted is if AMD pulls a 4800-series pricing smack on Nvidia. But I don't see that happening, as Nvidia would drop prices and the bottom lines of both companies would suffer.

I think I am 100% content in waiting for 20nm mid-range cards at high-end prices. :p
 

Ibra

Member
Oct 17, 2012
For people who ain't thinking.

Fellix:

Well, this image is a fake, or at least it's more likely a fake than the original image.

Why? Because if you use a very poor method to hide the graphics card model in a screenshot, like the one in the original image (something similar to Paint and its pencil tool), then you destroy ALL the information behind that manipulation (it's probably a BMP or JPEG: no layers, no transparency, etc.).

But there's something more in the link you posted that reveals it is a fake:

It's possible to see the graphics card model behind the manipulation of the image, but you can't see anything about the "Time" of the test. So it's clearly suspicious.

PS: I think the original one is a fake, but this second image is a more obvious fake.

A "counter"-fake.
http://www.geeks3d.com/20130201/geforce-titan-twice-faster-than-gtx-680/
 

RussianSensation

Elite Member
Sep 5, 2003

Using mathematics:

The K20X has a 732 MHz GPU clock, 2688 shaders, 5.2 GHz GDDR5 and a 235W TDP.

To get > 2x the score of a 1058 MHz GTX 680 (X3500 per GPU) in 3DMark11, you likely need some combination of at least 2x the shader power, 2x the texture power, 2x the ROP power and 2x the memory bandwidth. Let's assume you don't need all four. What we can say is that at least one of these four conditions must be satisfied to double the score, because a GPU bottleneck has to exist in one of these four parameters. Which ones to pick? We at least have to include memory bandwidth, since we know Kepler is memory bandwidth bottlenecked. To address all possibilities, let's look at all of them:

- You can't achieve 2x the memory bandwidth even on a 384-bit bus without 8 GHz GDDR5. Impossible, since no such GDDR5 exists.
- You can't achieve 2x the shader power without 2688 cores clocked at 1209 MHz.
- You can't achieve 2x the texture fillrate without 240 TMUs clocked at 1130 MHz.
- You can't achieve 2x the pixel fillrate without 56 ROPs clocked at 1209 MHz.

Whether we pick the TMUs, ROPs or shader cores as the most limiting factor doesn't really matter, since to satisfy even the least demanding of these three conditions you need to hit 1130 MHz on the GPU core.

How can you increase the GPU clock from 732 MHz to at least 1130 MHz (+54%), increase the GDDR5 from 5.2 GHz to 8 GHz (even if that GDDR5 were available), and maintain a TDP of 235W? Impossible. None of these conditions is plausible within a 235W TDP limit. If this score is from a Titan overclocked on water with overvoltage and power consumption > 275W, then it's barely plausible, but it still doesn't explain how to remove the memory bandwidth bottleneck with non-existent 8 GHz GDDR5. That screenshot is fake given the rumoured specs, unless 3DMark11 scores do not follow a straightforward formula. If I am wrong, I will eat my hat (a chocolate one).
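
A minimal sketch of that arithmetic (the Titan unit counts used here - 2688 shaders, 240 TMUs, 56 ROPs, 384-bit bus - are the rumoured figures assumed above, not confirmed specs):

```python
# What clocks would a rumoured Titan need to double a 1058 MHz GTX 680 in each resource?
GTX680 = {"shaders": 1536, "tmus": 128, "rops": 32}
TITAN  = {"shaders": 2688, "tmus": 240, "rops": 56}
GTX680_CLOCK_MHZ = 1058

for unit in ("shaders", "tmus", "rops"):
    # Titan's units * clock must reach 2x the GTX 680's units * clock
    needed_mhz = 2 * GTX680[unit] * GTX680_CLOCK_MHZ / TITAN[unit]
    print(f"2x {unit}: ~{needed_mhz:.0f} MHz core clock needed")  # ~1209, ~1130, ~1209

# 2x the GTX 680's 192 GB/s on a 384-bit bus requires 8 GHz effective GDDR5
needed_gddr5_ghz = 2 * (256 / 8 * 6.0) / (384 / 8)
print(f"2x bandwidth: {needed_gddr5_ghz:.0f} GHz GDDR5 needed")
```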

 

raghu78

Diamond Member
Aug 23, 2012
I believe the SweClockers article, which said the GeForce Titan is at 85% of GTX 690 performance, is realistically plausible.

http://translate.google.com/transla...402-nvidia-gor-geforce-titan-med-kepler-gk110

GTX 690 = GTX 680 perf x 1.8
Titan = GTX 680 perf x 0.85 x 1.8 = GTX 680 perf x 1.53

I would say 50% over the GTX 680 is possible. With 2688 shaders at 825 MHz core / 900 MHz Kepler boost, compared to the GTX 690 (3072 shaders, i.e. 2 x 1536, at 915 MHz core / 1019 MHz Kepler boost), the GTX 690 would have about 30% more shading power and perform close to 20% faster (1.5 x 1.2 x GTX 680 perf = 1.8 x GTX 680 perf = GTX 690).
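
A small sketch of that shading-power comparison (the 825/900 MHz Titan clocks are the rumoured figures from the article, so this only illustrates the scaling logic, not confirmed numbers):

```python
# shading power ~ shader count x boost clock (MHz)
titan_shading  = 2688 * 900          # rumoured Titan: 2688 shaders @ 900 MHz boost
gtx690_shading = 2 * 1536 * 1019     # GTX 690: two GK104s @ 1019 MHz boost

print(f"GTX 690 has ~{gtx690_shading / titan_shading - 1:.0%} more shading power")  # ~29-30%
print(f"Titan at 85% of a GTX 690 = {0.85 * 1.8:.2f}x a GTX 680")                   # 1.53x
```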

At 1600p the Titan could perform up to 55-60% faster, because the extra bandwidth would allow the GeForce Titan to gain more at those resolutions.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
I've been waiting for this card for so long...

However, I don't believe anything in the rumors. The rumored specs are a K20X, from clock speed to TDP; even its ridiculous base memory doesn't make sense.

Unless Nvidia is planning to ship a fully enabled K20X to GeForce users, with oodles and oodles of DP performance, it's highly unlikely any of the info leaked so far has any merit whatsoever.

Also at $800+ AMD will get my money instead.
 

RussianSensation

Elite Member
Sep 5, 2003
7.1b transistors = 2X :)

Doesn't work like that for 2 reasons:

1) It assumes the composition of those transistors is similar to GK104's in both physical and functional characteristics:
- it doesn't take into account the extra transistors used for the dynamic scheduler and double precision, or that these might run hotter than transistors used for shaders, TMUs, etc.
- it doesn't take into account that doubling the transistor count doesn't imply doubling the shader, texture, pixel, geometry, cache, and other such resources required for rasterization.

2) It assumes the 7.1B chip is clocked at the same GPU speeds as a chip half its size.

Point #2 is even more questionable, since a 294mm2 GTX 680 draws ~185W at peak in games like Crysis 2. GPU makers do not design the VRM/cooling around average power consumption alone, since it's very possible that some games, like Crysis 3 or Metro Last Light, could have a much higher peak load power due to excessive GPU loads. Therefore, taking the GTX 680's peak in-game power consumption of 185W as a baseline doesn't leave much headroom to double the transistor count and maintain GPU clocks without blowing way past 250W.

Do you actually believe NV can double the performance on the same 28nm node when the 28nm GTX 680 already pulls 180W+? Isn't that breaking the laws of physics? ATI/AMD/NV have never doubled performance on the same node when their previous flagship GPU was already using 180-185W of power in games! Even the full shrink from 40nm to 28nm only implies roughly a 60% reduction in leakage/power consumption at the same transistor switching rates. A 1130 MHz GPU clock on GK110 is more than a 50% increase in transistor performance. How does power consumption rise only from 185W to 235W if we are still on the same 28nm node?
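
To put rough numbers on that, here is a deliberately crude first-order estimate (it ignores voltage, leakage and the fact that not all of GK110's extra transistors would be switching in games, so treat it as an order-of-magnitude illustration only):

```python
gtx680_peak_game_power_w = 185   # measured peak in-game figure cited above
transistor_ratio = 7.1 / 3.54    # GK110 vs GK104 transistor count (billions)
clock_ratio = 1130 / 1058        # clock needed for 2x (from the 3DMark math) vs a 1058 MHz 680

# naive assumption: dynamic power scales with active transistor count x clock
estimate_w = gtx680_peak_game_power_w * transistor_ratio * clock_ratio
print(f"~{estimate_w:.0f} W")    # ~396 W, nowhere near a 235 W TDP
```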

Every time there is hype around the next NV flagship GPU, people tend to overestimate its performance. The GTX 480, 580 and 680 all suffered from rumors that skewed performance way above those GPUs' actual real-world measurements.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
Normally I would think that expecting anything beyond a 1:1 ratio of performance gained to TDP added, unless a present bottleneck is being eliminated, is expecting too much.

Let's say GK104 is a 195W TDP part; for GK110 to be 30% faster, it will need a TDP budget of ~250W...
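
Stated as arithmetic (the 195W figure is the assumed GK104 TDP above):

```python
gk104_tdp_w = 195
print(gk104_tdp_w * 1.30)   # ~253.5 W budget for 30% more performance under a 1:1 assumption
```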


However, there could be some further improvements, since this is GK104 vs GK110; even the 690 saw a marked improvement in perf/watt over 680 SLI.

So assuming we get some process maturity and some binned chips, we might hit 50% best case vs the 680, without bottlenecking being the limiting factor.

The key with Nvidia's big die is not the stock performance, but what's left in the tank due to TDP constraints. GK110 should in all likelihood share the same clock-rate thresholds as GK104, meaning if it's coming in at 800 MHz there should be 50% or more headroom above that, which is where the value in a card like this will be found. That is, of course, assuming Nvidia allows it to happen.

That's the only reason I'm excited for the card: the potential gain from an OC. It should be incredibly beefy when it comes to hardware, even more so than the 7970 is compared to the 680.


The real numbers will probably be closer to 850 MHz at a 250W TDP. Nvidia will gain clocks in the conversion from workstation to GeForce, and they'll increase the TDP to flagship levels. There is no reason to assume they'll market it as the "efficient" flagship, meaning all they would have to counter AMD right now is this mid-range card they've pre-overclocked 45W higher than the 460, and 20W higher than the 560 Ti.
 

notty22

Diamond Member
Jan 1, 2010
Doesn't work like that for 2 reasons:
There are many things in these chips' unique architectures that are beyond the reach of armchair video card engineers, no matter how many games/cards you have examined. It's very easy, and probably the safer bet, to state that they will miss the lofty performance goals.
A good sign for a possible performance surprise is that these chips have been live for a while now. They can perform with a low TDP in certain settings, and do some compute tasks at the stated goals from Fermi.


You can look back at past card launches and look at the basic specs. Take the Cayman launch: the 6970 ran at 880 MHz vs the 5870 at 850 MHz, and many of the obvious specs are fairly close. It ended up at launch using 60 watts more than the 5870, according to AMD's numbers (with an extra 1GB of faster GDDR5). Why did they bother making it (a bigger die) with such close specs? Because there's more under the hood there, maybe.

[Image: table.png]

http://www.xbitlabs.com/articles/graphics/display/radeon-hd6970-hd6950_9.html#sect3
 

RussianSensation

Elite Member
Sep 5, 2003

Balla, you are laying down some well-thought-out posts, second in a row. :thumbsup:

It ended up at launch using 60 watts more than the 5870, according to AMD's numbers (with an extra 1GB of faster GDDR5). Why did they bother making it (a bigger die) with such close specs? Because there's more under the hood there, maybe.

HD6970 vs. HD5870 completely contradicts your entire reasoning. That actually proves mine.

The HD 6970 added only 23% more transistors and 3.5% higher GPU clocks, but the number of shaders fell from 1600 to 1536; the GDDR5 memory clock rose 14.5% but the same 256-bit bus remained; TMUs grew by 20%; ROPs stayed unchanged; VRAM went from 1GB to 2GB; and on the same node real-world power consumption increased by 41W, despite the HD 6970 being just 14-15% faster on average.

vs.

According to you, it's not unrealistic for Titan to double the transistor count, up the clocks to 1130 MHz (or at least keep them at the 680's level), increase memory bandwidth by more than 50% by widening the bus to 384-bit and increasing the GDDR5 speed from 6 GHz to at least 7 GHz, increase VRAM from 2GB to 6GB, grow TMUs by 87.5% (128 to 240), grow ROPs by 75% (32 to 56), grow shaders by 75% (1536 to 2688), and on the same node pay only a 50W increase in real-world power consumption (185W for the 680 to 235W for the Titan)?
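
A small sketch of those ratios (again, the 240 TMU / 56 ROP / 6GB Titan figures are the rumoured ones used in this thread, not confirmed specs):

```python
gtx680 = {"shaders": 1536, "tmus": 128, "rops": 32, "vram_gb": 2}
titan  = {"shaders": 2688, "tmus": 240, "rops": 56, "vram_gb": 6}

for unit in gtx680:
    print(f"{unit}: +{titan[unit] / gtx680[unit] - 1:.1%}")
# shaders: +75.0%, tmus: +87.5%, rops: +75.0%, vram_gb: +200.0%
```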

Can you please elaborate on how NV can nearly double the overall performance and the size of the chip, and triple the VRAM, yet on the same node pay just a 50W real-world power consumption penalty, while AMD couldn't even increase transistors by a quarter and performance by more than 15% on the same node?

Just doubling the performance from the GTX 650 Ti to the GTX 680 costs 110W of power, and the die size only increases from 214mm2 to 294mm2. Yet suddenly NV can increase the die size from 294mm2 to 550mm2, not have to lower the GTX 680's GPU clocks, and power consumption only goes up 50W?

What kind of voodoo physics/alien 28nm transistors does NV have?

If NV has a 275-300W card ready to go, I would be more inclined to believe it can on average double the performance of a GTX680.
 

tviceman

Diamond Member
Mar 25, 2008
Just doubling the performance from the GTX 650 Ti to the GTX 680 costs 110W of power, and the die size only increases from 214mm2 to 294mm2. Yet suddenly NV can increase the die size from 294mm2 to 550mm2, not have to lower the GTX 680's GPU clocks, and power consumption only goes up 50W?

I agree with what you are saying, but you can't use that comparison. The GTX 650 Ti is a cut-down chip; trying to compare the die sizes and power draws of a cut-down chip vs. a completely different chip that is not cut down does not compute. It is probably only ever fair, or somewhat accurate, to compare full chips vs. full chips. If the GTX 650 Ti physically contained only the units it actually runs with, it would be smaller and most likely have a slightly lower TDP, since as it is, it "wasn't good enough" to be a GTX 660.
 

RussianSensation

Elite Member
Sep 5, 2003
I agree with what you are saying, but you can't use that comparison. The GTX 650 Ti is a cut-down chip; trying to compare the die sizes and power draws of a cut-down chip vs. a completely different chip that is not cut down does not compute. It is probably only ever fair, or somewhat accurate, to compare full chips vs. full chips. If the GTX 650 Ti physically contained only the units it actually runs with, it would be smaller and most likely have a slightly lower TDP, since as it is, it "wasn't good enough" to be a GTX 660.

You make a good point. I am saying there are still plenty of ways to reason why Titan doubling GTX680 at 235W on average can't happen.

Remember when ATI and NV nearly doubled performance when they introduced new flagship GPUs on new nodes? Let's ignore the existence of the GTX 600 series entirely for a second and assume we are back in the pre-2006 era of doubling GPU speeds on new nodes. Then going from a 230W 40nm GTX 580 to a 230W 28nm flagship, NV doubling performance would have been the expected outcome, assuming the ideal scenario we saw in the good old days.

40nm GTX580 = 100%
28nm Next gen flagship = 200%

Let's introduce the GTX680 back for reference. TPU's charts show GTX680 being 135% of GTX580 at 1200P and 1600P.

That would make the true 28nm next-gen Kepler flagship 200% / 135% = ~48% faster than the GTX 680. If you look at where a 200% card would land on those charts, it's extremely close to the performance of a GTX 690. These numbers are in line with the 85%-of-GTX 690 rumour and with what we've seen in historical transitions between full nodes. ~50% faster than a GTX 680 at 230-235W actually sounds somewhat reasonable.

Yet another way to look at it: if Titan doubled the performance of a GTX 680, it would be 2.7x faster than a GTX 580. The ONLY time in NV's history this happened was going from the 7900 GTX to the 8800 GTX, because the 7900 GTX was awful and, at the same time, the 8800 GTX was revolutionary. The GTX 580 is not an awful architecture, and Titan is built on Fermi 2.0 (aka Kepler), which means it's not a revolutionary change like going from the fixed pixel pipeline of the 7900 GTX to the unified shaders of the G80.

Also, Computerbase has the GTX 690 outperforming the GTX 680 by 85% at 2560x1600 with 8xAA (a very GPU-limited case). 85% of that 185% gives us about 57% faster than a single GTX 680, again coming back to that same figure of roughly 50% faster than a GTX 680, not 100% faster.
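
The same estimate as a quick sketch (the 135% and 185% figures are the TPU and Computerbase results quoted above; the 85% is the SweClockers rumour):

```python
gtx680_vs_gtx580 = 1.35    # TPU: GTX 680 = 135% of a GTX 580 at 1200p/1600p
ideal_new_flagship = 2.00  # historical "double per full node" expectation vs the GTX 580
print(f"{ideal_new_flagship / gtx680_vs_gtx580 - 1:.0%} faster than a GTX 680")   # ~48%

gtx690_vs_gtx680 = 1.85    # Computerbase: GTX 690 ~85% ahead at 2560x1600 8xAA
print(f"{0.85 * gtx690_vs_gtx680 - 1:.0%} faster, if Titan is 85% of a GTX 690")  # ~57%
```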

I find it amusing that the closer we get to NV's next flagship GPU launch, the faster the mystery card gets. Wasn't the GTX 580 rumored to be a 768 SP part, and the GTX 680 a 2304 SP part? Every time in the last 5 years, the performance of the next NV flagship card has been exaggerated on our forum.
 

tviceman

Diamond Member
Mar 25, 2008
You make a good point. I am saying there are still plenty of ways to reason why Titan doubling GTX680 at 235W on average can't happen.

If Titan is only going to be 235W then Nvidia had better play the overclocking card VERY hard, because even with a 10% perf/watt improvement from architecture and process improvements, 235 watts will translate into something like 35-40% faster, AKA fail at $900.

Also, Computerbase has the GTX 690 outperforming the GTX 680 by 85% at 2560x1600 with 8xAA (a very GPU-limited case). 85% of that 185% gives us about 57% faster than a single GTX 680, again coming back to that same figure of roughly 50% faster than a GTX 680, not 100% faster.

I find it amusing that the closer we get to NV's next flagship GPU launch, the faster the mystery card gets. Wasn't the GTX 580 rumored to be a 768 SP part, and the GTX 680 a 2304 SP part? Every time in the last 5 years, the performance of the next NV flagship card has been exaggerated on our forum.

I don't remember the GTX 580 rumors clearly, but I do recall the 768-shader part, and I think it was smartly dismissed by the community fairly quickly. I thought GK104's physical shader specs were nailed down well in advance of its release, though; I don't remember 2304 cores being thrown around. Titan is going to end up right around 50% faster than GK104 at 2560x1440 and higher resolutions; I think that much is expected. If Titan has 20% OC headroom (which could be a lot to ask of a 500+ mm^2 chip, depending on how well it handles the heat!), and its VRAM can hit 7 GHz about as often as GK104's can, then it should pretty much tie a stock GTX 690 or even surpass it in some instances.
 

VulgarDisplay

Diamond Member
Apr 3, 2009
If Titan is only going to be 235W then Nvidia had better play the overclocking card VERY hard, because even with a 10% perf/watt improvement from architecture and process improvements, 235 watts will translate into something like 35-40% faster, AKA fail at $900.

Which is why I said earlier in this thread that AMD doesn't have to push very hard for more performance to make this card a joke at $900. I understand why they need to try to sell it for that much, but I just don't see 20-30% more performance than an OC'd 7970 being worth the $550 price increase. Some people will still buy these things, which is insane to me.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
One other thing to keep in mind is that the GTX 680 is clocked past its optimal perf/watt level. It's a mid-range die from Nvidia that had its TDP budget pushed past ideal levels to compete with the 7970.

We should probably look to the 670 to get a better indication of the perf/watt of Nvidia's current node, as GK110 will not need clock speed to make performance nearly as much as the 680 does. It should also gain a bit of perf/watt through lower clocks (many units at a lower frequency beat fewer at a higher frequency, which is why they dropped hot clocks).

I think the 670 is more suited for this purpose; the 690 might just be insanely binned, so perhaps taking the middle ground between the three is the best way to approach any thoughts on the perf/watt capabilities of GK110.




On that note, I think both Nvidia and AMD will release ~250W parts, and outside of engine optimizations I feel they'll both be pretty similar in performance. The only real differentiating quality will probably be which offers more hardware, and thus more OC headroom, than the other.

Take the 580 vs the 6970: AMD delivered a card that was pretty close to the 580, but it was pretty much maxed out from the factory; the 6970 wasn't known for its overclocking. In that case AMD delivered a card clocked higher to make up for a lack of hardware. With the 680 vs the 7970 we see pretty much the exact opposite: it's the 7970 from AMD that has the additional hardware to take higher clocks and scale considerably past a 250W TDP, whereas the 680 pretty much starts to choke after about 220W; it simply doesn't have the hardware to support additional clock speeds and turn them into performance the way the 7970 does.
 