GeForce Titan coming end of February

Page 80 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
Status
Not open for further replies.

BoFox

Senior member
May 10, 2008
689
0
0
(From: http://www.xtremesystems.org/forums...onsumer-part&p=5171316&viewfull=1#post5171316 ):

===============================

"
Crysis 2
Radeon HD 7970 GHz Edition: 68 %
GeForce GTX 680: 65 %
GeForce GTX Titan: 100 %
----------------
GeForce GTX Titan vs Radeon HD 7970 GHz Edition: (100/68)*100 = 147 % = 47 % faster
GeForce GTX Titan vs GeForce GTX 680: (100/65)*100 = 154 % = 54 % faster

3DMark 2013 X Firestrike
Radeon HD 7970 GHz Edition: 77 %
GeForce GTX 680: 67 %
GeForce GTX Titan: 100 %
----------------
GeForce GTX Titan vs Radeon HD 7970 GHz Edition: (100/77)*100 = 130 % = 30 % faster
GeForce GTX Titan vs GeForce GTX 680: (100/67)*100 = 149 % = 49 % faster

3DMark Vantage GPU
Radeon HD 7970 GHz Edition: 76 %
GeForce GTX 680: 81 %
GeForce GTX Titan: 100 %
----------------
GeForce GTX Titan vs Radeon HD 7970 GHz Edition: (100/76)*100 = 132 % = 32 % faster
GeForce GTX Titan vs GeForce GTX 680: (100/81)*100 = 124 % = 24 % faster

Battlefield 3
Radeon HD 7970 GHz Edition: 74 %
GeForce GTX 680: 65 %
GeForce GTX Titan: 100 %
----------------
GeForce GTX Titan vs Radeon HD 7970 GHz Edition: (100/74)*100 = 135 % = 35 % faster
GeForce GTX Titan vs GeForce GTX 680: (100/65)*100 = 154 % = 54 % faster

Far Cry 3
Radeon HD 7970 GHz Edition: 70 %
GeForce GTX 680: 73 %
GeForce GTX Titan: 100 %
----------------
GeForce GTX Titan vs Radeon HD 7970 GHz Edition: (100/70)*100 = 143 % = 43 % faster
GeForce GTX Titan vs GeForce GTX 680: (100/73)*100 = 137 % = 37 % faster

Hitman
Radeon HD 7970 GHz Edition: 81 %
GeForce GTX 680: 73 %
GeForce GTX Titan: 100 %
----------------
GeForce GTX Titan vs Radeon HD 7970 GHz Edition: (100/81)*100 = 124 % = 24 % faster
GeForce GTX Titan vs GeForce GTX 680: (100/73)*100 = 137 % = 37 % faster

Conclusion
GeForce GTX Titan average increase over Radeon HD 7970 GHz Edition: (47 + 30 + 32 + 35 + 43 + 24) / 6 = 35 %
GeForce GTX Titan average increase over GeForce GTX 680: (54 + 49 + 24 + 54 + 37 + 37) / 6 = 42.5 %


All benchmarks done with drivers so premature they aren't even launch day drivers... Either 3DMark is way off the ball, or drivers are going to show some massive improvements as the scores I've seen in 3DMark paint a much more favourable picture. As I said, I haven't got game performance info, so I can't comment on the accuracy of the above."
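The arithmetic in the quoted table is easy to reproduce. A minimal sketch (scores exactly as quoted, Titan normalized to 100 %):

```python
# Relative scores from the quoted table (GTX Titan normalized to 100 %).
scores = {
    "Crysis 2":             {"HD 7970 GHz": 68, "GTX 680": 65},
    "3DMark Fire Strike X": {"HD 7970 GHz": 77, "GTX 680": 67},
    "3DMark Vantage GPU":   {"HD 7970 GHz": 76, "GTX 680": 81},
    "Battlefield 3":        {"HD 7970 GHz": 74, "GTX 680": 65},
    "Far Cry 3":            {"HD 7970 GHz": 70, "GTX 680": 73},
    "Hitman":               {"HD 7970 GHz": 81, "GTX 680": 73},
}

def pct_faster(rel_score):
    """How much faster a 100 % card is than one scoring rel_score %."""
    return 100 / rel_score * 100 - 100

averages = {}
for card in ("HD 7970 GHz", "GTX 680"):
    gains = [pct_faster(s[card]) for s in scores.values()]
    averages[card] = sum(gains) / len(gains)
    print(f"Titan vs {card}: avg +{averages[card]:.1f} %")
```

Averaging before rounding gives +35.0 % over the 7970 GHz and +42.4 % over the GTX 680; the quote's 42.5 % comes from rounding each game to a whole percent first.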
 

sensiballfeel

Junior Member
Feb 17, 2013
7
0
0
[quotes BoFox's post above in full]

These benchies are fake. Someone just took the fake graph http://i.imgur.com/1ds2B5J.jpg and typed the numbers out, oh my! :biggrin: Hopefully real benchies come soon so we can find out whether it's really a lot faster, or only a little, like nvidia's own benchies suggest.
 

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
NV is really pushing it into the stratosphere like those 280 days.

Imho,

Absolutely! Why would this be surprising? There are no free rides, and they may feel their products are worth it considering the hard work and risk they take on!

Vote with your wallet, voice your view, but to me it comes down to whether the market accepts the price.
 

BoFox

Senior member
May 10, 2008
689
0
0
The GTX680's texture fill-rate and compute (shader performance) are being wasted relative to the 670, which points to a memory bandwidth bottleneck, an ROP bottleneck, or both.

1058mhz GTX680 vs. 980mhz GTX670

Shader performance / Gflops = +23%
Texture fill-rate = +23%
vs.
Memory bandwidth = 0%
Pixel fill-rate = +8%

2560x1600 4AA
GTX680 vs. 670 = +10%

2560x1600 8AA
GTX680 vs. 670 = +7%
Source

The increase in the Titan's performance could be greater than the increase in its theoretical functional units over the 680, because the 680's shaders and texture units are underutilized as a result of one or two bottlenecks (ROP and/or memory bandwidth). That is how you might get >50% over the 680 with the Titan in some games at 1600p. <Just my 2 cents>
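For reference, those deltas fall straight out of the published unit counts (1536 shaders / 128 TMUs / 32 ROPs on the 680 vs 1344 / 112 / 32 on the 670, both on a 256-bit bus at the same memory clock). A quick sketch:

```python
# Theoretical throughput deltas, GTX 680 (1058MHz boost) vs GTX 670 (980MHz boost).
gtx680 = {"clock": 1058, "shaders": 1536, "tmus": 128, "rops": 32, "bw_gbs": 192.3}
gtx670 = {"clock":  980, "shaders": 1344, "tmus": 112, "rops": 32, "bw_gbs": 192.3}

def delta(unit):
    """Fractional advantage of the 680 over the 670 for one functional unit type."""
    if unit == "bw_gbs":  # memory bandwidth doesn't scale with core clock
        return gtx680[unit] / gtx670[unit] - 1
    return (gtx680["clock"] * gtx680[unit]) / (gtx670["clock"] * gtx670[unit]) - 1

for unit, label in [("shaders", "shader GFLOPs"), ("tmus", "texture fill-rate"),
                    ("rops", "pixel fill-rate"), ("bw_gbs", "memory bandwidth")]:
    print(f"{label}: {delta(unit):+.0%}")
```

This reproduces the +23% / +23% / +8% / 0% figures in the post above.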



If you looked at 20+ reviews when the HD7970GE launched, it won the majority of them. The conclusions about noise levels and power consumption were also questionable, since AMD stated from the beginning that no retail HD7970GE cards would use the reference blower. It's not AT's fault, since AMD didn't send them retail HD7970GE cards, but at the same time the noise and power characteristics of those cards had little to do with retail 7970GEs. The retail cards were cooler, quieter and used less power. Additionally, when the HD7970GE launched, cards were going for $469.99, not $499.99. The review didn't cover any of those points. I'm not blaming AT directly, since other websites drew similar conclusions. It's AMD's fault for sending a reference 925MHz 7970 with a flashed BIOS instead of waiting one more month to send reviewers actual retail 7970GE cards (which are all after-market).

Other websites did follow-up with proper HD7970 GE reviews testing actual retail versions. HD7970GE used less power than HD7970 925mhz reference card, ran way cooler and quieter.
http://www.techpowerup.com/reviews/VTX3D/Radeon_HD_7970_X-Edition/26.html
Interesting - bandwidth seems to really help a lot at higher resolutions, while ROPs help a lot with high AA (especially 8x AA). I guess when the rez is already very high, the bandwidth is already starved, so applying additional AA just hurts even more?

Titan seems to have about 5% more "relative" bandwidth for its GPU processing power than GTX 680 does, that's all (it has exactly 50% more bandwidth, but only about 45% more theoretical GFLOPs and GT/s).

However, Titan will be considerably more "ROP-starved" than GTX 680, relatively speaking. That means that, compared to the GTX 680, it will hold up better with 4x AA than with 8x AA at higher resolutions.
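A sanity check on the "relative bandwidth" figures above, using the reference specs (Titan: 2688 cores at 876MHz boost on a 384-bit bus; GTX 680: 1536 cores at 1058MHz boost on a 256-bit bus; both with 6008MHz-effective GDDR5). This is a rough sketch; with these boost clocks the relative-bandwidth edge works out closer to 3-4% than 5%:

```python
def gflops(cores, core_mhz):
    """Peak single-precision throughput: 2 FLOPs (one FMA) per core per clock."""
    return 2 * cores * core_mhz / 1000

def bandwidth_gbs(bus_bits, effective_mhz):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * effective_mhz / 1000

titan_flops, gtx680_flops = gflops(2688, 876), gflops(1536, 1058)
titan_bw, gtx680_bw = bandwidth_gbs(384, 6008), bandwidth_gbs(256, 6008)

print(f"GFLOPs advantage:    {titan_flops / gtx680_flops - 1:+.0%}")
print(f"bandwidth advantage: {titan_bw / gtx680_bw - 1:+.0%}")
print(f"bandwidth per FLOP vs GTX 680: "
      f"{(titan_bw / titan_flops) / (gtx680_bw / gtx680_flops):.3f}x")
```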

I wouldn't be surprised if 7970GE comes a bit closer to Titan at super-high resolutions like 7680x4320.. only without AA ( 0x AA ) - where memory bandwidth becomes a really limiting factor here.. while 32 ROPs don't hurt 7970GE too badly. However, I do not consider Tahiti to be 100% efficient with its "asymmetrical" memory bus, with the limited crossbar access in between.

Personally, I find it a bit disappointing that Nvidia didn't increase the ROP count from GF100, keeping it at 48 ROPs. That means Titan's max pixel fillrate is only slightly higher than that of GTX 580. It would've been nice to see NV go for 64 ROPs and a 512-bit bus. Nvidia was able to do 512-bit with GT200 about 4.5 years ago with GDDR3, and ATI did 512-bit nearly 6 years ago with the HD 2900 XT. Now that GPUs are more bandwidth-hungry than ever before, with decreasing bandwidth-to-compute ratios, architecture optimizations simply failed to overcome the basic rule: the sheer, primary need for bandwidth. That is why HD 4870 was vastly better than HD 4850, thanks to GDDR5. Yet GTX 680 (and Titan) have far lower bandwidth ratios than HD 4870 or even 4850.

Exactly how much more die space would be needed for only 16 more ROPs (for a total of 64) and a 512-bit bus on the Titan's GK110 chip? Perhaps Nvidia would not be able to clock the memory as high on a 512-bit bus (probably 5000-5500MHz), but they could have stayed with 4GB while being around 8-10% faster thanks to the precious additional 25% bandwidth. 4GB would be pretty much the sweet spot for these cards nowadays (with Hitman: Absolution being the only game to need more than 4GB at surround resolutions with AA - there could even be a custom 8GB version just for kicks and giggles). I'd guess only about 30-50 mm^2 extra would be required for 64 ROPs and a 512-bit bus?



BTW, that link you provided (http://www.techpowerup.com/reviews/VTX3D/Radeon_HD_7970_X-Edition/26.html ) is actually an oc'ed version of the 7970 that came out before the GHz Edition. I think it uses even lower voltage than the original 7970's, which would explain lower power consumption than even the original 7970 at 925MHz despite the much higher 1050MHz clock (or equal voltage, but with better cooling to reduce leakage). Otherwise, why could it only be overclocked to 1100MHz with much better cooling, when most vanilla 7970s could o/c to around 1125-1175MHz or higher on stock voltage?
 
Last edited:

SirPauly

Diamond Member
Apr 28, 2009
5,187
1
0
What does this statement even mean?:


Imho,

When companies have a competitive advantage or opportunity -- usually this is where their predator fangs start to show as they desire to feed and devour value. Why not maximize revenue and margins? No free rides.

AMD had an opportunity to maximize revenue and margins by setting 28nm pricing against a year-old, heavily premium-priced 40nm SKU -- the GTX 580 -- as its competitor and the market price.

People desire to place blame, and the lack of 28nm competition from nVidia contributed greatly to AMD's opportunity to raise MSRP prices by 50 percent -- so one can make the point that nVidia was to blame, not AMD.

AMD doesn't really have a single GPU that can compete with Titan -- nVidia's 28nm pricing has been consistent, with the GTX 690 at $999. The lack of competition for Titan has contributed greatly to nVidia's opportunity to raise prices on a single GPU -- so one can make the point that AMD is to blame, not nVidia.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Higher margins != higher revenue all the time. You reach equilibrium, then there are diminishing returns. Titan is going to be in very limited supply, so nVidia can price it to the moon. All they need to worry about is pricing it low enough that they'll still sell out of them.
 

MisterMac

Senior member
Sep 16, 2011
777
0
0
Where are the bloody benchies.

Let's get this fucking whine fest over - and see performance.



We all know this is going to take cake - and NVidia is pricing it after said cake.

Then you can whine and dine - about it not being "good enough".
But just like the 79xx first released - they're pricing it after harder markets than previous generations.


Also Lonbjerg is like Shintai on crack in VCG?
 

BoFox

Senior member
May 10, 2008
689
0
0
Another thing is that while Titan has nearly 45% more shader and texturing "power" than GTX 680, the clocks are lower: GTX 680's boost clock is 1058MHz, vs Titan's 876MHz.

Say, there's a 1000Mhz card with only 500 shaders, and a 500MHz card with 1000 shaders. Which one would be faster?
The 1000Mhz card would undoubtedly be faster - perhaps by 2%, or even by well over 10%, depending on how poorly the scheduling/work-distribution optimizations are done for a given game. Faster nearly always > wider lanes with slower tasks.

So, you could as well subtract 1-2% from Titan's overall "GFLOPs" advantage over GTX 680 for most games or workloads, as a modest rule of thumb. That is what I've discovered through massive research for the Voodoopower ratings, but I might be wrong if NV in fact has some "surprise" scheduler/workload-distribution optimizations for Titan.

That's why HD 5870 failed to perform as much faster than HD 5850 as the GFLOPs+bandwidth would suggest, despite having massive 1600sp's. It was not very well optimized to use all of its 1600sp for most games. As games are becoming more and more shader-intensive, the 5870 is starting to distance itself just a little bit more from 5850. The VLIW4 Caymans are starting to appear a bit weaker, with lower number of shaders (6950 sometimes losing to 5870), despite all the optimizations that AMD did for VLIW4 in some of the newer games that are just overwhelming in shader demands. Back then, Barts XT (HD 6870) with its higher clocks came so close to HD 5870, despite the 5870 having nearly 43% more shaders based on the same VLIW5 arch, but now we see the Barts cards losing a little bit of steam in the newer games, where HD 5850 starts to distance itself more from HD 6850, and HD 7770 trading punches with HD 6850 also.

Of #1 importance for long-term success with games (other than well-maintained driver support) is shader/texture power.
#2 is bandwidth.. depending on how much bandwidth there is, relatively. If it is using only 128-bit DDR3, then of course it will be a hard ceiling to break through no matter how powerful the GPU is.
#3 (the least important) is ROPs. As more demanding games force you to turn off AA or reduce the settings/resolution, ROPs matter less - that is why NV made the GT 630 with only 4 ROPs but more shader power than a 9600GT that had 16 ROPs: the primary focus was on reducing die size and power consumption (the ROPs take a huge amount of die). With modern games, the settings would have to be pretty low, and most gamers with such low-end cards would never bother turning on AA, so they did not need the ROPs. They only needed the shaders for these games to be playable, with GDDR5 memory being the next most important thing (as long as the bus isn't 64-bit; if it is, bandwidth becomes the most important factor, since anything below 20-30GB/s is such a definitive ceiling).
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Russian, tell us how you really feel now.

I think you should get 3 Titans and give away 2 of your 670s to people who can't afford GPU upgrades because you are a very kind/generous person. :awe:



Nerd fact of the day: GTX Titan is like 2 GTX660Tis with slightly lower clocks.

GTX660Ti -> Titan

1344 CUDA cores x 2
24 ROPs x 2
112 TMUs x 2
Memory bandwidth 144GB/sec x 2

Go to this chart and take 2x the GTX660Ti score * (837 / 915) = 183%, i.e. 83% faster than GTX660Ti
http://www.computerbase.de/artikel/grafikkarten/2013/test-17-grafikkarten-im-vergleich/3/

This is what we'd get with Titan at 837mhz:

1.920 × 1.080 4xAA/16xAF
GTX690 = 204%
Titan = 183%
HD7970GE = 132%
GTX680 = 128%
GTX660Ti = 100%

Rating - 2.560 × 1.600 4xAA/16xAF
GTX690 = 217%
Titan = 183%
HD7970GE = 143%
GTX680 = 130%
GTX660Ti = 100%

Average performance compared to GTX690 ~ 87% of GTX690
Average performance increase over HD7970 GE ~ 33%
Average performance increase over GTX680 ~ 42%

This is what we'd get with Titan at 915 Base / 975mhz Boost:

1.920 × 1.080 4xAA/16xAF
GTX690 = 204%
Titan = 200%
HD7970GE = 132%
GTX680 = 128%
GTX660Ti = 100%

Rating - 2.560 × 1.600 4xAA/16xAF
GTX690 = 217%
Titan = 200%
HD7970GE = 143%
GTX680 = 130%
GTX660Ti = 100%

Average performance compared to GTX690 ~ 95% of GTX690
Average performance increase over HD7970 GE ~ 46%
Average performance increase over GTX680 ~ 55%
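The model above is simply "double a GTX 660 Ti, then derate by the clock ratio". A minimal sketch of that back-of-envelope estimate (915MHz being the 660 Ti's reference base clock):

```python
GTX660TI_BASE_MHZ = 915  # reference GTX 660 Ti base clock

def titan_estimate(titan_mhz, base_score=100.0):
    """Imaginary 'GTX 660 Ti x 2 on one die', scaled by the clock ratio."""
    return 2 * base_score * titan_mhz / GTX660TI_BASE_MHZ

print(f"{titan_estimate(837):.0f}%")  # 837MHz rumoured clock -> ~183%
print(f"{titan_estimate(915):.0f}%")  # at clock parity -> 200% exactly
```

This assumes perfect scaling across the doubled functional units, as the post says, so it is an upper bound rather than a prediction.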
 
Last edited:

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Apparently not all SKUs will be the same when it comes to clock speed. And it's not only EVGA, MSI and Asus:

GALAXY's GeForce GTX Titan is still part of the Kepler family, based on the 28nm GK110 core with 2688 stream processors and a PCIe 3.0 x16 interface. It will be equipped with 6GB of GDDR5 on a 384-bit bus, and supports DirectX 11.1, Shader Model 5.0, OpenGL 4.3 and OpenCL 1.2.
The GALAXY GeForce GTX Titan's core frequency is 875MHz, but it reportedly does not use GPU Boost. On the memory side, the GTX Titan uses GDDR5 at an effective 6008MHz on that 384-bit interface, for a total capacity of 6GB.
The card uses a dual-slot design, with a 235W TDP and 8-pin + 6-pin power connectors.
For outputs, the GeForce GTX Titan will be equipped with two DVI ports, one HDMI and one DisplayPort.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Bo
"Say, there's a 1000Mhz card with only 500 shaders, and a 500MHz card with 1000 shaders. Which one would be faster?
The 1000Mhz card would undoubtedly be faster - perhaps by 2%, or even much more than 10% faster, according to how poorly the scheduling distribution optimizations are done for a certain game. Faster nearly always > wider lanes with slower tasks." This is not guaranteed. You can always build an application that utilizes all the shaders and is faster. Also, the 1000 MHz card (as it is already clocked higher) can never make up the deficit with overclocking, while the other one can, for the most part.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
Back to 235w again are we?

No boost, unlocked voltage? What's the voltage table? Nobody knows? Why am I asking then? I dunno!
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
The one leak had an Asus card at 915/975 or so (can't remember the exact clocks), but it's likely an OC model, so clocks may not be 100% set in stone. So far there is no indication about unlocked voltages though.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
Voltages can be adjusted per bios in a very comfortable range. A little birdy told me ;)
Think kingpin...darn, I've said too much already.
 

BoFox

Senior member
May 10, 2008
689
0
0
Bo
"Say, there's a 1000Mhz card with only 500 shaders, and a 500MHz card with 1000 shaders. Which one would be faster?
The 1000Mhz card would undoubtedly be faster - perhaps by 2%, or even much more than 10% faster, according to how poorly the scheduling distribution optimizations are done for a certain game. Faster nearly always > wider lanes with slower tasks." This is not guaranteed. You can always build an application that utilizes all the shaders and is faster. Also, the 1000 MHz card (as it is already clocked higher) can never make up the deficit with overclocking, while the other one can, for the most part.
True, that's true, but that would rarely be the case. Just look at how the HD 5870 really "sucked", failing to fully utilize all of its 1600sp as one would have expected on paper. The HD 6870 with only 1120sp was only about 9% slower, while the 5870 still had about 15% more bandwidth, and gobs more texturing power also.
I just found this correlation with pretty much all other applicable video cards in my research, by comparing the specs and seeing where the actual performance is at. Speed usually > number of shaders.

That's why overclocking it would pretty much give a rather linear gain, (assuming that the bandwidth is also increased linearly).
On the other hand, doubling the number of shaders (along with bandwidth) might not give as linear of a gain, while keeping the core at the same clock.
 
Last edited:

BoFox

Senior member
May 10, 2008
689
0
0
[quotes RussianSensation's post above in full]
Computer Nerd FAIL OF THE DAY:

Your link isn't showing GTX 660 Ti SLI results..

Just kidding! Must be a wrong link?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Your link isn't showing GTX 660 Ti SLI results..
Just kidding! Must be a wrong link?

The Titan doubles the functional resources of a single GTX660Ti. I am not comparing GTX660Ti in SLI to the Titan. I am looking at GTX660Ti as a base 100% and doubling that score because I do not want SLI inefficiencies. I am essentially making an imaginary GTX660Ti x 2 single die GPU. After that I adjust any difference between the speed of all those functional units by the differential in the GPU clocks between a reference GTX660Ti (915mhz) and the Titan (837mhz). This assumes perfect architectural efficiency moving from GTX660Ti to the Titan. I think I am pretty close. ^_^
 
Last edited:

BoFox

Senior member
May 10, 2008
689
0
0
Ah, I see!

So, about 42% over GTX 680 would yield about 333 Voodoopower!

Also, according to your estimations (based on computerbase), 87% of GTX 690's 383 VP is also 333 VP! (Quite consistent, yay!)

A let-down for most who were guessing higher!!!

Well, let's see if the reviews match that.
 
Last edited: