Old 02-22-2013, 06:30 PM   #51
BallaTheFeared
Diamond Member
 
BallaTheFeared's Avatar
 
Join Date: Nov 2010
Posts: 8,128
Default

It's going to take a while before other sites pick this up; they've been developing it for over a year.
BallaTheFeared is offline   Reply With Quote
Old 02-22-2013, 07:40 PM   #52
BoFox
Senior Member
 
BoFox's Avatar
 
Join Date: May 2008
Posts: 689
Default

Interesting things about Titan, helping to further boost its performance over GK104:

GK110 vs GK104:
255 MAX Registers / Thread, vs 63.
1536KB L2 Cache, vs 512KB.
From AnandTech:
Quote:
Bandwidth to those register files has in turn been doubled, allowing GK110 to read from those register files faster than ever before. As for the L2 cache, it has received a very similar treatment. GK110 uses an L2 cache up to 1.5MB, twice as big as GF110; and that L2 cache bandwidth has also been doubled.
Plus more, see: http://www.anandtech.com/show/6446/n...ives-at-last/3

Perhaps that explains how Titan can keep all 2688 of its shaders so busy, which is quite extraordinary: better scheduling thanks to the larger caches and register files.
BoFox is offline   Reply With Quote
Old 02-22-2013, 09:06 PM   #53
BoFox
Senior Member
 
BoFox's Avatar
 
Join Date: May 2008
Posts: 689
Default

Overall, it is a bit more than 44% faster than the GTX 680 at 1440p and higher. But since most benchmarks out there are run while the card is still cool (Titan is far more temperature-sensitive than the 680 ever was, with Boost 2.0 dropping clocks by 30-80MHz and shaving about 0.08V in total after 20 minutes, most of it during the first couple of minutes), MORE INVESTIGATION NEEDS TO BE DONE.

Hurry up Xbitlabs, HT4U, etc..!

44% faster than GTX 680 is 338 Voodoopower!

Geforce GTX TITAN 6GB (DX11.1) -- 338 VP --- (* NEW ENTRY! *)

Last edited by BoFox; 02-23-2013 at 07:53 AM.
BoFox is offline   Reply With Quote
Old 02-23-2013, 08:33 AM   #54
BoFox
Senior Member
 
BoFox's Avatar
 
Join Date: May 2008
Posts: 689
Default

***PROPOSAL*** for further investigation into real-world performance of Titan, WITHOUT allowing GPU Boost 2.0 to interfere with preliminary benchmarking when the card is cool for the first 1-2 minutes:

(Google translated: http://translate.google.com/translat...659%2F&act=url )
Quote:
For our benchmarks, GPU Boost 2.0 brings an important limitation. We usually test on an open bench table, where the graphics card gets plenty of cool air at a room temperature of around 22°C: ideal conditions, if you will. Inside cases, however, conditions are often significantly different; especially in summer, even well-ventilated cases reach values well above our 22°C.

Since temperature plays a key role for the GTX Titan, we went to considerable effort for our tests: for every benchmark game, at every resolution, we recorded the clock speeds the card settled at with 28°C intake air, and enforced those clocks for the benchmark runs via Nvidia Inspector. A further point comes into play:
Typical benchmark sequences are 30-to-60-second gameplay snippets, usually preceded by a loading screen.
Here GPU Boost 2.0 can "get a running start" for the benchmark, as it were: the GPU enters the test cooler thanks to the idle loading phase and runs part of it at higher clock speeds. That does not match what a player experiences in everyday use, where longer play sessions let the temperature climb and the clocks sink accordingly. A "standard test" of that kind would therefore hardly do justice to our claim of delivering meaningful game benchmarks.

In summary, we chased the GTX Titan through our course in a total of four settings, to cover every meaningful scenario:

• Standard method, with the clock artificially capped at Nvidia's "guaranteed" boost rate of 876MHz, the same way we have handled things since the GTX 670. These values form the basis of our tests ("@ 876 MHz").
• Free boost behavior on our open test bench with plenty of cool air ("dyn. Boost").
• The minimum clock rate the board settled at with 28°C intake air, applied individually per game. This corresponds to running inside a case at summer temperatures ("28°C").
PCGH includes these settings in its benchmarks to account for REAL-WORLD gameplay, where the card first heats itself up and then, after a few minutes, its surroundings as well.

Also, from HardOCP:
Quote:
In our game testing we noticed three different primary levels of clock speed our video card liked to hover between. We tested this in a demanding game, Far Cry 3. At first, our video card started out at 1019MHz actual clock speed in-game at 1.162v. As the game went on, and the GPU heated up, the clock speed dropped to 967MHz and 1.125v. After 20 minutes of gaming, the GPU clock speed settled at 941MHz and 1.087v. Therefore, our real-world starting point for clock speed was actually 941MHz at 81c max.
How long is an average benchmark run at most review sites? One minute? By the time a benchmark loads up, the card has usually cooled off a good deal, so at the very beginning it is likely to be running about 80MHz higher than it will be a few minutes later.

The question is, how much does that really affect the benchmark scoring?

PCGH's benches show a 5-10% difference in average fps between "dynamic boost" (open-air rig) and "28 degrees Celsius" at 2560x1600 (except for Skyrim, which shows a 19% difference). Sometimes the 28°C result is slower than the 876MHz result, and sometimes it is faster.
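
If reviewers want to quantify this on their own benches, here is a minimal logging sketch (my own, not from any site quoted; it assumes nvidia-smi is on the PATH and that the driver exposes these query fields) that records clocks, temperature, and power draw once per second during a long gaming session:

Code:
import csv
import subprocess
import time

FIELDS = "timestamp,clocks.sm,temperature.gpu,power.draw"

# Append one CSV row per second until interrupted (Ctrl+C to stop).
with open("boost_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(FIELDS.split(","))
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=" + FIELDS,
             "--format=csv,noheader,nounits"],
            text=True)
        writer.writerow(v.strip() for v in out.split(","))
        f.flush()
        time.sleep(1.0)

Plot clocks.sm against time, and any 30-80MHz Boost 2.0 drop over the first few minutes should be plainly visible.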

Computerbase.de is saying the same thing (translated):
Quote:
Moreover, before each test run we "warmed up" the graphics card for a few minutes, so that the temperature rises to a realistic level and GPU Boost adjusts its clock speeds accordingly. We do this because at lower temperatures the GTX Titan clocks higher and thus delivers better FPS values that, after a few minutes, can no longer be reproduced.
This explains Computerbase.de's relatively low scores for Titan (default, not "MAX") compared to most other review sites.

The same goes for Hardware.fr (its translated page explains as much) and its "relatively" low scores for Titan.
Quote:
We had to take the time to observe the behavior of the GTX Titan in detail in each game and at each resolution, to make sure we take our performance measurements under representative conditions.

Here are two examples, with Anno 2070 and Battlefield 3: a quick test, the same test with the temperature stabilized after 5 minutes, and the latter again but with two 120mm fans positioned around the card:

Anno 2070: 75 fps -> 63 fps -> 68 fps
Battlefield 3: 115 fps -> 107 fps -> 114 fps
Quote:
When the GeForce GTX Titan is very cold (the "1006 MHz Sample" line), much colder than normal, it shows a lead of 40% to nearly 50% over the GeForce GTX 680, with the largest gains at extreme resolutions.
...
Roughly speaking, if you have a well-cooled case, the result is likely to fall between the two examples tested. Without efficient cooling, performance will be closer to the less efficient example, while watercooling should allow the GeForce GTX Titan to remain almost permanently at its maximum frequency.
As the PCGH tests show, the differences can be astounding: upwards of 10%, and averaging 5-10% across these few games. The benchmark videos can be viewed at PCGH to judge how long each run actually is, keeping in mind how much GPU Boost 2.0 could affect the overall result, with the largest "boost impact" coming from the first 30-60 seconds especially.

Let's ask American reviewers to look into this as well, and to try to account for it. ~342-343 Voodoopower (after excluding PCGH, CB.de, HW.fr, TPU's CPU-bottlenecked benches at 25x16, etc.), reduced by 5-10%, would be a drastically lower:
~311-327 Voodoopower

What do y'all think about this? It's something serious reviewers need to be aware of in the future. Would AnandTech look into this?

Last edited by BoFox; 02-23-2013 at 10:06 AM.
BoFox is offline   Reply With Quote
Old 02-23-2013, 12:37 PM   #55
wand3r3r
Platinum Member
 
wand3r3r's Avatar
 
Join Date: May 2008
Posts: 2,948
Default

First their boost, and now Boost 2.0, along with TDP limits, and then the card is hard-limited to 265W. None of this is good for overclocking, and it even skews the results. Very interesting find!

If Titanic's real scores are all artificially boosted by 5-10%, it looks worse with each detail uncovered, *for the price*.

It will be fun to see the NV PR/shills try to spin this.
wand3r3r is offline   Reply With Quote
Old 02-23-2013, 12:58 PM   #56
Keysplayr
Elite Member
 
Keysplayr's Avatar
 
Join Date: Jan 2003
Posts: 20,000
Default

Quote:
Originally Posted by wand3r3r View Post
First their boost, and now Boost 2.0, along with TDP limits, and then the card is hard-limited to 265W. None of this is good for overclocking, and it even skews the results. Very interesting find!

If Titanic's real scores are all artificially boosted by 5-10%, it looks worse with each detail uncovered, *for the price*.

It will be fun to see the NV PR/shills try to spin this.
You're probably doing all the spinning this thread needs. Thanks.
__________________
Member of Nvidia Focus Group
NVIDIA Focus Group Members receive free software and/or hardware from NVIDIA from time to time
to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the Members.

i5 2500K Asus P-Z68-V/Gen3 GTX980 SLI
Keysplayr is offline   Reply With Quote
Old 02-23-2013, 01:31 PM   #57
wand3r3r
Platinum Member
 
wand3r3r's Avatar
 
Join Date: May 2008
Posts: 2,948
Cool

Quote:
Originally Posted by Keysplayr View Post
You're probably doing all the spinning this thread needs. Thanks.
Sure, avoid the issue and tar-and-feather the messenger who wants further investigation. Typical strategy, apparently.

I didn't make anything up; I just brought up what's been discovered and added my opinion that the Titanic's price is absurd. The only spinning is you trying to deflect the issue.

What do you have to say about the issue? Benchmarks are allegedly skewed if they don't last long enough for the card to warm up.
wand3r3r is offline   Reply With Quote
Old 02-23-2013, 01:52 PM   #58
VulgarDisplay
Diamond Member
 
Join Date: Apr 2009
Location: Chicago
Posts: 5,926
Default

So the Titan in real-world scenarios is only roughly 20-25% faster than the 7970 GHz Edition, if I'm reading that post correctly.

Didn't we run into this same problem with the GTX 680 reviews, where a hot card would perform worse in benchmarks? I thought other sites reported this back then.
VulgarDisplay is online now   Reply With Quote
Old 02-23-2013, 01:53 PM   #59
Keysplayr
Elite Member
 
Keysplayr's Avatar
 
Join Date: Jan 2003
Posts: 20,000
Default

Quote:
Originally Posted by VulgarDisplay View Post
So the Titan in real-world scenarios is only roughly 20-25% faster than the 7970 GHz Edition, if I'm reading that post correctly.

Didn't we run into this same problem with the GTX 680 reviews, where a hot card would perform worse in benchmarks? I thought other sites reported this back then.
A dozen or so posts from now, it'll be down to 15-20 and 10-15 isn't too much further. You can do it!! Watch.
High-Larious.
__________________
Member of Nvidia Focus Group
NVIDIA Focus Group Members receive free software and/or hardware from NVIDIA from time to time
to facilitate the evaluation of NVIDIA products. However, the opinions expressed are solely those of the Members.

i5 2500K Asus P-Z68-V/Gen3 GTX980 SLI
Keysplayr is offline   Reply With Quote
Old 02-23-2013, 02:03 PM   #60
VulgarDisplay
Diamond Member
 
Join Date: Apr 2009
Location: Chicago
Posts: 5,926
Default

Quote:
Originally Posted by Keysplayr View Post
A dozen or so posts from now, it'll be down to 15-20 and 10-15 isn't too much further. You can do it!! Watch.
High-Larious.
Enough.
VulgarDisplay is online now   Reply With Quote
Old 02-23-2013, 02:52 PM   #61
BoFox
Senior Member
 
BoFox's Avatar
 
Join Date: May 2008
Posts: 689
Wink "Effective" Microstutter-Adjusted Frame Rate:

With this chart from Toms:

Quote:
We continue to see tiny gaps between frames from our single-GPU cards, though the GeForce GTX 690 consecutive frame time difference more than triples, on average. However, the latencies are still so small, and the frame rates so high, that we would still consider this a good result.
Well, judging from this chart below (using the same data as the bar chart above):

GTX 690 is in fact averaging 85 fps, or 11.76 ms per frame.

The "average" consecutive frame time difference is 14.1ms (from the first chart), which, just for illustration's sake, translates to 70.9 fps if taken by itself. (The actual average is 85 fps, according to the blue line above.) So, with the ideal frame being only 11.76ms long (85 fps), the average "slow" and "fast" frames would have to alternate somewhere between 23.5ms (2 x 11.76) long and 0ms long.

Somebody who graduated at the top of Algebra 2++ and isn't too rusty, please calculate the maximum and minimum frame times for the two alternating frames, given a consecutive frame time difference of 14.1ms and an overall average of 85 fps! Show your work!


Answer:

You don't really need algebra: just subtract half of 14.1ms (7.05ms) from the overall average frame time to get the average "fast" frame time, and add half of 14.1ms (7.05ms) to it to get the average "slow" frame time.

Since the average is 85 fps, the average frame time is 11.76ms.

"Fast" alternating frame times would be 11.76 - 7.05 = 4.71ms
"Slow" alternating frame times would be 11.76 + 7.05 = 18.81ms

Welcome to the method for my new patented "EFFECTIVE MICROSTUTTER-ADJUSTED" FRAME RATE (TM):
Translate the "slow" alternating frame time into FPS: 18.81ms = ~53.2 fps


That is because the "slow" alternating frames are what we see EVERY other frame, so they represent the actual "LAG" of what we are seeing. If both frames were equally fast, we would get the full 85 fps. If every other frame took no time at all (0ms), the "slow" frame would be exactly twice as long as the average, effectively halving the frame rate.

53.2 EFFECTIVE MICROSTUTTER-ADJUSTED fps is why Tom's Hardware didn't see a problem with it at all, even though microstuttering existed in its "micro" form (53.2 fps is still pretty smooth)!

Compared against the actual 85 frames-per-second "count", 53.2 fps is about 37% less than 85 fps.
In this case, then, microstuttering costs about 37% of the "effective" performance.
Even with such microstuttering, the GTX 690 is still "effectively" 16% faster than the GTX 680's 46 fps.
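
For anyone who wants to plug in other numbers, here is the same arithmetic as a minimal Python sketch (my own illustration, not anything from Tom's):

Code:
def effective_fps(avg_fps, consecutive_delta_ms):
    # Average frame time: 1000/85 = 11.76 ms.
    avg_frame_ms = 1000.0 / avg_fps
    # Alternating frames sit delta/2 above and below the average.
    slow_frame_ms = avg_frame_ms + consecutive_delta_ms / 2.0
    # Every other frame is this slow, so report the slow frame as fps.
    return 1000.0 / slow_frame_ms

print(effective_fps(85, 14.1))  # ~53.15, i.e. the ~53.2 fps above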

However, I think I have figured out thus far that FRAPS "cannot" measure consistent microstutter frame times much below 5ms in graphs with severe microstuttering. The fast frame could be as short as 0ms, which would effectively "halve" the frame rate count; that is what PCPer's contention was all about.

Half of 85 fps is still 42.5 fps, so even with ABSOLUTE microstuttering, Tom's Hardware would probably still perceive that as "smooth".

Last edited by BoFox; 02-23-2013 at 03:08 PM.
BoFox is offline   Reply With Quote
Old 02-23-2013, 03:04 PM   #62
rich_
Junior Member
 
Join Date: Feb 2013
Posts: 6
Default

I think you guys are missing the point of the Titan. And no, with Boost 2.0 it's very overclock-friendly: it doesn't limit the power to the card; in fact it un-limits it, at the user's discretion and at the user's own risk.
The point of Titan:
Titan is a single GPU that, when clocked properly, effectively gives you the power of two GTX 670s (at slightly lower clock speeds) because it has exactly double the CUDA cores, on a single GPU.
Tri-SLI is the most you can use and still get your money's worth in performance.
Yes, the Titan is expensive, but that's because it is the ONLY SLI option that will give you the full complement of SIX 670s' worth of performance. With a 690 you're going to put in two of those and get roughly 75% scaling; with a Titan, you put in two and you get 100% scaling, and you put in three and you still get that 100% scaling.
rich_ is offline   Reply With Quote
Old 02-23-2013, 03:08 PM   #63
VulgarDisplay
Diamond Member
 
Join Date: Apr 2009
Location: Chicago
Posts: 5,926
Default

Quote:
Originally Posted by rich_ View Post
I think you guys are missing the point of the Titan. And no, with Boost 2.0 it's very overclock-friendly: it doesn't limit the power to the card; in fact it un-limits it, at the user's discretion and at the user's own risk.
The point of Titan:
Titan is a single GPU that, when clocked properly, effectively gives you the power of two GTX 670s (at slightly lower clock speeds) because it has exactly double the CUDA cores, on a single GPU.
Tri-SLI is the most you can use and still get your money's worth in performance.
Yes, the Titan is expensive, but that's because it is the ONLY SLI option that will give you the full complement of SIX 670s' worth of performance. With a 690 you're going to put in two of those and get roughly 75% scaling; with a Titan, you put in two and you get 100% scaling, and you put in three and you still get that 100% scaling.
You can overclock it, but it's my understanding that you have a hard limit of 265W; it throttles once you hit that limit. Someone may release a hacked BIOS for the card eventually.
VulgarDisplay is online now   Reply With Quote
Old 02-23-2013, 03:17 PM   #64
rich_
Junior Member
 
Join Date: Feb 2013
Posts: 6
Default

Quote:
Originally Posted by VulgarDisplay View Post
You can overclock it, but it's my understanding that you have a hard limit of 265W; it throttles once you hit that limit. Someone may release a hacked BIOS for the card eventually.
You might be right: 265W overall, because the limit they removed was the core voltage.
rich_ is offline   Reply With Quote
Old 02-23-2013, 03:20 PM   #65
Elfear
VC&G Moderator
 
Elfear's Avatar
 
Join Date: May 2004
Posts: 6,158
Default

Quote:
Originally Posted by BoFox View Post
***PROPOSAL*** for further investigation into real-world performance of Titan, WITHOUT allowing GPU Boost 2.0 to interfere with preliminary benchmarking when the card is cool for the first 1-2 minutes:

(Google translated: http://translate.google.com/translat...659%2F&act=url )

PCGH includes these settings in its benchmarks to account for REAL-WORLD gameplay, where the card first heats itself up and then, after a few minutes, its surroundings as well.

Also, from HardOCP:

How long is an average benchmark run at most review sites? One minute? By the time a benchmark loads up, the card has usually cooled off a good deal, so at the very beginning it is likely to be running about 80MHz higher than it will be a few minutes later.

The question is, how much does that really affect the benchmark scoring?

PCGH's benches show a 5-10% difference in average fps between "dynamic boost" (open-air rig) and "28 degrees Celsius" at 2560x1600 (except for Skyrim, which shows a 19% difference). Sometimes the 28°C result is slower than the 876MHz result, and sometimes it is faster.

Computerbase.de is saying the same thing (translated):

This explains Computerbase.de's relatively low scores for Titan (default, not "MAX") compared to most other review sites.

The same goes for Hardware.fr (its translated page explains as much) and its "relatively" low scores for Titan.

As the PCGH tests show, the differences can be astounding: upwards of 10%, and averaging 5-10% across these few games. The benchmark videos can be viewed at PCGH to judge how long each run actually is, keeping in mind how much GPU Boost 2.0 could affect the overall result, with the largest "boost impact" coming from the first 30-60 seconds especially.

Let's ask American reviewers to look into this as well, and to try to account for it. ~342-343 Voodoopower (after excluding PCGH, CB.de, HW.fr, TPU's CPU-bottlenecked benches at 25x16, etc.), reduced by 5-10%, would be a drastically lower:
~311-327 Voodoopower

What do y'all think about this? It's something serious reviewers need to be aware of in the future. Would AnandTech look into this?
Looks like multiple websites are reporting the degradation in performance over time, so I think it warrants some further investigation.


Quote:
Originally Posted by VulgarDisplay View Post
So the Titan in real-world scenarios is only roughly 20-25% faster than the 7970 GHz Edition, if I'm reading that post correctly.

Didn't we run into this same problem with the GTX 680 reviews, where a hot card would perform worse in benchmarks? I thought other sites reported this back then.
That actually makes a lot of sense, because the average performance gain of Titan over the 680 and 7970 varies widely between review sites. Some of that is normal, since the games and settings tested differ, but the swing with Titan seems greater than average.

So the best-case results we've seen in reviews are probably representative of someone living in Alaska with all the windows open, or of someone with liquid cooling. Otherwise, performance drops off after 5-10 minutes of gaming.
__________________
4770k@4.7Ghz | Maximus VI Hero | 2x290@1150/1450 | 16GB DDR3 | Custom H20
Elfear is online now   Reply With Quote
Old 02-23-2013, 03:27 PM   #66
notty22
Diamond Member
 
notty22's Avatar
 
Join Date: Jan 2010
Location: Beantown
Posts: 3,325
Default

And then there are posters who tell us they run two and three cards in stacked multi-GPU setups, at higher overclocks than the tech sites report, supposedly for 24/7 gaming. You know those people are dreaming.

Here at Anand's, Ryan noted that Nvidia is again leading in the AAA game of the year.
Battlefield 3

Quote:
AMD and NVIDIA have gone back and forth in this game over the past year, and as of late NVIDIA has held a very slight edge with the GTX 680. That means Titan has ample opportunity to push well past the 7970GE, besting AMD’s single-GPU contender by 52% at 2560. Even the GTX 680 is left well behind, with Titan clearing it by 48%.


Wow.
__________________
i5 4670K@4100mhz, 32GB Kingston 1600,H50
MSI GTX 970 gaming Seasonic SS-760XP2

240gb SSD, Win 8.1
Let's make sure history never forgets... the name... 'Enterprise'. Picard out.
notty22 is offline   Reply With Quote
Old 02-23-2013, 03:27 PM   #67
VulgarDisplay
Diamond Member
 
Join Date: Apr 2009
Location: Chicago
Posts: 5,926
Default

Quote:
Originally Posted by Elfear View Post
That actually makes a lot of sense, because the average performance gain of Titan over the 680 and 7970 varies widely between review sites. Some of that is normal, since the games and settings tested differ, but the swing with Titan seems greater than average.

So the best-case results we've seen in reviews are probably representative of someone living in Alaska with all the windows open, or of someone with liquid cooling. Otherwise, performance drops off after 5-10 minutes of gaming.
This also sucks for those trying to fit this card into a small enclosure to get a lot of performance: the smaller, hotter enclosure will make your investment slower. Pair it with a good case.
VulgarDisplay is online now   Reply With Quote
Old 02-23-2013, 03:28 PM   #68
BoFox
Senior Member
 
BoFox's Avatar
 
Join Date: May 2008
Posts: 689
Red face

Quote:
Originally Posted by rich_ View Post
I think you guys are missing the point of the Titan. And no, with Boost 2.0 it's very overclock-friendly: it doesn't limit the power to the card; in fact it un-limits it, at the user's discretion and at the user's own risk.
The point of Titan:
Titan is a single GPU that, when clocked properly, effectively gives you the power of two GTX 670s (at slightly lower clock speeds) because it has exactly double the CUDA cores, on a single GPU.
Yeah, 2x 660 Ti specs, pretty much.
The low default temperature target was set with SLI / Tri-SLI, and with longevity, in mind. However, I'm pretty sure that the majority of those who have only one Titan would definitely move the sliders to the right.

What should I call it, then? Titan "Boost"? Titan "Chill Boost"? "Cool Boost"? "Cold Boost"? 265W Boost? It's mainly the temperature that affects benchmark runs at review sites, not the power target, though voltage control is linked to the temperature. So I'll not call it 265W, since these sites do not adjust the power slider past 250W for non-overclocked tests.

Therefore I shall call it "Temp Boost": it can mean either "Temporary Boost" or "Temperature Boost". 342 Voodoopower

Without the Temp Boost, it'd be at least 5% lower, or no more than 327 VP.

Last edited by BoFox; 02-23-2013 at 11:07 PM.
BoFox is offline   Reply With Quote
Old 02-23-2013, 04:03 PM   #69
wand3r3r
Platinum Member
 
wand3r3r's Avatar
 
Join Date: May 2008
Posts: 2,948
Default

Quote:
Originally Posted by rich_ View Post
I think you guys are missing the point of the Titan. And no, with Boost 2.0 it's very overclock-friendly: it doesn't limit the power to the card; in fact it un-limits it, at the user's discretion and at the user's own risk.
The point of Titan:
Titan is a single GPU that, when clocked properly, effectively gives you the power of two GTX 670s (at slightly lower clock speeds) because it has exactly double the CUDA cores, on a single GPU.
Tri-SLI is the most you can use and still get your money's worth in performance.
Yes, the Titan is expensive, but that's because it is the ONLY SLI option that will give you the full complement of SIX 670s' worth of performance. With a 690 you're going to put in two of those and get roughly 75% scaling; with a Titan, you put in two and you get 100% scaling, and you put in three and you still get that 100% scaling.
Ok, lots of FUD & BS there; your whole post is FUD. Did you just innocently miss all of these details, or did your marketing handbook mislead you?

Go read an SLI review (and a Titan review).

SLI scaling varies. You will almost never see 3x gains with tri-SLI.

http://www.guru3d.com/articles_pages...review,11.html

Far Cry 3.
The first game in the review; results vary, but it already proves your post wrong. This is not the best scaling SLI has shown, but your claim was that you will see 100%/200%/300% results, which is certainly not true.

Titan: 53 fps
2x: 90 fps = 1.70x (70% scaling, not 100%)
3x: 97 fps = 1.83x (83% scaling total: 70% from the second card, 13% from the third)

In the best cases you will see significant gains, especially with two cards, but after that it's generally diminishing returns.
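
To make the arithmetic explicit, here is a minimal Python sketch (my own illustration) using the Guru3D numbers above:

Code:
# Guru3D Far Cry 3 averages for 1/2/3-way Titan, in fps.
fps = {1: 53, 2: 90, 3: 97}

for n in (2, 3):
    total = fps[n] / fps[1]                    # 1.70x, 1.83x
    marginal = (fps[n] - fps[n - 1]) / fps[1]  # card 2 adds 70%, card 3 adds 13%
    print(f"{n}-way: {total:.2f}x total; card {n} adds {marginal:.0%} of one card")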

Next up, overclocking.

Quote:
Originally Posted by rich_ View Post
I think you guys are missing the point of the Titan. And no, with Boost 2.0 it's very overclock-friendly: it doesn't limit the power to the card; in fact it un-limits it, at the user's discretion and at the user's own risk.
Um, I guess you didn't read the reviews.

Quote:
First and foremost, Titan still has a hard TDP limit, just like GTX 680 cards. Titan cannot and will not cross this limit, as it’s built into the firmware of the card and essentially enforced by NVIDIA through their agreements with their partners. This TDP limit is 106% of Titan’s base TDP of 250W, or 265W. No matter what you throw at Titan or how you cool it, it will not let itself pull more than 265W sustained.
http://www.anandtech.com/show/6774/n...nce-unveiled/2

Last edited by wand3r3r; 02-23-2013 at 04:06 PM.
wand3r3r is offline   Reply With Quote
Old 02-23-2013, 04:25 PM   #70
Elfear
VC&G Moderator
 
Elfear's Avatar
 
Join Date: May 2004
Posts: 6,158
Default

Quote:
Originally Posted by notty22 View Post
And then there are posters who tell us they run two and three cards in stacked multi-GPU setups, at higher overclocks than the tech sites report, supposedly for 24/7 gaming. You know those people are dreaming.
Uhhh. Not sure who that is in reference to...
__________________
4770k@4.7Ghz | Maximus VI Hero | 2x290@1150/1450 | 16GB DDR3 | Custom H20
Elfear is online now   Reply With Quote
Old 02-23-2013, 04:39 PM   #71
VulgarDisplay
Diamond Member
 
Join Date: Apr 2009
Location: Chicago
Posts: 5,926
Default

Quote:
Originally Posted by Elfear View Post
Uhhh. Not sure who that is in reference to...
Yes, all the posters who have this GPU that hasn't even been made available at retail yet...
VulgarDisplay is online now   Reply With Quote
Old 02-24-2013, 03:47 PM   #72
The Alias
Senior Member
 
The Alias's Avatar
 
Join Date: Aug 2012
Posts: 508
Default

Quote:
Originally Posted by notty22 View Post
And then there are posters who tell us they run two and three cards in stacked multi-GPU setups, at higher overclocks than the tech sites report, supposedly for 24/7 gaming. You know those people are dreaming.

Here at Anand's, Ryan noted that Nvidia is again leading in the AAA game of the year.
Battlefield 3

Wow.
just one review versus every other one posted lol
The Alias is offline   Reply With Quote
Old 02-24-2013, 06:03 PM   #73
BallaTheFeared
Diamond Member
 
BallaTheFeared's Avatar
 
Join Date: Nov 2010
Posts: 8,128
Default

I say leave it alone; the fan is what's causing the reduced clocks, and it's set at stupidly low noise levels.

Unless we're taking fan noise on reference cards versus performance into account, I don't think it matters in the overall scheme of things.
BallaTheFeared is offline   Reply With Quote
Old 02-24-2013, 06:20 PM   #74
blackened23
Diamond Member
 
Join Date: Jul 2011
Posts: 8,556
Default

Was "voodoopower" inspired by the original 3dfx brand? Anyway, you've put a lot of work into this and it shows. I like it, kudos.
blackened23 is offline   Reply With Quote
Old 02-24-2013, 11:00 PM   #75
Elfear
VC&G Moderator
 
Elfear's Avatar
 
Join Date: May 2004
Posts: 6,158
Default

Quote:
Originally Posted by BallaTheFeared View Post
I say leave it alone; the fan is what's causing the reduced clocks, and it's set at stupidly low noise levels.

Unless we're taking fan noise on reference cards versus performance into account, I don't think it matters in the overall scheme of things.
If the performance delta were small I would agree with you, but sites are reporting anywhere from a 5% to a 19% difference depending on how long they let the card warm up. That's worth noting.
__________________
4770k@4.7Ghz | Maximus VI Hero | 2x290@1150/1450 | 16GB DDR3 | Custom H20
Elfear is online now   Reply With Quote