Updated List of Video Card GPU OVERALL Performance VP Ratings - TITAN update!


BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I guess we'll just disagree. Just call it boost, like they do for the GHz Edition and the 7950 Boost, and be done with it. There's no reason for the extra legwork in a sample- and case-dependent situation like this, where the cards will downclock because of low fan speed after heating up.

There would be no variation if the fan ramped properly to maintain max boost or the power target, instead of temperature being prioritized.
 

BoFox

Senior member
May 10, 2008
689
0
0
Was "voodoopower" inspired by the original 3dfx brand? Anyway, you've put a lot of work into this and it shows. I like it, kudos.
Thanks, man - I originally called it "GPU-power" or "GPU Power"; it's in the description in the 3rd post. Anyway, after a short while, I realized that the Voodoo5 6000 (3dfx's last card ever made, with very few working samples around today) was really, really close to 1 GPU power. So I thought "Voodoopower" sounded way cooler, while paying homage to 3dfx's Voodoo brand that pretty much started it all.

I personally had a Voodoo3 3000, a 3500 (with TV tuner - I still have it as a collectible), and a Voodoo5 5500. Loved them! One of my rich college friends had dual Voodoo2s in SLI! It blew me away with how well it played Half-Life at 1024x768! The legacy of Voodoo is still very much alive in Nvidia and AMD (ATi), with engineers who used to work for 3dfx. Even though the Voodoo brand was limited to 16-bit (quasi "22-bit") color, 3dfx remained my favorite by far, even when I had an nVidia Riva TNT, etc. I ultimately had to move on when the $200 GF4 Ti4200 proved to be a whopping FOUR times faster than my quickly aging Voodoo5 5500 in Q3 Arena, less than 2 years after the 5500 launched at $300. It was time to move on from the Glide games anyway - I never liked the inefficient Glide wrappers and the quirks associated with them.
 

BoFox

Senior member
May 10, 2008
689
0
0
Update - Added the HD 7750 DDR3 version (2GB / 4GB), rated at 44 VP

http://ht4u.net/reviews/2012/budget_grafikkarten_benchmarks_gt_640_gtx_650_hd_7750_ddr3/
and
http://www.hardware.fr/focus/76/amd-radeon-hd-7750-ddr3-test-cape-verde-etouffe.html

I already rated the GT 640 GDDR5 OEM version at 53% faster than the retail DDR3 version (although the GDDR5 version had slightly higher clocks).
For the 7750 DDR3, the bandwidth ceiling is even lower: a paltry 25.6 GB/s, versus 28.5 GB/s for the GT 640 (which is 11% higher).
 

BoFox

Senior member
May 10, 2008
689
0
0
(Re: "Microstutter-Adjusted Frame Rate" from post #61)

For Crysis 3, here are the charts from Tom's:
1920-VH.png
1920-high-ver.png


We can see that the GTX TITAN, which is churning out a 53.8 fps average (from the first chart), has an AVERAGE (not 75th or 95th percentile) frame time variance of 6.4 ms (shown on the 2nd chart).

53.8 fps = 18.6 ms average frame time

I'd add half of the AVERAGE frame time variance to 18.6 ms, to get the "upper" end of the average (the average time of the average "slow" frame).
18.6 ms + 3.2 ms = 21.8 ms (which translates to ~46 fps)

So, rather than 53.8 fps, the GTX TITAN is giving an average "feel" of 46 fps, when taking the average frame time variance into account.

Also, let's compare GTX 680 against HD 7970:

GTX 680's 45.1 fps, with an average frame time variance of 4.6 ms:
22.2 ms (45.1 fps) + 2.3 ms upper average variance = 24.5 ms
24.5 ms translates to ~40.9 fps "effective"

HD 7970's 42.0 fps, with an average variance of 1.2 ms:
23.8 ms (42 fps) + 0.6 ms upper average variance = 24.4 ms
24.4 ms translates to ~41.0 fps "effective" average

Of course, that's not accounting for the 75th or 95th percentile frame time variance at all. If the variance were consistent - with the 75th and 95th percentiles the same as the average - then the "effective" average fps alone would give us a much more complete picture. Besides the GTX 680 effectively performing slower than the HD 7970 on average, we know that the 680 stutters worse at the 75th and 95th percentiles.

Do you think that hardware review sites should adopt this method of "effective" average fps, based upon average frame time variance (at least for single-GPU cards, where FRAPS is more accurate than with multi-GPU setups)? Should it at least be displayed next to the actual average fps count?

What do you guys think?
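The conversion above can be sketched in a few lines of Python (a minimal sketch of the method described in this post, using the figures quoted from Tom's charts - not anything the review sites themselves use):

```python
def effective_fps(avg_fps, avg_variance_ms):
    """Add half of the average frame-time variance to the mean frame time,
    then convert back to fps - the 'effective' average described above."""
    frame_time_ms = 1000.0 / avg_fps            # mean frame time in ms
    upper_ms = frame_time_ms + avg_variance_ms / 2.0
    return 1000.0 / upper_ms

# Figures quoted from Tom's Crysis 3 charts:
print(round(effective_fps(53.8, 6.4), 1))  # GTX TITAN -> 45.9
print(round(effective_fps(45.1, 4.6), 1))  # GTX 680   -> 40.9
print(round(effective_fps(42.0, 1.2), 1))  # HD 7970   -> 41.0
```

(The tiny differences from the hand-worked numbers come from not rounding the intermediate frame times.)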
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Tom's saw that everyone is picking up on frametimes and said, WTH, how hard can it be?
Me, I'm pretty sure they're clueless when it comes to their new methodology.

How can I be so sure?
They never bothered to explain what they mean by "consecutive frametime latency" at the 50th, 70th, or 95th percentile.
Furthermore, I'm positive they got the idea from a few threads on this very forum (consecutive frame latency!), then used God-knows-what formula to come up with "interesting" results, to say the least.

There are plenty of cases where slower cards beat their faster and more expensive sister cards situated further up the hierarchy, so I'm not even going to bother listing them.

Oops, looks like they have done some explaining --> http://www.tomshardware.com/reviews/gaming-processor-frame-rate-performance,3427-2.html
Perhaps they know what they are doing after all.
 

wand3r3r

Diamond Member
May 16, 2008
3,180
0
0
Tom's saw that everyone is picking up on frametimes and said, WTH, how hard can it be?
Me, I'm pretty sure they're clueless when it comes to their new methodology.

How can I be so sure?
They never bothered to explain what they mean by "consecutive frametime latency" at the 50th, 70th, or 95th percentile.
Furthermore, I'm positive they got the idea from a few threads on this very forum (consecutive frame latency!), then used God-knows-what formula to come up with "interesting" results, to say the least.

There are plenty of cases where slower cards beat their faster and more expensive sister cards situated further up the hierarchy, so I'm not even going to bother listing them.

Oops, looks like they have done some explaining --> http://www.tomshardware.com/reviews/gaming-processor-frame-rate-performance,3427-2.html
Perhaps they know what they are doing after all.

So what's "wrong" with their results, seeing as you're so skeptical?
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I guess my problem is that they're averaging everything, even the spikes.

I don't care about the average when looking at frame times; it's not an fps chart. What I want to see is the abnormal, not the normalized. The spikes are what interest me; they represent a momentary pause in output. Taking all the spikes and then averaging them across a large sample size seems contradictory to the point of looking at frame times within the second.

What Tom's seems to be doing is trying to create an average fps chart out of frame times, and to me that makes no sense.
 

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
So what's "wrong" with their results, seeing as you're so skeptical?

If slower cards from both camps are "winning" against their faster, more capable, and more expensive cousins, I think there's something wrong with Tom's racing rules.
 

Haserath

Senior member
Sep 12, 2010
793
1
81
I don't care about averages either. I care about every second having no hiccup.

Tom's is only making averages of frame variance over the entire run, the worst 25%, and the worst 5%. What I'd like to know is whether the worst variance will affect the gameplay.

As in, if there's smoke and several soldiers unloading bullets, will I lag for that moment? Who cares if I don't lag while sitting on a balcony looking over a valley of trees?

But this sort of testing is much more difficult to nail down. There is a point where good-enough testing should be sufficient to help you choose a card.

I believe this is good enough.
 

loccothan

Senior member
Mar 8, 2013
268
2
81
loccothan.blogspot.com
VP = Voodoopower


___Tier AA Ultra High-End --(For somebody who cannot get enough out of it!)
  1. 142%-- Radeon Ares II HD 7970 GHz Edition X2 6GB (DX11.1) -- 407 VP --- (* NEW ENTRY! *)
  2. 132%-- Geforce GTX 690 4GB (DX11.1) -- 378 VP
  3. 129%-- Radeon HD 7990 Devil 13 (Turbo) 6GB (DX11.1) -- 370 VP
  4. 119%-- Geforce GTX TITAN (Temp. Boost*) 6GB (DX11.1) -- 342 VP --- (* NEW ENTRY! *)

    Tier A Super High-End --(Plays like a wicked beast)
  5. 100%-- Geforce Mars II GTX 580 x2 3GB (DX11) -- 287 VP
    (times 27% over Tier B's 100%)

    Tier B Upper High-End --(Plays like a tamed beast)
  6. 112%-- Radeon HD 7970 GHz Edition 3GB (DX11.1) -- 253 VP --- (* NEW ENTRY! *)
  7. 104%-- Geforce GTX 680 2GB (DX11.1) -- 235 VP
  8. 103%-- Radeon HD 6990 4GB (DX11) -- 232 VP
  9. 102%-- Geforce GTX 590 3GB (DX11) -- 230 VP
  10. 100%-- Radeon HD 7970 3GB (DX11.1) -- 226 VP
    (times 17% over Tier C's 100%)

    Tier C Mid High-End --(Plays most games at maxed-out settings @ 2560x1600 w/ 4x AA)
  11. 111%-- Radeon HD 7950 Boost Edition 3GB (DX11.1) -- 214 VP --- (* NEW ENTRY! *)
  12. 110%-- Geforce GTX 670 2GB (DX11.1) -- 213 VP
  13. 109%-- Radeon Ares HD 5870X2 4GB (DX11) -- 210 VP
  14. 107%-- Geforce GTX 560 Ti 2Win 2GB (DX11) -- 207 VP
  15. 100%-- Radeon HD 7950 3GB (DX11.1) -- 193 VP
    (times 12% over Tier D's 100%)

    Tier D Lower High-End --(Plays most games at maxed-out settings @ 2560x1600)
  16. 110%-- Radeon HD 7870 ("XT" / Tahiti LE / Boost Edition) 2GB (DX11.1) -- 190 VP --- (* NEW ENTRY! *)
  17. 109%-- Radeon HD 6870X2 2GB (DX11) -- 188 VP
  18. 107%-- Geforce GTX 660 Ti 2GB (DX11.1) -- 185 VP --- (* NEW ENTRY! *)
  19. 105%-- Geforce GTX 580 3GB (DX11) -- 181 VP
  20. 104%-- Geforce GTX 580 1.5GB (DX11) -- 180 VP
  21. 104%-- Radeon HD 5970 4GB (DX11) -- 180 VP
  22. 101%-- Radeon HD 5970 2GB (DX11) -- 175 VP
  23. 100%-- Radeon HD 7870 GHz Edition 2GB (DX11.1) -- 173 VP
    (times 23% over Tier E's 100%)
So I see that the nVidia GTX 6xx is listed with the same DX11.1 features as the AMD/Radeon 7xxx. No, it does not have them: the GTX 6xx only has basic DX11 support, and only the GTX 7xx will have full support of DX11.1. But the Radeon 7xxx has FULL support of DX11.1, which is only available in Windows 8!
Please make that table right!
Here: http://msdn.microsoft.com/en-us/library/windows/desktop/hh404562(v=vs.85).aspx
As you can see, there are a lot of new features, and capable hardware IS ESSENTIAL (AMD 7xxx or nV 7xx). Look ahead and see how DX9 evolved: first DX9.a, 9.b, 9.c, then 9.2, 9.3!
http://blogs.amd.com/play/2012/10/26/reminder-amd-radeon-is-ready-for-windows-8/


And about the DX11.1 service pack for Win7: there's only software emulation of HTML5 for IE10 (new in Windows 8's Internet Explorer), so it's untrue that the GTX 6xx line has it. It doesn't; the AMD 7xxx line has FULL DX11.1, but only in Windows 8!



As you can see, the Titan does not support DX11.1, only plain old DX11!

http://www.heise.de/newsticker/meld...endig-zu-DirectX-11-1-kompatibel-1754119.html

http://www.tomshardware.com/news/Ke...sco-Feature-Set-Graphics-Core-Next,19839.html
 

Elfear

Diamond Member
May 30, 2004
7,081
596
126
Maybe I'm not understanding how Voodoopower works, but why is the Titan listed as 35% better than the 7970 GHz (342 VP vs 253 VP)? The reviews seem to put it 15-30% faster on average. It's a similar case with Titan's gains over the 680 - it's listed at the very high end of the range.

GTX-TITAN-93.jpg

Source

perfrel_2560.gif

Source

IMG0040574.gif

Source

Rating - 2.560 × 1.600 4xAA/16xAF

Nvidia GeForce GTX Titan SLI 206%
Nvidia GeForce GTX 690 153%
Nvidia GeForce GTX Titan (Max)136%
Nvidia GeForce GTX Titan 122%
AMD Radeon HD 7970 GHz 100%
AMD Radeon HD 7970 91%
Nvidia GeForce GTX 680 90%
Nvidia GeForce GTX 670 83%
AMD Radeon HD 7950 Boost 80%
AMD Radeon HD 7950 76%
Nvidia GeForce GTX 660 Ti 70%
AMD Radeon HD 7870 68%
Nvidia GeForce GTX 660 63%
AMD Radeon HD 7850 55%
Source
 

BoFox

Senior member
May 10, 2008
689
0
0
So I see that the nVidia GTX 6xx is listed with the same DX11.1 features as the AMD/Radeon 7xxx. No, it does not have them: the GTX 6xx only has basic DX11 support, and only the GTX 7xx will have full support of DX11.1. But the Radeon 7xxx has FULL support of DX11.1, which is only available in Windows 8!
Please make that table right!
Here: http://msdn.microsoft.com/en-us/library/windows/desktop/hh404562(v=vs.85).aspx
As you can see, there are a lot of new features, and capable hardware IS ESSENTIAL (AMD 7xxx or nV 7xx). Look ahead and see how DX9 evolved: first DX9.a, 9.b, 9.c, then 9.2, 9.3!
http://blogs.amd.com/play/2012/10/26/reminder-amd-radeon-is-ready-for-windows-8/


And about the DX11.1 service pack for Win7: there's only software emulation of HTML5 for IE10 (new in Windows 8's Internet Explorer), so it's untrue that the GTX 6xx line has it. It doesn't; the AMD 7xxx line has FULL DX11.1, but only in Windows 8!



As you can see, the Titan does not support DX11.1, only plain old DX11!

http://www.heise.de/newsticker/meld...endig-zu-DirectX-11-1-kompatibel-1754119.html

http://www.tomshardware.com/news/Ke...sco-Feature-Set-Graphics-Core-Next,19839.html

Well, in the Heise.de link, it says:
"GeForce 600 series: The NVIDIA Control Panel incorrectly reports DirectX support as DirectX 11.0 instead of DirectX 11.1."
But it looks like you are right that NV does not support 4 features in DX11.1:
Not supported DX11_1 features:

- Target-Independent Rasterization (2D-Rendering)
- 16xMSAA Rasterization (2D-Rendering)
- Orthogonal Line Rendering Mode
- UAV in non-pixel-shader stages
So I'll be editing it.
 

BallaTheFeared

Diamond Member
Nov 15, 2010
8,115
0
71
I thought none of those four things affects gaming - this is a gaming list and not a compute list, right?
 

BoFox

Senior member
May 10, 2008
689
0
0
Maybe I'm not understanding how Voodoopower works, but why is the Titan listed as 35% better than the 7970 GHz (342 VP vs 253 VP)? The reviews seem to put it 15-30% faster on average. It's a similar case with Titan's gains over the 680 - it's listed at the very high end of the range.

GTX-TITAN-93.jpg

Source

perfrel_2560.gif

Source

IMG0040574.gif

Source

Rating - 2.560 × 1.600 4xAA/16xAF

Nvidia GeForce GTX Titan SLI 206%
Nvidia GeForce GTX 690 153%
Nvidia GeForce GTX Titan (Max)136%
Nvidia GeForce GTX Titan 122%
AMD Radeon HD 7970 GHz 100%
AMD Radeon HD 7970 91%
Nvidia GeForce GTX 680 90%
Nvidia GeForce GTX 670 83%
AMD Radeon HD 7950 Boost 80%
AMD Radeon HD 7950 76%
Nvidia GeForce GTX 660 Ti 70%
AMD Radeon HD 7870 68%
Nvidia GeForce GTX 660 63%
AMD Radeon HD 7850 55%
Source
First of all, the GTX Titan was rated at "Temp Boost" clocks, since that is what most review sites did with Titan - most likely all of the American review sites.

Your 3rd source (hardware.fr) and 4th source (computerbase.de) ensured that the GTX Titan was already running warm before doing benches. It was explained in post 54:
Here are 2 examples, with Anno 2070 and Battlefield 3: a quick test, the same test with temperature stabilized after 5 minutes, and the same test again but this time with two 120mm fans positioned around the card:

Anno 2070: 75 fps -> 63 fps -> 68 fps
Battlefield 3: 115 fps -> 107 fps -> 114 fps
For Anno 2070, that is a 16% drop in performance from the quick test to "temp stabilized after 5 minutes" (i.e., the quick run is 19% faster than the stabilized run). PCGamesHardware.de also found a similar drop with Skyrim.

The same Computerbase.de article still shows a "warmed up" Titan being:
37% faster than 7970GE @ 1920x1080 w/ 8xAA
36% faster than 7970GE @ 5760x1080 w/ 8xAA
(which is more than the 35% implied by the "Temp Boost" rating of 342 VP vs the 7970 GE's 253 VP)

However, I would rate the "warmed up" Titan at least 5-10% lower than 342 VP - somewhere between 311 and 327 VP.
Say, 320 VP: that is only 26% faster than the HD 7970 GE's 253 VP. For games that lose as much from warming up as Anno 2070 above, think about how much lower it would be: 342 VP scaled by the same drop (63/75) is about 287 VP, which would be only 13% faster than the HD 7970 GE's 253 VP.
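The same back-of-the-envelope math in Python (a sketch only; scaling the rating by 63/75 assumes Titan's VP drops in proportion to the Anno 2070 warm-up result, which is a worst-case assumption, not a measurement):

```python
def pct_faster(vp_a, vp_b):
    """Percentage by which rating vp_a exceeds rating vp_b."""
    return (vp_a / vp_b - 1.0) * 100.0

TITAN_BOOST_VP = 342   # "Temp Boost" rating
HD7970GE_VP = 253

print(round(pct_faster(TITAN_BOOST_VP, HD7970GE_VP)))   # -> 35

# Worst case: scale the rating by the Anno 2070 warm-up drop (75 fps -> 63 fps)
warmed_vp = round(TITAN_BOOST_VP * 63 / 75)
print(warmed_vp)                                        # -> 287
print(round(pct_faster(warmed_vp, HD7970GE_VP)))        # -> 13
```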

Sadly, only 4 European sites have ensured that Titan was warmed up before doing benchmark runs, to reflect real-world gameplay. We do not know whether other sites tested it at temperatures that would be normal after 10-20 minutes of real-world gameplay. Anandtech shows a 34% performance increase over the 7970 GE. We do not even know how many times Anandtech repeated benchmark runs back-to-back to ensure consistency in the results. Some sites repeat benchmark runs 2-3 times. AlienBabelTech, which did 89 tests per card (with 30 games plus synthetics), showed a very similar result to Anandtech's overall average. Several other reputable sites (list of Titan reviews: http://forums.anandtech.com/showthread.php?t=2302828 ) are also showing similar overall averages.

I will be going over the HD 7970 GE's overall results again to see if they need to be higher with the latest drivers. What kept me from raising the ratings further than I already did with the Never Settle results was the frametime measurements that really affect gameplay. Geforce cards also suffer a degree of consecutive frametime variance in some games, but so far it seems that Radeon cards suffer in more games than Geforce cards do. As more review sites touch upon this, we will know better for sure.

Thanks- your input is appreciated. If there's more, then please feel free to share! Especially new articles!
 

BoFox

Senior member
May 10, 2008
689
0
0
I thought none of those four things affecting gaming, this is a gaming list and not a compute list right?
Well... is it really fair to say DX11.1?
http://forums.anandtech.com/showthread.php?p=34277712#post34277712
http://forums.anandtech.com/showpost.php?p=34281116&postcount=41
Kepler supports UAVs in pixel and compute shaders, but not in the tessellation-related (hull and domain) shaders or any other shader type before the rasterizer. And in my opinion, that is gaming-relevant. Use of UAVs in vertex, geometry, hull, and domain shaders has no potential use for anything outside of gaming - where else does one use these shader types? So nV's claim that they support all gaming-relevant DX11.1 features is plain wrong, in my opinion.
If I'm not mistaken, they don't support a single feature of level 11_1 which is not optional for level 11_0.
It might just be that Microsoft didn't want UAV access by all shader types to be a new option for level 11_0. If Kepler doesn't support 64 UAVs, or TIR, or anything else from level 11_1, there is no way for Nvidia to expose UAV access by all shader types.
A UAV is a random-access (read/write) view on a buffer. You can do scattered writes to a UAV, for example. You can already render out into UAVs if you want in 11.0.
I think you cannot go without an RT _and_ DS (+UAV) in 11.0, because then the whole pipeline basically goes to sleep. I'd say the "feature" is really only a guarantee that nothing just turns off; you should be able to do the feature itself without problems if your hardware isn't a bit inflexible.
nVidia's problem is likely the 64 UAVs, not that you can't turn off RT & DS together.

Edit: You also lose the resolution information when you don't have an RT & DS bound, so it goes hand in hand with the "Target-Independent Rasterization" feature of DirectX 11.1.
http://forum.beyond3d.com/showthread.php?t=58668&page=233

Ahh, Nvidia is now admitting that Kepler runs only feature level 11_0:
http://www.nvidia.com/titan-graphics-card/features-technology
 

blackened23

Diamond Member
Jul 26, 2011
8,548
2
0
Yeah, I've seen several mentions of GK110 also being feature level 11_0.

I don't think it matters, though - DX11.1 doesn't really bring anything to the table at all, to my knowledge. Most importantly, doesn't it require Windows 8? Something that many PC gamers will jeer at :p

I certainly don't see it as something to complain about. I haven't seen any compelling evidence showing that DX11.1 brings anything worthwhile to the table.
 

Elfear

Diamond Member
May 30, 2004
7,081
596
126
First of all, the GTX Titan was rated at "Temp Boost" clocks, since that is what most review sites did with Titan - most likely all of the American review sites.

Your 3rd source (hardware.fr) and 4th source (computerbase.de) ensured that the GTX Titan was already running warm before doing benches. It was explained in post 54

For Anno 2070, that is a 16% drop in performance from the quick test to "temp stabilized after 5 minutes" (i.e., the quick run is 19% faster than the stabilized run). PCGamesHardware.de also found a similar drop with Skyrim.

The same Computerbase.de article still shows a "warmed up" Titan being:
37% faster than 7970GE @ 1920x1080 w/ 8xAA
36% faster than 7970GE @ 5760x1080 w/ 8xAA
(which is more than the 35% implied by the "Temp Boost" rating of 342 VP vs the 7970 GE's 253 VP)

However, I would rate the "warmed up" Titan at least 5-10% lower than 342 VP - somewhere between 311 and 327 VP.
Say, 320 VP: that is only 26% faster than the HD 7970 GE's 253 VP. For games that lose as much from warming up as Anno 2070, think about how much lower it would be: 342 VP scaled by the same drop (63/75) is about 287 VP, which would be only 13% faster than the HD 7970 GE's 253 VP.

Sadly, only 4 European sites have ensured that Titan was warmed up before doing benchmark runs, to reflect real-world gameplay. We do not know whether other sites tested it at temperatures that would be normal after 10-20 minutes of real-world gameplay. Anandtech shows a 34% performance increase over the 7970 GE. We do not even know how many times Anandtech repeated benchmark runs back-to-back to ensure consistency in the results. Some sites repeat benchmark runs 2-3 times. AlienBabelTech, which did 89 tests per card (with 30 games plus synthetics), showed a very similar result to Anandtech's overall average. Several other reputable sites (list of Titan reviews: http://forums.anandtech.com/showthread.php?t=2302828 ) are also showing similar overall averages.

I will be going over the HD 7970 GE's overall results again to see if they need to be higher with the latest drivers. What kept me from raising the ratings further than I already did with the Never Settle results was the frametime measurements that really affect gameplay. Geforce cards also suffer a degree of consecutive frametime variance in some games, but so far it seems that Radeon cards suffer in more games than Geforce cards do. As more review sites touch upon this, we will know better for sure.

Thanks- your input is appreciated. If there's more, then please feel free to share! Especially new articles!

Well you kind of solidified my point. ABT and Anandtech are at the very high end of the scale (in regards to Titan's average perf. increase over last gen cards) yet the VP rating is even higher than that. Shouldn't real world running conditions be factored into the performance rating? I can't think of too many people who would game in 5-min increments to allow the card to run at full boost. Sure individual owners will adjust settings to try and minimize the throttling but that opens up another can of worms in how you factor that into the VP score.

The throttling is probably an issue with Boost 2.0 and will hopefully be fixed in the near future. Until then though you have to go along with what reviewers have found while benching. If a good number of sites have found that Titan downclocks after 5+ minutes of playing, I'd think that should be factored into the VP score.

Much thanks by the way for consolidating all the info you have. The VP is a nice resource when comparing older cards to the current crop.
 

loccothan

Senior member
Mar 8, 2013
268
2
81
loccothan.blogspot.com
Well, in the Heise.de link, it says:
But it looks like you are right that NV does not support 4 features in DX11.1:

So I'll be editing it.

Thx :cool: - because when the new GTX 7xx arrives, it will have full hardware DX11.1!

But the AMD 7xxx series has it already (the AMD/Radeon 5xxx and 6xxx have only DX11.0).
So the nV brothers will have to wait for the GTX 7xx, or buy a Radeon 7xxx!
 

BoFox

Senior member
May 10, 2008
689
0
0
Done - all Geforce 6xx/Titan changed from DX11.1 to DX11. Tough love, Nvidia! :wub: DX11.1 does affect tessellation, and that might give the Radeon 7xxx series a bit more leeway with DX11.1 games in the future, but that remains to be seen. It would probably be the case with some of the new console ports from the PS4 and the next Xbox, which have AMD's DX11.1-capable GPUs.
Well you kind of solidified my point. ABT and Anandtech are at the very high end of the scale (in regards to Titan's average perf. increase over last gen cards) yet the VP rating is even higher than that. Shouldn't real world running conditions be factored into the performance rating? I can't think of too many people who would game in 5-min increments to allow the card to run at full boost. Sure individual owners will adjust settings to try and minimize the throttling but that opens up another can of worms in how you factor that into the VP score.

The throttling is probably an issue with Boost 2.0 and will hopefully be fixed in the near future. Until then though you have to go along with what reviewers have found while benching. If a good number of sites have found that Titan downclocks after 5+ minutes of playing, I'd think that should be factored into the VP score.

Much thanks by the way for consolidating all the info you have. The VP is a nice resource when comparing older cards to the current crop.
It's not just ABT and Anand at the high end of the scale - some others show even more, by a couple percentage points. Take a look at all of the reviews.

But your point of contention is a very reasonable one - so, I'll look at the European sites that tested Titan in normal warmed-up gaming conditions, and derive a rating from these results alone.

Until more evaluations are done on Titan's practical performance, there will be 2 different VP results for Titan - one from review sites overall, and one from the few sites that took practical conditions into consideration.

BTW, when is Anandtech going to do a review of tri-SLI watercooled Titans? Xbitlabs still doesn't have anything yet. I'm hoping that an American site will really look into this - even Anand. Of course, this is not what Nvidia would want, but..
 

loccothan

Senior member
Mar 8, 2013
268
2
81
loccothan.blogspot.com
Done - all Geforce 6xx/Titan changed from DX11.1 to DX11. Tough love, Nvidia! :wub: DX11.1 does affect tessellation, and that might give the Radeon 7xxx series a bit more leeway with DX11.1 games in the future, but that remains to be seen..

... in new games: Unreal Engine 4, CryEngine 3 (updated), Luminous Engine, Chrome 4, Frostbite 2.1, and more! I like nV too ;-) but if AMD is more efficient and ready for the next gen... TIME to move on! :D
 

x3sphere

Senior member
Jul 22, 2009
722
24
81
www.exophase.com
They had DX10.1 too, which only AMD supported at one point, and it was a total dud. Also, even if NV's hardware isn't in full compliance, they can still support the implementation to a certain degree, and for games that may be enough.