
[HardOCP] Tomb Raider Performance Review w/ new drivers

But I could be wrong in what I'm using to assess the situation.

Don't you find it odd that Titan has better perf/watt than the 680?

Or that the 7970 uses an insane 97 more watts than a 7870?

The truth is, AMD and NVIDIA only really learn about their chips when they're almost ready for production... and since Titan came almost a year later than any other card, it's kind of unfair to all the other cards.

Hence, I agree with you about the 680 vs. 7970 😉
 
No, actually I don't.

The 680 was clocked up to beat Tahiti at release; it would be much better suited to 170 W than 195 W, as the performance difference is minimal for the power difference.

I don't find 97 W to be insane; it buys a decent amount of performance over the 7870 for that power allotment.


😉
 
Is there any chance that you could also do IQ testing of both brands side by side, and not just of game quality settings on one brand?

We used to; it's just a matter of workload. Including both review methods is like doing two reviews in one, in every review. In game-specific evaluations, we will always include IQ testing.
 
Is it representative of the overall gameplay experience, or just of a particular stage, area, or scene? I.e., Anand used to do a waterfall scene that performed much worse on AMD cards, but it only occurred in that spot.

We pick areas that utilize all of the graphical features the game supports; in this game, that's tessellation, TressFX, ambient occlusion, shadowing, etc. We want our run-throughs to represent everything the game is capable of.

The "benchmark" built into the game, by contrast, is built to stress TressFX and not the other effects the game is capable of. We look for a balanced run-through area that represents the whole game. Certain "benchmarks" don't do that.
 
No, actually I don't.

The 680 was clocked up to beat Tahiti at release; it would be much better suited to 170 W than 195 W, as the performance difference is minimal for the power difference.

I don't find 97 W to be insane; it buys a decent amount of performance over the 7870 for that power allotment.

Let's put it this way...
From the 7870 to the 7970, we are trading 3 watts for every 1% of performance.
From the 7770 to the 7870, it's 1 watt for every 2% of performance.

Those ratios aren't insane?
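The ratios above can be checked with back-of-the-envelope arithmetic. Only the ~97 W gap between the 7870 and 7970 is stated in the thread; the percentage gains below are illustrative assumptions chosen to reproduce the quoted "3 W per 1%" and "1 W per 2%" figures, not measurements.

```python
# Marginal-efficiency sketch of the post's argument. The 97 W figure is
# from the thread; the performance-gain percentages are assumptions
# picked to match the quoted ratios.
def watts_per_percent(extra_watts: float, perf_gain_pct: float) -> float:
    """Extra board power paid per 1% of additional performance."""
    return extra_watts / perf_gain_pct

# 7870 -> 7970: ~97 W for an assumed ~32% gain -> ~3 W per 1%
upper_step = watts_per_percent(97, 32)
# 7770 -> 7870: an assumed ~40 W for an assumed ~80% gain -> ~0.5 W per 1%
lower_step = watts_per_percent(40, 80)
print(round(upper_step, 1), round(lower_step, 1))
```

Under those assumptions the top step of the stack costs roughly six times as many watts per point of performance as the middle step, which is the poster's point.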
 
I dunno what this talk is about a dual-GPU setup being "required" to play it at 2560x resolution. My single 680 is handling it just fine at 2560x1440. I haven't benchmarked it, but it's smooth gameplay, which is all that should matter for a game like this (IMO). I'm just using FXAA, of course, but everything else is on. This plays better with a 360 controller too, IMO.
 
Generally, if you are busy wrestling with a controller during a game, you are less likely to notice that 18-20 fps is "semi-choppy".
 
We pick areas that utilize all of the graphical features the game supports; in this game, that's tessellation, TressFX, ambient occlusion, shadowing, etc. We want our run-throughs to represent everything the game is capable of.

The "benchmark" built into the game, by contrast, is built to stress TressFX and not the other effects the game is capable of. We look for a balanced run-through area that represents the whole game. Certain "benchmarks" don't do that.

I'm confused here. You claim that you look for a balanced run-through area that represents the whole game, but at the same time the article states that you "looked for scenes, levels, or areas which produced lower framerates than others", i.e. basically the hardest areas in the game, and as such not a balanced run-through representative of the whole game. So which is it, a balanced area or the hardest area?

Both approaches certainly have their merits; I'm just a bit confused about the exact criteria you're using here.
 
Both, actually. Oftentimes the hardest areas are the ones that employ all the effects at intensity. We use both criteria.
 
I like your reviews, Brent, but I feel you place too much emphasis on your subjective "playable" settings, as I often find your settings and frame rates completely unplayable, making the benches worthless to me.
Personally, I think more apples-to-apples benches would be more useful, at different graphical settings; then users could see for themselves the performance at different graphical levels and decide which card performs better at what they consider playable settings.
 
I have a question about Tomb Raider performance... there is a setting to output 24 Hz, and my projector handles 24 Hz. Would it be better to change it to that setting (instead of 60), turn on vsync, and beef up the settings? How playable is a game at 24 fps if it is rock solid? I know film looks good at 24, but I don't know about gaming...
 
Try it. Some games do well at 24 Hz, others do not. If you can hold it at a steady 24, it may work well.

I think TR does well even at lower frame rates, so it could work...
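For context on the 24 Hz question, the frame-time arithmetic is simple: a rock-solid 24 fps means each frame sits on screen for about 41.7 ms, versus 16.7 ms at 60 fps, which is why steadiness matters so much more than the raw number. A minimal sketch:

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds each frame is on screen at a given steady frame rate."""
    return 1000.0 / fps

# A locked 24 fps holds each frame ~2.5x longer than 60 fps does.
print(round(frame_time_ms(24), 1))  # 41.7
print(round(frame_time_ms(60), 1))  # 16.7
```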
 
I have a question about Tomb Raider performance... there is a setting to output 24 Hz, and my projector handles 24 Hz. Would it be better to change it to that setting (instead of 60), turn on vsync, and beef up the settings? How playable is a game at 24 fps if it is rock solid? I know film looks good at 24, but I don't know about gaming...

I'd think it would be choppy... you could always try it and see, though.

I played the game a little bit at 24 Hz and it was better than I thought it would be. I might try again with everything maxed out to see.
 
A short but sweet email from AMD informs us that a patch for Tomb Raider is due tomorrow, bringing a nice performance update for Radeon GPU owners: up to 25% more performance in some cases.

Quote:
Crystal Dynamics is releasing a new patch this Friday that will enhance AMD Radeon performance significantly (up to 25%).
No details on the specifics of the increase, but it sounds promising.
http://www.rage3d.com/board/showthread.php?threadid=33999519
 
I like your reviews Brent, but feel you place too much emphasis on your subjective "playable" settings, as I often find your settings and frame rates completely unplayable, making the benches worthless to me.

WTH?
Benching subjective "playable" settings is the whole point of the HardOCP testing procedure.

Personally, I think more apples-to-apples benches would be more useful, at different graphical settings; then users could see for themselves the performance at different graphical levels and decide which card performs better at what they consider playable settings.

Yeah... you already have THAT at every other review site 🙄
 
I know the point of it, but it's no good if the subjective settings are not deemed playable by many, nor is it a direct comparison.

It's no good to me knowing that card A gets 30 fps at 8xAA with HBAO while card B gets 45 fps at 4xAA with SSAO; that gives me no info on how card A performs at 4xAA with SSAO, for example. Show me what they both get at both 4x and 8xAA, and with both HBAO and SSAO. I will decide what's playable.

What I'm saying is, they should expand the apples-to-apples comparisons to cover a wider range of settings, as I feel more can be gained from such results.
 
I'm judging the uarch based on the best SKUs.

Do you think it's a fair comparison, given that Titan is an updated design released, what, nearly a year after AMD's GCN? That's a long time.

Likewise, if AMD releases GCN 2.0 in a top SKU later this year and it beats Titan in these metrics, is it a fair comparison to then say Titan is crap?

P.S. The answer is no, in both cases.

The "best" uarch is not just about perf/W. If it needs a massive perf/mm² deficit to attain a small perf/W advantage (referring to your own linked chart: the 7970 and 7800 series aren't far behind, and in places are even winning), that feeds into the final perf/$, which is what consumers care about. $350 vs. $1000 is not a small difference; I hope we can agree on that.
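The perf/$ point can be made concrete with a quick sketch. The $350 and $1000 prices are from the post; the relative-performance figure is an illustrative assumption (Tahiti at roughly three-quarters of Titan), not a benchmark result.

```python
# Perf-per-dollar sketch. Prices are the post's; the 0.75 relative
# performance for Tahiti vs. Titan is an assumed, illustrative figure.
titan_price, tahiti_price = 1000.0, 350.0
titan_perf, tahiti_perf = 1.00, 0.75  # Titan normalized to 1.0

titan_ppd = titan_perf / titan_price
tahiti_ppd = tahiti_perf / tahiti_price

# Under these assumptions Tahiti delivers over twice the performance
# per dollar, even while trailing in absolute performance.
advantage = tahiti_ppd / titan_ppd
```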
 
I think it's fair, since they're both Kepler, and unlike GF110 vs. GF114, their SMX structures are far more alike. It's basically GK104 souped up for DP workstation performance.

If AMD released anything else on 28nm, I'd consider it a fair comparison to Titan. Though I wouldn't call it crap, just like I never called the 7970 crap.

The best uarch, IMO, comes down to perf, perf/W, and future game performance. I care very little about the lower-end chips; if you want to base your opinion on lower-tier SKUs, be my guest.

We aren't discussing purchasing; we're discussing technology.
 
The game runs really well on max settings at a constant 50-60 fps at 2560x1440 with a single 680, as long as you use FXAA and turn off TressFX. TressFX really nails the shit out of the fps...
 
They are not promoting OGSSAA, just offering it as an option, free of any down-sampling hassle. 4xSSAA is by far the single best IQ setting in Tomb Raider, and probably the most expensive too.
I appreciate the out-of-the-box support for OGSSAA in these titles (Sleeping Dogs, The Witcher 2, the latest build of Project CARS, and Metro: Last Light also seem to have built-in OGSSAA support). I love them; they're better than nothing. However, I think SGSSAA could be better.

That being said, AMD may not be promoting OGSSAA, but they're certainly promoting more use of DirectCompute, which I think is a good thing, and it's where AMD's Tahiti seems to comfortably outperform NVIDIA's GK104.

Looking at some of the recent AMD Gaming Evolved titles...
DiRT: Showdown
- Advanced Lighting = uses DirectCompute.
- Contact-hardening shadows = Shader Model 5.0 based. If I'm not mistaken, it uses DirectCompute to compute the average distance of a shadow pixel from the object casting the shadow.

Sleeping Dogs
- Extreme AA = uses DirectCompute to do another AA pass on top of SSAA (whatever that means).
- Contact-hardening shadows.

Tomb Raider (2013)
- TressFX and contact-hardening shadows.
 
Since NVIDIA switched to non-hot-clocked, smaller cores, it's probably not as true as it was when NVIDIA had fat shaders running at twice the core speed.

If there are any questions left over which is better, Tahiti or Kepler, I think GK110 pretty much summed it up.

There really should be no question as to who has the best high-performance uarch on the market currently...

The uarchs are the same in low- and high-end SKUs, at least for gaming. With the recent HD 7790 release, AMD showed some nice perf/watt gains. Even better, we now have a real idea of how a Curacao die (8970) would perform:

           7790      8970 (3x 7790)
shaders:    896      2688
bus width:  128-bit  384-bit
ROPs:        16        48
TMUs:        56       168

Perf/watt should stay the same, better than Titan, with performance ~5% faster (at 1075 MHz, like the 7790 DCII). Being a high-end SKU, it would probably clock at ~90% of that to stay at a 250 W TDP or lower (or to offset the TDP of added DP/compute hardware), so final performance would be around 90-95% of Titan, at better perf/watt.
I bet this SKU is sitting ready, waiting to embarrass some GK114 part labeled as the GTX 780.
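The "3x 7790" scaling argument above can be written out explicitly. The 7790 numbers are the specs quoted in the post; the tripling, the ~5%-over-Titan estimate, and the ~90% clock are the poster's speculation about an unreleased part.

```python
# Sketch of the post's linear-scaling estimate for a hypothetical
# "Curacao" (8970) built as 3x an HD 7790. All projections here are
# the thread's speculation, not real product specs.
hd7790 = {"shaders": 896, "bus_bits": 128, "rops": 16, "tmus": 56}
curacao = {k: 3 * v for k, v in hd7790.items()}

# Poster's estimate: linear 3x scaling at 1075 MHz lands ~5% above
# Titan; clocking at ~90% of that for a <=250 W TDP then gives
# ~0.9 * 1.05, i.e. roughly 90-95% of Titan's performance.
relative_to_titan = 0.90 * 1.05
```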
 
I guess we'll cross that bridge when we get there.

Though I disagree about them being the same: lower SKUs are cut in ways that make them fit certain performance criteria, or have different design aspects that make them more or less efficient than the top chips.
 