
[H]Rise of the Tomb Raider Video Card Performance Review

Final8ty


A new Tomb Raider game is out, Rise of the Tomb Raider. We take RoTR and find out how it performs and compares across no fewer than 14 of today's latest video cards from the AMD and NVIDIA GPU lineups, top to bottom, using the latest drivers and game patch v1.0 build 610.1_64.

Patches and Performance

One thing is for sure: the Rise of the Tomb Raider developers are not sitting around on their hands. In the last couple of weeks, we have seen two performance and feature patches issued for Rise of the Tomb Raider. The first patch, on February 5th, was significant enough in terms of performance that we had to scrap all of our original data and start over. Three days ago we saw another patch that addressed "various performance improvements for GPU-bound situations." We went back and tested with this patch and found that it primarily impacts SLI at 4K resolution. We saw SLI at 4K increase framerates by 5% to 7%. This, however, did not influence gameplay to the point where we could raise image quality settings further. Single GTX 980 performance increased 1% to 2% at 1440p resolution, so nothing significant.
http://www.hardocp.com/article/2016..._video_card_performance_review/1#.VsG734S3yHs

This is what impressed me.
We are comparing the percentage increase from GTX 980 to GTX 980 SLI and then from R9 390X to R9 390X CrossFire.

Adding a second GeForce GTX 980 improves performance by 83.4%. That means we are seeing scaling efficiency of 83.4% out of GTX 980 SLI.

Adding a second Radeon R9 390X improves performance by 99.7%. That means we are seeing scaling efficiency of 99.7% out of R9 390X CrossFire.

We checked and re-tested this test three times after we saw that kind of scaling out of R9 390X CrossFire. Every run was consistent and every time this is the kind of scaling we experienced out of R9 390X CrossFire. This is by far the best and highest scaling efficiency we've ever come across for any video card in any game. This kind of scaling is what allows the R9 390X CrossFire to compete well at 4K with the competition. We aren't sure if the reason for this is due to the doubled 8GB of VRAM, or what, but it is what it is and it is incredible.
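The scaling figures quoted above are just the percentage gain in frame rate from one card to two. A minimal sketch of that arithmetic, using hypothetical frame rates for illustration (not [H]'s measured data):

```python
def scaling_efficiency(single_fps: float, dual_fps: float) -> float:
    """Percentage performance gain from adding a second GPU.

    100% would mean the second card exactly doubles the frame rate;
    the 83.4% (980 SLI) and 99.7% (390X CrossFire) figures above
    were computed the same way from [H]'s averages.
    """
    return (dual_fps - single_fps) / single_fps * 100


# Hypothetical example: a single card averaging 40 fps that reaches
# 73.36 fps with a second card scales at 83.4%.
print(f"{scaling_efficiency(40.0, 73.36):.1f}%")  # prints 83.4%
```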
 
Hmm, nice boost for Radeons with first patch.

I'm curious what performance optimizations have been done in second patch.
 
[benchmark graph]

Now we are comparing the percentage increase from GTX 980 Ti to GTX 980 Ti SLI and then from Radeon R9 Fury to Radeon R9 Fury CrossFire. Even at 1440p, you can see Fury CrossFire had a lot of stuttering and spikes in performance; that 4GB of VRAM is biting it in the butt even at 1440p.

Adding a second GeForce GTX 980 Ti improves performance by 72%. That means we are seeing scaling efficiency of 72% out of GTX 980 Ti SLI. For some reason this is lower scaling than we saw with GTX 980 SLI. VRAM is higher on these GPUs so it must not be related to that.

Adding a second Radeon R9 Fury improves performance by 89%. That means we are seeing scaling efficiency of 89% out of R9 Fury CrossFire.

Even with the stutter we once again see AMD CrossFire having the better scaling in this game, near 90% out of Radeon R9 Fury CrossFire. That is really incredible. If Fury had more VRAM it may have done even better.

Is Brent losing it? Same settings, and the single card has no VRAM limitations but dual cards do? Obvious driver or game issue in Crossfire. There's no way it's the 4 gigs of VRAM.

[benchmark graph]

Even max settings at 4K Fiji maintains higher mins.
 
This game is one that restored my faith in the GFE app. I run at 3440x1440 on a pretty well OC'ed 980 Ti (3800 MHz memory, ~1540 MHz core), and clicking optimize put me right at low 70s in fps (playing on a 100 Hz G-Sync monitor). When I had dual 680s it always seemed to have silly recommendations, at least when it first came out.

Here is what Nvidia suggests for my card.

I guess I could turn more things up since I've noticed that under 60 fps still looks smooth with this set up, but for ease of use, just clicking optimize was nice.

[screenshot: GeForce Experience recommended settings]
 
Crossfire performs so much better than SLI nowadays. Anyone going multi-GPU would be silly not to consider 8gb 390's or 390x's.
 
The one thing [H] has over many other tech sites is the benchmark run length; 8 minutes eliminates a lot of variables. If you look at the first 40 or so seconds of their framerate/time graph, you could not be blamed for coming away with the impression that the GeForces are a lot faster, and yet that is exactly the impression you might get from other sites with their 20-second to one-minute benchmark runs. [H] just needs to somehow quantify the subjective, anecdotal observations about stutters and smoothness for a more complete review.
 
So basically, next time someone recommends a 960 or 970 I can respond with a link to this topic. Good thing I went with a 390 last summer.
 
Crossfire performs so much better than SLI nowadays. Anyone going multi-GPU would be silly not to consider 8gb 390's or 390x's.

Was just a few months ago that the Xfire stutter was so bad it was to be avoided at all costs - [H] were the ones reporting this. Has that all been fixed, or have they just managed to tweak this game to get it to work better?
 
Was just a few months ago that the Xfire stutter was so bad it was to be avoided at all costs - [H] were the ones reporting this. Has that all been fixed, or have they just managed to tweak this game to get it to work better?

Best viewed on a game by game basis.
 
Is Brent losing it? Same settings, and the single card has no VRAM limitations but dual cards do? Obvious driver or game issue in Crossfire. There's no way it's the 4 gigs of VRAM.

Even max settings at 4K Fiji maintains higher mins.

There is some memory overhead involved in CF/SLI, but I agree this sounds weird.
 
Are the 970 and 390 really designed for 1440p on the newest games at the highest possible settings?

Would be great to see more 1080p results as well as how much VRAM is getting used.
 
A theory: Furyx2 delayed because they are doing hard driver work, to enable better crossfire scaling/experience.
 
Good results for AMD across the board. 960 gets turned down two notches to 'normal' settings and still can't beat the 380X 😵
 
A theory: Furyx2 delayed because they are doing hard driver work, to enable better crossfire scaling/experience.

Silly theory IMO because Crossfire Scaling/Experience has already been good. The issue has been having support ready for games on time.
 
Too much power draw IMO. Unless you're going under water about the only cards I'd consider is Nano.

Completely disagree. The cost to performance ratio of the R9 390 crossfire is WAY too good to ignore.

Nano is expensive, as much as $900 to Crossfire. There is NO WAY I would tell someone to buy Crossfire Nano right now. I would actively recommend against it, as it's a horrendous value proposition.

If you have an R9 390, or want a decent 4K experience, R9 390 Crossfire is the way to go. It's the cheapest way to get a lot of performance before Polaris and Pascal drop.

With how fast the R9 390x is, and how cheap it is, Nano really isn't worth the added $100 for less VRAM.
 
Silly theory IMO because Crossfire Scaling/Experience has already been good. The issue has been having support ready for games on time.

Been good by today's multi-GPU support standards. What AMD seeks, IMO, is pushing multi-GPU support to a new level.
 
Seeing this bench only shows me that we still need a lot more single-GPU power to do 4K 60fps at max settings.
 
A theory: Furyx2 delayed because they are doing hard driver work, to enable better crossfire scaling/experience.

Could be. This isn't the first time I've seen glitches reported with Fiji Crossfire.

Although we do see improved SLI in the patch notes, there is nothing for Crossfire. AMD might require some game patching too for better results.

Completely disagree. The cost to performance ratio of the R9 390 crossfire is WAY too good to ignore.

Nano is expensive, as much as $900 to Crossfire. There is NO WAY I would tell someone to buy Crossfire Nano right now. I would actively recommend against it, as it's a horrendous value proposition.

If you have an R9 390, or want a decent 4K experience, R9 390 Crossfire is the way to go. It's the cheapest way to get a lot of performance before Polaris and Pascal drop.

With how fast the R9 390x is, and how cheap it is, Nano really isn't worth the added $100 for less VRAM.

Notice I said what I would do. 😉 I'm not recommending it for others (not that I wouldn't). I just wouldn't want ~500W dumped into my case. 350W from Nano would be about my comfortable limit.

This doesn't discount any of your points. Hawaii is great value and of course if someone already owned one then the cost is dramatically less.

Extremely arbitrary

See above. 🙂
 
In Rise of the Tomb Raider, Crystal Dynamics went down its own path this time and created PureHair. PureHair can add 30,000 strands of hair to a character model, in this case Lara. The individual hairs react to in-game physics and to the materials they move through, like air or water. There are three options: you can turn it off, turn it on, or turn it on at the "very high" quality setting. The "on" option is the recommended setting for most players and for those needing more performance; it is a lower-quality version. The "very high" option is the one that can, in some cases up close, create 30,000 strands of hair, and it is naturally the most performance-demanding.

PureHair is based on TressFX; they worked with AMD on TressFX 3.0, which PureHair is forked from.

http://motherboard.vice.com/read/glimpse-of-the-purehair-hair-rendering-engine-at-gdc

Also async compute support for Deus Ex's version:

http://gearnuke.com/deus-ex-mankind-divided-use-async-compute-enhance-pure-hair-simulation/

Why oh why did they leave SMAA on for the 4K tests and instead lower everything else down a bunch?

http://www.hardocp.com/article/2016/02/15/rise_tomb_raider_video_card_performance_review/8

Not sure if that is just a typo in their graph, but it makes no sense to me.

Amazing to see the Fury (non-X!) in CFX performing the same as 980 TI SLI.

Really wish they would have turned off SMAA for some of the 1440p/4K tests and seen how that affected the cards.
 
What I'm noticing is that in this game Fiji is matching GM200. I said from the start that Fiji was close enough that before it was over it would be the faster card. I still believe that.
 