
Rise of the Tomb Raider DX12 patch released

Actually, according to the benchmarks Termie posted, the 390X sees some major gains in minimum FPS, but the 290 does not. That implies the game actually makes use of the 390/390X's 8GB of VRAM, which would also explain Fiji's poor performance. It might not be an architecture issue at all; possibly this game is just a gigantic memory hog.

I added a third page to the benches looking more closely at the 390X. It was no fluke. The minimums shoot way up under DX12, and it definitely seems related to VRAM.
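For anyone who wants to check the VRAM angle on their own card, here is a minimal C++ sketch (an illustration, not part of the benches) that reads current VRAM usage and budget through IDXGIAdapter3::QueryVideoMemoryInfo on Windows 10. Adapter index 0 is an assumption; enumerate if you have more than one GPU.

// Minimal sketch: print current VRAM usage vs. budget on Windows 10 using
// IDXGIAdapter3::QueryVideoMemoryInfo (dxgi1_4.h). Adapter 0 is an assumption.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;  // first GPU

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    // The "local" segment group is the dedicated VRAM on a discrete card.
    if (FAILED(adapter3->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info))) return 1;

    std::printf("VRAM in use: %llu MiB of %llu MiB budget\n",
                info.CurrentUsage >> 20, info.Budget >> 20);
    return 0;
}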
 

Play in-game to compare rather than the benchmark. I have never gotten such low min FPS during gameplay.

This could be like Ryse, where the constant camera view changes cause min FPS to drop, but in gameplay it's smooth.
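That camera-cut effect is easy to see in the numbers: one bad frame craters the reported minimum while the percentile lows stay healthy. A self-contained C++ sketch with hypothetical frame times (the kind a FRAPS/PresentMon-style logger would record) shows the gap between true min FPS and the 1% low:

// Sketch: compare average, true minimum, and 1% low FPS from logged frame
// times. The frame times below are hypothetical sample data.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical per-frame times in milliseconds; the 90 ms spike stands
    // in for a single camera cut during an otherwise smooth run.
    std::vector<double> ms = {16.6, 17.1, 16.9, 33.4, 16.8, 16.7, 90.0, 16.6};

    std::vector<double> sorted = ms;
    std::sort(sorted.begin(), sorted.end());  // ascending frame times

    double total = 0;
    for (double t : ms) total += t;

    double avg_fps = 1000.0 * ms.size() / total;
    double min_fps = 1000.0 / sorted.back();  // worst single frame
    // "1% low": FPS of the frame at the 99th percentile of frame time.
    double p99 = sorted[(size_t)(0.99 * (sorted.size() - 1))];
    double low1_fps = 1000.0 / p99;

    std::printf("avg %.1f fps, min %.1f fps, 1%% low %.1f fps\n",
                avg_fps, min_fps, low1_fps);
    return 0;
}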
 

From what I was reading on NeoGAF, gameplay and the benchmark were two different things.
 
I don't disagree that those 390X results from the in-game benchmark are no fluke; as I stated in my previous post, I get the same results as you when using it.

But as I said, there is something wrong with the DX11 minimums the in-game benchmark reports for the 390X: at no point during gameplay does the 390X dip to single digits.

So while there may be an increase in minimums under DX12 vs. DX11 on the 390X, it certainly isn't as large as the benchmark results would have one believe.
 

That is crazy. I wonder how a 12GB Titan X would do. The more VRAM, the better the minimums.

[Attached charts: RoTR VRAM fix, RoTR RAM fix]
 
[Attached chart: RoTR DX12]


Why does the GTX 970 have higher FPS than the R9 290? Really ridiculous. We know the R9 290 shines when DX12 is used properly.
 
I wonder how much of this has to do with developers needing to learn to code better for DX12.

They went from DX11, where they optimize their engine in general and then AMD/Nvidia provide their own driver-specific optimizations,

to DX12, where developers have much closer-to-the-metal control over resource allocation. That helps a lot when developing for a standard configuration (consoles), but could be much more difficult across different GPU architectures.
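As a rough illustration of that closer-to-the-metal control, here is a minimal C++ sketch (not this game's code, and it assumes a valid ID3D12Device* named device already exists): under DX11 the driver decided where a buffer lived, while under DX12 the developer picks the heap explicitly.

// Minimal sketch of DX12's explicit resource control: under DX11 the driver
// decided where a buffer lived; under DX12 the developer picks the heap type.
// Assumes a valid ID3D12Device* named 'device' has been created elsewhere.
#include <d3d12.h>
#include <wrl/client.h>

Microsoft::WRL::ComPtr<ID3D12Resource> CreateVertexBuffer(
        ID3D12Device* device, UINT64 sizeInBytes) {
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;  // GPU-local VRAM; the developer,
                                          // not the driver, makes this call
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = sizeInBytes;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;  // required for buffers

    Microsoft::WRL::ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(
        &heap, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_COMMON, nullptr, IID_PPV_ARGS(&buffer));
    return buffer;  // null on failure; error handling omitted for brevity
}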
 
It took quite some time for DX10 and DX11 to shine because of that. It will take much longer for DX12 due to the increased complexity, and there will always be casualties along the way with DX12.
 
Dan Baker and co. have claimed that they algorithmically optimize their engine, so I don't think you have to hand-write code for each GPU configuration.
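For what that "algorithmic" approach might look like in practice, here is a hedged C++ sketch (not Oxide's actual code; it assumes a valid ID3D12Device* named device): the engine queries reported capability tiers at startup and picks code paths from them, instead of testing for specific vendors or chips.

// Sketch of capability-driven (rather than per-GPU) optimization: query what
// the device reports and choose algorithms from that, never from vendor IDs.
// Assumes a valid ID3D12Device* named 'device'.
#include <d3d12.h>
#include <cstdio>

void ChooseCodePaths(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts)))) {
        // e.g. ResourceBindingTier could decide how descriptor tables are
        // laid out, with no vendor- or chip-specific branches anywhere.
        std::printf("Resource binding tier: %d\n", opts.ResourceBindingTier);
    }
}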
 
And should games using this need optimising for each architecture, what happens when cards with new architectures are released after a game? Performance could be dramatically different over the lifespan of a game.

Not even the most talented developers are mind readers 🙂 Can't quite see people going back to reoptimise two- or three-year-old games.
 
Performs worse under DX12 for me, and SLI isn't working. What was the point of releasing this if it brings negatives to the table? So far both of the DX12 games have been a disaster. Gears of War is broken crap with no multi-GPU, no exclusive fullscreen, and vsync locked on, plus it's locked to the Windows Store. It probably has an install base of sub-10K on PC. Now this?

Just checked, and Hitman does not support SLI in DX12 either, though benchmarks don't show that game's performance going in the toilet under DX12 the way it does in this one. It would be nice to try a new DX12 title and not get a significant drop in performance from bad optimization and/or missing multi-GPU support.
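That is the flip side of DX12: multi-GPU is opt-in for the developer rather than handled by the driver, as SLI/CrossFire was under DX11. A minimal C++ sketch (assuming a valid ID3D12Device* named device) of how an engine would even find out it has linked GPUs to feed:

// Sketch: DX12 has no implicit SLI; the engine must split work across linked
// GPU nodes itself. Assumes a valid ID3D12Device* named 'device'.
#include <d3d12.h>
#include <cstdio>

void ReportMultiGpu(ID3D12Device* device) {
    UINT nodes = device->GetNodeCount();  // linked adapters (e.g. SLI bridge)
    if (nodes > 1)
        std::printf("%u linked GPUs: engine must split work explicitly\n",
                    nodes);
    else
        std::printf("Single GPU node: no implicit SLI under DX12\n");
}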
 
AMD CPUs, i3s, and perhaps Nehalem can see some gains here. That's the only practical benefit to the user.
 
I was happily sure I'd only need an 8GB 14/16nm card. After this, I am starting to think about 16GB 😱

And this right here is how these companies can sell you garbage and make you smile at the same time.

They took a game that used much less VRAM and system RAM, made it take more, run slower, and look the same, and you are championing them?

How can anyone honestly say this patch is anything but garbage?
 

Like I always say in VC&G, the GPU nerd crowd are proud of getting ripped off at the wallet left, right, and center, because reasons.
 
Have any of you actually tested DX12 in-game instead of just relying on benchmark results? The reports I hear say DX12 is actually better to play with, but those are of course just impressions, not raw numbers.

Would love to see a site like Digital Foundry do a gameplay comparison instead of a synthetic benchmark.
 