ComputerBase & GameGPU - Rise of the Tomb Raider: DX11 vs DX12 + VXAO Tested


thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Doesn't NV actually have poor support for CR, and isn't this more about saving face politically? The spec is best supported by Intel.
 

flynnsk

Member
Sep 24, 2005
98
0
0
Doesn't NV actually have poor support for CR, and isn't this more about saving face politically? The spec is best supported by Intel.

NV provides basic/minimal support for Conservative Rasterization. The best resource is here: https://msdn.microsoft.com/en-us/library/windows/desktop/dn903791(v=vs.85).aspx

Code:
•Tier 1 enforces a maximum 1/2 pixel uncertainty region and does not support post-snap degenerates. This is good for tiled rendering, a texture atlas, light map generation and sub-pixel shadow maps.
•Tier 2 reduces the maximum uncertainty region to 1/256 and requires post-snap degenerates not be culled. This tier is helpful for CPU-based algorithm acceleration (such as voxelization).
•Tier 3 maintains a maximum 1/256 uncertainty region and adds support for inner input coverage. Inner input coverage adds the new value SV_InnerCoverage to High Level Shading Language (HLSL). This is a 32-bit scalar integer that can be specified on input to a pixel shader, and represents the underestimated Conservative Rasterization information (that is, whether a pixel is guaranteed-to-be-fully covered). This tier is helpful for occlusion culling.
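The tier differences quoted above can be captured in a tiny lookup table. This is just a sketch summarizing the MSDN text; the dictionary keys and field names are mine, not actual D3D12 API identifiers:

```python
# Summary of the D3D12 Conservative Rasterization tiers quoted above.
# Keys and field names are illustrative, not D3D12 identifiers.
CR_TIERS = {
    1: {"max_uncertainty_px": 1 / 2,   "post_snap_degenerates": False, "inner_coverage": False},
    2: {"max_uncertainty_px": 1 / 256, "post_snap_degenerates": True,  "inner_coverage": False},
    3: {"max_uncertainty_px": 1 / 256, "post_snap_degenerates": True,  "inner_coverage": True},
}

def supports_inner_coverage(tier: int) -> bool:
    """SV_InnerCoverage (underestimated CR) is only guaranteed at Tier 3."""
    return CR_TIERS[tier]["inner_coverage"]
```

As the docs note, only Tier 3 exposes SV_InnerCoverage, which is why occlusion-culling techniques need Tier 3 hardware.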
 

Head1985

Golden Member
Jul 8, 2014
1,864
686
136
DX12 is a mess right now. You can't use it with Very High textures unless you have a 6GB card, so why so much hype here? (You can, but with poor performance and stuttering.)
DX11 is way better, and you can actually use Very High textures even on a GTX 970 (if you have Win10 and 16GB of fast DDR4 RAM).
Even a GTX 670 2GB can manage Very High textures under DX11.
Just don't use DX12 and be happy.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
DX12 is a mess right now. You can't use it with Very High textures unless you have a 6GB card, so why so much hype here? (You can, but with poor performance and stuttering.)
DX11 is way better, and you can actually use Very High textures even on a GTX 970 (if you have Win10 and 16GB of fast DDR4 RAM).
Even a GTX 670 2GB can manage Very High textures under DX11.
Just don't use DX12 and be happy.

And here we finally have it.
With NVIDIA, even in DX12-sponsored games, just use DX11...
Until games are DX12-only like GoW, and then you're stuck.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
If you mean ComputerBase's bench, those are reference cards, both running around 1,000~1,050MHz, while PCGamesHardware's bench shows boost clocks for the Asus GTX 980 Ti. All of the GTX 980 Ti cards there are overclocked versions, and they're still faster than the R9 390 by around 15~30%, depending on card frequency. But it's an AMD title, and you don't see them losing to DX11; all cards (except some) in Hitman get more FPS than under DX11.

So is it fine that the Ti outperforms the 390 by only 15%?
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
So is it fine that the Ti outperforms the 390 by only 15%?

It all depends on your perspective.

If you think that the 980Ti is and always will be the top card of this generation, then no.

If you have a 980Ti, no.

If you have a 390 or another Hawaii card, then yes.

If you have an AMD card and are counting on it to keep improving and beating its original competitors, then yes.

Hawaii just keeps on trucking, getting better against each generation NV puts up against it. If Pascal weren't on a node shrink, it somehow wouldn't surprise me to see Hawaii challenge the 1070 or 1080. It's the little chip that doesn't know how to quit.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
It all depends on your perspective.

If you think that the 980Ti is and always will be the top card of this generation, then no.

If you have a 980Ti, no.

If you have a 390 or another Hawaii card, then yes.

If you have an AMD card and are counting on it to keep improving and beating its original competitors, then yes.

Hawaii just keeps on trucking, getting better against each generation NV puts up against it. If Pascal weren't on a node shrink, it somehow wouldn't surprise me to see Hawaii challenge the 1070 or 1080. It's the little chip that doesn't know how to quit.

So the 970 doing well is also fine by me.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
So is it fine that the Ti outperforms the 390 by only 15%?

Look at this :

http://tpucdn.com/reviews/ASUS/GTX_980_Ti_Matrix/images/perfrel_1920_1080.png

GTX 980 Ti reference = 86%
R9 390 stock = 62%
So if the GTX 980 Ti is 100%, then the R9 390 is at 72.09% (100*62/86). That's DX11. Now add DX12: you get a massive boost by removing API overhead, and you can add 5% or more to the R9 390's performance by optimizing the game engine's source code. We know the GTX 980 Ti doesn't benefit as much from DX12: if its efficiency is already 90%, then under DX12 it won't get near 100%, and there is little room left for source-code optimization to reach full utilization, while GCN's efficiency is around 70% or less. 96 ROPs don't mean it should be god-like.

Just look at Tomb Raider: despite the Radeon R9 390 having 8GB, why is it slower than the GTX 970?

Again, look at TechPowerUp's image for 1920x1080:
R9 390 = 62%
GTX 970 = 64%

So if the R9 390 is 100%, the GTX 970 is 3% faster (64*100/62). But in Tomb Raider it's much, much worse than that; even if you disable GameWorks, it's still slower than the GTX 970. Isn't DX12 supposed to remove API overhead? If yes, then what happened here?

Really, is it normal to you that in a DX12 bench the GTX 970 with 4GB is 21% faster than the Radeon R9 390 with 8GB? I'd agree for DX11; everyone knows AMD GCN doesn't get good utilization from the DX11 API.
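The relative-performance arithmetic above is easy to sanity-check. A quick sketch, using only the TPU chart percentages as quoted in this post:

```python
# TPU 1920x1080 relative-performance numbers as quoted above.
perf = {"980Ti": 86, "390": 62, "970": 64}

def relative(a: float, b: float) -> float:
    """Express card `a` as a percentage of card `b`."""
    return 100 * a / b

# R9 390 with the 980 Ti as the 100% baseline:
r390_vs_980ti = relative(perf["390"], perf["980Ti"])   # ~72.09%

# GTX 970 lead over the R9 390:
r970_lead = relative(perf["970"], perf["390"]) - 100   # ~3.2%
```

Both numbers match the post: a ~28% DX11 deficit for the 390 against the 980 Ti, and a ~3% lead for the 970 over the 390.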
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Look at this :

http://tpucdn.com/reviews/ASUS/GTX_980_Ti_Matrix/images/perfrel_1920_1080.png

GTX 980 Ti reference = 86%
R9 390 stock = 62%
So if the GTX 980 Ti is 100%, then the R9 390 is at 72.09% (100*62/86). That's DX11. Now add DX12: you get a massive boost by removing API overhead, and you can add 5% or more to the R9 390's performance by optimizing the game engine's source code. We know the GTX 980 Ti doesn't benefit as much from DX12: if its efficiency is already 90%, then under DX12 it won't get near 100%, and there is little room left for source-code optimization to reach full utilization, while GCN's efficiency is around 70% or less. 96 ROPs don't mean it should be god-like.

Just look at Tomb Raider: despite the Radeon R9 390 having 8GB, why is it slower than the GTX 970?

Again, look at TechPowerUp's image for 1920x1080:
R9 390 = 62%
GTX 970 = 64%

So if the R9 390 is 100%, the GTX 970 is 3% faster (64*100/62). But in Tomb Raider it's much, much worse than that; even if you disable GameWorks, it's still slower than the GTX 970. Isn't DX12 supposed to remove API overhead? If yes, then what happened here?

Really, is it normal to you that in a DX12 bench the GTX 970 with 4GB is 21% faster than the Radeon R9 390 with 8GB? I'd agree for DX11; everyone knows AMD GCN doesn't get good utilization from the DX11 API.

See the chart
http://forums.anandtech.com/showthread.php?t=2466535

980Ti @ DX 11 72.5
390X @ DX 11 68.6

According to the TPU chart you linked, a 980 Ti should be 28% faster than a 390X at 1080p.
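Putting those numbers side by side makes the point: the expected DX11 gap from the TPU chart versus the measured DX12 gap from the linked thread. A quick check using only the figures quoted in this post:

```python
# Expected gap per TPU's DX11 chart (as cited above) vs the measured
# DX12 numbers quoted for the 980 Ti and 390X.
expected_gap_pct = 28.0           # "should be 28% faster" per the TPU chart
fps_980ti, fps_390x = 72.5, 68.6  # DX12 results from the linked thread

measured_gap_pct = 100 * (fps_980ti / fps_390x - 1)  # ~5.7%
```

So under DX12 the measured lead shrinks from an expected ~28% to under 6%.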
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
See the chart
http://forums.anandtech.com/showthread.php?t=2466535

980Ti @ DX 11 72.5
390X @ DX 11 68.6

According to the TPU chart you linked, a 980 Ti should be 28% faster than a 390X at 1080p.

I totally agree. Hitman and Rise of the Tomb Raider are polar opposites. They are outliers. The only striking thing is that one is NVIDIA-sponsored while the other is AMD-sponsored. You can guess which vendor is the performance leader in each game based on its sponsorship.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
See the chart
http://forums.anandtech.com/showthread.php?t=2466535

980Ti @ DX 11 72.5
390X @ DX 11 68.6

According to the TPU chart you linked, a 980 Ti should be 28% faster than a 390X at 1080p.

Oh, I get it; you mean DX11, not DX12. My link was for DX11. Now with DX12, the R9 390X should be close to a reference 980 Ti. For the next DX12 game, if the R9 390X surpasses a reference GTX 980 Ti, don't be surprised!
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I personally like extra features that differentiate the PC version from a mere console port running at higher res with better textures. Having only used Radeon graphics cards over the years, I missed some of that. Bring on GameWorks. Fanboys should be asking AMD to come up with a similar solution (open or not) instead of bashing it.

You don't even realize what you're asking for.

First you declare that you want extra features consoles don't have, and then you support closed features.
Tell you what: consoles have GCN GPUs. If developers and AMD went crazy one day and started making features only GCN can do, those games couldn't be played on PC with anything but an AMD GPU, or they'd be a slideshow.
What would you do?

In case you didn't notice, AMD's implementation of DX12 brings improvement. Not for every GPU, but the capable ones get quite a boost.
NV's implementation hurts everything, but it hurts AMD GPUs more; they'll cut their own fingers off if it makes AMD lose an arm.

What if AMD started crippling everything as long as it hurt NV more? Do you want games to set async flags all the time just to force a useless GPU pipeline flush on cards that can't do AC?

Think about what you wish for, because one day your $650 NV GPU may not be able to deliver XBone graphics effects.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
NV's implementation hurts everything, but it hurts AMD GPUs more; they'll cut their own fingers off if it makes AMD lose an arm.

No, it doesn't. >30% better performance with a Core i7-2600K according to the developer; the Core i3-4330 and FX-6300 see improved performance at Soviet Camp (with a Fury X) according to CB; and even Haswell-E is faster in CPU-bound scenarios now. It needs some polish on high-end systems (it shouldn't regress performance at all), but I'm sure it will improve over time.

This shows how the CPU bottleneck is alleviated with DX12. Running at 1440p I get more FPS with DX11 than DX12, but after switching to 720p I get more frames with DX12 than DX11. I assume my CPU (3930K @ 4.4) isn't being pushed at 1440p but is at 720p.

System used is a 3930K - 16GB Ram (2133Mhz) - RIVF motherboard - Titan X - ROG Swift G-Sync monitor.

I can see the benefits of DX12, especially for those running multi-GPU setups or older/slower CPUs, and I'm sure that when SLI and CrossFire support is patched into DX12, people will start to see some nice gains.

www.youtube.com/watch?v=CoKmLvjxSnE

Next time do your research before you post.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
No, it doesn't. >30% better performance with a Core i7-2600K according to the developer; the Core i3-4330 and FX-6300 see improved performance at Soviet Camp (with a Fury X) according to CB; and even Haswell-E is faster in CPU-bound scenarios now. It needs some polish on high-end systems (it shouldn't regress performance at all), but I'm sure it will improve over time.



www.youtube.com/watch?v=CoKmLvjxSnE

Next time do your research before you post.

Mind you, we are in the GPU forum:
[1080p benchmark chart]


No improvements.

[1440p benchmark chart]


nil, nada, null, нуль, ゼロ, صفر, zero.

So next time, stick to the topic at hand, which is DX12 and GPUs.

Also, I did my research, and I know that with 8GB or more of VRAM it actually is playable in DX12. It just needs double the memory the DX11 patch needs. No biggie. Is that why you linked a YouTube test with a 12GB-VRAM GPU?
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
With a Core i7-6700K, which means nothing for slower CPUs.
The fact that your favourite brand doesn't see huge improvements here is not my fault. Have a nice day.

The fact that DX12 is slower than DX11 across the board in this game is not your fault either, assuming you were not involved in the development of this game.
Good day to you as well.
 

Sweepr

Diamond Member
May 12, 2006
5,148
1,142
131
The fact that DX12 is slower than DX11 across the board in this game is not your fault either, assuming you were not involved in the development of this game.
Good day to you as well.

It isn't for slow CPUs, but keep spinning. If a game doesn't cripple NVIDIA while dramatically improving AMD cards' performance, it's not a "real" DX12 title for some people. And yes, I wouldn't feel safe with 4GB of VRAM going forward.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
And, in all honesty, the only thing showing as genuinely slower on those DX11 vs DX12 charts seems to be Fury.

I would not be hugely surprised if these sorts of charts end up being what most DX12 games look like.

Yes, the API lets you get non-trivial advantages on specific GPU architectures if you really put big effort into optimising at a low level for each architecture.

Given the amount of 'effort' seemingly put into PC ports, would anyone expect them to do that for the half-dozen or so architectures there will be in the PC space?
(For GCN 1.1, yes, but that architecture is gone in 3-6 months' time.)

Especially given that all that effort is wasted, or even counterproductive, in 2-3 years' time when all the cards on sale use different architectures.

I imagine the CPU benefits should be much safer to get.

PS - Serious question: has anyone checked what all these DX12 things do on Intel iGPUs? That would be quite an interesting data point. And yes, given current trends, we do need to take those reasonably seriously too.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
And, in all honesty, the only thing showing as genuinely slower on those DX11 vs DX12 charts seems to be Fury.

I would not be hugely surprised if these sorts of charts end up being what most DX12 games look like.

Yes, the API lets you get non-trivial advantages on specific GPU architectures if you really put big effort into optimising at a low level for each architecture.

Given the amount of 'effort' seemingly put into PC ports, would anyone expect them to do that for the half-dozen or so architectures there will be in the PC space?
(For GCN 1.1, yes, but that architecture is gone in 3-6 months' time.)

Especially given that all that effort is wasted, or even counterproductive, in 2-3 years' time when all the cards on sale use different architectures.

I imagine the CPU benefits should be much safer to get.

PS - Serious question: has anyone checked what all these DX12 things do on Intel iGPUs? That would be quite an interesting data point. And yes, given current trends, we do need to take those reasonably seriously too.

Is this based on some sort of personal experience, or just on the performance of one GameWorks title?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
I'm pretty sure both AMD and NVIDIA have said there are ways to do Conservative Rasterization and ROVs without relying on the hardware. I remember an AMD employee specifically saying those methods were used in Dirt Rally.

It needs to be hardware.
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
I said I wouldn't be hugely surprised, not that I know :)

It does seem to be established that low-level optimisation work for GPUs basically has to be done architecture by architecture.

So, does the idea of people doing PC ports putting in non-trivial chunks of low-level optimisation work sound plausible? I really don't know.

The Intel iGPU thing does interest me though, as that is another rather different architecture. It's a perfectly serious option at 1080p, of course.
 

Krteq

Senior member
May 22, 2015
991
671
136
It needs to be hardware.
Nope.

NV, AMD, Intel and MS all claim the opposite:
Is it possible to achieve Conservative Raster without HW support?

Yes, it is indeed possible to do this, and there is a very good article describing it here.

Essentially it involves using the Geometry Shader stage to either:

a) Add an apron of triangles around the main primitive

b) Enlarge the main primitive

However, both approaches add performance overhead, and as such the usage of conservative rasterization in real-time graphics has been pretty limited so far.
developer.nvidia.com - Don't be conservative with Conservative Rasterization
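As a very rough illustration of approach (b), "enlarge the main primitive": push each vertex of a screen-space triangle outward from the centroid by a half-pixel margin. This is a toy Python sketch of the idea only; a real geometry-shader implementation works per-edge in clip space and is considerably more careful about the uncertainty region.

```python
import math

def enlarge_triangle(verts, margin=0.5):
    """Crude conservative enlargement: move each vertex of a 2D
    screen-space triangle away from the centroid by `margin` pixels.
    This overestimates coverage, in the spirit of approach (b) above."""
    cx = sum(x for x, _ in verts) / 3
    cy = sum(y for _, y in verts) / 3
    out = []
    for x, y in verts:
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy) or 1.0  # guard against a degenerate vertex
        out.append((x + margin * dx / d, y + margin * dy / d))
    return out
```

The enlarged triangle covers every pixel the original touches (plus some extra), which is exactly the overestimation, and the extra overhead, that the quoted article is warning about.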