
ComputerBase: Ashes of the Singularity Beta 1 DirectX 12 Benchmarks

Where did you pull this from?

The developer talks and conferences and NVIDIA's response to the first AotS alpha benchmarks.

NVIDIA are very much actively working with Khronos and pushing Vulkan. NVIDIA titles, GameWorks titles, all push DX11, whereas AMD are pushing DX12 with Deus Ex, Hitman, and Frostbite 3.

It's understandable: DX12 tends to boost GCN's performance while being a tad slower or a tad faster on NVIDIA's architectures, due to NVIDIA's very well-made multi-threaded DX11 driver compiler. Basically, NVIDIA are already performing great under DX11, whereas AMD struggle quite a bit. Of course, as DX11 titles become more compute-bound, AMD are pulling ahead with all of their cards except the Fiji-based ones. Fiji is bottlenecked on the front end and its caching has issues keeping the CUs operating efficiently.

What we're seeing is a 290X surpassing a GTX 780/780 Ti and even a GTX 970 as newer titles are released. The R9 390X is giving the GTX 980 a run for its money too.

The only card AMD can't touch is the GTX 980 Ti, though the Fury X is doing quite well unless an overclock is present.
 
It should be noted that Nvidia GPUs do have DMA engines which allow for asynchronous copy operations, so in a sense they do support multi-engine to a degree, but they may only be able to interleave copies with the 3D queue or the compute queue separately ...

Just like Nvidia, AMD also features DMA engines, but where they differ is that AMD has dedicated compute engines that let them run compute shaders concurrently with the 3D engine, which means that AMD can have all three different types of queue running simultaneously!

I don't know about Intel ...

As far as performance gains go, I would say that asynchronous compute shaders are the bigger advantage of the two, since they let you tap into drastically different resources on the GPU. And despite consoles featuring UMA, which is currently faster than PCIe, data transfers through that bus don't appear to be a bottleneck in most AAA PC games, if any at all ...
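For what it's worth, the three-engine split being described maps directly onto D3D12's command queue types. A minimal C++ sketch, assuming you already have a valid ID3D12Device* from device creation (illustration only, no error handling):

// Create one queue per engine type: DIRECT (3D), COMPUTE, and COPY (DMA).
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateEngineQueues(ID3D12Device* device)
{
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue, copyQueue;
    D3D12_COMMAND_QUEUE_DESC desc = {};

    // 3D engine: accepts graphics, compute, and copy work.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    // Compute engine: compute and copy work only; on GCN these map onto the
    // dedicated compute engines (ACEs) mentioned above.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Copy engine: maps onto the DMA engines both vendors expose, so transfers
    // can overlap with 3D and compute work.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copyQueue));
}

Whether the hardware actually runs the three queues concurrently, rather than serializing them, is exactly the vendor difference being debated here; the API itself only expresses the queues.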
 
The thing which irks many developers about GameWorks is that the code is closed source. The shaders tend to be optimized for the CUDA architecture, meaning that GameWorks, aside from the tessellation hoopla, utilizes long-running shaders. This is perfectly suited to the 32-lane-wide SIMD design which CUDA incorporates. GCN uses a 16-wide SIMD design, so GCN likes many simple shaders working in parallel.

GameWorks, being closed source, forces AMD to do guesswork when optimizing a game's profile: guessing which shader was used and replacing it with AMD-optimized shaders. AMD doesn't have as great a driver team as NVIDIA, so drivers take time to release (Game Ready drivers).

OpenGPU will eliminate this, exposing the code to developers and to any GPU maker, who can then supply devs with shaders optimized for their architectures.

It also reduces the load on the GPU driver teams, making developer relations more important and costing less in software R&D.
 
Fiji is bottlenecked on the front end and its caching has issues keeping the CUs operating efficiently.
Actually, it is because Fiji is a mish-mash of AMD architectures, both current technology and future technology that AMD will use.

From Hawaii, AMD put the caching system and shaders into Fiji.
From Tonga, they added color compression.
From Polaris, they took the HBM memory controller.

But the biggest problem we see in Fiji is the first bit, the cache system: 16 KB of L1 cache per CU and 128 KB of L2 cache per CU. The problem is that you have 512 GB/s of bandwidth that cannot be utilized, simply because of the small caches.

The effect? We have seen slides showing how inefficiently the Fury X utilizes its memory bandwidth.

Polaris is supposed to get rid of this. Let's hope they finally got it right.
 
Actually, it is because Fiji is a mish-mash of AMD architectures, both current technology and future technology that AMD will use.

From Hawaii, AMD put the caching system and shaders into Fiji.
From Tonga, they added color compression.
From Polaris, they took the HBM memory controller.

But the biggest problem we see in Fiji is the first bit, the cache system: 16 KB of L1 cache per CU and 128 KB of L2 cache per CU. The problem is that you have 512 GB/s of bandwidth that cannot be utilized, simply because of the small caches.

The effect? We have seen slides showing how inefficiently the Fury X utilizes its memory bandwidth.

Polaris is supposed to get rid of this. Let's hope they finally got it right.

GPUs don't need a whole lot of cache; to hide latencies they process multiple wavefronts/warps ...
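As a back-of-the-envelope illustration of that latency-hiding point (the cycle counts below are made-up assumptions for the sake of the arithmetic, not vendor specs):

// If a memory access takes ~400 cycles and each wavefront has ~40 cycles of
// independent ALU work to issue before stalling on that access, the scheduler
// needs roughly 400/40 = 10 resident wavefronts per SIMD to keep the ALUs
// busy without any help from caches.
#include <iostream>

int main() {
    const int memory_latency_cycles = 400;    // assumed DRAM round-trip
    const int alu_cycles_per_wavefront = 40;  // assumed independent work per wavefront
    const int wavefronts_needed =
        (memory_latency_cycles + alu_cycles_per_wavefront - 1) / alu_cycles_per_wavefront;
    std::cout << "Wavefronts per SIMD to hide latency: " << wavefronts_needed << "\n";
    return 0;
}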
 
OpenGPU will eliminate this, exposing the code to developers and to any GPU maker, who can then supply devs with shaders optimized for their architectures.

It also reduces the load on the GPU driver teams, making developer relations more important and costing less in software R&D.

Not going to happen for this reason:

NV has more $$ to sponsor game developers and studios with PR deals and incentives to join GameWorks instead of going with AMD's open source approach.

Games that NV sponsors will not feature Async Compute until NV's hardware is ready.

Tomb Raider was originally an AMD game, with TressFX 3.0 and Async Compute being showcased and advertised. NV somehow managed to take over the sponsorship and ship the game on their GPUs: DX12/Async Compute was removed (even the devs said that's what the Xbone version originally ran on), TressFX was renamed to PureHair, and the pre-release builds ran very poorly on AMD GPUs.

What we can learn from this is that NV has the $$ to throw around to bribe developers to go down a route that is best for NV at the expense of AMD. We know AMD simply cannot compete with this approach, since they lack the $$.
 
NV has more $$ to sponsor game developers and studios with PR deals and incentives to join GameWorks instead of going with AMD's open source approach.

In other words, NVIDIA does what it needs to in order to make sure games run best on NVIDIA hardware, good to know.

Games that NV sponsors will not feature Async Compute until NV's hardware is ready.

Cool, happy to know NVIDIA's got its customers' back!

Tomb Raider was originally an AMD game, with TressFX 3.0 and Async Compute being showcased and advertised. NV somehow managed to take over the sponsorship and ship the game on their GPUs: DX12/Async Compute was removed (even the devs said that's what the Xbone version originally ran on), TressFX was renamed to PureHair, and the pre-release builds ran very poorly on AMD GPUs.

Nice, all the more reason for me to prefer NVIDIA hardware, since I know NV's got my back.

What we can learn from this is that NV has the $$ to throw around to bribe developers to go down a route that is best for NV at the expense of AMD. We know AMD simply cannot compete with this approach, since they lack the $$.

Sucks for AMD and its customers, doesn't it? No wonder NV has 80% market share -- they take care of their customers.

Look, all that matters is the end result. If NV makes the better gear for the stuff I want to play, I'm buying NV cards. If AMD gets it done and makes its cards superior for my use, then I'll buy them. No use sitting around whining that NV does things to make sure that its hardware runs modern games better.
 
None of those things Silverforce mentioned improved performance on NVIDIA hardware; all they did was potentially decrease performance on AMD hardware. How do end users with NVIDIA graphics cards benefit?

Yeah, NV wins, not by increasing their own performance but by limiting the other side's. What benefit is that to any of us? The only one it helps is NVIDIA. NVIDIA users aren't affected one way or the other, except that if they want to buy a new card for better performance, they're not being offered all the options.
 
Tomb Raider was originally an AMD game, with TressFX 3.0 and Async Compute being showcased and advertised. NV somehow managed to take over the sponsorship and ship the game on their GPUs: DX12/Async Compute was removed (even the devs said that's what the Xbone version originally ran on), TressFX was renamed to PureHair, and the pre-release builds ran very poorly on AMD GPUs.
Yeah, and on the CPU side it was custom-tailored to run on six AMD cores, yet on PCs it runs better on a single AMD module (or one core + HT on Intel). Somehow Intel intervened and turned a multithreaded game into a single-threaded one... poor AMD gets screwed over left and right... right?

At some point people have to see past the conspiracy theories and consider the fact that the consoles are incredibly weak, and what works well for them won't necessarily work well on PC, hence the many incredibly bad console ports of recent years.

Built-in (pre-scripted) benchmarks will always work better on AMD GPUs. A wicked mind would say that AMD makes uarchs (GCN) exclusively for benchmarks, since that is pretty much the only scenario where you will see enough "action" at once for AMD's GPUs to get an edge. But somehow this still makes people think the cards will perform exactly the same way in gameplay.
 
In other words, NVIDIA does what it needs to in order to make sure games run best on NVIDIA hardware, good to know.

Cool, happy to know NVIDIA's got its customers' back!

Nice, all the more reason for me to prefer NVIDIA hardware, since I know NV's got my back.

Sucks for AMD and its customers, doesn't it? No wonder NV has 80% market share -- they take care of their customers.

Look, all that matters is the end result. If NV makes the better gear for the stuff I want to play, I'm buying NV cards. If AMD gets it done and makes its cards superior for my use, then I'll buy them. No use sitting around whining that NV does things to make sure that its hardware runs modern games better.

I've rarely seen a better argument against the invisible hand and the idea that consumers actually exercise market power in a beneficial way for themselves.

Yeah, and on the CPU side it was custom-tailored to run on six AMD cores, yet on PCs it runs better on a single AMD module (or one core + HT on Intel). Somehow Intel intervened and turned a multithreaded game into a single-threaded one... poor AMD gets screwed over left and right... right?

At some point people have to see past the conspiracy theories and consider the fact that the consoles are incredibly weak, and what works well for them won't necessarily work well on PC, hence the many incredibly bad console ports of recent years.

You're arguing that computers are different from consoles, therefore something that works well on consoles shouldn't be used even though it works well on computers as well? Good job!
 
Built-in (pre-scripted) benchmarks will always work better on AMD GPUs. A wicked mind would say that AMD makes uarchs (GCN) exclusively for benchmarks, since that is pretty much the only scenario where you will see enough "action" at once for AMD's GPUs to get an edge. But somehow this still makes people think the cards will perform exactly the same way in gameplay.

We were discussing DX12 in general and you come up with something this random?
 
None of those things Silverforce mentioned improved performance on NVIDIA hardware; all they did was potentially decrease performance on AMD hardware. How do end users with NVIDIA graphics cards benefit?

Yeah, NV wins, not by increasing their own performance but by limiting the other side's. What benefit is that to any of us? The only one it helps is NVIDIA. NVIDIA users aren't affected one way or the other, except that if they want to buy a new card for better performance, they're not being offered all the options.

Not just decreased performance on AMD hardware, but on the PC platform in general.

If we want to see what OpenGPU is capable of, image-quality- and performance-wise, look at Battlefront.

Frostbite 3 should showcase this even more once it moves to DX12.
 
You're arguing that computers are different from consoles, therefore something that works well on consoles shouldn't be used even though it works well on computers as well? Good job!

They have different strengths and weaknesses, so you have to take them into account. Some methods that work well on console won't work on PC, especially since there are so many possible combinations of hardware.
 
The thing which irks many developers about GameWorks is that the code is closed source. The shaders tend to be optimized for the CUDA architecture, meaning that GameWorks, aside from the tessellation hoopla, utilizes long-running shaders. This is perfectly suited to the 32-lane-wide SIMD design which CUDA incorporates. GCN uses a 16-wide SIMD design, so GCN likes many simple shaders working in parallel.

GameWorks, being closed source, forces AMD to do guesswork when optimizing a game's profile: guessing which shader was used and replacing it with AMD-optimized shaders. AMD doesn't have as great a driver team as NVIDIA, so drivers take time to release (Game Ready drivers).

OpenGPU will eliminate this, exposing the code to developers and to any GPU maker, who can then supply devs with shaders optimized for their architectures.

It also reduces the load on the GPU driver teams, making developer relations more important and costing less in software R&D.

This webpage you linked, http://ext3h.makegames.de/DX12_Compute.html, says that short-running shaders are best for NVIDIA cards.
 
None of those things Silverforce mentioned improved performance on NVIDIA hardware; all they did was potentially decrease performance on AMD hardware. How do end users with NVIDIA graphics cards benefit?

Yeah, NV wins, not by increasing their own performance but by limiting the other side's. What benefit is that to any of us? The only one it helps is NVIDIA. NVIDIA users aren't affected one way or the other, except that if they want to buy a new card for better performance, they're not being offered all the options.
Nvidia may have gotten less performance too; not going DX12 is one clue pointing to that.

So let's say that until NV's sponsorship, NVIDIA performed at 100 and AMD at 110, and after the sponsorship NVIDIA performed at 95 and AMD at 90. They are hurting themselves, but they hurt AMD even more.

NVIDIA users don't get to use their flashy FL12.1 cards on DX12 (because we all know the higher the FL, the better the DX12 perf. Right??? Right??), and they don't get to use AC (you know, that feature their cards support, like, totally). Those two features are performance enhancers if your hardware properly supports them. Obviously AC would have been a no-go for Maxwell, but they could have still made some use of going DX12.

But hey, that API that they promoted so much and said they worked on with MS for so long? Now they are kinda trying to sabotage its adoption rate. Who would have known, Nvidia trying to push back innovation. That sure is a new one.
 
In other words, NVIDIA does what it needs to in order to make sure games run best on NVIDIA hardware, good to know.

All Nvidia and their GameWorks do is make games less efficient, more buggy, and lastly less enjoyable.

Cool, happy to know NVIDIA's got its customers' back!
Oh, you mean cool that Nvidia bribes developers not to develop better and more efficient games. 😀

Nice, all the more reason for me to prefer NVIDIA hardware, since I know NV's got my back.
Yeah, NV's got your back as long as their latest gen launches; then you'd better upgrade or get shafted.

Sucks for AMD and its customers, doesn't it? No wonder NV has 80% market share -- they take care of their customers.
Sure they do. Ask those poor Kepler owners 😀

Look, all that matters is the end result. If NV makes the better gear for the stuff I want to play, I'm buying NV cards. If AMD gets it done and makes its cards superior for my use, then I'll buy them. No use sitting around whining that NV does things to make sure that its hardware runs modern games better.
Telling a lie over and over again does not make it true. GameWorks causes games to perform poorly on both Nvidia and AMD hardware. The only thing it does is make things slightly more bearable on Nvidia's latest gen. :thumbsdown:
 
Not going to happen for this reason:

NV has more $$ to sponsor game developers and studios with PR deals and incentives to join GameWorks instead of going with AMD's open source approach.

Games that NV sponsors will not feature Async Compute until NV's hardware is ready.

Tomb Raider was originally an AMD game, with TressFX 3.0 and Async Compute being showcased and advertised. NV somehow managed to take over the sponsorship and ship the game on their GPUs: DX12/Async Compute was removed (even the devs said that's what the Xbone version originally ran on), TressFX was renamed to PureHair, and the pre-release builds ran very poorly on AMD GPUs.

What we can learn from this is that NV has the $$ to throw around to bribe developers to go down a route that is best for NV at the expense of AMD. We know AMD simply cannot compete with this approach, since they lack the $$.

If there really isn't a way to get good DX12 performance out of current-gen Nvidia GPUs, you can see how that would happen without any interference from Nvidia (but still be essentially caused by NV; it's hard for something to gain any headway when it's not really supported by most of the market).

You're duplicating all your DX work (and everything downstream of that: testing, etc.) for GCN users running Windows 10 and maybe a couple of Skylake laptop iGPUs. That's a worse proposition than Mantle was (which at least wasn't tied to one specific Windows release), and it's not hard to understand how management could have looked at that and decided it wasn't worthwhile.

Especially after how buggy the first game's rendering was for so long...
 
Alright, complete dummy question here: the R9 280X supports DX12? I thought it was a DX11-point-something card?

Don't confuse feature sets with supporting the API. DX11.2 is a feature set, just like 12_1 is. You can support DX12_1 features without supporting every DX12 function.

Built-in (pre-scripted) benchmarks will always work better on AMD GPUs. A wicked mind would say that AMD makes uarchs (GCN) exclusively for benchmarks, since that is pretty much the only scenario where you will see enough "action" at once for AMD's GPUs to get an edge. But somehow this still makes people think the cards will perform exactly the same way in gameplay.
What are you basing this on?
 
Don't confuse feature sets with supporting the API. DX11.2 is a feature set, just like 12_1 is. You can support DX12_1 features without supporting every DX12 function.
This.
We had the same situation with DX11 when it came out; it supported SM3 (DX9) level cards.

DX12/Vulkan will be hard for coders, but I really hope the pitfalls will be somewhat similar across GPUs, so developers can use a similar code path on most cards.
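To make the feature-set-versus-API point concrete, here's a minimal C++ sketch using the real D3D12 entry points (adapter plumbing omitted): a DX11-era card like the 280X can create a full DX12 device at feature level 11_0, and the optional features are queried separately.

// Any adapter with a DX12 driver can create a device at feature level 11_0,
// even if the hardware never supports the 12_0/12_1 feature sets.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool SupportsDX12Api(IUnknown* adapter)
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return false; // no DX12 driver at all

    // Supporting the API does not imply supporting every optional feature;
    // those are reported piecemeal, e.g. resource binding and conservative
    // rasterization tiers.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    // Inspect options.ResourceBindingTier, options.ConservativeRasterizationTier, etc.
    return true;
}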
 
Tomb Raider was originally an AMD game, with TressFX 3.0 and Async Compute being showcased and advertised. NV somehow managed to take over the sponsorship and ship the game on their GPUs: DX12/Async Compute was removed (even the devs said that's what the Xbone version originally ran on), TressFX was renamed to PureHair, and the pre-release builds ran very poorly on AMD GPUs.

Interesting indeed: there are DX12 files in the Steam release build, and the launcher had an "Enable DX12" option which was removed.

http://www.overclock3d.net/articles...2_option_appears_on_rise_of_the_tomb_raider/1

I wonder if we will ever get that "DX12 patch"...
 
ARK: Survival Evolved was supposed to get a DX12 patch in late August 2015, but guess what? They still haven't released it.
 