Ashes of the Singularity Beta1 DirectX 12 Benchmarks

Feb 19, 2009
10,457
10
76
I am with you. But the fact stands: DX12 is slower than DX11 in this "game". Which makes it clear that either Oxide has no clue what they are doing or they just don't care about nVidia.

There is no explanation for why DX12 should be slower on nVidia hardware.

There is, but you don't want to acknowledge it.

It's called Async Compute.

Context switching between compute & graphics hurts NV performance. They say so in their programming guide.

https://developer.nvidia.com/sites/...works/vr/GameWorks_VR_2015_Final_handouts.pdf

p31

All our GPUs for the last several years do context switches at draw call boundaries. So when the GPU wants to switch contexts, it has to wait for the current draw call to finish first.

So, even with timewarp being on a high-priority context, it’s possible for it to get stuck behind a long-running draw call on a normal context.

For instance, if your game submits a single draw call that happens to take 5 ms, then async timewarp might get stuck behind it, potentially causing it to miss vsync and cause a visible hitch.

Best to leave it running in DX11 serial mode.

Look at this, the 2nd time it was benched: http://www.computerbase.de/2015-10/...ashes-of-the-singularity-directx-12-1920-1080

NV performance does not decline in DX12 vs DX11. This was after Dan Baker posted to say they disabled Async Compute at NV's request.

Now the most recent beta, we see NV lose performance in DX12 vs DX11 again like it did in the 1st benchmark... looks like Oxide enabled Async Compute again.

AMD reps have said as much, that their uarch has no penalty for context switching, unlike NV's (this was posted a while ago on Twitter, Reddit & Facebook), and GCN is the only GPU uarch that has functional Async Compute.

https://community.amd.com/community...blade-stealth-plus-razer-core-gaming-solution

AMD's advanced Graphics Core Next (GCN) architecture, which is currently the only architecture in the world that supports DX12's asynchronous shading.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
NV performance does not decline in DX12 vs DX11. This was after Dan Baker posted to say they disabled Async Compute at NV's request.

So they disabled "Async Compute" after they found problems. Which means they haven't cared about nVidia hardware. D:

Now the most recent beta, we see NV lose performance in DX12 vs DX11 again like it did in the 1st benchmark... looks like Oxide enabled Async Compute again.
They have always lost performance when GPU-limited. The current benchmark in the beta doesn't use the same settings.

AMD reps have said as much, that their uarch has no penalty for context switching, unlike NV's (this was posted a while ago on Twitter, Reddit & Facebook), and GCN is the only GPU uarch that has functional Async Compute.

https://community.amd.com/community...blade-stealth-plus-razer-core-gaming-solution
Yeah. That doesn't explain why DX12 is slower than DX11 on nVidia hardware. Or wait, it does: Oxide did it intentionally. :thumbsdown:
 
Feb 19, 2009
10,457
10
76
So they disabled "Async Compute" after they found problems. Which means they haven't cared about nVidia hardware. D:

You must not have paid any attention last year when it was revealed that Kepler & Maxwell cannot run rendering & compute at the same time, i.e. they do not support async compute.

This surprised Oxide because NV told them they could and their drivers reported that they could, but when they used it, performance dropped. This started the whole drama between Oxide, NV and the folks at B3D, until the MS GPU viewer finally showed that Maxwell accepts the Async Compute call but then runs it in serial mode anyway.

So they then turned it off at NV's request.

Does that mean Oxide doesn't care about NV hardware like you claim?

I don't know, it could be. They are AMD sponsored so they could just deliberately gimp NV hardware... who knows.

I just raise AC as a possible reason why NV perf drops in DX12, due to the overhead of accepting an Async Compute queue then having to re-assign it back into a normal serial mode queue.
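For anyone who wants to see what that means at the API level, here's a minimal D3D12 sketch (my own illustration, not Oxide's code; the names are made up) of the two submission paths: compute on its own queue vs. compute folded into the direct queue. The API accepts both on any DX12 GPU; whether the async path actually overlaps with graphics, or the driver quietly serializes the two queues, is entirely up to the hardware/driver.

[code]
// Minimal illustration of "async compute" vs. the serial fallback in D3D12.
// Names and structure are hypothetical; fences between the queues are omitted.
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue*        directQueue,       // D3D12_COMMAND_LIST_TYPE_DIRECT
                 ID3D12CommandQueue*        computeQueue,      // D3D12_COMMAND_LIST_TYPE_COMPUTE
                 ID3D12GraphicsCommandList* gfxList,           // graphics work, already recorded
                 ID3D12GraphicsCommandList* asyncComputeList,  // compute work on a COMPUTE-type list
                 ID3D12GraphicsCommandList* serialComputeList, // same compute work on a DIRECT-type list
                 bool useAsyncCompute)
{
    if (useAsyncCompute)
    {
        // Async path: two queues, which may (or may not) overlap on the GPU.
        ID3D12CommandList* gfx[] = { gfxList };
        ID3D12CommandList* cmp[] = { asyncComputeList };
        directQueue->ExecuteCommandLists(1, gfx);
        computeQueue->ExecuteCommandLists(1, cmp);
    }
    else
    {
        // Serial fallback: one queue, compute runs strictly after graphics,
        // which is effectively the DX11-style behaviour described above.
        ID3D12CommandList* lists[] = { gfxList, serialComputeList };
        directQueue->ExecuteCommandLists(2, lists);
    }
}
[/code]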
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
I just raise AC as a possible reason why NV perf drops in DX12, due to the overhead of accepting an Async Compute queue then having to re-assign it back into a normal serial mode queue.

Sure, OK, but with a low-level API isn't it the dev's responsibility to check which codepath runs better and use that? You are not forced to use AC in DX12; you have the choice. It's not difficult to check the GPU's vendor/device ID and run the bench with or without AC active.
So, bottom line, they (Oxide) made bad choices for nVidia.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Sure, OK, but with a low-level API isn't it the dev's responsibility to check which codepath runs better and use that? You are not forced to use AC in DX12; you have the choice. It's not difficult to check the GPU's vendor/device ID and run the bench with or without AC active.
So, bottom line, they (Oxide) made bad choices for nVidia.
nVidia is the only vendor with vendor-specific code optimization in this game; is that a bad choice? nVidia claimed they could do AC when clearly they can't (at least not in a way that helps performance), and you want to blame Oxide?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
nVidia is the only vendor with vendor-specific code optimization in this game; is that a bad choice? nVidia claimed they could do AC when clearly they can't (at least not in a way that helps performance), and you want to blame Oxide?

With DX12 there is only vendor specific code optimization. And even specific uarch code optimization.

Can this game even run on an Intel IGP, assuming you ignore performance?

We already saw how GCN 1.2 could break Mantle performance over GCN 1.0 and 1.1.
 
Feb 19, 2009
10,457
10
76
Sure, OK, but with a low-level API isn't it the dev's responsibility to check which codepath runs better and use that? You are not forced to use AC in DX12; you have the choice. It's not difficult to check the GPU's vendor/device ID and run the bench with or without AC active.
So, bottom line, they (Oxide) made bad choices for nVidia.

Yes, that sounds correct; it is the developer's choice whether to enable Async Compute on hardware that doesn't support it. They need a separate check per IHV to enable/disable features.

We should see it in Beta 2 when they focus on optimizations.

DX12 should not perform slower; basically, worst-case scenario, run compute serially as in DX11 and still get the benefit of lower API overhead on the CPU side.
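A per-IHV check like that is straightforward. Here is a hedged sketch of the idea (my own example, not the game's code): read the adapter's PCI vendor ID through DXGI and use it to pick a default for the async compute path. The policy in the switch is purely illustrative.

[code]
// Hypothetical per-IHV default for the async compute path, based on the
// adapter's PCI vendor ID (standard IDs: AMD 0x1002, NVIDIA 0x10DE, Intel 0x8086).
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool AsyncComputeDefault()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return false;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))   // primary adapter only
        return false;

    DXGI_ADAPTER_DESC1 desc;
    adapter->GetDesc1(&desc);

    switch (desc.VendorId)
    {
    case 0x1002: return true;   // AMD: GCN benefits from async compute
    case 0x10DE: return false;  // NVIDIA: keep the serial path for now
    default:     return false;  // Intel / anything else: untested, leave it off
    }
}
[/code]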
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
With DX12 there is only vendor specific code optimization. And even specific uarch code optimization.

Can this game even run on an Intel IGP, assuming you ignore performance?

We already saw how GCN 1.2 could break Mantle performance over GCN 1.0 and 1.1.

Tell that to the guy who claims bad choices were made for nvidia.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Tell that to the guy who claims bad choices were made for nvidia.

Oxide is AMD sponsored. It would be odd if they didn't focus on their sponsor's uarchs in the first place. But again, does it work on an Intel IGP, and with what performance?

The game is very far from release and was prematurely released due to what seems to be cash flow issues.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I am with you. But the fact stands: DX12 is slower than DX11 in this "game". Which makes it clear that either Oxide has no clue what they are doing or they just don't care about nVidia.

There is no explanation for why DX12 should be slower on nVidia hardware.

I agree with you that nVidia can do no wrong. If their hardware is not capable of DX12 features, it should still be at least as fast as the DX11 path, and shouldn't cause stalls in the GPU pipeline from switching back and forth to compute tasks.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Where is the benchmark for the 290?
I posted mine.
On my 4790K rig @ 4.7GHz and two R9 290s (Sapphire OC Tri-Xs at 1000MHz core):

I had an overall frame rate of 36.4 for DX11 and 39.4 for DX12. Obviously Ashes of the Singularity doesn't yet support CrossFire, since the scores for two R9 290s should be much higher. I noticed the R9 390 is higher, but remembered that even if vcore is nearly the same from the 290 to the 390, the memory is 6GHz on the 390 vs. 5.2GHz on the Sapphire R9 290 OC. I suspect that makes a difference.

With the AMD 290 I had a 3 fps jump going from DX11 to DX12. :thumbsup:
 

zlatan

Senior member
Mar 15, 2011
580
291
136
I am with you. But the fact stands: DX12 is slower than DX11 in this "game". Which makes it clear that either Oxide has no clue what they are doing or they just don't care about nVidia.

There is no explanation for why DX12 should be slower on nVidia hardware.

I think nobody has a clue. The last well-documented architecture from NVIDIA was G80. I do a lot of optimization based on those docs, because I have no clue how the newer architectures work. The best thing I can do is assume that Fermi/Kepler/Maxwell works the same way G80 did. That's why I'm saying it is important to open up the tools and the architecture documents, so I get the ability to learn how this hardware works and can optimize accordingly.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
Oxide is AMD sponsored. It would be odd if they didn't focus on their sponsor's uarchs in the first place. But again, does it work on an Intel IGP, and with what performance?

The game is very far from release and was prematurely released due to what seems to be cash flow issues.

I doubt they're targeting this at IGPs or APUs, and again, when they found performance regressing on nVidia because of async compute, they took steps to accommodate them with vendor-specific optimizations. They're hardly going to ignore the market-share leader now, are they? I just hope we don't end up missing out on AC altogether because it doesn't work for some.
 
Feb 19, 2009
10,457
10
76
I think nobody has a clue. The last well-documented architecture from NVIDIA was G80. I do a lot of optimization based on those docs, because I have no clue how the newer architectures work. The best thing I can do is assume that Fermi/Kepler/Maxwell works the same way G80 did. That's why I'm saying it is important to open up the tools and the architecture documents, so I get the ability to learn how this hardware works and can optimize accordingly.

So you're basically saying DX12/Vulkan is everything NV isn't about, because NV pushes closed-source proprietary stuff, heavy driver optimizations, etc., whereas these new APIs need transparency and open sharing of information to optimize?

Surely they will have to work with devs to provide better access & tools for DX12 games to run well on their hardware, at some point.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Surely they will have to work with devs to provide better access & tools for DX12 games to run well on their hardware, at some point.

Or they will provide nice, optimized GameWorks code to drop into the game. No need for the devs to do any optimizations.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I doubt they're targeting this at IGPs or APUs, and again, when they found performance regressing on nVidia because of async compute, they took steps to accommodate them with vendor-specific optimizations. They're hardly going to ignore the market-share leader now, are they? I just hope we don't end up missing out on AC altogether because it doesn't work for some.

You tap-dance around the question. Will it run on an Intel IGP in DX12 mode, and will DX12 be faster or slower than DX11? Assuming it will run DX12 at all.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
You tap-dance around the question. Will it run on an Intel IGP in DX12 mode, and will DX12 be faster or slower than DX11? Assuming it will run DX12 at all.

Is there a point you're trying to make by bringing up the IGP? Is the game being targeted to run on IGPs? I haven't a clue; do you?
 

Riek

Senior member
Dec 16, 2008
409
15
76
You tap-dance around the question. Will it run on an Intel IGP in DX12 mode, and will DX12 be faster or slower than DX11? Assuming it will run DX12 at all.

If the Intel GPU supports DX12 and answers yes to supporting async compute, then I would assume it will run on DX12 and use async compute (and other DX12 functionality).

That certain hardware limitations make cards perform differently under some APIs is not relevant while you are developing. It becomes relevant when you start optimizing... which they haven't (scheduled for the next stage, apparently).

That people draw conclusions, or want to draw conclusions, about the performance impact of certain features is nothing more than grasping at straws... especially when it is clear no real performance optimization has been done yet.

Note: if one were to enable xxx-lightning and performance were lower on other cards, would it also be the game that isn't optimized well enough, or is it just the hardware showing its limitations?
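For what it's worth, the only check that really answers "does DX12 run on this adapter at all" is to enumerate the adapters and try to create a D3D12 device on each one; a small sketch of that follows (my own example, not from the game). Note that D3D12 has no caps bit for async compute: any D3D12 device will accept a compute queue, and whether that helps performance is a separate, hardware-dependent question.

[code]
// Enumerate adapters (discrete GPUs and the Intel IGP alike) and test whether a
// D3D12 device could be created on each. Passing nullptr for the device pointer
// only checks for support without actually creating the device.
#include <d3d12.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

void ListDx12Adapters()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        HRESULT hr = D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                       __uuidof(ID3D12Device), nullptr);
        wprintf(L"%s: DX12 %s\n", desc.Description,
                SUCCEEDED(hr) ? L"available" : L"not available");
    }
}
[/code]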
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
This is what I've seen so far of the glorious DX12 revolution.

1) NV either paid or used their leverage to coerce/force/convince an AMD funded dev to remove/disable/forget a DX12 feature to not penalize NV's performance. Woof.

2) A game that used DX12 on a crappy console launched with it missing on PC, and this game was coincidentally sponsored/partnered by NV. Hmmm...

Why are we still talking about DX12 performance, more so on a game that isn't slated to launch for a few more months? I wouldn't be surprised if NV shows up at Oxide's door with another bag full of money/swag/puppies and the game launches with a DX11 client/option that puts it ahead of AMD's DX12 performance.

I also wouldn't be surprised if all these other "these games are slated to be DX12" turn out to be DX11 clients.

Maybe when NV decides to put out DX12 hardware/propaganda will it matter. Face it, AMD is screwed. Their own Dev took pity/money on/from Nvidia.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
This is what I've seen so far of the glorious DX12 revolution.

1) NV either paid or used their leverage to coerce/force/convince an AMD funded dev to remove/disable/forget a DX12 feature to not penalize NV's performance. Woof.

2) A game that used DX12 on a crappy console launched with it missing on PC, and this game was coincidentally sponsored/partnered by NV. Hmmm...

Why are we still talking about DX12 performance, more so on a game that isn't slated to launch for a few more months? I wouldn't be surprised if NV shows up at Oxide's door with another bag full of money/swag/puppies and the game launches with a DX11 client/option that puts it ahead of AMD's DX12 performance.

I also wouldn't be surprised if all these other "these games are slated to be DX12" turn out to be DX11 clients.

Maybe when NV decides to put out DX12 hardware/propaganda will it matter. Face it, AMD is screwed. Their own Dev took pity/money on/from Nvidia.

Or perhaps you imagine DX12 is something it isn't.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
If the Intel GPU supports DX12 and answers yes to supporting async compute, then I would assume it will run on DX12 and use async compute (and other DX12 functionality).

That certain hardware limitations make cards perform differently under some APIs is not relevant while you are developing. It becomes relevant when you start optimizing... which they haven't (scheduled for the next stage, apparently).

That people draw conclusions, or want to draw conclusions, about the performance impact of certain features is nothing more than grasping at straws... especially when it is clear no real performance optimization has been done yet.

Note: if one were to enable xxx-lightning and performance were lower on other cards, would it also be the game that isn't optimized well enough, or is it just the hardware showing its limitations?

So you would think it would run in DX12 on an Intel IGP. But no one has gotten it to work yet. So the assumption, until proven otherwise, must be that the game targets specific AMD and NVidia uarchs with its DX12 path.

Is there a point you're trying to make by bringing up the IGP? Is the game being targeted to run on IGPs? I haven't a clue; do you?

Yes, since you claim NVidia is the only one with vendor-specific code. You can of course do so from the rationale that it's an AMD game, hence any DX12 code not running on AMD must be vendor-specific.