ComputerBase Ashes of the Singularity Beta 1 DirectX 12 Benchmarks


railven

Diamond Member
Mar 25, 2010
6,604
561
126
Interesting info on the Hitman thing. Makes me wonder if it will bring extra image quality or just performance gains. Also makes me wonder if it will mirror Ashes, i.e., an AMD user not using DX12 is screwed.

Secondly, I wonder if NV's DX11 path performance would be equal to AMD's DX12 path performance.

So many questions, but I look forward to some of these answers. After all the DX12 sales pitch and the promised future of (more than 100% GPU utilization? WAAAAAT?), it's about time we get something concrete.
 

dacostafilipe

Senior member
Oct 10, 2013
797
298
136
I've only seen max/min/avg FPS comparisons of Ashes between DX11 and DX12.

Has anybody seen a frametime variation graph somewhere? Thx
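In the meantime, here's a minimal C++ sketch of how frametime variation could be summarized from a per-frame log (the log file name and format, one frame time in milliseconds per line, are my own assumptions, not anything the benchmark actually outputs):

```cpp
// frametime_stats.cpp - summarize frame time variation from a plain-text log
// containing one frame time (in milliseconds) per line.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::ifstream log("ashes_frametimes.txt");  // hypothetical capture file
    std::vector<double> ms;
    for (double t; log >> t; ) ms.push_back(t);
    if (ms.empty()) { std::cerr << "no samples\n"; return 1; }

    std::sort(ms.begin(), ms.end());
    const double avg = std::accumulate(ms.begin(), ms.end(), 0.0) / ms.size();
    const double p99 = ms[static_cast<size_t>(0.99 * (ms.size() - 1))];

    std::cout << "avg " << avg << " ms, min " << ms.front() << " ms, max "
              << ms.back() << " ms, 99th percentile " << p99 << " ms\n";
    return 0;
}
```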
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
SilverForce11, I'm at work right now, but when I get home this evening I'll try to run the Ashes of the Singularity benchmark on both of my rigs below to see if I can detect the differences you point out.
 

Spjut

Senior member
Apr 9, 2011
931
160
106
Doesn't Nvidia support Asynchronous Compute? No. Under DX12, Nvidia does not support Asynchronous Compute.

Could Nvidia support it via NVAPI? Like how Nvidia did support certain DX10.1 features on its Tesla GPUs despite not being fully compatible?

And is the asynchronous compute problem something that could be solved relatively easy in a minor DX12 revision?
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Could Nvidia support it via NVAPI? Like how Nvidia did support certain DX10.1 features on its Tesla GPUs despite not being fully compatible?

And is the asynchronous compute problem something that could be solved relatively easy in a minor DX12 revision?
The Async Compute issue is likely due to Nvidia not supporting some mundane DX12 requirement in hardware.

Nvidia does support Asynchronous Compute when working with CUDA. An example of this is PhysX under DX11: titles which use PhysX are using CUDA and, by extension, execute compute and graphics commands concurrently (though not asynchronously).

For full asynchronous compute support, Nvidia would need to incorporate ACE-like units in their hardware. That is what Nvidia lacks right now; their scheduler relies heavily on the software driver, and much of CUDA's Hyper-Q solution (Nvidia's async compute-like feature) is a software implementation.

This leads me to believe that under Vulkan, Nvidia may be able to support concurrent execution of graphics and compute workloads.
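For anyone wondering what "asynchronous compute" even looks like from the application side, here is a minimal DX12 sketch (device creation and error handling omitted; this is my own illustration, not Oxide's code). The game simply creates a second command queue of type COMPUTE next to the usual DIRECT (graphics) queue, and it is entirely up to the hardware and driver whether work on the two actually overlaps:

```cpp
// Minimal DX12 sketch: a graphics (DIRECT) queue plus a separate COMPUTE queue.
// Whether submissions to the two queues actually execute concurrently is
// decided by the GPU and driver, not by the API.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // On GCN, ACE-style hardware schedulers can interleave the two queues;
    // a driver may instead serialize them if the hardware cannot.
}
```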
 

Spjut

Senior member
Apr 9, 2011
931
160
106
Sorry if I'm getting on your nerves now, but I guess what I'm asking is, why can't Microsoft and Nvidia make an additional DX12 Asynchronous Compute feature adapted to Kepler/Maxwell?
 

dogen1

Senior member
Oct 14, 2014
739
40
91
This leads me to believe that under Vulkan, Nvidia may be able to support concurrent execution of graphics and compute workloads.

If you're right, that's great news for AMD and Nvidia users alike.

It definitely seems likely considering CUDA, and how Vulkan is designed to work well for all vendors.
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
Sorry if I'm getting on your nerves now, but I guess what I'm asking is, why can't Microsoft and Nvidia make an additional DX12 Asynchronous Compute feature adapted to Kepler/Maxwell?

It can't if the hardware doesn't support it.
 

dogen1

Senior member
Oct 14, 2014
739
40
91
It can't if the hardware doesn't support it.

He's asking why they can't add a workaround for Maxwell that would allow it to do what it already seems able to do with CUDA.

I have no idea. It's probably more complex than you and I realize anyway.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
But he says Nvidia has a similar solution that works when using CUDA

Nvidia has no hardware solution ...

They can only run multiple compute queues but they can't interleave graphics and compute queues ...

Nvidia may support concurrent copy queues but it doesn't go any further than that ...
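To make the distinction concrete, here is a rough sketch (my own illustration, assuming the queues and command lists already exist) of how DX12 expresses a dependency between a compute queue and a graphics queue. The API only defines ordering; a driver whose hardware cannot interleave the two queue types can simply serialize the submissions:

```cpp
// Sketch: a GPU-side dependency between a compute queue and a graphics queue.
// The fence only defines ordering; concurrency (or the lack of it) is up to
// the hardware and driver.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitWithDependency(ID3D12Device* device,
                          ID3D12CommandQueue* computeQueue,
                          ID3D12CommandQueue* gfxQueue,
                          ID3D12CommandList* const* computeLists, UINT numCompute,
                          ID3D12CommandList* const* gfxLists, UINT numGfx)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    computeQueue->ExecuteCommandLists(numCompute, computeLists);
    computeQueue->Signal(fence.Get(), 1);   // fence reaches 1 when compute is done

    gfxQueue->Wait(fence.Get(), 1);         // GPU-side wait, no CPU stall
    gfxQueue->ExecuteCommandLists(numGfx, gfxLists);
}
```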
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Sorry if I'm getting on your nerves now, but I guess what I'm asking is, why can't Microsoft and Nvidia make an additional DX12 Asynchronous Compute feature adapted to Kepler/Maxwell?
I'm sure they could, but Microsoft releases standards and expects vendors to comply. Nvidia probably made a mistake in their implementation. Since DX12 is a standard to follow, Nvidia will have to incorporate a fix in future GPU designs.

Graphics manufacturers must comply with API requirements. Not the other way around.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Really bad quality images.

Biggest difference is missing Bloom.
Not really sure if there is actually any difference in the quality of the lighting (done by their 'object space rendering' approach, which is basically shading objects in texture space, i.e. Reyes-style shading in textures instead of micropolygons).

Bloom? The biggest difference? Do you mean the lighting? Lots of lighting. There's also a lot less smoke, possibly done with particles. When you look at scenes where there are multiple fighters moving quickly, the nVidia rendering is a stutterfest.

This is a night and day difference. nVidia said they were going to implement the DX12 async compute in drivers. Are they? Lots of people here seemed to think it was a simple task with nVidia's driver team, money, and software capabilities.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
I'm sure they could, but Microsoft releases standards and expects vendors to comply. Nvidia probably made a mistake in their implementation. Since DX12 is a standard to follow, Nvidia will have to incorporate a fix in future GPU designs.

Graphics manufacturers must comply with API requirements. Not the other way around.

Yep, if you want the spec changed, you join the group and lobby for it. If you want the already-codified part of the spec that people have had a chance to implement against changed (or heck, an experimental feature that's been around long enough that people have implementations), good luck with that. It's not going to happen.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Bloom? The biggest difference? Do you mean the lighting? Lots of lighting. There's also a lot less smoke, possibly done with particles. When you look at scenes where there are multiple fighters moving quickly, the nVidia rendering is a stutterfest.

This is a night and day difference. nVidia said they were going to implement the DX12 async compute in drivers. Are they? Lots of people here seemed to think it was a simple task with nVidia's driver team, money, and software capabilities.
From information I've received (source), they were working on doing just that, but the solution hammered the CPU hard under AotS, mostly because the CPU is already hammered hard under that title.

It's been several months since then (I received the info back in October). It looks to me like the idea was abandoned and that Nvidia will instead push DX11 + GameWorks, partnering with studios in order to counter what would otherwise be a rather embarrassing scenario.

All that having been said, the GTX 980 Ti is an incredible piece of hardware. It's every card below it that really lacks any sort of punch.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Sorry if I'm getting on your nerves now, but I guess what I'm asking is, why can't Microsoft and Nvidia make an additional DX12 Asynchronous Compute feature adapted to Kepler/Maxwell?
This is Nvidia we're talking about. Why would they do this? It just means fewer reasons to upgrade to Pascal.
 
Feb 19, 2009
10,457
10
76
Bloom? The biggest difference? Do you mean the lighting? Lots of lighting. There's also a lot less smoke, possibly done with particles. When you look at scenes where there are multiple fighters moving quickly, the nVidia rendering is a stutterfest.

This is a night and day difference. nVidia said they were going to implement the DX12 async compute in drivers. Are they? Lots of people here seemed to think it was a simple task with nVidia's driver team, money, and software capabilities.

This was the original Ashes unveil very early on, running on Mantle.

https://www.youtube.com/watch?v=t9UACXikdR0&feature=youtu.be&t=101

You notice a lot of dynamic lights on all the projectiles, lots of smoke trails, basically similar to the images we see on the side by side comparison for the 390X.

Notice Oxide specifically mentions "Thanks to Mantle & you can do this on DX12 or Vulkan too... every single shot is casting light." You can clearly see dynamic lighting of the projectiles.

I noticed it was clearly absent in the first alpha benchmark that tech sites did.

On the 970 @ Digital Foundry
http://www.eurogamer.net/articles/digitalfoundry-2015-ashes-of-the-singularity-dx12-benchmark-tested


There is absolutely zero dynamic light casting in that initial release for NV GPUs. o_O

Compared to the 390 @ Digital Foundry


Notice there are some dynamic lights, but far fewer than in the initial Mantle reveal and the recent 390X side-by-side video.

Then it makes sense when you put that into context with what Oxide has said: they disabled async compute at NV's request, and there is indeed a vendor-specific path, for NV only.

The Radeons are rendering a scene with many dynamic light sources all that time, while the NV GPUs, according to these shots from the tech sites above, are not.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
The specs showcase DX11 for Nvidia cards; the DX12 path is being added by a joint venture between I/O and AMD's Gaming Evolved initiative.

The game would require two separate DX12 paths in order to support Nvidia; unless Nvidia spends the money to implement their own path, it won't get done.

Nvidia cards will not be able to run the AMD path as they don't support Asynchronous Compute.

You said something like this before on OCN, and I still wonder how accurate it is. I guess we'll see when the game comes out whether DX12 works on Nvidia cards.

My thinking is they simply run the compute tasks through the graphics queue for Nvidia. It would just be slower.

What's the verdict on the side-by-side video? I don't think it's just the monitor, because even then you would see the effects, just with less brightness etc.
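For what it's worth, the "run it through the graphics queue" fallback is straightforward at the API level. A rough sketch (my own illustration; the pipeline state and root signature are assumed to be set up elsewhere) of recording the same compute work into the graphics command list, so it executes in line with rendering rather than on a separate compute queue:

```cpp
// Sketch of the fallback described above: compute work recorded into the
// graphics (DIRECT) command list, so it runs between draw calls instead of
// concurrently with them on a dedicated COMPUTE queue.
#include <d3d12.h>

void RecordComputeInline(ID3D12GraphicsCommandList* gfxList,
                         ID3D12PipelineState* computePso,
                         ID3D12RootSignature* computeRootSig,
                         UINT groupsX, UINT groupsY)
{
    gfxList->SetPipelineState(computePso);
    gfxList->SetComputeRootSignature(computeRootSig);
    gfxList->Dispatch(groupsX, groupsY, 1);  // serialized with the graphics work
}
```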
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
You said something like this before on OCN, and I still wonder how accurate it is. I guess we'll see when the game comes out whether DX12 works on Nvidia cards.

My thinking is they simply run the compute tasks through the graphics queue for Nvidia. It would just be slower.

What's the verdict on the side-by-side video? I don't think it's just the monitor, because even then you would see the effects, just with less brightness etc.
Sure, you can run the compute tasks sequentially in the graphics queue, but the DX12 path, for Hitman, is built around GCN and Asynchronous Compute. The shaders being used will be GCN-optimized (it's an AMD Gaming Evolved title). You end up with the same scenario we had with AotS: the developer couldn't get even a GTX 980 Ti to run the code at a playable frame rate, so Oxide worked with Nvidia and implemented a vendor-ID-specific path without Async Compute and with Nvidia-optimized shaders. That's what Nvidia runs today in AotS.

AotS barely used Async Compute, as per Oxide. Hitman will be leveraging Asynchronous Compute more than any PC title to date; in other words, far more compute workloads than AotS. If you run those sequentially, you'll likely hit a wall on a GTX 980 Ti.

But you're right, only time will tell.
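As an aside, a vendor-ID-specific path is not exotic. Here's a minimal sketch (my own illustration, not Oxide's code, and the policy in it is purely hypothetical) of how an engine can pick a render path per vendor using the PCI vendor ID that DXGI reports for each adapter:

```cpp
// Sketch: selecting a render path by GPU vendor. DXGI exposes the PCI vendor
// ID of each adapter (0x10DE = NVIDIA, 0x1002 = AMD).
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool UseAsyncComputePath()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return false;

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) == DXGI_ERROR_NOT_FOUND) return false;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    return desc.VendorId != 0x10DE;  // hypothetical policy: skip async on NVIDIA
}
```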
 
Last edited:

Mahigan

Senior member
Aug 22, 2015
573
0
0
This was the original Ashes unveil very early on, running on Mantle.

https://www.youtube.com/watch?v=t9UACXikdR0&feature=youtu.be&t=101

You notice a lot of dynamic lights on all the projectiles, lots of smoke trails, basically similar to the images we see on the side by side comparison for the 390X.

Notice Oxide specifically mentions "Thanks to Mantle & you can do this on DX12 or Vulkan too... every single shot is casting light." You can clearly see dynamic lighting of the projectiles.

I noticed it was clearly absent in the first alpha benchmark that tech sites did.

On the 970 @ Digital Foundry
http://www.eurogamer.net/articles/digitalfoundry-2015-ashes-of-the-singularity-dx12-benchmark-tested


There is absolutely zero dynamic light casting in that initial release for NV GPUs. o_O

Compared to the 390 @ Digital Foundry


Notice there are some dynamic lights, but far fewer than in the initial Mantle reveal and the recent 390X side-by-side video.

Then it makes sense when you put that into context with what Oxide has said: they disabled async compute at NV's request, and there is indeed a vendor-specific path, for NV only.

The Radeons are rendering a scene with many dynamic light sources all that time, while the NV GPUs, according to these shots from the tech sites above, are not.
I never noticed that before...
 
Feb 19, 2009
10,457
10
76
I never noticed that before...

Lol yeah I didn't either until I saw the recent side-by-side... then I went back to the initial press release benchmark and there it was.

NV GPUs are not rendering dynamic lights. That exactly matches Oxide's statement that their mass usage of dynamic lights is possible thanks to Mantle/DX12/Vulkan features, and that they disabled Async Compute at NV's request, adding an NV-specific rendering path with optimized shaders for them.

When they posted on the OC3D forums, they actually said something along the lines of Ashes being more optimized for NV. Makes sense, since the AMD GPUs are doing so much more work.
 
Last edited:

Mahigan

Senior member
Aug 22, 2015
573
0
0
Lol yeah I didn't either until I saw the recent side-by-side... then I went back to the initial press release benchmark and there it was.

NV GPUs are not rendering dynamic lights. That exactly matches Oxide's statement that their mass usage of dynamic lights is possible thanks to Mantle/DX12/Vulkan features, and that they disabled Async Compute at NV's request, adding an NV-specific rendering path with optimized shaders for them.
And we've been comparing the benchmarks as if they're apples to apples when in fact the GCN cards are processing more effects...

Hmm...