computerbase: Ashes of the Singularity Beta 1 DirectX 12 Benchmarks

Feb 19, 2009
10,457
10
76
The video is not available for me to view in the US. Any other links?

I was going to compare what the benchmark looks like on my PC (with a 980 Ti) to see if I can reproduce the difference you are noticing.

Based on your description, I definitely have the glowing lights from weapons (both lasers and missile engines) and smoke from aircraft and missiles. These weren't in the benchmark for me originally, but after an update a few months ago they were all there.

Jet exhaust glowing (pulsating light)
4JdCs4A.jpg


Projectile lighting, for missiles and lasers
KZc5Gkg.jpg


Really obvious
psfLoFZ.jpg


I noticed that when the Ashes benchmark was first released to the press, Digital Foundry did a side-by-side comparison and the game lacked this lighting system, so this must be a recent addition to their build.
 

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,784
3,101
146
I noticed that when the Ashes benchmark was first released to the press, Digital Foundry did a side-by-side comparison and the game lacked this lighting system, so this must be a recent addition to their build.

Can't quite make heads or tails of it. They broke the jet lighting recently (there are no engine lights at all right now; you can sorta see them inside the jet model, so maybe the offset got borked).

It looks more like the photos from the AMD side.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Well... Hitman is freaking people out. The minimum and recommended specs went up on Steam, and folks are wondering why it's DX11 only and why a Radeon R9 290 is being compared to a GTX 770.

See here: http://wccftech.com/hitman-beta-pc-system-requirements-revealed/

Well, that's because DX12 is AMD-exclusive in this title (like the weapon effects are Nvidia-exclusive in Fallout 4). Why is DX12 AMD-exclusive? Asynchronous Compute, that's why.

The game will allow for better lighting, effects, smoke, etc., as well as better performance on AMD cards running DX12, by making full use of the ACEs on GCN.

See here: http://www.gamersnexus.net/news-pc/2309-amd-hitman-dx12-ace-workload-management

Don't Nvidia support Asynchronous Compute? No. Under DX12, Nvidia do not support Asynchronous Compute.

What I said in this thread, in prior postings, is accurate.

As for the DX11 path, it will perform better on Nvidia cards than on AMD cards, hence the weird recommended specs of a GTX 770 and an R9 290.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Jet exhaust glowing (pulsating light)
4JdCs4A.jpg


Projectile lighting, for missiles and lasers
KZc5Gkg.jpg


Really obvious
psfLoFZ.jpg


I noticed that when the Ashes benchmark was first released to the press, Digital Foundry did a side-by-side comparison and the game lacked this lighting system, so this must be a recent addition to their build.

Yeah. It's not even close to the same rendering. Is this new, the lack of effects from nVidia? Or has the press just not been reporting it? I'll be curious to see what comes of this.
 
Feb 19, 2009
10,457
10
76
@Mahigan

I think it's well beyond the point where we can safely say NV GPUs don't support Async Compute in hardware.

With AMD publicly saying only GCN supports that DX12 feature, and NV not offering any rebuttal, it's clear.

As I said before, if AMD wants devs to push certain features, they need to be active about it: send money to the studio's PR department, send software engineers to implement it. They simply cannot rely on their GPUOpen initiative and hope devs will take it upon themselves to implement it.

Tomb Raider's release with a disabled DX12 path, despite advertising the async compute features of its high-tech engine, is pretty lame.

This basically means it's not beyond NV to simply buy out devs, cancel the DX12 PC port, and ship it DX11 only. So whatever they say about Hitman or Deus Ex may not become reality, because the same things were said about Tomb Raider with TressFX 3.0 and Async Compute lighting.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
It is lame, I agree. But we're about to see many titles feature Asynchronous Compute, whether Nvidia support it or not.

Deus Ex: Mankind Divided is the next one.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
Are you sure DX12 is AMD exclusive in Hitman?

"Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs—called asynchronous compute engines—to handle heavier workloads and better image quality without compromising performance. PC gamers may have heard of asynchronous compute already, and Hitman demonstrates the best implementation of this exciting technology yet. By unlocking performance in GPUs and processors that couldn’t be touched in DirectX 11, gamers can get new performance out of the hardware they already own."

I see three possibilities: it could be marketing blah-blah and the performance difference is minor; it could mean DX12 will perform decisively better on GCN but not be exclusive; or it could truly be exclusive. Is there anything more concrete to conclude it's AMD-only?
 
Feb 19, 2009
10,457
10
76
Yeah. It's not even close to the same rendering. Is this new, the lack of effects from nVidia? Or has the press just not been reporting it? I'll be curious to see what comes of this.

Probably a recent build, since these lighting effects were not present in older builds that were benched by tech sites.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
The specs showcase DX11 for Nvidia cards; the DX12 path is being added through a joint venture between IO Interactive and AMD's Gaming Evolved initiative.

The game would require two separate DX12 paths in order to support Nvidia; unless Nvidia spend the money to implement their own path, it won't get done.

Nvidia cards will not be able to run the AMD path as they don't support Asynchronous Compute.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106

Hitman will leverage unique DX12 hardware found in only AMD Radeon GPUs—called asynchronous compute engines—to handle heavier workloads and better image quality without compromising performance.
Well, a lot of people have been saying AMD needs to fight fire with fire! We're gonna see how AMD handles having exclusive features. It'll be especially good if this ends up as part of GPUOpen and others can add the same support to their games.

By unlocking performance in GPUs and processors that couldn’t be touched in DirectX 11, gamers can get new performance out of the hardware they already own.
I'm interested to see if they are honest about this and offer real optimization support to customers who have already purchased, not just for the latest released cards they are trying to peddle.

With on-staff game developers, source code and effects, the AMD Gaming Evolved program helps developers to bring the best out of a GPU.
The next time people claim AMD doesn't offer Dev support, remember this statement.
 
Feb 19, 2009
10,457
10
76
It is lame, I agree. But we're about to see many titles feature Asynchronous Compute, whether Nvidia support it or not.

Deus Ex: Mankind Divided is the next one.

No, you're not understanding. NV can simply buy out the devs and ship the PC build DX11 only. Bye bye, Async Compute.

Look at what Crystal Dynamics have said.

http://www.pcgameshardware.de/Rise-...4451/Specials/Grafikkarten-Benchmarks-1184288

In the interview, Gary Snethen also says, regarding the lighting used in Rise of the Tomb Raider: "On the Xbox One and under DirectX 12, async compute is used; under DirectX 11, by contrast, the calculation runs synchronously." Support for the low-level API may come later via a patch.

Pretty obvious what has happened.

So if AMD wants their features used, they better sponsor the developers to do it.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Hitman will be the first game to truly leverage Async Compute. AotS barely made use of it. We should see some VERY impressive performance and effects added for AMD users. DX12 alone would allow AMD to circumvent the CPU bottleneck present under DX11 on AMD GCN. Leveraging Async Compute will push AMD GCN performance ahead of Nvidia's capabilities.

I feel quite confident in these statements.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
No, you're not understanding. NV can simply buy out the devs and ship the PC build DX11 only. Bye bye, Async Compute.

Look at what Crystal Dynamics have said.

http://www.pcgameshardware.de/Rise-...4451/Specials/Grafikkarten-Benchmarks-1184288



Pretty obvious what has happened.

So if AMD wants their features used, they better sponsor the developers to do it.
I'm aware; I stated the same before it was published, because I was told prior to the release by a source.

GPUOpen will make all of the libraries being developed available online for any developer to use.

I think that, coupled with AMD Gaming Evolved, we should see some nice things for AMD users on the horizon (there already are nice things, such as anything running on Frostbite 3).
 
Feb 19, 2009
10,457
10
76
I hope Pascal's command engine and rendering pipeline supports parallel graphics + compute processing, so we can see more studios push graphics with "free" compute based effects.

Can you imagine if it does not? Wow, there will be major segregation of PC games, with AMD pushing for DX12 in games it sponsors while NV will push for DX11 in GameWorks titles... not a good prospect for AAA gamers.

And when is there an update for this monstrosity of PR-induced propaganda?

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

Not befitting of a respectable tech site to still not fix such an error.

On a side note, part of the reason for AMD's presentation is to explain their architectural advantages over NVIDIA, so we checked with NVIDIA on queues. Fermi/Kepler/Maxwell 1 can only use a single graphics queue or their complement of compute queues, but not both at once – early implementations of HyperQ cannot be used in conjunction with graphics. Meanwhile Maxwell 2 has 32 queues, composed of 1 graphics queue and 31 compute queues (or 32 compute queues total in pure compute mode). So pre-Maxwell 2 GPUs have to either execute in serial or pre-empt to move tasks ahead of each other, which would indeed give AMD an advantage.
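
For anyone who wants to see what that maps to on the API side, here's a bare-bones D3D12 sketch of my own (purely illustrative, no error handling, not taken from any engine) that creates the two queue types the quote is talking about; whether work on the compute queue actually overlaps the direct queue is then up to the hardware and driver:

Code:
// Minimal sketch: separate graphics (direct) and compute queues in D3D12.
// Windows only, link d3d12.lib; all error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics/direct queue: accepts draw, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Compute queue: compute and copy only. Work submitted here MAY run
    // concurrently with the direct queue if the hardware supports it; on
    // hardware that doesn't, the driver is free to serialize it instead.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> compQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

    return 0;
}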
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
I hope Pascal's command engine and rendering pipeline supports parallel graphics + compute processing, so we can see more studios push graphics with "free" compute based effects.

Can you imagine if it does not? Wow, there will be major segregation of PC games, with AMD pushing for DX12 in games it sponsors while NV will push for DX11 in GameWorks titles... not a good prospect for AAA gamers.

And when is there an update for this monstrosity of PR-induced propaganda?

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

Not befitting of a respectable tech site to still not fix such an error.

Nvidia need not be as aggressive as AMD in leveraging parallelism ...

They depend on higher clocks, higher automatic occupancy, and more fixed function units to minimize the impact of the GPU going idle ...

Asynchronous compute wouldn't be all that advantageous for Nvidia unless the fixed-function units were being seriously pressured, much like AMD feeling the burn from keeping the same front end, along with the same geometry-processing bottlenecks, on Fiji that they carried over from Hawaii ...

AMD's future as far as their GPUs are concerned lies with the usage of heavy compute kernels ...

The sooner they find a way to ditch the rasterizer, edge setup, and blending unit the better their prospects will be ...
 
Feb 19, 2009
10,457
10
76
@ThatBuzzkiller

The entire point of async compute is that the compute can run in parallel alongside graphics; this means it's "free" performance, as it does not delay or bottleneck graphics rendering.

It's not just about shader uptime or occupancy.

Example: you have 100 units at 100% occupancy handling graphics & compute. In a typical game, graphics may use 70% of the rendering queues, compute 30%.

In the above example, the output is 100 units per unit of time.

Now, if you have 100 units at 70% occupancy but separate compute engines that can fill the queues with 100% efficiency, you are still outputting 100 units per unit of time.

For a better example: if you have 100 units at 100% occupancy (i.e. DX12 with multi-threaded rendering) AND separate, efficient compute engines, you're outputting 130 units per unit of time, because those compute tasks can run in parallel while the graphics is being rendered. More work gets done in the same time because that work uses inherently different parts of the shaders.

Having separate parallel compute engines makes sense if the workload involves a high level of compute. Perhaps at the start of the DX11 era that was rarer, but now, approaching 2016 and the DX12 era, compute is a heavy slice of most game rendering, so any architecture that will excel in this era must be compute-focused and able to do it asynchronously so as not to bottleneck graphics.

Is it a coincidence that Pascal is touted for being a compute powerhouse? Same with Volta.
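
If it helps, here's a toy model of those numbers (my own simplification, assuming the compute overlaps the graphics perfectly and ignoring contention, which real hardware won't quite give you):

Code:
// Toy model of a 70/30 graphics/compute split, serial vs. idealized async.
#include <algorithm>
#include <cstdio>

int main() {
    const double frame_ms   = 16.7;           // hypothetical frame budget
    const double gfx_ms     = 0.7 * frame_ms; // graphics portion of the work
    const double compute_ms = 0.3 * frame_ms; // compute portion of the work

    // Serial submission: compute waits for graphics to finish (or vice versa).
    const double serial_ms = gfx_ms + compute_ms;

    // Idealized async compute: compute hides behind graphics on otherwise
    // idle units, so frame time approaches the longer of the two workloads.
    const double async_ms = std::max(gfx_ms, compute_ms);

    std::printf("serial: %.1f ms, async (ideal): %.1f ms, gain: %.0f%%\n",
                serial_ms, async_ms, (serial_ms / async_ms - 1.0) * 100.0);
    return 0;
}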
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
For a better example: if you have 100 units at 100% occupancy (i.e. DX12 with multi-threaded rendering) AND separate, efficient compute engines, you're outputting 130 units per unit of time, because those compute tasks can run in parallel while the graphics is being rendered. More work gets done in the same time because that work uses inherently different parts of the shaders.
But does it really work like that? Why should you not be able to use 130 units no matter the work? If it is separate, then you lose 30 units in every game that has no, or not enough, async.
What is this compute anyway? Is it calculations that would otherwise be done by the driver on the CPU? Or is it stuff that has to be done on the GPU in serial if parallel is not possible?


Yes, now, approaching the DX12 era, compute is a heavy slice of most game rendering, but think about why: the consoles have 1.5 GHz cores, that's like a very low-end s755-era part; you have to use async to make multiple cores feed the GPU's ACEs separately at once, because one core alone just wouldn't cut it.
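
To picture the "multiple cores feeding the GPU" part: under D3D12 each core can record its own command list without a shared lock and hand everything to the compute queue in one submission. A bare-bones sketch of the idea (my own, illustrative only; real code needs pipeline state, root signatures, fences, etc.):

Code:
// Several CPU threads record compute command lists in parallel, then all of
// them are submitted to an async compute queue in a single call.
// Windows only, link d3d12.lib; error handling and real dispatches omitted.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    const int kWorkers = 4;  // e.g. a handful of slow console-class cores
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(kWorkers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kWorkers);

    for (int i = 0; i < kWorkers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each worker thread records its own list independently (no shared lock).
    std::vector<std::thread> workers;
    for (int i = 0; i < kWorkers; ++i) {
        workers.emplace_back([&lists, i] {
            // Real code would SetPipelineState / Dispatch here; this sketch
            // just closes the (empty) list to make it submittable.
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // One submission hands all the recorded work to the compute queue.
    std::vector<ID3D12CommandList*> submit;
    for (auto& l : lists) submit.push_back(l.Get());
    computeQueue->ExecuteCommandLists(static_cast<UINT>(submit.size()),
                                      submit.data());
    return 0;
}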
 

Flapdrol1337

Golden Member
May 21, 2014
1,677
93
91
I think you're too optimistic, silverforce; you seem to think that the compute can run without impacting the graphics workload even a little bit.
 

Tapoer

Member
May 10, 2015
64
3
36
Those light effects might be using AC, and since the code path for Nvidia isn't using AC, they don't have those light effects.

They still have to optimize the game; we shouldn't forget about that.

I think you're too optimistic, silverforce; you seem to think that the compute can run without impacting the graphics workload even a little bit.


From what I have seen, 20-30% of compute can be run async on GCN without impacting performance on the graphics side, maybe more on consoles with fixed hardware.
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
@ThatBuzzkiller

The entire point of async compute is that the compute can run in parallel alongside graphics; this means it's "free" performance, as it does not delay or bottleneck graphics rendering.

It's not just about shader uptime or occupancy.

Example: you have 100 units at 100% occupancy handling graphics & compute. In a typical game, graphics may use 70% of the rendering queues, compute 30%.

In the above example, the output is 100 units per unit of time.

Now, if you have 100 units at 70% occupancy but separate compute engines that can fill the queues with 100% efficiency, you are still outputting 100 units per unit of time.

For a better example: if you have 100 units at 100% occupancy (i.e. DX12 with multi-threaded rendering) AND separate, efficient compute engines, you're outputting 130 units per unit of time, because those compute tasks can run in parallel while the graphics is being rendered. More work gets done in the same time because that work uses inherently different parts of the shaders.

Having separate parallel compute engines makes sense if the workload involves a high level of compute. Perhaps at the start of the DX11 era that was rarer, but now, approaching 2016 and the DX12 era, compute is a heavy slice of most game rendering, so any architecture that will excel in this era must be compute-focused and able to do it asynchronously so as not to bottleneck graphics.

Is it a coincidence that Pascal is touted for being a compute powerhouse? Same with Volta.

I'm starting to sincerely regret giving any gamers any knowledge about graphics APIs ...

It IS all about shader uptime and occupancy if you want to be compute bound ...

If you're compute bound, you will only get more work out of your rasterizers etc ...

If you're fixed-function bound, you could stand to get more out of your shaders ...

AMD assumes the second scenario, since they're hitting bottlenecks not related to their compute power ...

You could theoretically hit near 200% or maybe even 300% usage if you count the copy queues as well, but why would AMD want developers pushing non-compute-related bottlenecks when they want to de-emphasize those as much as possible?

One of the reasons AMD performs sub-par compared to Nvidia in the GameWorks-related games you despise so much is that they deliberately use more non-compute resources, like tessellation, to take advantage of AMD's weakness in geometry processing ...

Having dedicated compute engines only makes sense when you're trying to sidestep the non-compute resources, and in Nvidia's defense they don't seem to have that issue, unless their chip design teams start taking inspiration from AMD or Intel's former Larrabee team ...
 

Pottuvoi

Senior member
Apr 16, 2012
416
2
81
Yeah. It's not even close to the same rendering. Is this new, the lack of effects from nVidia? Or has the press just not been reporting it? I'll be curious to see what comes of this.
Really bad quality images.

The biggest difference is the missing bloom.
Not really sure there is actually any difference in the quality of the lighting. (That's done by their 'object space rendering' approach, which is basically shading objects in texture space, or Reyes-style shading into textures instead of micropolygons.)
 