
ComputerBase: Ashes of the Singularity Beta 1 DirectX 12 Benchmarks

Page 53
Could you run a test in Ashes at 1080p, low details, and see what you get?

I get 98 fps, with a CPU framerate of 102 fps, so it's heavily CPU-bound.

i7-3770K @ 4.2GHz, Fury X @ 1100/545MHz

Send me a screen shot of your graphic settings so I can mirror them in my test.
 
Just the low preset. Everything is as low as possible.

Just ran a couple of back-to-back tests and I'm only getting 65 FPS average, with 85 FPS for the CPU. Stock Sapphire 390X (1080/1500) and stock X5690s (3.46GHz). This is with the latest 16.3 drivers and today's 0.93 AotS update.
 
Quick screenshot of my CPU load under AOTS benchmark:

[Screenshot: XxBJRsr.png]

Looks like up to 18 threads are being used in the game.
 
CPU utilization is equally impressive during gameplay. Testing on 720p low, I measured 90% CPU usage on my i7 with all cores being almost equally loaded. I have no doubt that it could have gone higher if there were more units.
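A minimal sketch of the idea behind that kind of core scaling: fan per-unit simulation work out across a pool of worker threads so every core stays loaded. This is purely illustrative (names, the unit structure, and the worker count are assumptions for the example), not Oxide's actual engine code.

```python
# Hypothetical sketch: spread unit updates across many worker threads,
# the way a heavily threaded engine keeps all cores busy.
from concurrent.futures import ThreadPoolExecutor

def update_unit(unit):
    # Stand-in for per-unit simulation work (movement, targeting, etc.)
    unit["x"] += unit["vx"]
    return unit

def update_all(units, workers=18):
    # With 18 workers the OS can keep up to 18 threads busy, matching
    # the thread count observed in the benchmark screenshot above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(update_unit, units))

units = [{"x": float(i), "vx": 1.0} for i in range(1000)]
updated = update_all(units)
print(updated[0]["x"])  # 1.0
```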
 
My 290s in CF look a lot better. 🙂
Look better? Because of more fps or image quality difference?

One thing I was curious about was async compute being used for the lights from projectiles and other particles. Haven't we been able to do something like this efficiently with deferred shading for quite some time now?
Oxide said that while DX11 allows 10 lights at a time, DX12 can do 100. The term they used was different; I don't remember it anymore, and I couldn't find the article either.

Quick screenshot of my CPU load under AOTS benchmark:

[Screenshot: XxBJRsr.png]

Looks like up to 18 threads are being used in the game.
18!😱
My respect for Oxide grows more and more.
 
Sorry, this game sucks; I don't care how it runs or how cool it is. AMD sponsorship, so it sucks!
It does look cool, though. I'm excited for a big RTS; I forgot about them after the loss of Warcraft 4. Me and Blizzard are forever uncool after that.
 
Oxide said that while DX11 allows 10 lights at a time, DX12 can do 100. The term they used was different; I don't remember it anymore, and I couldn't find the article either.
Deferred shading techniques definitely allow for more than 10 lights; I can confirm that myself in Unity and UE4. However, the downsides may not be favorable: you lose MSAA, for example. If I remember correctly, memory bandwidth becomes a bottleneck too with deferred shading, and the compute units may not be fully utilized as a result.
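The key property being described can be shown in a toy sketch: geometry is rasterized once into a G-buffer, and then any number of lights is accumulated per pixel, so light count is decoupled from geometry passes. Pure Python, single channel, Lambert-only; the G-buffer layout and light format are made up for the example, this is not a real renderer.

```python
# Toy deferred-shading sketch: a geometry pass writes surface data into
# a G-buffer, then a lighting pass accumulates all lights per pixel.
W, H = 4, 4

# G-buffer: per-pixel surface data. Here every pixel faces +Z with albedo 1.0.
gbuffer = [{"normal": (0.0, 0.0, 1.0), "albedo": 1.0} for _ in range(W * H)]

# 100 directional lights, each contributing a small amount.
lights = [{"dir": (0.0, 0.0, 1.0), "intensity": 0.01} for _ in range(100)]

def shade(pixel, lights):
    nx, ny, nz = pixel["normal"]
    total = 0.0
    for light in lights:
        lx, ly, lz = light["dir"]
        ndotl = max(0.0, nx * lx + ny * ly + nz * lz)  # Lambert term
        total += ndotl * light["intensity"]
    return pixel["albedo"] * total

# Lighting pass: one loop over pixels, however many lights there are.
image = [shade(p, lights) for p in gbuffer]
print(round(image[0], 3))  # 1.0 (100 lights x 0.01 each)
```

The trade-off mentioned above also falls out of this structure: because shading reads surface data back from a fat G-buffer for every light, bandwidth grows with resolution and light coverage, and hardware MSAA no longer applies naturally.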
 
It wasn't that you were strictly limited, but functionally limited: more lights would kill frame rates and cause other performance issues, so you wanted to cap the count because of that.
 
https://www.youtube.com/watch?v=xoHxtrqdw6w

25,000 unique units all at once, no slowdown, on a single 980 and a 4-core Intel CPU (not specified).

Game looks pretty cool, the performance looks amazing. Hope other studios follow suit! 🙂

Indeed, it's why I have respect for Oxide because they are actually good at what they do and optimize their games. Civ 5/BE ran very well on all hardware and Ashes just continues the trend.

Their DX12 implementation is the best so far with both sides getting nice performance gains, with the option to toggle async compute off for NV GPUs so they too can get nice gains with DX12. It's a win-win and should be celebrated by gamers.
 
Indeed, it's why I have respect for Oxide because they are actually good at what they do and optimize their games. Civ 5/BE ran very well on all hardware and Ashes just continues the trend.

Their DX12 implementation is the best so far with both sides getting nice performance gains, with the option to toggle async compute off for NV GPUs so they too can get nice gains with DX12. It's a win-win and should be celebrated by gamers.

Shame their DX12 doesn't work on all IHVs yet. But then again, with only 20,000 users it's not exactly a system seller, so it doesn't matter. 🙂
 
Deferred shading techniques definitely allow for more than 10 lights; I can confirm that myself in Unity and UE4. However, the downsides may not be favorable: you lose MSAA, for example. If I remember correctly, memory bandwidth becomes a bottleneck too with deferred shading, and the compute units may not be fully utilized as a result.
You can easily have thousands of lights with modern deferred/forward+ pipelines.
The cost isn't really in how many lights you have, but in how many pixels those lights affect. (Even >1000 on-screen lights in the sky, not hitting any surface, are really cheap, as they are culled before the light-accumulation pass.)
http://www.cse.chalmers.se/~uffe/clustered_shading_preprint.pdf
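A minimal sketch of that culling step, reduced to 2D: before the accumulation pass, each screen tile keeps only the lights whose sphere of influence can actually touch it, so lights hitting no surface never enter the shading loop. The tile/light representation here is an assumption for illustration, far simpler than the clustered approach in the paper.

```python
# Tile-based light culling sketch: reject lights whose radius of
# influence cannot overlap the tile before any shading happens.
import math

def lights_for_tile(tile_center, tile_radius, lights):
    kept = []
    for light in lights:
        dx = light["pos"][0] - tile_center[0]
        dy = light["pos"][1] - tile_center[1]
        # Circle-vs-circle overlap test between light and tile bounds.
        if math.hypot(dx, dy) <= light["radius"] + tile_radius:
            kept.append(light)
    return kept

# 1000 far-away "sky" lights plus one light near the tile.
lights = [{"pos": (1e6, 1e6), "radius": 5.0} for _ in range(1000)]
lights.append({"pos": (10.0, 10.0), "radius": 5.0})

visible = lights_for_tile(tile_center=(8.0, 8.0), tile_radius=4.0, lights=lights)
print(len(visible))  # 1 -- the 1000 distant lights cost nothing at shade time
```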

Shadows are quite expensive on all methods.

AotS doesn't use a classic forward or deferred renderer; it's a texture/object-space renderer.
“Object Space Rendering in DirectX 12” – Dan Baker (Oxide Games)

One can expect more developers to look into texture-based methods, as well as variants using virtual texturing.
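A rough sketch of the texture/object-space idea from Baker's talk: shading is evaluated once per texel of an object's texture, decoupled from screen resolution, and the raster pass only samples the pre-shaded result. This is a hypothetical simplification (the functions and material here are invented for the example), not the Nitrous engine's actual pipeline.

```python
# Object-space shading sketch: bake shading into the object's texture
# once, then let the raster pass do cheap texture lookups.

def shade_texel(u, v):
    # Stand-in for an expensive material/lighting evaluation.
    return 0.5 + 0.5 * u * v

def bake_object_texture(size):
    # Shade pass: evaluate shading per texel, independent of the screen.
    return [[shade_texel(x / (size - 1), y / (size - 1))
             for x in range(size)] for y in range(size)]

def rasterize(texture, screen_uvs):
    # Raster pass: each covered screen pixel just samples the texture,
    # so shading cost stays fixed even if the object covers more pixels.
    size = len(texture)
    out = []
    for (u, v) in screen_uvs:
        tx = min(size - 1, int(u * size))
        ty = min(size - 1, int(v * size))
        out.append(texture[ty][tx])
    return out

tex = bake_object_texture(8)
pixels = rasterize(tex, [(0.0, 0.0), (0.99, 0.99)])
print(pixels[0])  # 0.5 (u = v = 0)
```

This also shows why the approach is attractive with many units: re-shading is amortized across frames and pixels, and pairs naturally with virtual texturing to keep only the needed texture pages resident.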
 