ComputerBase Ashes of the Singularity Beta 1 DirectX 12 Benchmarks


Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
It is not about AMD. I'm talking about nVidia. This developer has stated that they need a low-level API to create this engine and game. So where is the performance improvement on nVidia hardware? What happened to Star Swarm anyway?!

It is obvious that this is a marketing deal between them and AMD. Their beta versions have no NDA (unlike Fable Legends, for example) and they are pushing them to reviewers every few months. Instead of optimizing their engine for nVidia users, they are just using their "work" to go after them. You don't even hear anything about the game. It is only about DX12 and how great AMD is.

I don't even know why anybody with nVidia hardware should support this developer.

Uhhh, how about because most people with nvidia cards aren't religiously devoted to perceived conspiracy theories, and just want to play a modern RTS with lots of units?
 

Dygaza

Member
Oct 16, 2015
176
34
101
It is not about AMD. I'm talking about nVidia. This developer has stated that they need a low-level API to create this engine and game. So where is the performance improvement on nVidia hardware? What happened to Star Swarm anyway?!

Star Swarm had a much higher batch count than AotS has. It wasn't rare for the batch count to go over 100k in several scenes.

This was the heaviest scene from my last run with extreme settings:

== Shot Long Shot 3 =========================================
Total Time: 5.006614 ms per frame
Avg Framerate: 56.325497 FPS (17.753950 ms)
Weighted Framerate: 56.133812 FPS (17.814575 ms)
CPU frame rate (estimated if not GPU bound): 106.223450 FPS (9.414117 ms)
Percent GPU Bound: 7.309047 %
Driver throughput (Batches per ms): 10166.375000 Batches
Average Batches per frame: 34212.089844 Batches

So in Star Swarm, Nvidia was CPU bound as well in those heavy scenes. Yes, the developer could have made the game with more draw calls, but that would have made the game a lot slower under DX11, especially on AMD hardware. Yes, unfortunately AMD is kind of holding back games on DX11, especially on draw calls per frame. I bet Zlatan could share some insight into this.
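
Roughly, the per-batch cost under DX11 looks like this (a simplified sketch, not Oxide's actual code; DrawUnits and Unit are names made up for illustration):

Code:
// Simplified D3D11-style submission loop (illustration only).
#include <windows.h>
#include <d3d11.h>
#include <vector>

struct Unit {
    ID3D11Buffer* constantBuffer;  // per-unit transform/material data
    UINT          indexCount;
};

void DrawUnits(ID3D11DeviceContext* ctx, const std::vector<Unit>& units)
{
    // One iteration per batch: ~34k of these per frame in the scene above.
    for (const Unit& u : units)
    {
        // Each draw call crosses into the runtime/driver, which validates state,
        // tracks hazards and builds GPU commands on the CPU - every single time.
        ctx->VSSetConstantBuffers(0, 1, &u.constantBuffer);
        ctx->DrawIndexed(u.indexCount, 0, 0);
    }
}

DX12 doesn't make each draw free, but it moves most validation out of the hot path and lets command lists be built on multiple threads, which is why the same batch counts stop being a CPU wall.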
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Star Swarm had a much higher batch count than AotS has. It wasn't rare for the batch count to go over 100k in several scenes.

This was the heaviest scene from my last run with extreme settings:

== Shot Long Shot 3 =========================================
Total Time: 5.006614 ms per frame
Avg Framerate: 56.325497 FPS (17.753950 ms)
Weighted Framerate: 56.133812 FPS (17.814575 ms)
CPU frame rate (estimated if not GPU bound): 106.223450 FPS (9.414117 ms)
Percent GPU Bound: 7.309047 %
Driver throughput (Batches per ms): 10166.375000 Batches
Average Batches per frame: 34212.089844 Batches

So in Star Swarm, Nvidia was CPU bound as well in those heavy scenes. Yes, the developer could have made the game with more draw calls, but that would have made the game a lot slower under DX11, especially on AMD hardware. Yes, unfortunately AMD is kind of holding back games on DX11, especially on draw calls per frame. I bet Zlatan could share some insight into this.

DX11 is unplayable on AMD hardware anyway. It doesn't make sense to hold the hardware back...
And it doesn't explain why nVidia is not faster under DX12.

The GTX 980 got 66 FPS at 1080p: http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/5
In Ashes it is just 44 FPS. With fewer "batches" the hardware gets a worse result. That doesn't make any sense.

Uhhh, how about because most people with nvidia cards aren't religiously devoted to perceived conspiracy theories, and just want to play a modern RTS with lots of units?

Is there an actual game?
Right now nothing close to a real game exists. Just a benchmark to highlight AMD hardware.

Oh, and people care about who they are buying products from. Nobody will support a developer who literally said "we don't care about you". :(
 

Dygaza

Member
Oct 16, 2015
176
34
101
DX11 is unplayable on AMD hardware anyway. It doesn't make sense to hold the hardware back...
And it doesn't explain why nVidia is not faster under DX12.

I already explained that Nvidia is pretty much running at 100% GPU usage in DX11, so going to DX12 it's still 100%. Not much to gain there. The difference is that under DX12 all shaders run the way the developer intended, without the Nvidia driver intercepting and replacing shaders. That is actually quite a grey area, and we have seen several cases of questionable image quality differences. So yeah, by doing these shader swaps (I would go as far as comparing it to replacing a painting with a forgery), Nvidia can be faster under DX11 in cases where they are running "more optimised" shaders.

None of my friends who play AotS uses DX11 on their Nvidia systems, however, as DX12 is faster on bigger maps, especially in the endgame where there can be more units than in that benchmark.
 

Magee_MC

Senior member
Jan 18, 2010
217
13
81
DX11 is unplayable on AMD hardware anyway. It doesn't make sense to hold the hardware back...
And it doesn't explain why nVidia is not faster under DX12.
:(

I agree that it doesn't make sense to hold back AMD's hardware because NV's hardware can't do AC.

NV is not faster under DX12 because NV had already optimized their performance under DX11 to such a high degree that there isn't much additional performance to extract, and a large chunk of what is available (AC) they can't do because of hardware and architecture limitations.

It seems that you have this baseline expectation that NV will, like a law of nature, always be superior. If they aren't then somebody has to be screwing them over. Try to actually read what people are writing. They've done an excellent job explaining it.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
DX11 is unplayable on AMD hardware anyway. It doesn't make sense to hold the hardware back...
And it doesn't explain why nVidia is not faster under DX12.
They're not going to bother with DX11 where either a Mantle or DX12 path exists.

The GTX 980 got 66 FPS at 1080p: http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/5
In Ashes it is just 44 FPS. With fewer "batches" the hardware gets a worse result. That doesn't make any sense.
Not having to render any terrain, just a black background with white dots, probably explains the performance here.



Is there an actual game?
Right now nothing close to a real game exists. Just a benchmark to highlight AMD hardware.
The game is in development; they haven't even begun optimizing specifically for anyone yet.
Oh, and people care about who they are buying products from. Nobody will support a developer who literally said "we don't care about you". :(
Nvidia's hardware is probably seeing close to max usage already, so the lack of big gains in DX12 is not surprising. Good to see DX12 surpassing DX11 though; that wasn't the case with earlier builds.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
I don't even know why anybody with nVidia hardware should support this developer.

Because most of them don't have a negative level of informedness on the subject. Star Swarm is a very targeted demo designed to show off one particular thing DX12 allows. It turns out that isn't what Ashes needs. Most people tend to get more informed and gain a better understanding of the subject when exposed to more information, so there are very few who share your misapprehensions. Everyone else either knows where to put the blame or doesn't know that blame needs to be allotted.

DX11 is unplayable on AMD hardware anyway. It doesn't make sense to hold the hardware back...
And it doesn't explain why nVidia is not faster under DX12.

The GTX 980 got 66 FPS at 1080p: http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/5
In Ashes it is just 44 FPS. With fewer "batches" the hardware gets a worse result. That doesn't make any sense.

DX11 being unplayable on AMD hardware is news to all the people playing DX11 games better for the money on AMD hardware. Past that, AMD's hardware has capabilities above and beyond what DX11 uses that DX12 can use. NV has some, such as an ability to benefit from more draw calls, but they don't have others, such as async compute. DX12 isn't magical fairy dust you rub on a game for a guaranteed percentage increase in performance; it's a way to let the hardware use capability that wasn't used in DX11. More capability, more increase.

Comparing Star Swarm to Ashes directly is taking this beyond farce. Results between different games aren't interchangeable in the slightest. The number of batches says how many things are being drawn, not how much work they take to draw. Never mind anything else the GPU needs to do.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
I already explained that Nvidia is pretty much running at 100% GPU usage in DX11, so going to DX12 it's still 100%. Not much to gain there. The difference is that under DX12 all shaders run the way the developer intended, without the Nvidia driver intercepting and replacing shaders. That is actually quite a grey area, and we have seen several cases of questionable image quality differences. So yeah, by doing these shader swaps (I would go as far as comparing it to replacing a painting with a forgery), Nvidia can be faster under DX11 in cases where they are running "more optimised" shaders.

Shaders? Nope. Shaders still get translated and get swapped with others if an IHV wants it.
What the driver can't do is optimize memory management and other things; that is up to the developer. No improvement under DX12 shows sloppy work. The nVidia card isn't under full load. That doesn't make any sense for an engine designed for a low-level API. Look at the results from 1080p to 4K: the nVidia hardware loses only 33% while the pixel count goes up 400%. The hardware is still limited by something within the API. This shouldn't happen with DX12 at all - unless no optimizing for the hardware has happened.


Not having to render any terrain, just a black background with white dots, probably explains the performance here.

Not really. The terrain just looks bad. There is no real hardware power necessary to render a few triangles and a texture.

The game is in development; they haven't even begun optimizing specifically for anyone yet.
Nvidia's hardware is probably seeing close to max usage already, so the lack of big gains in DX12 is not surprising. Good to see DX12 surpassing DX11 though; that wasn't the case with earlier builds.
Nah, they are pushing these betas out to benchmark the hardware. They are optimizing just for AMD hardware. Otherwise there would be an NDA against talking about this game at this point. It doesn't make sense anyway to showcase a non-functional game...
 

zlatan

Senior member
Mar 15, 2011
580
291
136
There are two extremely big differences between D3D11 and D3D12. First, D3D11 is a single-engine API, while D3D12 is multi-engine. The second is more complicated, but the short story is that for D3D11 a kernel driver must manage the memory and track the hazards, while in D3D12 the kernel driver no longer does this for you, so the developer needs to write every management task in the engine.

The main requirement for getting good results with D3D12 is fully accessible documentation and open-sourced developer tools. AMD is an easy target to optimize for because of this documentation and these open-sourced tools (CodeXL, PerfStudio 12, Tootle). Nvidia doesn't provide anything like this, so it is really hard to get better performance when there is no help from the IHV.
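
To make those two differences concrete, here is a bare-bones D3D12 sketch (generic sample code, not from any shipping engine; the device, command list and texture are assumed to already exist). The first function creates a second, compute queue next to the usual direct/graphics queue (multi-engine); the second records an explicit resource state transition, the kind of hazard tracking a D3D11 kernel-mode driver would have done for you:

Code:
// Generic D3D12 sample code, not tied to any particular engine.
#include <windows.h>
#include <d3d12.h>

// Multi-engine: D3D12 exposes separate queue types instead of one implicit engine.
ID3D12CommandQueue* CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // runs alongside the DIRECT (graphics) queue
    ID3D12CommandQueue* queue = nullptr;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Explicit hazard management: the engine, not the driver, declares state transitions.
void TransitionToShaderRead(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}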
 

kondziowy

Senior member
Feb 19, 2016
212
188
116
That is exactly what I think could happen, that games will still be prioritized for Nvidia hardware.

But, as strange as it looks, AMD seems to do much better in games lately - The Division, Hitman, Rise of the Tomb Raider after patches... And those are still DX11, so no async there.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
Nah, they are pushing these betas out to benchmark the hardware. They are optimizing just for AMD hardware. Otherwise there would be an NDA against talking about this game at this point. It doesn't make sense anyway to showcase a non-functional game...

Yep, a game published by Stardock going to an open beta with no NDA is so rare. Not at all like other games by them, such as Sins of a Solar Empire, which spent about a year in a beta with no NDA whatsoever while it underwent changes early enough that fundamental gameplay mechanics could still be changed. They've been doing the early-access-with-no-NDA thing for so long that the first time they did it was literally closer to the initial release of Steam than to Steam's Early Access program.

Yet again, you don't let something as minor as utter ignorance get in the way of your opinions, and yet again you are wrong. How will you follow up calling the early access model anomalous for what is possibly its first user and is most certainly its single highest-profile user?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
This tells the true story here,

[Attached benchmark chart: 80321.png]
 

Dygaza

Member
Oct 16, 2015
176
34
101
Yep, it tells exactly what we have been saying. Async is an extra bonus, and a really nice one, but the real benefit comes from removing the CPU overhead bottleneck on AMD's side.

I really hope Nvidia gets a similar feature working on Pascal. Better performance for everyone is always the best-case scenario.
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
It's going to be exciting when the major engines get quality DX12 implementations, along with uses of these async features for certain effects.

There will be no going back.
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
Good old 28nm GCN, in its Hawaii and Fiji forms, just jumped to perf/W levels similar to Maxwell's, if it's going to get a 15-20% boost in every DX12 game thanks to getting rid of that DX11 driver of theirs. Async compute is just the icing on the cake for games that implement it.

Nice. I wonder how Polaris cards will do on this one.
 

Good_fella

Member
Feb 12, 2015
113
0
0
It's the same as Tomb Raider 2013, AMD paid Oxide and Stardock to sabotage performance on Nvidia hardware to screw Nvidia users, boycott these companies and let them go bankrupt, same with IO Interactive and Hitman, boycott them too.

Agreed.

AMD involved? Bad news for Nvidia and Intel.

http://www.computerbase.de/2016-02/.../#diagramm-cpu-skalierung-1920-1080-r9-fury-x

The 390X ties with the GTX 980 Ti in DX12.

Unbelievable.

But if it were 970 > Fury X, it would be gimping. Right?
 

JDG1980

Golden Member
Jul 18, 2013
1,663
570
136
One thing people keep ignoring is that if async DirectX 12 games make Nvidia hardware look bad, Nvidia users (well, some of them) will make them pay for it by either not buying the game or giving it a crappy review. If I am a manager at an AAA game developer, I am telling my developers "throw that async stuff in the trash, we don't want to piss off most of our users." Doesn't matter if that is right or wrong technically; business is business.

If today's benchmarks are any indication, async compute doesn't substantially hurt Nvidia cards - it only reduced performance by 2 to 4 percent. On the other hand, AMD cards (except GCN 1.0) show ~10% gains.

The vendors might still think it's not worth it if it required extra effort, considering AMD's low market share at this time. But here is where AMD's console dominance really has the potential to pay off. We all know most modern AAA PC titles are low-effort console ports. And the consoles have AMD APUs and are written with a low-level API; in fact, the Xbox One version may already be using DX12! So a low-effort PC port of a console game will likely be pre-optimized for GCN. Most likely they will take the effort to add a DX11 path for some time to come, since there are still a lot of people on Windows 7, and Nvidia does better with DX11 anyway. But I don't think they will go out of their way to make things worse on DX12 for AMD.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
If today's benchmarks are any indication, async compute doesn't substantially hurt Nvidia cards - it only reduced performance by 2 to 4 percent. On the other hand, AMD cards (except GCN 1.0) show ~10% gains.

The vendors might still think it's not worth it if it required extra effort, considering AMD's low market share at this time. But here is where AMD's console dominance really has the potential to pay off. We all know most modern AAA PC titles are low-effort console ports. And the consoles have AMD APUs and are written with a low-level API; in fact, the Xbox One version may already be using DX12! So a low-effort PC port of a console game will likely be pre-optimized for GCN. Most likely they will take the effort to add a DX11 path for some time to come, since there are still a lot of people on Windows 7, and Nvidia does better with DX11 anyway. But I don't think they will go out of their way to make things worse on DX12 for AMD.
It is the other way around. Disabling Asynchronous Compute is effort that you have to put into coding.
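
Roughly (a generic D3D12 sketch, not Oxide's code; the queues, fence and command lists are assumed to already exist): once the compute work lives on its own compute queue, overlapping it with graphics is the default. "Turning async off" means adding synchronization so the two queues never run at the same time:

Code:
// Generic D3D12 sketch: serializing a compute queue behind the graphics queue.
#include <windows.h>
#include <d3d12.h>

void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList*  gfxList,
                 ID3D12CommandList*  computeList,
                 ID3D12Fence*        fence,
                 UINT64&             fenceValue,
                 bool                asyncCompute)
{
    ID3D12CommandList* gfx[]     = { gfxList };
    ID3D12CommandList* compute[] = { computeList };

    gfxQueue->ExecuteCommandLists(1, gfx);

    if (!asyncCompute)
    {
        // The extra code needed to *disable* async compute: a GPU-side fence that
        // forces the compute queue to wait until the graphics work has finished.
        gfxQueue->Signal(fence, ++fenceValue);
        computeQueue->Wait(fence, fenceValue);
    }

    // With asyncCompute == true this overlaps with the graphics work above.
    computeQueue->ExecuteCommandLists(1, compute);
}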
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
It is not about AMD. I'm talking about nVidia. This developer has stated that they need a low-level API to create this engine and game. So where is the performance improvement on nVidia hardware? What happened to Star Swarm anyway?!

It is obvious that this is a marketing deal between them and AMD. Their beta versions have no NDA (unlike Fable Legends, for example) and they are pushing them to reviewers every few months. Instead of optimizing their engine for nVidia users, they are just using their "work" to go after them. You don't even hear anything about the game. It is only about DX12 and how great AMD is.

I don't even know why anybody with nVidia hardware should support this developer.

It's the same as Tomb Raider 2013, AMD paid Oxide and Stardock to sabotage performance on Nvidia hardware to screw Nvidia users, boycott these companies and let them go bankrupt, same with IO Interactive and Hitman, boycott them too.


Not that it will change your minds, but the below is what you should consider the truth, whatever that means to you:

First, there’s the fact that Oxide shares its engine source code with both AMD and Nvidia and has invited both companies to both see and suggest changes for most of the time Ashes has been in development. The company’s Reviewer’s Guide includes the following:

[W]e have created a special branch where not only can vendors see our source code, but they can even submit proposed changes. That is, if they want to suggest a change our branch gives them permission to do so…

This branch is synchronized directly from our main branch so it’s usually less than a week from our very latest internal main software development branch. IHVs are free to make their own builds, or test the intermediate drops that we give our QA.


Oxide also addresses the question of whether or not it optimizes for specific engines or graphics architectures directly.

Oxide primarily optimizes at an algorithmic level, not for any specific hardware. We also take care to avoid the proverbial known “glass jaws” which every hardware has. However, we do not write our code or tune for any specific GPU in mind. We find this is simply too time consuming, and we must run on a wide variety of GPUs. We believe our code is very typical of a reasonably optimized PC game.

We reached out to Dan Baker of Oxide regarding the decision to turn asynchronous compute on by default for both companies and were told the following:

“Async compute is enabled by default for all GPUs. We do not want to influence testing results by having different default setting by IHV, we recommend testing both ways, with and without async compute enabled. Oxide will choose the fastest method to default based on what is available to the public at ship time.”

Second, we know that asynchronous compute takes advantage of hardware capabilities AMD has been building into its GPUs for a very long time. The HD 7970 was AMD's first card with an asynchronous compute engine, and it launched in 2012. You could even argue that devoting die space and engineering effort to a feature that wouldn't be useful for four years was a bad idea, not a good one. AMD has consistently said that some of the benefits of older cards would appear in DX12, and that appears to be what's happening.

http://www.extremetech.com/gaming/2...ashes-of-the-singularity-directx-12-benchmark