Ashes of the Singularity User Benchmarks Thread


selni

Senior member
Oct 24, 2013
249
0
41
Stardock's team that wrote the Oxide engine is the same team that wrote all of the Civ 5 engines. Dan Baker is one of the handful of engine gurus in the entire world. They know exactly what's going on.

Counterpoint....MS is known for great, revolutionary code?

Random MS product X? Yeah, they can be pretty bad. The windows kernel team? Not exactly amateur hour.

Anyway, all I can find are links to a dead(?) thread on this - http://forums.ashesofthesingularity.com/470548/. Is this the original source, or is it available somewhere else?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
So now there is this: http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/400#post_24321843

It is kind of saying Nvidia put all their eggs in the DX11 basket and not to expect great things for DX12. I am not so sure I want a 980 Ti now >.<

An enlightening read, but it doesn't make sense when you look at the benchmarks.

If AMD's parallel processing capability were that superior to Maxwell's, then why are the benchmarks so close? Shouldn't AMD's lead over Maxwell be much greater in DX12, considering Maxwell's supposedly serial processing nature?

But instead we find this: the benchmarks are very close. The only distinguishing anomalies are that NVidia barely gains anything from using DX12 (and even loses performance), and AMD's performance under DX11 is terrible.

Personally, I think it's just that the benchmark itself is more tuned for AMD hardware, and that AMD's Mantle venture gave them a good head start on NVidia when it came to tuning their drivers for explicitly parallel APIs like DX12 and Mantle.

NVidia, on the other hand, still has its work cut out for it.

 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
Depends on the benchmark you look at.

For example, the 4K results on that same site have the Fury X 12.5% ahead, with negative scaling again seen on Nvidia cards (this was reported on many other sites).

[Chart: Ashes of the Singularity, 3840x2160, ComputerBase]


A 290X ties a 980 Ti at Ars...

[Chart: Ars Technica benchmark results]


Over at PcPer, a 390X beats the GTX 980 by a good margin...

[Chart: PC Perspective, Core i7-6700K results]


It's possible that Fury X is ROP limited here though as the gap should be wider if all else were equal.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
It's possible that Fury X is ROP limited here though as the gap should be wider if all else were equal.

That's what Mahigan on overclock.net claimed, that the Fury X was limited by ROPs.

I guess we'll know for sure if NVidia and/or Oxide don't release a driver/patch that significantly increases performance for Maxwell and corrects the DX12 negative scaling.

Right now I'm inclined to believe that the issues are caused by the software being in the alpha stage, and being optimized mostly for the Radeons since AMD is a sponsor of the game.

It's basically Star Swarm all over again...
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
A guy named Bandersnatch comments on some of Mahigan's arguments from a programmer's perspective.

I'm in agreement with him. It looks like Oxide optimized the code mostly for AMD and not for NVidia. How else do you explain the lack of performance for the DX12 path vs that of the DX11 path?

The DX12 path should be much faster, period, unless they screwed up royally.

Of course you're in agreement with that random internet guy, since his entire post can be summarized by "not Nvidia's fault". When AMD performance isn't great in a GameWorks game, it's AMD's fault for not working with the dev, right? That's what all the obvious Nvidia supporters here repeat over and over. When Nvidia performance isn't great in a non-Gaming Evolved game, it's the dev's fault. Am I understanding this correctly? I mean Dan Baker, the co-founder of Oxide, said:

All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months

So what exactly are you talking about?
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Of course you're in agreement with that random internet guy, since his entire post can be summarized by "not Nvidia's fault". When AMD performance isn't great in a GameWorks game, it's AMD's fault for not working with the dev, right? That's what all the obvious Nvidia supporters here repeat over and over. When Nvidia performance isn't great in a non-Gaming Evolved game, it's the dev's fault. Am I understanding this correctly? I mean Dan Baker, the co-founder of Oxide, said:

And your post is obviously meant to deflect as much blame onto NVidia as possible.

If you knew anything about DX12, you would obviously find fault with the benchmarks as well. The whole point of DX12 is to put the burden of performance and optimization mostly on the developers, since it's their code and they should know it better than the IHVs do.

So Dan Baker's claim that IHVs had access to the source code is really irrelevant, since it was primarily his studio's responsibility to optimize the code and make sure it runs properly across all hardware.

Secondly, the game sometimes runs SLOWER in DX12 mode compared to DX11. That in and of itself should raise flags: a piece of code running slower on a high-performance, low-level API than on a highly abstracted one like DX11.

Thirdly, I remember several videos where AMD and Oxide talked about the superior parallel rendering of DX12 and Mantle compared to DX11, and how multicore CPUs would finally begin to stretch their legs.

Well, where is it? Looking at the CPU benchmarks, it looks like all the talk of multicore CPUs gaining larger increases has gone out the window. Here, a dual-core i3-4330 is faster than an eight-core AMD FX-8370.

It's possible that the poor CPU scaling may be impacting NVidia's performance.

[Chart: Ashes of the Singularity heavy batch CPU scaling, R9 390X]
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
A guy named Bandersnatch comments on some of Mahigan's arguments from a programmer's perspective.

I'm in agreement with him. It looks like Oxide optimized the code mostly for AMD and not for NVidia. How else do you explain the lack of performance for the DX12 path vs that of the DX11 path?

The DX12 path should be much faster, period, unless they screwed up royally.

Call it a taste of your own medicine. If it's true that the devs are focusing on AMD optimizations due to AMD's influence, so be it. Nvidia rode a wave of this into first place.

We'll have to see if NV steps up their own game and delivers optimizations after release.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
That's what Mahigan on overclock.net claimed, that the Fury X was limited by ROPs.

I guess we'll know for sure if NVidia and/or Oxide don't release a driver/patch that significantly increases performance for Maxwell and corrects the DX12 negative scaling.

Right now I'm inclined to believe that the issues are caused by the software being in the alpha stage, and being optimized mostly for the Radeons since AMD is a sponsor of the game.

It's basically Star Swarm all over again...

Wow. Hit the nail on the head.

Oxide have a very close working relationship with AMD. They probably design and do most of the testing on AMD GPUs.

The driver problems Nvidia is experiencing may be Nvidia's fault. But the fact that Oxide and AMD are so closely linked may be a contributing factor (i.e. the game is designed to be open, but where efficiencies/shortcuts exist, the devs have coded the game to be friendly to the AMD driver/GPU).

Either way, look at Star Swarm when it released and look at it now. Ashes of the Singularity will likely be the same.
 

Spjut

Senior member
Apr 9, 2011
931
160
106
Both AMD and Nvidia, as well as Intel, still have a lot of time to work on their DX12 drivers.

A guess on my part is that AMD could be better off, since all of its DX12 GPUs are based on the GCN architecture, so GCN optimizations from AMD and the game studios can benefit all of its cards, while Nvidia's Maxwell and Kepler architectures require different optimizations.
 

chimaxi83

Diamond Member
May 18, 2003
5,457
63
101
And your post is obviously meant to deflect as much blame onto NVidia as possible.

If you knew anything about DX12, you would obviously find fault with the benchmarks as well. The whole point of DX12 is to put the burden of performance and optimization mostly on the developers, since it's their code and they should know it better than the IHVs do.

So Dan Baker's claim that IHVs had access to the source code is really irrelevant, since it was primarily his studio's responsibility to optimize the code and make sure it runs properly across all hardware.

Secondly, the game sometimes runs SLOWER in DX12 mode compared to DX11. That in and of itself should raise flags: a piece of code running slower on a high-performance, low-level API than on a highly abstracted one like DX11.

Thirdly, I remember several videos where AMD and Oxide talked about the superior parallel rendering of DX12 and Mantle compared to DX11, and how multicore CPUs would finally begin to stretch their legs.

Well, where is it? Looking at the CPU benchmarks, it looks like all the talk of multicore CPUs gaining larger increases has gone out the window. Here, a dual-core i3-4330 is faster than an eight-core AMD FX-8370.

It's possible that the poor CPU scaling may be impacting NVidia's performance.

[Chart: Ashes of the Singularity heavy batch CPU scaling, R9 390X]

I'm not trying to assign blame. According to the dev, both companies are seemingly on a level playing field. I don't know how much truth there is to that. I'm pointing out that this exact situation, when the tables are turned, is AMD's fault. Why is it any different now?

Either way, this is an alpha, so judgement is being dealt too quickly. Same with ARK: even though I'm not a fan, I changed my thoughts on it because it's an alpha. This shouldn't even be an issue. I just wish developers would stay away from this close-relationship BS with AMD and NV altogether.
 
Feb 19, 2009
10,457
10
76
A guy named Bandersnatch comments on some of Mahigan's arguments from a programmer's perspective.

I'm in agreement with him. It looks like Oxide optimized the code mostly for AMD and not for NVidia. How else do you explain the lack of performance for the DX12 path vs that of the DX11 path?

The DX12 path should be much faster, period, unless they screwed up royally.

That thread rests on one major assumption: that Oxide is biased against NV, preventing NV from submitting their own optimized code and forcing them to run AMD-optimized code.

This is wrong. It's proven wrong by an official statement AND backed up with an example.

http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/

Often we get asked about fairness, that is, usually if in regards to treating Nvidia and AMD equally? Are we working closer with one vendor then another? The answer is that we have an open access policy. Our goal is to make our game run as fast as possible on everyone's machine, regardless of what hardware our players have.

For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

And this is the most important part of all: it shows these guys are very ethical, certainly on a level that GameWorks devs cannot even compare to:

We only have two requirements for implementing vendor optimizations: We require that it not be a loss for other hardware implementations, and we require that it doesn't move the engine architecture backward (that is, we are not jeopardizing the future for the present).

As for who is being honest and who is lying, I refer you to the NV PR claim that Oxide's game has an MSAA bug. It turned out to be an NV driver bug, and the game does it in accordance with DX12. That, and the recent history of NV stretching the truth (970 4GB, anyone?), would make a logical person side with Oxide on this one.

IF you don't want to believe that GCN is simply better designed for Mantle/DX12 than Maxwell, and that the performance issue is a combination of drivers and, importantly, hardware (you should read the threads at B3D, very informative from people who make games & engines for a living), then you can say the performance issue is due to the game still being in alpha/closed beta.

If it turns out Maxwell 2 cannot handle async compute & async shading without incurring a performance hit for context switching its in-order serial pipeline (DX12 performing worse than DX11 when games use those specific features), it means the hardware is fundamentally gimped for DX12. What are the chances of that? Think about how long uarchs are in development for, then think about where DX12 came from. Pascal will be NV's uarch for DX12, IMO.

http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/500#post_24325746
^ Stuff like this is what the guys on B3D have discussed: the "queues" don't mean much if using the async features of DX12 incurs a performance hit. HyperQ originated in Kepler, for Tesla SKUs, to boost compute performance. It wasn't designed for DX12, certainly not like GCN's out-of-order parallel ACEs.
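To make the queue discussion a bit more concrete, here is a minimal sketch (illustrative only, error checking omitted, and definitely not Oxide's actual code) of what async compute looks like at the D3D12 API level: one direct queue, one compute queue, and a fence for the single dependency between them. The API only expresses the opportunity to overlap; whether the work actually runs concurrently (GCN's ACEs) or gets serialized is entirely up to the hardware and driver, which is exactly what's being argued about here.

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // One queue for the 3D/graphics engine, one for the compute engine.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> gfxQueue, cmpQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&cmpQueue));

    // A fence expresses the one dependency the graphics queue actually has on
    // the compute work; everything else is free to overlap -- if the GPU can.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // ... ExecuteCommandLists() of compute work (e.g. lighting) on cmpQueue ...
    cmpQueue->Signal(fence.Get(), 1);  // compute results are ready
    gfxQueue->Wait(fence.Get(), 1);    // graphics waits only where it must
    // ... ExecuteCommandLists() of graphics work on gfxQueue ...
    return 0;
}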
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
For example, when Nvidia noticed that a specific shader was taking a particularly long time on their hardware, they offered an optimized shader that made things faster which we integrated into our code.

I may be reading things into it but this could easily be because that particular shader was designed with GCN in mind.

This seems like the thing that the devs (developing on both sets of hardware) should have realized and fixed on their own.

I could be completely wrong, but my guess is that Ashes of the Singularity was designed with AMD hardware as a priority.
 
Feb 19, 2009
10,457
10
76
I don't think even the best devs know more about a uarch than the actual IHV themselves. It would be very strange to have that expectation. It's why IHVs have optimization teams.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Call it a taste of your own medicine. If it's true that the devs are focusing on AMD optimizations due to AMD's influence, so be it. Nvidia rode a wave of this into first place.

We'll have to see if NV steps up their own game and delivers optimizations after release.

Comparing GameWorks to DX12 is erroneous, and I'll tell you why.

GameWorks is more about features than about performance: features that can, many times, be completely disabled; features that improve IQ and may not even run on AMD hardware, e.g. TXAA.

In the previous and current generations of GPUs, most games used abstract APIs like OpenGL, DX9, DX10 and DX11, and IHVs were largely responsible for performance via driver optimization, since these APIs prevented hardware-specific optimizations by the developer. Only low-level APIs like Mantle and DX12 let developers meaningfully affect performance by mapping much closer to the hardware.

And this is why developer optimization is so important for DX12: it is now more important than ever, since the burden of performance and optimization falls squarely on the developers.
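For anyone who hasn't touched the API, here is a minimal sketch (the helper names are made up for illustration, error checking omitted) of two chores the DX11 driver used to handle automatically that a DX12 developer now owns: resource state transitions and CPU/GPU fence synchronization. Getting either of these wrong is an app-side problem, not a driver problem.

Code:
#include <windows.h>
#include <d3d12.h>

// Transition a texture from render-target to shader-resource state. Under
// DX11 the driver tracked this hazard for you; under DX12 it is the app's job,
// and missing or redundant barriers are a classic source of lost performance.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture) {
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}

// Block the CPU until the GPU has passed 'fenceValue'. Call this too often, or
// in the wrong place, and a DX12 path can easily end up slower than DX11 --
// and that's on the app, not the driver.
void WaitForGpu(ID3D12Fence* fence, UINT64 fenceValue, HANDLE waitEvent) {
    if (fence->GetCompletedValue() < fenceValue) {
        fence->SetEventOnCompletion(fenceValue, waitEvent);
        WaitForSingleObject(waitEvent, INFINITE);
    }
}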
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Why is it any different now?

See my response to railven. With abstract APIs, the IHVs were largely responsible for performance by optimizing their drivers, since those APIs prevented hardware specific optimizations by their very nature.

When a game like the Witcher 3 is programmed, it's not programmed for AMD or NVidia hardware. It's programmed for DX11, and it's up to the IHVs to make efficient drivers that can translate the DX11 commands to the hardware.

With DX12 on the other hand, developers can now tap into the hardware themselves. A dev can now program their games to use a SPECIFIC architecture, much like what is done with consoles.

So the consequences for performance of over-optimizing for one architecture at the expense of another are very visible.
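As an illustration of what "programming for a specific architecture" can look like in practice, here's a minimal sketch (purely hypothetical, not something Oxide has said they do) of an engine detecting the GPU vendor through DXGI and picking a per-vendor code path or shader variant, console-style.

Code:
#include <windows.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // Standard PCI vendor IDs; an engine could key shader variants or
        // entire render paths off this.
        switch (desc.VendorId) {
            case 0x1002: printf("AMD adapter found: use GCN-tuned path\n");         break;
            case 0x10DE: printf("NVIDIA adapter found: use Maxwell/Kepler path\n"); break;
            case 0x8086: printf("Intel adapter found: use conservative path\n");    break;
            default:     printf("Unknown vendor 0x%04X: use generic path\n", desc.VendorId);
        }
    }
    return 0;
}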
 

tential

Diamond Member
May 13, 2008
7,348
642
121
So can someone refresh my memory: was the performance of graphics cards in DX11 determined by the first DX11 alpha benchmark that came out a year before the release of a game?
 
Feb 19, 2009
10,457
10
76
With DX12 on the other hand, developers can now tap into the hardware themselves. A dev can now program their games to use a SPECIFIC architecture, much like what is done with consoles.

Yes, but still within the context/limits of the API, DX12 in this case. So it's up to the IHVs to make hardware that runs best with this particular API, if not now, then certainly next-gen.

When AMD debuted GCN years ago, they designed it for an API that wasn't available yet, but it was in the works.

https://www.khronos.org/assets/uplo...y/2015-gdc/Valve-Vulkan-Session-GDC_Mar15.pdf

[Slide from Valve's Vulkan session at GDC 2015]


How likely is it that Maxwell 2 was designed and optimized for DX12? Hmm.
 
Feb 19, 2009
10,457
10
76
So can someone refresh my memory: was the performance of graphics cards in DX11 determined by the first DX11 alpha benchmark that came out a year before the release of a game?

Current gen at the time? The 5870 vs the 480. The 480 killed it in tessellation, in benchmarks like Unigine & TessMark (which were out before games). NV also hyped up tessellation to the max, even making multiple demos to showcase its usage (compare that to the relative quiet re: DX12!). IIRC, in games like Stalker & Metro, which had some of the first tessellation usage among DX11 games, the 480 had an advantage.

AMD instead focused on DirectCompute; we can see them investing in features for games that use it, such as deferred lighting & global illumination. So which games ran better on which hardware comes down to whether the game was tessellation-heavy (Crysis 2!) or DirectCompute-heavy (Dirt Showdown), etc.

In this context, DX12 brings a low-level API with more thread support as well as lower overhead. This should in theory benefit everyone, but AMD more, since their DX11 driver is crippled by single-threaded submission and its inability to utilize the ACEs. The other touted features are async compute & shading. So it will depend on the game and the features used.

Ashes has both high draw call counts and async compute/shading usage. The devs mention the use of async compute for lighting, their "thousands of dynamic lights", etc.
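For context on the draw call / thread support point, here's a minimal sketch (illustrative only, not from Ashes; error checking and the actual draw calls omitted) of DX12's multi-threaded command list recording: each CPU thread records its own command list against its own allocator, and everything is submitted in one cheap ExecuteCommandLists call. This is where the "more cores, more draw calls" claim comes from; DX11's immediate context serialized most of this on one thread.

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <algorithm>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qDesc = {};
    qDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    // One allocator + command list per worker thread; that's the unit of
    // CPU-side parallelism DX12 exposes.
    const unsigned workers = std::max(2u, std::thread::hardware_concurrency());
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (unsigned t = 0; t < workers; ++t) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[t]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[t].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[t]));
        // Each thread records its slice of the frame's draw calls independently.
        threads.emplace_back([&lists, t] {
            // ... SetPipelineState / DrawIndexedInstanced calls would go here ...
            lists[t]->Close();
        });
    }
    for (auto& th : threads) th.join();

    // One cheap submission of everything the workers recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}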
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
So now there is this: http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/400#post_24321843

It is kind of saying Nvidia put all their eggs in the DX11 basket and not to expect great things for DX12. I am not so sure I want a 980 Ti now >.<

This is what I had thought might be the case. It did not seem like a problem with the game, since the Oxide posts looked like they went by the book on this: a solid engine, checked by all the IHVs and Microsoft. Nvidia, having the game code, would have come out with a much more robust objection than they did. Nvidia also had access to the game code forever, so it should not have been their drivers either (they are driver gods, after all).

As far as Pascal goes, it depends on whether Nvidia expected DX12 to work as it does. If major changes were made too far into Pascal's development, it might still have most of the same limitations.

We also see why Nvidia has been quiet about DX12 for the most part.

Seeing that post, I don't have much hope Nvidia will release anything to improve their DX12 performance. I do recognize that, in a way, saying the game was optimized for AMD is a valid argument. But which way is the right way forward? If it's "optimized" for AMD just because it is a well-made DX12 engine and AMD has the better DX12 hardware, should we be complaining? They could alter the engine to do better with DX11-centered hardware, but what benefit would that be to us? Just play DX11 if you have to.

Things could get very messy going forward if Nvidia decides to throw their cash around and gimp/alter DX12 games. Oxide decided to code the game to take advantage of DX12, but with Nvidia's influence they could forgo aspects of it and design it to suit Nvidia's more limited hardware instead.
 
Feb 19, 2009
10,457
10
76
Things could get very messy going forward if Nvidia decides to throw their cash around and gimp/alter DX12 games. Oxide decided to code the game to take advantage of DX12, but with Nvidia's influence they could forgo aspects of it and design it to suit Nvidia's more limited hardware instead.

Fully expected: PC ports with GameWorks will push the FL12_1 subset of DX12 to give them an edge. All's fair in business & war, as they say. A year from now, I'll look back at this thread and say: yep, called it.
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
That thread rests on one major assumption: that Oxide is biased against NV, preventing NV from submitting their own optimized code and forcing them to run AMD-optimized code.

I don't believe Oxide is actively biased. I just think that the game, being in an alpha state, is more optimized for AMD hardware than it is for NVidia, given the fact that AMD is a partner and sponsor of theirs and they have used Mantle before.

So it's logical to believe that Oxide would have spent more time optimizing their engine and their game with AMD hardware.

As for who is being honest and who is lying, I refer you to the NV PR claim that Oxide's game has an MSAA bug. It turned out to be an NV driver bug, and the game does it in accordance with DX12. That, and the recent history of NV stretching the truth (970 4GB, anyone?), would make a logical person side with Oxide on this one.

It's not a matter of lies and conspiracy. It's just that the NVidia path isn't optimized to its full potential.

IF you don't want to believe that GCN is simply better designed for Mantle/DX12 than Maxwell, and that the performance issue is a combination of drivers and, importantly, hardware (you should read the threads at B3D, very informative from people who make games & engines for a living), then you can say the performance issue is due to the game still being in alpha/closed beta.

It's too early to come to conclusions on this. Look at what happened with Star Swarm, a benchmark from the same studio. When Star Swarm launched, AMD was killing NVidia with Mantle. Then NVidia came out with a driver that drastically boosted performance and they caught up....with regular DX11!

And then with DX12, NVidia is, to my knowledge, still faster than AMD in that particular benchmark.

If it turns out Maxwell 2 cannot handle async compute & async shading without incurring a performance hit for context switching its in-order serial pipeline (DX12 performing worse than DX11 when games use those specific features), it means the hardware is fundamentally gimped for DX12. What are the chances of that? Think about how long uarchs are in development for, then think about where DX12 came from. Pascal will be NV's uarch for DX12, IMO.

So NVidia's Maxwell architecture, a commercial success by all accounts that has consistently managed to outperform AMD in most metrics at lower power levels and has 12_1 feature level compatibility, can't effectively use a fundamental feature of DX12 because of an in-order serial design...? o_O

Sorry but that's bunk...especially the latter. There is NOTHING serial about modern GPUs. NVidia's GigaThread engine orchestrates everything that goes on in the graphics core, and it's all done in parallel.

Stuff like this is what the guys on B3D have discussed: the "queues" don't mean much if using the async features of DX12 incurs a performance hit. HyperQ originated in Kepler, for Tesla SKUs, to boost compute performance. It wasn't designed for DX12, certainly not like GCN's out-of-order parallel ACEs.

What proof or evidence do you have that Maxwell incurs a performance hit for using asynchronous compute?
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Regarding the AMD logo

The AMD logo is there because the developers first started to program their game using the AMD Mantle API. The game they wanted to build was pretty much impossible without Mantle. They built their game on AMD Mantle only to port it over to DirectX 12 afterwards (Mantle and DirectX 12 being incredibly similar).

The developer also worked closely with both nVIDIA and AMD. That's why you see nVIDIA's rather impressive DX11 performance. nVIDIA has had access to the code for over a year now (as have AMD). All of this is verifiable on the Developers blog: http://oxidegames.com/2015/08/16/the-birth-of-a-new-api/

http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/410#post_24322004

So NVidia's Maxwell architecture, a commercial success by all accounts that has consistently managed to outperform AMD in most metrics at lower power levels and has 12_1 feature level compatibility, can't effectively use a fundamental feature of DX12 because of an in-order serial design...?

Sorry but that's bunk...especially the latter. There is NOTHING serial about modern GPUs. NVidia's GigaThread engine orchestrates everything that goes on in the graphics core, and it's all done in parallel.

Maxwell 2 is clearly good DX11 hardware.