ComputerBase: Ashes of the Singularity Beta 1 DirectX 12 Benchmarks

Page 28 - AnandTech Forums

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
What I would like to see at this point is how OC affects the performance of this game. Especially on AMD GPUs.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Wow, I miss one symbol and suddenly you can't answer, even though you got the idea of what I was saying.

If you are implying that the DX12 scenario gimps performance on Nvidia hardware...

Fury X without Asynchronous Compute is still faster than GTX 980 Ti.

As it should be; it has 2 TFLOPS more compute power.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Azix,
sure, they said this. But where do they say that they will use the modified branch? :D

Obviously they are trying to talk their way out of the marketing trap. The fact is they have only promoted their company, their engine, and this game with AMD, and used only AMD hardware to program it. Sure it will run great on that hardware. And it is time-consuming to optimize for other hardware when you have never cared about it.

"We believe our code is very typical of a reasonably optimized PC game." - Anno 2205 looks much better and runs twice as well on nVidia hardware with DX11. Even games like the Total War series run better and can display hundreds of units. Marketing at best.
 
Last edited:

dogen1

Senior member
Oct 14, 2014
739
40
91
Shaders? Nope. Shaders still get translated, and the driver can swap them out if an IHV wants to.
What the driver can't do is optimize memory management and other things; that is up to the developer. No improvement under DX12 points to sloppy work. The nVidia card isn't under full load, which doesn't make any sense for an engine designed for a low-level API. Look at the results from 1080p to 4K: the nVidia hardware loses only 33% while the pixel count quadruples.

Maybe this is related to their object space rendering method. I'm not sure how it works, but apparently shading rate is decoupled from rasterization. If that's true resolution may not affect performance as much.
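As a back-of-the-envelope sketch of why decoupled (object-space) shading would make frame time insensitive to resolution, here is a toy Python cost model. The per-pixel and per-sample costs and the shading budget are entirely made-up numbers for illustration, not anything measured from the Nitrous engine.

```python
# Toy model: if shading happens in object space, shading cost depends on a
# fixed object-space sample budget, not on screen resolution; only the
# rasterization/resolve work scales with pixel count.

def frame_cost_ms(pixels, shade_samples,
                  raster_ns_per_pixel=2.0, shade_ns_per_sample=10.0):
    """Estimated frame time (ms) for a renderer with decoupled shading.

    The nanosecond costs are hypothetical placeholders.
    """
    raster_ms = pixels * raster_ns_per_pixel * 1e-6        # scales with resolution
    shade_ms = shade_samples * shade_ns_per_sample * 1e-6  # fixed per frame
    return raster_ms + shade_ms

p1080 = 1920 * 1080
p4k = 3840 * 2160          # 4x the pixels of 1080p
samples = 2_000_000        # assumed object-space shading budget

cost_1080 = frame_cost_ms(p1080, samples)
cost_4k = frame_cost_ms(p4k, samples)
print(f"1080p: {cost_1080:.1f} ms, 4K: {cost_4k:.1f} ms, "
      f"slowdown {cost_4k / cost_1080:.2f}x for 4x the pixels")
```

With these invented constants the model loses only about a third of its frame rate going from 1080p to 4K, which is the kind of sublinear scaling described above.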
 
Last edited:

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
I think you are vastly overestimating how much the typical pc gamer cares. The minority of dgpu owners that even look at game benchmarks overlaps to some (hopefully large) degree with people who know enough to understand why AMD cards are aging better when DX12 games become common.

Regardless, your argument only applies in this very narrow window of time we are in currently, and Tomb Raider may be the only real victim.

Nvidia itself will make maxwell look bad with pascal since it will no doubt offer real async compute support (unless you believe nvidia is completely oblivious to the direction real time graphics are headed in). The type of person who buys the latest and greatest hardware is probably also more likely to be a vocal fanboy. If devs really need to keep nvidia fanboys happy, wouldn't the GTX 1070 and 1080 folks be the ones that matter?

Nvidia is already "sabotaging" kepler performance vs maxwell and it isn't hurting them one bit. Why do you assume gameworks dx12 games in 2017 won't have async stuff that cripples maxwell cards? Nvidia's biggest competitor at this point is old nvidia cards. Pushing new tech (of questionable value) to entice upgrades helps their business.
We don't know anything about Pascal's DX12 capabilities at all. The only thing we have seen so far is some SP and DP performance claims, and that's it.
If the rumors that Pascal is just a bigger Maxwell 2.0 compute beast are right, then there is no reason to believe it will actually be good at DX12.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Azix,
sure, they said this. But where do they say that they will use the modified branch? :D

Obviously they are trying to talk their way out of the marketing trap. The fact is they have only promoted their company, their engine, and this game with AMD, and used only AMD hardware to program it. Sure it will run great on that hardware. And it is time-consuming to optimize for other hardware when you have never cared about it.

"We believe our code is very typical of a reasonably optimized PC game." - Anno 2205 looks much better and runs twice as well on nVidia hardware with DX11. Even games like the Total War series run better and can display hundreds of units. Marketing at best.

Why would anyone promote with Nvidia when their current hardware isn't built for modern APIs? AMD is the only horse in the race built for the future. I thought that part had been made crystal clear to you several times, by several people, over several weeks now.

Displaying hundreds of units? AoTS can do thousands, all of which have AI down to the individual turret and produce individual lights. That's not even possible in DX11 without a guaranteed slideshow.
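To make the draw-call argument above concrete, here is a toy Python model of CPU submission cost per frame. The per-call overheads are illustrative guesses, not measured driver numbers; the point is only the order-of-magnitude gap between a validation-heavy DX11-style submission path and a thin DX12-style command-list path.

```python
# Rough model of why thousands of per-unit draw calls choke a DX11-style
# path: the CPU pays a fixed overhead per call, and at thousands of calls
# per frame that overhead alone blows past a 60 fps (16.7 ms) budget.

def cpu_submit_ms(draw_calls, us_per_call):
    """CPU time (ms) spent just submitting draw calls, at a fixed
    per-call overhead in microseconds (hypothetical values)."""
    return draw_calls * us_per_call / 1000.0

units = 5000  # thousands of on-screen units, one draw call each (simplified)
dx11_cost = cpu_submit_ms(units, us_per_call=40.0)  # heavy driver validation
dx12_cost = cpu_submit_ms(units, us_per_call=4.0)   # thin command lists

print(f"DX11-style submit: {dx11_cost:.0f} ms/frame")
print(f"DX12-style submit: {dx12_cost:.0f} ms/frame")
```

Real engines batch and instance to cut call counts, so these numbers are a sketch of the bottleneck, not a claim about any particular game.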
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Why would anyone promote with Nvidia when their current hardware isn't built for modern APIs? AMD is the only horse in the race built for the future. I thought that part had been made crystal clear to you several times, by several people, over several weeks now.

Sure. The only horse in the race when you get paid to shoot everyone else. :D

Displaying hundreds of units? AoTS can do thousands, all of which have AI down to the individual turret and produce individual lights. That's not even possible in DX11 without a guaranteed slideshow.

You don't need DX11. You can just use DX9 for it:
https://youtu.be/Jq8OOe3yVkc?t=287

Doesn't even look worse. :\
 

pj-

Senior member
May 5, 2015
501
278
136
Why would anyone promote with Nvidia when their current hardware isn't built for modern APIs? AMD is the only horse in the race built for the future. I thought that part had been made crystal clear to you several times, by several people, over several weeks now.

Displaying hundreds of units? AoTS can do thousands, all of which have AI down to the individual turret and produce individual lights. That's not even possible in DX11 without a guaranteed slideshow.

Then how is there a DX11 version of AoTS?
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
I guess I'm set for DX12 with my el cheapo reference 290 with a Gelid Icy cooler.
Shame we get benchmarks only and no games yet (shame on you, Tomb Raider!).

I'll be waiting eagerly for DX12 games; meanwhile it is mining crypto coins for me at a rate of $130/month :) I hope it will have mined enough for a Pascal upgrade by then!
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
I think that by the time async compute is in enough games to be important, nvidia will have launched its full pascal line, except maybe the very low end.

I agree. I also call that time "late 2017." Ten AAA games between now and then is a lot.
 

Vaporizer

Member
Apr 4, 2015
137
30
66

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
I think that by the time async compute is in enough games to be important, nvidia will have launched its full pascal line, except maybe the very low end.
As the new benches show, it's not about async; async only makes a small difference.
It's about what Despoiler said: this is the computational killer, and it's why AMD, with the higher TFLOPS, gets better numbers. But you have to ask yourself how many game devs are going to go through this kind of trouble of displaying thousands of units with AI and lighting and all that. That kind of work will raise development costs by a fair amount, and I doubt consoles could even pull it off. So doing this only for PC games? Yeah, right!

Displaying hundreds of units? AoTS can do thousands, all of which have AI down to the individual turret and produce individual lights. That's not even possible in DX11 without a guaranteed slideshow.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
They don't need to give an ETA because Oxide has sabotaged their DX12 performance. Async will not change this; the whole path is just bad.

Reviewers need to stop using vaporware and a biased, paid benchmark to make such false claims. But I guess we need to wait for neutral games like Quantum Break and Fable Legends to put Oxide to rest.

It is like using 3DMark to determine the performance of cards in real games.

As the new benches show, it's not about async; async only makes a small difference.
It's about what Despoiler said: this is the computational killer, and it's why AMD, with the higher TFLOPS, gets better numbers. But you have to ask yourself how many game devs are going to go through this kind of trouble of displaying thousands of units with AI and lighting and all that. That kind of work will raise development costs by a fair amount, and I doubt consoles could even pull it off. So doing this only for PC games? Yeah, right!

Anno 2205. Oh, and that game looks and runs much better than this "DX12 showcase" benchmark.
 
Last edited:

Magee_MC

Senior member
Jan 18, 2010
217
13
81
They don't need to give an ETA because Oxide has sabotaged their DX12 performance. Async will not change this; the whole path is just bad.

Reviewers need to stop using vaporware and a biased, paid benchmark to make such false claims. But I guess we need to wait for neutral games like Quantum Break and Fable Legends to put Oxide to rest.

It is like using 3DMark to determine the performance of cards in real games.

Anno 2205. Oh, and that game looks and runs much better than this "DX12 showcase" benchmark.

Oxide sabotaged their DX12 performance?!?!?!?

I don't suppose that you would have any citations, or proof of such extraordinary claims, would you?
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
They sabotaged it because Nvidia hardware is slower under DX12.

Nothing to do with the hardware, I guess.
 
Feb 19, 2009
10,457
10
76
But I guess we need to wait for neutral games like Quantum Break and Fable Legends to put Oxide to rest.

My question to you, since you're so actively hostile toward Oxide because their DX12 implementation is, according to you, unfair: what if similar results are observed in other games?

Such as: Async Shading is not possible on NV GPUs, and they gain no performance from enabling it.

Will you come here and apologize to everyone you have slurred this past year?

Know who you are insulting first: these guys were among the first to DX11. They implemented DX11 multi-threaded rendering and compute-shader-based rendering in Civilization V, well before others.

Now they are at the forefront of DX12 (first to multi-GPU adapter, too!), and despite saying publicly that they are being more than fair, providing every hardware vendor source access from alpha (!!), you continue to smear them with your FUD.

Why?

Think back to Star Swarm: it wasn't long ago that their DX12 draw-call implementation showed NV GPUs in a superior light, and you happily used that for your agenda. Well, here's the thing: DX12 games are more than just raw draw calls.

Back to async compute: I have said all along, since 2014, that Maxwell is NOT capable of Async Compute/Shading. It still has ONE engine, and that is totally against the principle of DX12 multi-engine workloads.
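For readers following along, here is a toy Python timing model of what DX12's multi-engine design is supposed to buy: with a single engine, compute work queues up behind graphics work; with async compute, some of it hides in idle gaps on the shader cores. The workload sizes and the idle fraction are invented purely for illustration.

```python
# Illustrative-only model of serial vs. overlapped (async) execution.

def serial_frame(graphics_ms, compute_ms):
    """Single engine: compute work waits for graphics work to finish."""
    return graphics_ms + compute_ms

def async_frame(graphics_ms, compute_ms, idle_fraction=0.3):
    """Two engines: compute fills idle ALU time during the graphics pass.

    idle_fraction is a made-up estimate of how much of the graphics pass
    leaves shader cores idle (e.g. shadow-map or depth-only rendering).
    """
    hidden = min(compute_ms, graphics_ms * idle_fraction)
    return graphics_ms + compute_ms - hidden

g, c = 10.0, 4.0  # hypothetical per-frame workloads in milliseconds
print(f"serial: {serial_frame(g, c):.1f} ms, async: {async_frame(g, c):.1f} ms")
```

If a GPU cannot actually run the two queues concurrently, the "async" path degenerates to the serial one, which is consistent with NV cards gaining nothing from enabling it.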

They knew all along their hardware was incapable, but they had no issue telling lies to the press, even telling AnandTech they could do it. Now they have had to issue a correction saying it's not enabled, despite promising a long time ago to enable it.

I find it difficult to understand why people defend their actions. They tell lies to sell more hardware (the 3.5GB 970) and people defend that... they tell lies that their GPUs support DX12 async compute to sell more hardware, and you're defending that too?

Here's the final say, because it's officially from AMD, unless NV would like to make a statement against this:

NV GPUs do not have functional Async Shading/Compute. Not in Vulkan.

https://community.amd.com/community...on-gpus-are-ready-for-the-vulkan-graphics-api

Only Radeon™ GPUs built on the GCN Architecture currently have access to a powerful capability known as asynchronous compute, which allows the graphics card to process 3D geometry and compute workloads in parallel.

Not in DX12.

https://community.amd.com/community...blade-stealth-plus-razer-core-gaming-solution

AMD's advanced Graphics Core Next (GCN) architecture, which is currently the only architecture in the world that supports DX12's asynchronous shading.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
My question to you, since you're so actively hostile against Oxide because their DX12 implementation according to you is unfair... what if similar results are observed for other games?

And what if not? :\

Such as: Async Shading is not possible on NV GPUs, and they gain no performance from enabling it.
There are other hardware features, like CR, ROV, etc., which would improve performance. Why is Oxide not using them to deliver better graphics?

Will you come here and apologize to everyone you have slurred this past year?
Why not. Will you do the same?

Think back to Star Swarm: it wasn't long ago that their DX12 draw-call implementation showed NV GPUs in a superior light, and you happily used that for your agenda. Well, here's the thing: DX12 games are more than just raw draw calls.
"More than draw calls"? What exactly is this engine doing that doesn't improve performance on nVidia hardware at all? There is no difference between Star Swarm and Ashes: both show a huge number of units with lighting effects. And yet even at 720p Ashes doesn't scale on nVidia hardware and results in worse performance than Anno 2205 with nearly a million citizens.
 
Feb 19, 2009
10,457
10
76
@sontin
If I am wrong about Async Shading and NV GPUs can in fact do it on their hardware, then I will apologize, definitely.

Will you?

I suggest that until NV has proven otherwise, you calm down with your smearing of developers who have gone above and beyond, providing full source code and even pushing DX12 multi-adapter so NV and AMD GPUs can work together. If you really care, why don't you ask NV why they don't refute the claim from AMD? It's a public claim, after all.

Only Radeon™ GPUs built on the GCN Architecture currently have access to a powerful capability known as asynchronous compute, which allows the graphics card to process 3D geometry and compute workloads in parallel.

ps. Star Swarm had no dynamic lights whatsoever, and many projectiles weren't even rendered. It was designed as a draw-call bottleneck; their designers said so. In many ways it's similar to the 3DMark DX12 draw-call benchmark.
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
@sontin
If I am wrong about Async Shading and NV GPUs can in fact do it on their hardware, then I will apologize, definitely.

Will you?

Why do you always bring up Async Shading? Seriously, DX12 is much more than that. DX12 in Ashes doesn't improve performance over DX11 on nVidia hardware, and that has nothing to do with not supporting Async Shaders. :\
Otherwise there would never be an improvement on any hardware platform other than AMD. But even on a smartphone, Vulkan can increase performance: https://www.youtube.com/watch?v=P_I8an8jXuM

I suggest until NV has proven otherwise, you calm down with your smearing of developers who have gone above and beyond with providing full source code and even push DX12 multi-adapter, so NV GPU + AMD GPU can work together. If you really care, why don't you ask NV why they don't refute the claims from AMD, it's a public claim afterall.
Why should nVidia do something like that? They are not responsible for the DX12 implementation.

ps. Starswarm had no dynamic lights whatsoever. Many projectiles weren't even rendered. It was designed as a drawcall bottleneck, said so by their designers. In many ways, its similar to the 3dMark DX12 drawcall benchmark.
No, it was designed to showcase the need for a low level API:
Star Swarm is a real-time demo of Oxide Games’ Nitrous engine, which pits two AI-controlled fleets against each other in a furious space battle. Originally conceived as an internal stress test, Oxide decided to release Star Swarm so that the public can share our vision of what we think the future of gaming can be. The simulation in Star Swarm shows off Nitrous’ ability to have thousands of individual units onscreen at once, each running their own physics, AI, pathfinding, and threat assessments.


Nitrous uses the power of its proprietary SWARM (Simultaneous Work and Rendering Model) technology to achieve incredible performance on modern, multi-core computer hardware. SWARM allows Nitrous to do things previously thought impossible in real-time 3D rendering, like Object Space Lighting — the same techniques used in the film industry — and having thousands of unique individual units onscreen at once.
http://www.oxidegames.com/star-swarm/


Oh and from their "DX12" site:
Q. So… translation, please?
A. DirectX 12 basically gives you more performance without having to swap out for new hardware. Trust us, it's a good thing!
http://www.oxidegames.com/directx-12/
Looks like this got lost between Star Swarm and Ashes. :D
 
Feb 19, 2009
10,457
10
76
@sontin
Am I getting this right? You were one of the few who railed on when evidence was presented, ages ago, that NV GPUs can't do Async Shading. Now proof is coming out that confirms it: AMD publicly saying only GCN hardware can do it, and NV offering no rebuttal.

Yet you offer no apology to all those you smeared (and you continue to smear Oxide); you are weaseling and goalpost-shifting to other DX12 features...

Be a man. Admit when you are wrong; it's okay.

ps. We all know there are other DX12 features, this was never about that.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
I would like an analysis of the differences between the DX12 and DX11 builds. If more is happening in DX12 than in DX11, it would explain why nvidia doesn't gain performance in DX12. They could then do the same thing faster in DX12, answering:

Looks like this got lost between Star Swarm and Ashes.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
http://www.techspot.com/article/1137-directx-12-multi-gpu-geforce-radeon/

[Image: 1080p_Crazy_390.png]

[Image: 1080p_Crazy_FuryX.png]
 
Last edited:
Feb 19, 2009
10,457
10
76
https://twitter.com/PellyNV/status/702556025816125440

Fun FACT of the day: Async Compute is NOT enabled on the driver-side with public Game Ready Drivers. You need app-side + driver-side!

They said it would be enabled a long time ago now.

Then there's this, from a little over a week ago:

http://www.overclock.net/t/1590939/...-async-compute-yet-says-amd/370#post_24898074

Async compute is currently forcibly disabled on public builds of Ashes for NV hardware. Whatever performance changes you are seeing driver to driver doesn't have anything to do with async compute.

This explains the lack of extra dynamic lights in those side by side videos of prior builds.

But then there's this:

I can confirm that the latest shipping DX12 drivers from NV do support async compute. You'd have to ask NV how specifically it is implemented.