Ashes of the Singularity User Benchmarks Thread


antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Ahem...

It seems they both support a variation of Async Shaders; AMD's can handle more before performance drops, while Maxwell 2 can handle up to 31 before performance drops and it levels out with AMD.

"Maxwell is capable of Async compute (and Async Shaders), and is actually faster when it can stay within its work order limit (1+31 queues). Though, it evens out with GCN parts toward 96-128 simultaneous command lists (3-4 work order loads). Additionally, it exposes how differently Async Shaders can perform on either architecture due to how they're compiled.

These preliminary benchmarks are NOT the end-all-be-all of GPU performance in DX12, and are interesting data points in an emerging DX12 landscape."


https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/

It's going to be down to developers to code efficiently for all platforms.

The guy in that link (Alarchy) misunderstands what async compute is; he confuses it with the ability to support multiple compute queues (32 in the case of Maxwell 2), which is a completely separate (albeit complementary) feature.

For a better explanation of the results from the Beyond3D program, look at this one (already posted earlier in this thread, I believe):
https://www.reddit.com/r/pcmasterra...e_all_jump_to_conclusions_and_crucify/cumlmwv
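
To make the distinction concrete, here is a minimal D3D12 sketch (my own illustration, not code from either test): every D3D12 device will hand you a COMPUTE queue next to the graphics queue, and nothing at the API level tells you whether work on those queues actually executes concurrently on the GPU. That concurrency is the "async compute" question, separate from how many queues you can create.

```cpp
// Minimal sketch, illustration only. Link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Default adapter; error handling omitted for brevity.
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The graphics (DIRECT) queue every engine already has.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> directQueue;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

    // A dedicated COMPUTE queue. Creating it succeeds on GCN and Maxwell
    // alike; whether work submitted here overlaps the DIRECT queue on the
    // GPU (true async compute) is invisible at the API level.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return 0;
}
```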
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Simple as what?
Did nvidia have access to Windows 10 or to DX12?
No! They had access to DX11, and there they killed it; Ashes runs twice as fast on nvidia as it does on AMD. And it is an AMD title, figure that.

I remember certain members here saying that NVIDIA had access to DX-12 years ago. Now you are telling me that NVIDIA only had access to DX-11??? Are you kidding me??

So MAXWELL is very well optimized for DX-11, nothing new here. But for MAXWELL in DX-11 to perform almost on par with the DX-12 code, it needs more CPU workload. So if you don't use a fast CPU, then MAXWELL in DX-12 will be faster.

Let somebody do the test and see what happens ;)
 

iiiankiii

Senior member
Apr 4, 2008
759
47
91
They are insulting themselves by releasing an alpha benchmark of DX12 where DX12 is actually slower than DX11.

They should at least have released it to the major (i)GPU manufacturers (AMD, nvidia, Intel) for a month or so of internal testing, rather than knowingly releasing it in a state that makes one company look very bad and the other look very good.

Welcome to GameWorks. Sucks to have such practices, doesn't it?
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
Ahem...

It seems they both support a variation of Async Shaders; AMD's can handle more before performance drops, while Maxwell 2 can handle up to 31 before performance drops and it levels out with AMD.

"Maxwell is capable of Async compute (and Async Shaders), and is actually faster when it can stay within its work order limit (1+31 queues). Though, it evens out with GCN parts toward 96-128 simultaneous command lists (3-4 work order loads). Additionally, it exposes how differently Async Shaders can perform on either architecture due to how they're compiled.

These preliminary benchmarks are NOT the end-all-be-all of GPU performance in DX12, and are interesting data points in an emerging DX12 landscape."


https://www.reddit.com/r/nvidia/comments/3j5e9b/analysis_async_compute_is_it_true_nvidia_cant_do/

It's going to be down to developers to code efficiently for all platforms.

This is just a bunch of misinformation; people really need to stop trying to use that test as some sort of benchmark. It's not one. All it's doing is showing that, in this test, GCN is doing compute and graphics at the same time and Maxwell isn't.

Just look at the 960 vs 980 Ti results: the 960 has better compute times in that test than the 980 Ti....
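
For what that test can legitimately tell you, the inference is just a timing comparison. Here is a rough sketch of the logic with hypothetical numbers, CPU-side for illustration only; the actual tool times GPU submissions with fences:

```cpp
#include <algorithm>
#include <cstdio>

// Classify whether graphics and compute overlapped, given three measured
// times: graphics alone, compute alone, and both submitted together.
// Illustration only; the real tool measures these on the GPU.
const char* Classify(double graphicsMs, double computeMs, double bothMs) {
    double serialized = graphicsMs + computeMs;          // no overlap at all
    double overlapped = std::max(graphicsMs, computeMs); // full overlap
    // Closer to max(g, c) means the queues ran concurrently;
    // closer to g + c means they were serialized.
    return (bothMs < (serialized + overlapped) / 2.0) ? "concurrent" : "serialized";
}

int main() {
    std::printf("%s\n", Classify(10.0, 10.0, 11.0)); // concurrent (GCN-like)
    std::printf("%s\n", Classify(10.0, 10.0, 19.5)); // serialized (Maxwell-like)
    return 0;
}
```

That tells you whether overlap happened on this particular workload, and nothing about relative GPU performance, which is exactly why the 960 beating the 980 Ti on compute times should be a red flag against reading it as a benchmark.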
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
I remember certain members here saying that NVIDIA had access to DX-12 years ago. Now you are telling me that NVIDIA only had access to DX-11??? Are you kidding me??

So MAXWELL is very well optimized for DX-11, nothing new here. But for MAXWELL in DX-11 to perform almost on par with the DX-12 code, it needs more CPU workload. So if you don't use a fast CPU, then MAXWELL in DX-12 will be faster.

Let somebody do the test and see what happens ;)

It has already been done.
http://cdn.arstechnica.net/wp-conte...ew-chart-template-final-full-width-3.0021.png
http://cdn.arstechnica.net/wp-conte...ew-chart-template-final-full-width-3.0011.png
Look at the Nvidia 4K result and compare the 6-core score to the 4-core (HT off) result.
http://cdn.arstechnica.net/wp-content/uploads/sites/3/2015/08/heavy.001.png
http://cdn.arstechnica.net/wp-content/uploads/sites/3/2015/08/heavy.002.png
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
And yet DX11 shows a bigger gain than DX12.

And the result with the 4-core system doesn't fit any of the other results, so it's more of an outlier than the norm.
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Seriously, SIGGRAPH 2015 was recent. Imagine Dan Baker and Tim Foley in that same conference room educating people about next-gen APIs... and here a bunch of forum warriors are attacking their credibility on the very topic they were invited to represent as the best in their fields.

I recall a similar attack against DICE's Johan Andersson for being AMD PR or a shill back during the Mantle announcement. Guess what? All the Frostbite games ran excellently on NV hardware, even better than on AMD. Proof right there of a high standard of ethics.

Attacking respectable messengers when they deliver a message you dislike hearing is shameless.
Think about the incredible work these guys have done in partnership with AMD; we've gone from a clunky and limited DX11 to DX12/Mantle/Vulkan, which finally gives the ability to push the hardware the way consoles have been doing forever.
 

TheELF

Diamond Member
Dec 22, 2012
4,029
753
126
I remember certain members here saying that NVIDIA had access to DX-12 years ago.
How can that be, if DX12 is based on Mantle and AMD only released the Mantle info after DX12 was almost ready?
 

TheELF

Diamond Member
Dec 22, 2012
4,029
753
126
Welcome to GameWorks. Sucks to have such practices, doesn't it?
So you are saying that DX12 is an AMD-only technology that will not run on any other hardware?
Because that's what GameWorks is: it only runs on nvidia, just like Mantle only ran on AMD.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
It's based on the Mantle DRIVER. Every API is different, even in code, but the core of it is the same. That's what makes DX12 not locked to any of the vendors. Mantle was only for showcasing purposes; AMD did not bother developing it further, since its core was, and is, in every API that exists right now. However, Mantle looks to be an experimental laboratory for AMD and their APIs. Even the newest ray tracing API for OpenCL is based on Mantle.

http://developer.amd.com/community/blog/2015/08/14/amd-firerays-library/
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
Welcome to GameWorks. Sucks to have such practices, doesn't it?

Can we stop with the GameWorks BS? This "test" does not use GameWorks, and AMD is just crap at DX11 compared to nvidia.
And when I point that out, it's suddenly this "test's" problem. Oh, the irony.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Can we stop with the GameWorks BS? This "test" does not use GameWorks, and AMD is just crap at DX11 compared to nvidia.
And when I point that out, it's suddenly this "test's" problem. Oh, the irony.

In games with DX-12 support, why do AMD GCN cards have to perform well in DX-11 mode??
 

dogen1

Senior member
Oct 14, 2014
739
40
91
I was responding to very concrete things.

1. Oxide gave the source code to companies, so if there were a way for nvidia to improve DX12, they would have.
- Sure, OK, they did give out the source code, but how were the companies supposed to run that source code in a DX12 environment?
So, bottom line, nvidia only got to work on DX11 for more than a year.

2. Oxide are very capable programmers, so there is nothing wrong with their DX12 code.
- Doesn't convince me if they can't even get a shader to run fast.

This game still needs a lot of work; if afterwards it still runs poorly on nvidia and DX12, then sure, it's nvidia's fault and the product is bad.

I think it's common for game developers not to know how to optimize well for nvidia hardware. There's very little documentation on their GPUs compared to AMD's.
 

Spjut

Senior member
Apr 9, 2011
933
163
106
DX12 is windows 10 only.

However, it's offered as a free upgrade for all Win7 and 8/8.1 users for a whole year.

It just makes more sense to spend all resources on optimizing the DX12 path instead.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
I was responding to very concrete things.

1. Oxide gave the source code to companies, so if there were a way for nvidia to improve DX12, they would have.
- Sure, OK, they did give out the source code, but how were the companies supposed to run that source code in a DX12 environment?
So, bottom line, nvidia only got to work on DX11 for more than a year.

nVidia hasn't had access to the source code of the "game" for a year. They got the source code of a previous version of the engine, which ran fine on their hardware...

But Oxide hasn't explained why there wasn't a problem with Star Swarm on nVidia hardware either.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
In games with DX-12 support, why do AMD GCN cards have to perform well in DX-11 mode??

What "games with DX12 support" are you basing this on?

I would like a list of DX12 games with proper driver support out of beta please.

And your references for making this claim.
 

TheELF

Diamond Member
Dec 22, 2012
4,029
753
126
So nvidia has been boasting about DX 12.1 and whatnot, but they haven't touched DX 12 yet. OK.

It's a very different thing to have the DX12 specifications versus having a working environment where you can actually run something.
 

Riek

Senior member
Dec 16, 2008
409
15
76
Can we stop with the GameWorks BS? this "test" does not have gameworks and AMD is just crap at DX11 compared to nvidia.
And when i point out that, its suddenly this "test" problem, oh the irony.

I only saw benches of the 290X vs the 980 Ti. So how can you talk about DX11 performance if by design they are in completely different performance categories? (The 290X is 40+% slower.)
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
nVidia hasn't had access to the source code of the "game" for a year. They got the source code of a previous version of the engine, which ran fine on their hardware...

What are you talking about? What is this FUD?

But Oxide hasn't explained why there wasn't a problem with Star Swarm on nVidia hardware either.

Star Swarm = DX12 - AC
AotS = DX12 + AC

Got it? Where is the benchmark of AC in Star Swarm? Even in AnandTech's article ( www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm ), you won't find a word about async compute.
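
To spell out what that +AC / -AC difference means in code, here is a hedged sketch (my own, not Oxide's actual engine code): the same dispatches either go to a dedicated COMPUTE queue, where they may overlap rendering, or to the DIRECT queue, where they serialize with it. PSO and root-signature setup are omitted.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch of "DX12 + AC" vs "DX12 - AC". The 'useAsyncCompute' toggle,
// queue-per-call structure, and placeholder Dispatch are illustrative only.
void SubmitComputeWork(ID3D12Device* device, bool useAsyncCompute) {
    const D3D12_COMMAND_LIST_TYPE type = useAsyncCompute
        ? D3D12_COMMAND_LIST_TYPE_COMPUTE   // AotS-style: async compute on
        : D3D12_COMMAND_LIST_TYPE_DIRECT;   // Star Swarm-style: no async compute

    D3D12_COMMAND_QUEUE_DESC qDesc = {};
    qDesc.Type = type;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    ComPtr<ID3D12CommandAllocator> alloc;
    device->CreateCommandAllocator(type, IID_PPV_ARGS(&alloc));

    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, type, alloc.Get(), nullptr, IID_PPV_ARGS(&list));

    list->Dispatch(64, 1, 1);   // placeholder compute workload
    list->Close();

    ID3D12CommandList* lists[] = { list.Get() };
    queue->ExecuteCommandLists(1, lists);
}
```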

According to Silverforce11: some warriors think that DX12 means a 20% boost, so NV cards should get a massive boost and be faster than the AMD cards!!!! AMD is lower and should not be higher than NV!
Wow! Just wow!
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
So, now we are back to AS being activated on nVidia cards? :hmm:

If this Oxide employee is real, then nVidia cards don't use AS in Ashes.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Simple as what?
1) Did nvidia have access to Windows 10 or to DX12?
No! They had access to DX11, and there they killed it; Ashes runs twice as fast on nvidia as it does on AMD. And it is an AMD title, figure that.

2) Oxide couldn't even code a fast shader for nvidia hardware, talking about experienced coders here, and you want us to believe that they managed to code async compute for them?

1) So, it's gone from nVidia having worked on DX12 with Msft for the last 5 years to only having DX11 access. I guess we are now supposed to believe that nVidia was waiting for their free Win10 upgrade from Msft before they had access to DX12?

2) The shader is plenty fast enough if the hardware supports async compute (Which nVidia's drivers said theirs GPU's did.). nVidia supplied them with one that didn't. So, if nVidia doesn't support a DX12 feature devs shouldn't use it? They instead should use whatever is faster on nVidia at the detriment of the games overall performance? That would make them more competent? Is that really your contention?