Ashes of the Singularity User Benchmarks Thread


psolord

Golden Member
Sep 16, 2009
1,928
1,194
136
Well since this is the user benchmark thread, let me post my benchmarks.

Win 10, latest drivers, game version 0.49.11978.

Videos are recorded with an external recorder at 1080p/60fps, so performance is unaffected by the recording. Also, spicy wallpapers alert! :p

Ashes of the Singularity 1920x1080 High DX11+DX12, 7950 @1.1GHz, Core i7-860 @4GHz

7950 DX11 score 23fps, DX12 score 27fps

GPU usage was not at 100% in DX11, which indicates a CPU limit; otherwise the score could have been better.

========

Ashes of the Singularity 1920x1080 High DX11+DX12, GTX 970 @1.5GHz, Core i5-2500K @4.8GHz

970 DX11 score 44fps, DX12 score 40.4fps

My 2500K @4.8GHz presented no CPU limit, but the CPU usage was still crazy in DX11.



It was interesting to see the CPU usage drop by a lot in DX12, and the CPU temperature along with it.

It is also interesting that the 970 was 91% faster than the 7950 in DX11 and 49% faster in DX12.
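For reference, those percentages follow directly from the scores: 44 / 23 ≈ 1.91 (about 91% faster) in DX11 and 40.4 / 27 ≈ 1.50 (about 49% faster) in DX12.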

Unfortunately I could not test my 570 since it recently died :(

Also, the benchmark won't run on my 5850s. It loads in DX11 mode but stays on a grey screen forever, and in DX12 it just crashes.

Excuse the silly question; I know the 5850s are DX11 cards, but should they be able to run the DX12 shader path? IIRC, AMD had said that there would be some basic support for DX12 even on older cards?
 

.vodka

Golden Member
Dec 5, 2014
1,203
1,537
136
No, AMD isn't providing DX12 support for HD2000-6000 cards. GCN only.

In that regard nV did it better, giving Fermi and Kepler DX12 compatibility / a WDDM 2.0 driver, though only at FL11_0. It's not available for Fermi yet, but it should be coming.
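For anyone curious what "DX12 compatibility at FL11_0" means in practice, here is a rough C++ sketch (my own illustration, nothing from the thread) of how an application probes it: device creation succeeds on any card with a DX12 / WDDM 2.0 driver, and the follow-up query reports the highest feature level the driver exposes. A card with no DX12 driver at all, like the HD 5850, fails the very first call.

```cpp
// Sketch: probe whether the default adapter exposes DX12 at all, and at which feature level.
// Assumes the Windows 10 SDK; link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;

    // Step 1: can a DX12 device be created at the lowest level DX12 accepts (FL11_0)?
    // This fails outright on cards whose driver has no DX12 / WDDM 2.0 support.
    HRESULT hr = D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    if (FAILED(hr)) {
        std::printf("No DX12 support on this adapter (hr = 0x%08X)\n", static_cast<unsigned>(hr));
        return 1;
    }

    // Step 2: ask which of these feature levels the driver actually supports.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = 4;
    levels.pFeatureLevelsRequested = requested;
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &levels, sizeof(levels)))) {
        std::printf("Max supported feature level: 0x%X\n",
                    static_cast<unsigned>(levels.MaxSupportedFeatureLevel));
    }
    return 0;
}
```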
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
Well since this is the user benchmark thread, let me post my benchmarks. [...]


Nice thanks,

Can you run the benchmark using the same CPU for both cards ?? DX-12 only

Also, can you run the CPU benchmark (GPU unlimited) with both GPUs but using the same CPU ?? DX-12 only

thanks ;)
 

Despoiler

Golden Member
Nov 10, 2007
1,966
770
136
Well since this is the user benchmark thread, let me post my benchmarks. [...] Videos are recorded with an external recorder at 1080p/60fps, so performance is unaffected by the recording. [...]

I'm not sure how linkable you are back to your Ashes account, but videos and streaming are the only things that break the NDA. Also, a stock run on the 970 would be a lot more helpful for discussion purposes.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Because the former IS EMULATION, whilst the latter is not..

Either they are both emulating a hardware feature in software, or neither is. They are the same concept, applied in the same context, just targeting different features. This handwaving about what is a "natural" task for the CPU doesn't hold water. Yes, we realize a CPU is more advanced on a per-core basis. A CPU can do literally every task a GPU can do, in software. It's a matter of speed, latency, throughput, energy use...

You don't get to pick and choose arbitrarily which features, when implemented in software, count as emulation (which is apparently naughty and bad) versus when a software implementation is "natural" (whatever that means) and thus OK and good.

Software implementations of DX12 features are either OK or not, based on criteria which CANNOT be the vendor that is implementing them. You can say "all software implementations that are fast enough not to bottleneck the hardware are OK", and that is a very reasonable criterion. It is likely how the vendors themselves decided one way or the other (do something in drivers vs. in hardware). But you can't say "DX12 features Nvidia did in hardware can't be done in software because [insert hand-wavy argument], yet Nvidia can do asynchronous compute in software because [some other hand-wavy reason]".
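For what it's worth, the API can't settle the hardware-vs-software question either: an application can only query whether a feature is supported and at which tier, not how the vendor implements it. A minimal C++ sketch of such a query (my own illustration, not from the thread):

```cpp
// Sketch: query a few D3D12 capability tiers/flags. The runtime reports *whether*
// a feature is supported, not how the vendor implements it (hardware vs. driver).
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts)))) {
        std::printf("Resource binding tier:       %d\n", opts.ResourceBindingTier);
        std::printf("Tiled resources tier:        %d\n", opts.TiledResourcesTier);
        std::printf("Conservative raster tier:    %d\n", opts.ConservativeRasterizationTier);
        std::printf("Rasterizer-ordered views:    %s\n", opts.ROVsSupported ? "yes" : "no");
    }
    return 0;
}
```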
 

TheELF

Diamond Member
Dec 22, 2012
3,973
731
126
You don't get to pick and choose arbitrarily which features, when implemented in software, count as emulation (which is apparently naughty and bad) versus when a software implementation is "natural" (whatever that means) and thus OK and good.

(do something in drivers vs. in hardware).

So based on the benchmarks, both AMD and Nvidia need a very fast CPU (i5/i7) to get top speeds, so "hardware" my behind if you need the same amount of CPU.
If GCN got you ~60 on the i3s and the FXs, then yes, it would be vastly better and would be doing it all in hardware; since that is not the case and GCN needs vast amounts of computing power (from the CPU), it's all the same.
 

psolord

Golden Member
Sep 16, 2009
1,928
1,194
136
No, AMD isn't providing DX12 support for HD2000-6000 cards. GCN only.

In that regard nV did it better, giving Fermi and Kepler DX12 compatibility / a WDDM 2.0 driver, though only at FL11_0. It's not available for Fermi yet, but it should be coming.

Ah, I see, thanks.

I guess it's about time I move my gpus one rig down and get a new one for the primary.

Nice thanks,

Can you run the benchmark using the same CPU for both cards ?? DX-12 only

Also, can you run the CPU benchmark (GPU unlimited) with both GPUs but using the same CPU ?? DX-12 only

thanks ;)

You are welcome patrida. :)

I was not planning for any hardware rearrangement but I guess this test can be helpful so let me see what I can do.

Do you want videos or just results?

I'm not sure how linkable you are back to your Ashes account, but videos and streaming are the only things that break the NDA. Also, a stock run on the 970 would be a lot more helpful for discussion purposes.

Hmmm there are lots of vids on YT and I thought it was ok.

I will take them down if anyone gets annoyed.

As for the OC, it's nothing really. The card already boosts to 1.4GHz by itself.

I can give you some stock results if you tell me what interests you.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
You are welcome patrida. :)

I was not planning for any hardware rearrangement but I guess this test can be helpful so let me see what I can do.

Do you want videos or just results?

Patrida, just results will be fine, just take a few pics.

thanks ;)
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
It was from this post
https://forum.beyond3d.com/posts/1870705/

"Hyper-Q enables multiple CPU threads or processes to launch work on a single GPU simultaneously, thereby dramatically increasing GPU utilization and slashing CPU idle times. This feature increases the total number of "connections" between the host and GPU by allowing 32 simultaneous, hardware-managed connections, compared to the single connection available with GPUs without Hyper-Q (e.g. Fermi GPUs)."

Link PDF : http://docs.nvidia.com/cuda/samples/6_Advanced/simpleHyperQ/doc/HyperQ.pdf

But sebbbi answered:
https://forum.beyond3d.com/posts/1870717/

Hyper-Q is for running multiple compute dispatches simultaneously, not for running compute + graphics.

So Hyper-Q != ACEs.
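To make the contrast concrete, here is a minimal C++/D3D12 sketch (my own illustration, not from the thread) of what "async compute" means on the API side: the application creates a compute-only queue next to the graphics queue and submits to both, and it is then up to the GPU's scheduling hardware (the ACEs on GCN) whether the two streams of work actually overlap. Hyper-Q, by contrast, is about multiple CUDA compute streams feeding one GPU.

```cpp
// Sketch: create a graphics (DIRECT) queue and a separate COMPUTE queue.
// Work submitted to the two queues *may* run concurrently if the GPU can schedule it.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue))))
        return false;

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    if (FAILED(device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue))))
        return false;

    // Command lists recorded with type DIRECT go to graphicsQueue, type COMPUTE to
    // computeQueue; cross-queue dependencies are expressed with ID3D12Fence.
    return true;
}
```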
 
Feb 19, 2009
10,457
10
76
Hyper-Q dates back to Fermi for Teslas, parallel compute execution over a single engine. And no, it does not support parallel graphics + compute in the same pipeline. That much is certain now.

Also this is interesting, devs have to be careful of async compute usage as to not gimp NV GPUs:

http://wccftech.com/aquanox-dev-async-compute-implementation-limit-nvidia-users/

But it's expected, UE4 doesn't even support async compute for the PC, only for Xbone. Wonder why??
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Hyper-Q dates back to Fermi for Teslas, parallel compute execution over a single engine. And no, it does not support parallel graphics + compute in the same pipeline. That much is certain now.

Also this is interesting, devs have to be careful of async compute usage as to not gimp NV GPUs:

http://wccftech.com/aquanox-dev-async-compute-implementation-limit-nvidia-users/

But it's expected, UE4 doesn't even support async compute for the PC, only for Xbone. Wonder why??

Because until nVidia supports it, it doesn't matter. You must be new around here. ;)
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Also this is interesting, devs have to be careful of async compute usage as to not gimp NV GPUs:

http://wccftech.com/aquanox-dev-async-compute-implementation-limit-nvidia-users/

Now this is an interesting choice of words. You say "devs," implying that there are multiple developers involved. But when you click on the link, you see only a single developer, and a very old one at that.

I remember Aquanox from LONG AGO, and the same developer made a benchmark as well, if I recall, way back in the day. So now they're dusting themselves off and removing the cobwebs, and already they're making silly comments... :|

But it's expected, UE4 doesn't even support async compute for the PC, only for Xbone. Wonder why??

Remember Fable Legends, the game from which you yourself posted multiple links concerning asynchronous compute shaders and their performance enhancements for DX12?

Well it runs on the Unreal Engine 4.. :sneaky:
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
No [gówno] Sherlock. Let's compare a PS4 with 7870-level graphics to a rig with an Asus Ares 2 - two cards that are each twice as powerful. And for the sake of argument run the games with ubersampling... to compare the CPUs inside those rigs. :thumbsdown:

Wait a sec, YOU posted that graph not me, and now you're mad that your own supposed evidence is being used against you? :D

The entire point of that graph was to test the CPU, so of course they're going to use a powerful GPU....duh! :rolleyes:

You just tried to make a point, but then refuted it yourself. OK.

I don't even... o_O

Do NOT pretend that a PC with 2x OCed HD 7970s paired with a Phenom II X6, a freakin' dinosaur of a CPU, is GPU bound at 1080p...

I don't even know what you're talking about. Maybe it's the language barrier, but none of what you are posting makes any sense. I never said anything about the Phenom II X6 being GPU bound at 1080p. The 3970X is 90% faster than the Phenom, so obviously that particular test is CPU bottlenecked.

And... I feel like you didn't finish your last point. If the Phenom X6 drops to 50 in the same scene where the PS4 keeps 60, how is multiplayer going to change that, if we know the PC system has practically infinite graphics processing power compared to the PS4?

I can tell you've never played BF4. BF4 multiplayer is largely CPU bound, like many online games, because the CPU has to keep track of a lot of different things at once on a 64-player server... even things you can't see on the screen.

It is quite simple: the PS4 has a better API that can fully utilize its resources, even better than Mantle.

Yes, the PS4 has access to a lower-level API than Mantle, so devs can theoretically use more of the hardware than on a similarly spec'd PC. But my point was that the low-level access isn't allowing the PS4 to play BF4 at the level of quality seen even on low-end PCs with a GTX 750 Ti.

Compare the above with the PS4 and the Xbox One's performance in the campaign.
 
Feb 19, 2009
10,457
10
76
Remember Fable Legends, the game from which you yourself posted multiple links concerning asynchronous compute shaders and their performance enhancements for DX12?

Well it runs on the Unreal Engine 4.. :sneaky:

Customized version of UE4. :sneaky:

Lionhead Studios are the ones who ported Async Compute into UE4 for every other dev to use, but it's only enabled for Xbone. :sneaky:
 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Customized version of UE4. :sneaky:

Lionhead Studios are the ones who ported Async Compute into UE4 for every other dev to use, but it's only enabled for Xbone. :sneaky:

So all of this time when you were posting that video repeatedly to bolster your argument about asynchronous compute, the game wasn't even using it for the PC version? :awe:

Very sneaky there man :sneaky:

On a side note, Joel Hruska over at Extremetech has written a nice summary about the asynchronous compute fiasco.

Read it here.

No new information, but it condenses all the information that is known in one article. I guess we'll just have to wait to see what NVidia does and whether they can fully enable concurrent asynchronous compute..

If they do, I'm not expecting large performance gains due to the difference in architectures between Maxwell 2 and GCN.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Hyper-Q dates back to Fermi for Teslas, parallel compute execution over a single engine. And no, it does not support parallel graphics + compute in the same pipeline. That much is certain now.

Also this is interesting, devs have to be careful of async compute usage as to not gimp NV GPUs:

http://wccftech.com/aquanox-dev-async-compute-implementation-limit-nvidia-users/

But it's expected, UE4 doesn't even support async compute for the PC, only for Xbone. Wonder why??

wccftech seems to lean towards Nvidia. I think they are just playing to the fanboys though. Next month it's AMD.

PC-exclusive games are likely to vary in how much they use the feature. Console games are very likely to use it going forward. What I find wrong is that he says it would harm Nvidia. They were doing those compute tasks already and simply set them to run on the ACEs. For Nvidia, none of that is going on; everything runs as it would in DX11. So it's not something that would harm Nvidia. The drop in performance Nvidia sees in DX12 is for a different reason and should be looked into. It suggests their architecture is far too DX11-specialized.

If it's simple enough to set compute shader tasks to run asynchronously, they should. I wonder how hard it was for the Ashes devs to run their tasks like that.
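For a sense of what "running compute tasks asynchronously" involves on the API side, a hedged C++/D3D12 sketch (my own, not code from Ashes or the thread): the already-recorded compute work is submitted on a separate compute queue, and an ID3D12Fence expresses the dependency back to the graphics queue.

```cpp
// Sketch: submit an already-recorded compute command list on the compute queue and make the
// graphics queue wait for it on the GPU timeline via a fence (a cross-queue dependency).
#include <d3d12.h>

void SubmitAsyncCompute(ID3D12CommandQueue*  computeQueue,   // D3D12_COMMAND_LIST_TYPE_COMPUTE
                        ID3D12CommandQueue*  graphicsQueue,  // D3D12_COMMAND_LIST_TYPE_DIRECT
                        ID3D12CommandList*   computeCmdList, // recorded and Close()d elsewhere
                        ID3D12Fence*         fence,          // created with device->CreateFence()
                        UINT64&              fenceValue)
{
    // Kick the compute work off on its own queue.
    ID3D12CommandList* lists[] = { computeCmdList };
    computeQueue->ExecuteCommandLists(1, lists);

    // Signal the fence when the compute queue reaches this point...
    ++fenceValue;
    computeQueue->Signal(fence, fenceValue);

    // ...and make the graphics queue wait for that signal (GPU-side, the CPU is not blocked)
    // before it runs anything submitted afterwards that consumes the compute results.
    graphicsQueue->Wait(fence, fenceValue);
}
```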
 
Feb 19, 2009
10,457
10
76
So all of this time when you were posting that video repeatedly to bolster your argument about asynchronous compute, the game wasn't even using it for the PC version? :awe:

Rather than make stupid assumptions, you should go to twitter and ask Lionhead Studios yourself.

All we know is they ported their customized async shaders to UE4 for other devs to use, but Epic only enables the Xbone version, so far.
 

dogen1

Senior member
Oct 14, 2014
739
40
91

Irenicus

Member
Jul 10, 2008
94
0
0
If they don't enable it for PC, it will quickly get hacked to work ;)

I imagine there would be an absolute fury if they chose to enable it ONLY for consoles and leave it turned off for the PC. It would be a move so explicitly designed to gimp AMD GPUs for no reason.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I imagine there would be an absolute fury if they chose to enable it ONLY for consoles and leave it turned off for the PC. It would be a move so explicitly designed to gimp AMD GPUs for no reason.

...and then they add the path that allows them to access it through drivers, but still don't activate the DX12 path. The defenders would say, "Why should they allow AMD to use it? They didn't pay for it."

Don't think it will happen?
 

psolord

Golden Member
Sep 16, 2009
1,928
1,194
136

Nice thanks,

Can you run the benchmark using the same CPU for both cards ?? DX-12 only

Also, can you run the CPU benchmark (GPU unlimited) with both GPUs but using the same CPU ?? DX-12 only

thanks

Ok, here are the rest of the tests.

970 + 2500K CPU test at the same frequencies as above.



And here are the 7950+2500k tests and graph





The 7950's score with the 2500K went up by 1.5fps.
 

TheELF

Diamond Member
Dec 22, 2012
3,973
731
126
Ok, here are the rest of the tests.

Could you by any chance be nice enough to run the GPU bench in a window and show us the threads in the background with Process Hacker or Process Explorer?
I am very curious to see how much of this "DX12 is soooo multithreaded" talk is actually true.
The CPU bench will obviously fill the CPU up to 100%, because it can always add more units the more cores you have, but the GPU bench will be interesting.
Afterburner is nice and all, but it only shows you core utilization as an average over time; it does not show you what actually goes on with the threads.

Something like this: main window sorted by CPU use, so that we can see how much of the CPU it uses, and the threads also sorted by CPU.
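For reference, the numbers Process Explorer's thread pane shows can also be pulled programmatically; here is a rough C++/Win32 sketch using the Toolhelp API (my own illustration, not from the thread):

```cpp
// Sketch: enumerate the threads of a process by PID and print per-thread CPU time,
// roughly the figures Process Explorer / Process Hacker show in their thread pane.
#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>
#include <cstdlib>

static double FileTimeToMs(const FILETIME& ft)
{
    ULARGE_INTEGER v;
    v.LowPart  = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return v.QuadPart / 10000.0;   // FILETIME counts 100 ns units
}

int main(int argc, char** argv)
{
    DWORD pid = (argc > 1) ? static_cast<DWORD>(atoi(argv[1])) : GetCurrentProcessId();

    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    THREADENTRY32 te = {};
    te.dwSize = sizeof(te);
    for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te)) {
        if (te.th32OwnerProcessID != pid) continue;   // keep only the target process's threads

        HANDLE th = OpenThread(THREAD_QUERY_INFORMATION, FALSE, te.th32ThreadID);
        if (!th) continue;

        FILETIME createT, exitT, kernelT, userT;
        if (GetThreadTimes(th, &createT, &exitT, &kernelT, &userT)) {
            std::printf("TID %6lu  kernel %10.1f ms  user %10.1f ms\n",
                        te.th32ThreadID, FileTimeToMs(kernelT), FileTimeToMs(userT));
        }
        CloseHandle(th);
    }
    CloseHandle(snap);
    return 0;
}
```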
 

psolord

Golden Member
Sep 16, 2009
1,928
1,194
136
Sure thing.

See if I did it correctly.

This is DX12.



System at its normal everyday clocks, 2500K @4.3GHz, G1 GTX 970 stock.

And this is DX11.

 

Carfax83

Diamond Member
Nov 1, 2010
6,841
1,536
136
Rather than make stupid assumptions, you should go to twitter and ask Lionhead Studios yourself.

All we know is they ported their customized async shaders to UE4 for other devs to use, but Epic only enables the Xbone version, so far.

Gears of War: Ultimate Edition on PC supports asynchronous compute.

The engine is based on Unreal Engine 3.5.

Cam McRae: We are still hard at work optimising the game. DirectX 12 allows us much better control over the CPU load with heavily reduced driver overhead. Some of the overhead has been moved to the game where we can have control over it. Our main effort is in parallelising the rendering system to take advantage of multiple CPU cores. Command list creation and D3D resource creation are the big focus here. We're also pulling in optimisations from UE4 where possible, such as pipeline state object caching. On the GPU side, we've converted SSAO to make use of async compute and are exploring the same for other features, like MSAA.
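As a rough illustration of the "parallelising the rendering system" part (my own C++/D3D12 sketch, not code from the Gears port): each worker thread records into its own command allocator and command list, and everything is handed to the queue in a single ExecuteCommandLists call.

```cpp
// Sketch: record D3D12 command lists on several threads, then submit them together.
// Each thread needs its own command allocator and command list; only the final
// ExecuteCommandLists call touches the queue.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> cmdLists(workerCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&cmdLists[i]));

        workers.emplace_back([i, &cmdLists] {
            // Each worker records its slice of the frame's draw calls here...
            cmdLists[i]->Close();   // ...and closes its list when done.
        });
    }
    for (auto& w : workers) w.join();

    // One submission for everything the workers recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& cl : cmdLists) raw.push_back(cl.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```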