Ashes of the Singularity User Benchmarks Thread


Azix

Golden Member
Apr 18, 2014
There will never be game-specific drivers and optimizations in DX12. The game talks directly to the GPU, without driver scheduling.

I meant their DX11 performance might change with a future driver update.
 

casiofx

Senior member
Mar 24, 2015
Bought my R9 290 19 months ago. Damn, for an old GPU, AMD still managed to boost its performance via DX12. Best GPU purchase decision ever.

As for the gimped DX11 performance: not a care given, since 19 months ago the R9 290/290X were meant to battle the 780/780 Ti. Get it? Even with gimped DX11, the R9 290 still holds its own against the GTX 780. I never really had driver problems either. :)
 

boozzer

Golden Member
Jan 12, 2012
If other DX12 games perform the same way, this will be as big as it gets. I'm at a loss for words of praise if this really does pan out. It's just huge!
 

tential

Diamond Member
May 13, 2008
I'm with you there. Wait a few years, let all of the bugs get patched out and scoop up a GOTY edition with all of the DLC on the cheap. By that time I can play the game without compromising on the graphics.

Yup. This is how I like to play. The "hype" and all of that stuff surrounding games doesn't appeal to me. I just like to play games when I want to.
4K is killing me. It's making me wait to play games now. I need to know how much the 65-inch UHD Wasabi Mango FreeSync monitor is! I want to play in 4K with FreeSync now :(
 
Feb 19, 2009
There's no way GCN was ever developed with anything but DX11 in mind; they must have started working on it 6+ years ago. At least not GCN 1.0.

Guess how long ago when the console APU were in the planning stages.

Guess how long they've been working on Mantle.

It's pretty clear AMD designed GCN for DX11, but it isn't fully exploited unless it's running a future API such as Mantle. For one, the eight extra ACEs are not utilized due to DX11's serial nature.

Some history lesson:
http://www.pcworld.com/article/2109...-pc-gamings-software-supercharged-future.html
http://www.eteknix.com/interview-amds-richard-huddy-responds-criticisms-mantle/

Also, when AMD was asked why they invested in making Mantle at all, instead of just improving their DX11 drivers the way NV did with multi-threading, they said they wanted to invest in an entirely new, forward-looking API with more features rather than patch up DX11. They want to move the industry forward while maximizing the benefits of their uarch.
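
To make the ACE point concrete, here's a minimal hypothetical D3D12 sketch (illustrative only, not AMD's or Oxide's code): DX12 lets an application create a dedicated compute queue next to the graphics queue, which is the mechanism that can feed GCN's ACEs. DX11 exposes nothing like it. Assumes Windows 10 with d3d12.lib linked; error handling omitted.

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter (hypothetical sketch; no error checks).
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics ("direct") queue: accepts draw, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue: work submitted here can be scheduled
    // independently of graphics work; on GCN this is what can feed the ACEs.
    // DX11 has no equivalent concept, so those units sit idle under it.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
    return 0;
}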
 

Red Hawk

Diamond Member
Jan 1, 2011
Yeah GCN was pretty clearly made with an eye toward Mantle and future low-level APIs. Back in 2011, before the first GCN cards were even released and much before Mantle was announced, Richard Huddy was already talking about how one of the most common requests AMD gets from developers is to "get rid of the API".

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

'It's funny,' says AMD's worldwide developer relations manager of its GPU division, Richard Huddy. 'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good, DirectX is getting in the way.' Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.'

'I certainly hear this in my conversations with games developers,' he says, 'and I guess it was actually the primary appeal of Larrabee to developers – not the hardware, which was hot and slow and unimpressive, but the software – being able to have total control over the machine, which is what the very best games developers want. By giving you access to the hardware at the very low level, you give games developers a chance to innovate, and that's going to put pressure on Microsoft – no doubt at all.'


Add to that the fact that GCN has hardware features that aren't even exposed by DirectX 11, like the asynchronous compute engines, and it's clear that AMD had low-level APIs beyond DX11 in mind when making GCN.
 

Red Hawk

Diamond Member
Jan 1, 2011
With the direction DirectX 12 is going in, it's entirely possible that DirectX 12 Gameworks games will perform even worse on AMD cards than DirectX 11 Gameworks games do now, yes. DirectX 12 doesn't completely eliminate drivers, but it does shift more responsibility for optimizing the code away from the graphics driver over to the game developer.
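
To give a concrete example of the kind of work that moves from the driver to the game: under DX11 the driver tracked resource hazards for you, while under DX12 the application has to issue state transitions itself. Below is a hypothetical helper (an illustrative sketch, not from any shipping engine).

Code:
#include <d3d12.h>

// Hypothetical helper: transition a swap-chain back buffer from "present"
// to "render target" before drawing to it. DX11 drivers inferred this
// hazard automatically; in DX12 the game (or middleware) must state it,
// every frame, or rendering breaks.
void TransitionToRenderTarget(ID3D12GraphicsCommandList* cmdList,
                              ID3D12Resource* backBuffer)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = backBuffer;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
    cmdList->ResourceBarrier(1, &barrier);
}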
 

3DVagabond

Lifer
Aug 10, 2009
With the direction DirectX 12 is going in, it's entirely possible that DirectX 12 Gameworks games will perform even worse on AMD cards than DirectX 11 Gameworks games do now, yes. DirectX 12 doesn't completely eliminate drivers, but it does shift more responsibility for optimizing the code away from the graphics driver over to the game developer.

Unless they block AMD's access to the game code though, AMD can see what's wrong and recommend fixes for the dev.
 
Feb 19, 2009
Unless they block AMD's access to the game code though, AMD can see what's wrong and recommend fixes for the dev.

Warner Bros refused AMD-optimized code.

Ubisoft REMOVED the DX10.1 implementation in AC.

The Project Cars devs think that sharing 20 Steam codes with AMD is doing their part.

GameWorks devs are fully bought out, a very unethical bunch, unlike Oxide.
 

Qwertilot

Golden Member
Nov 28, 2013
I have been rather wondering at some people's seemingly inconsistent beliefs about this :)
 

sontin

Diamond Member
Sep 12, 2011
Yeah GCN was pretty clearly made with an eye toward Mantle and future low-level APIs. Back in 2011, before the first GCN cards were even released and much before Mantle was announced, Richard Huddy was already talking about how one of the most common requests AMD gets from developers is to "get rid of the API".

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/1

'It's funny,' says AMD's worldwide developer relations manager of its GPU division, Richard Huddy. 'We often have at least ten times as much horsepower as an Xbox 360 or a PS3 in a high-end graphics card, yet it's very clear that the games don't look ten times as good. To a significant extent, that's because, one way or another, for good reasons and bad - mostly good, DirectX is getting in the way.' Huddy says that one of the most common requests he gets from game developers is: 'Make the API go away.'

'I certainly hear this in my conversations with games developers,' he says, 'and I guess it was actually the primary appeal of Larrabee to developers – not the hardware, which was hot and slow and unimpressive, but the software – being able to have total control over the machine, which is what the very best games developers want. By giving you access to the hardware at the very low level, you give games developers a chance to innovate, and that's going to put pressure on Microsoft – no doubt at all.'


Add to that the fact that GCN has hardware features that aren't even exposed by DirectX 11, like the asynchronous compute engines, and it's clear that AMD had low-level APIs beyond DX11 in mind when making GCN.

DX11 was released at the end of 2009. nVidia and AMD have both released hardware that goes beyond it. Even Fermi is advanced enough that it's compatible with DX12... :\
 

Carfax83

Diamond Member
Nov 1, 2010
Indeed, it's still in an alpha state.

Did you know Ashes runs with v-sync forced on in DX12 mode for AMD GPUs?

http://www.eurogamer.net/articles/digitalfoundry-2015-ashes-of-the-singularity-dx12-benchmark-tested

Well, that's strange. It didn't look like other reviewers had any problems with that; perhaps they were using monitors with higher refresh rates...

Funny to see the R290X matching the 980Ti. The normal performance gap is what, 40-50%?

Indeed. The Ars Technica review was an outlier in that regard though. That's why alpha software performance should always be taken with a healthy dose of salt.

As for NV's PR statements: they released an optimized driver for this game while it's still in alpha, so it definitely shows that Oxide has been collaborating with them. Why did they feel the need to spite Oxide for making a DX12 game that's "not representative of DX12 games"? Really, Oxide has been one of the foundational groups pushing DX12, featured at GDC and even SIGGRAPH events about these new APIs: http://nextgenapis.realtimerendering.com/

Does it really surprise you that NVidia did that? Right now a few people are wondering whether NVidia's DX12 driver and/or architecture are crippled, so assigning blame to the other party for the performance is a standard marketing response.
 

Carfax83

Diamond Member
Nov 1, 2010
http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/500#post_24325746
In the same thread, on another page, you have the exact explanation of why Nvidia GPUs are getting lackluster performance in DirectX 12.

It is not due to the developer, but due to Nvidia hardware that is incapable of running in parallel.

And that explanation doesn't reflect reality. All graphics architectures are heavily parallel in their function by necessity, as graphics workloads are by nature embarrassingly parallel.

The NVidia GigaThread engine, which is the scheduler, manages thousands of threads in parallel according to NVidia, and also enables extremely fast context switching.

To think that NVidia would have overlooked such a fundamental principle is difficult to swallow, considering how much effort was put into Maxwell to make it compatible with DX12, and attain a higher feature level than the competition.
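
For reference, "running in parallel" at the API level looks roughly like the hypothetical sketch below (illustrative only, not Oxide's code): DX12 lets an engine submit graphics and compute streams on separate queues and synchronize only where a real dependency exists. Whether the GPU actually overlaps the two streams is an architecture and driver question, which is exactly what's being argued about here.

Code:
#include <d3d12.h>

// Hypothetical async-compute frame submission. All objects are assumed to
// have been created during initialization.
void SubmitFrame(ID3D12CommandQueue* gfxQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* const* gfxLists,      // graphics passes
                 ID3D12CommandList* const* computeLists,  // compute passes
                 ID3D12Fence* fence,
                 UINT64& fenceValue)
{
    // Kick off compute work on its own queue (on GCN this can feed the ACEs).
    computeQueue->ExecuteCommandLists(1, computeLists);
    computeQueue->Signal(fence, ++fenceValue);

    // Graphics work with no dependency on the compute results may overlap.
    gfxQueue->ExecuteCommandLists(1, gfxLists);

    // Anything submitted to the graphics queue after this Wait is held back
    // until the compute queue passes the fence; how much actually overlapped
    // on the GPU is a hardware/driver question, not an API one.
    gfxQueue->Wait(fence, fenceValue);
}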
 

Carfax83

Diamond Member
Nov 1, 2010
BTW, WCCFtech got an extremely large increase of 180% with no less than a GTX 770.

DX11 scores: [screenshot]

DX12 scores: [screenshot]


Source

Like I said, the benchmark is very fishy and should be taken with a grain of salt because it doesn't look like it's working properly.
 

positivedoppler

Golden Member
Apr 30, 2012
http://www.overclock.net/t/1569897/...singularity-dx12-benchmarks/500#post_24325746
In the same thread, on another page, you have the exact explanation of why Nvidia GPUs are getting lackluster performance in DirectX 12.

It is not due to the developer, but due to Nvidia hardware that is incapable of running in parallel.

There will never be game-specific drivers and optimizations in DX12. The game talks directly to the GPU, without driver scheduling.

That would explain why Nvidia rushed the 970 and 980 to market so quickly and then released the 980 Ti so soon after the Titan: sell as much as possible before DirectX 12 games come out. Now, with DirectX 12 games just starting to be released, we're only a year away from Pascal. I'd say Nvidia timed everything perfectly. If a couple more games confirm the AotS trend, GeForce sales will tank pretty badly; after all, who wants to buy hardware tuned for yesterday's API?
 
Feb 19, 2009
BTW, WCCFtech got an extremely large increase of 180% with no less than a GTX 770.

If you read the info on the test and the official blog, it has lots of different test scenarios. The pics on WCCFtech (I can't access the large pic atm) seem to show a CPU test for the 770, which focuses on draw calls only. It's not the same test being run on the other GPUs (the Full System Test).

Currently it's an alpha, so it's interesting but nothing definitive. We can speculate about why NV performs slower in DX12, potentially due to its in-order uarch, but whether that's true or not would take a game engine expert to chime in on. The proof will come when DX12 games are released and Maxwell 2 is gimped on performance. Would people care, or would they just upgrade to Pascal?
 

Carfax83

Diamond Member
Nov 1, 2010
Would people care or they just upgrade to Pascal?

I would certainly care. That would be a massive scandal, similar to the GTX 970 3.5GB VRAM issue.

Whether that would stop me from getting Pascal is another thing, though. Come next year, I want to abandon SLI, go single GPU, and get a 4K monitor. So I want the fastest single GPU money can buy, i.e. a Titan XI or whatever.

Likely NVidia will come out with Pascal before AMD comes out with Arctic Islands, so a lot of impatient gamers will jump straight to Pascal right off the bat.
 

monstercameron

Diamond Member
Feb 12, 2013
Hopefully this bench is just an outlier; we need more data and expert analysis... if only TechReport would dig into it like they did for pump whine, or PCPer for the CrossFire issues.
 
Feb 19, 2009
I would certainly care. That would be a massive scandal, similar to the GTX 970 3.5GB VRAM issue..

I recall a lot of people describing the 3.5GB gimpage as a "storm in a teacup", "nothing to worry about", or even "sure, NV will have to keep optimizing drivers for the 3.5 + 0.5 partition, but it's okay, cos Nvidia!!"... including a few NA review sites.

NV has only claimed that Maxwell 2 supports DX12 and its feature levels; they didn't claim it would be any good at it.

I doubt many would care IF Maxwell 2 tanks in DX12 and Pascal stomps on it... it's natural, next-gen beats previous gen... upgrade quick!

It'll probably be a huge win for NV sales/$$ as people move to upgrade to Pascal en masse for DX12 games.
 

ShintaiDK

Lifer
Apr 22, 2012
I recall a lot of people describing the 3.5GB gimpage as a "storm in a teacup", "nothing to worry about", or even "sure, NV will have to keep optimizing drivers for the 3.5 + 0.5 partition, but it's okay, cos Nvidia!!"... including a few NA review sites.

NV has only claimed that Maxwell 2 supports DX12 and its feature levels; they didn't claim it would be any good at it.

I doubt many would care IF Maxwell 2 tanks in DX12 and Pascal stomps on it... it's natural, next-gen beats previous gen... upgrade quick!

It'll probably be a huge win for NV sales/$$ as people move to upgrade to Pascal en masse for DX12 games.

Currently you're putting all your eggs into a single pre-alpha game benchmark. Remember Star Swarm? Some people claimed nVidia was doomed then too; look what happened. And yet again you pull out the GTX 970 card as some kind of statement to back up your extremely limited amount of data on the subject. I can understand the need for a scapegoat after the newly published graphics market share numbers, but this isn't it.

And AOTS is an AMD-sponsored game. How do you feel about nVidia-sponsored games again?
 
Feb 19, 2009
Currently you're putting all your eggs into a single pre-alpha game benchmark. Remember Star Swarm? Some people claimed nVidia was doomed then too; look what happened.

And AOTS is an AMD-sponsored game. How do you feel about nVidia-sponsored games again?

Star Swarm is a synthetic draw-call bottleneck. That's also why, in those same threads, I posted not to draw too many conclusions from it, because it isn't the entire game. It's the same thing you and I posted about the 3DMark DX12 API test on the same issue.

Notice the CPU test in Ashes: no lights, no dynamic light sources, just lots of smoke/trails to generate draw calls. Keep the GPU load minimal, maximize the CPU load.

Nobody is putting eggs anywhere. I FULLY expect GCN to shine in DX12 given the similarities to Mantle. Do you still deny that?

As for it being an AMD-sponsored game, it showcases HOW GOOD AMD and the devs who work with them ARE ethically, compared to GimpWorks. They willingly offer SOURCE code (not black-box obfuscation like GimpWorks) to ALL IHVs, for over a year, and they aren't even at release. They willingly accepted optimized shaders from NV, the non-sponsor and AMD's competitor, so that performance improves for NV. They even offered to help NV fix their MSAA DX12 driver bug. Now compare that to the likes of the Project Cars dev, who thinks their responsibility to ensure their game is optimized extends as far as sending AMD TWENTY (20!!) Steam keys post-release. Worlds apart. So how do I feel about AMD-sponsored games? I feel vindicated that AMD is an ethical company and that they support ethical developers to move the gaming industry forward as a whole. I'll support that any day over GimpWorks.
 
Feb 19, 2009
Looks more like software rendering. It could use 256 cores, for that matter.

Which is excellent, and it showcases DX12's potential for draw call throughput and parallelism. I was very impressed when Frostbite used 6 threads; this is on another level, and everyone involved with DX12 should be commended (and certainly not belittled, as some members here who are unhappy with the results are doing!).
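
As a rough illustration of why DX12 scales across CPU cores the way DX11 never could, here is a hypothetical sketch (not Frostbite's or Nitrous's actual code): each worker thread records its own command list with its own allocator, and one ExecuteCommandLists call submits the whole batch, with no global driver lock serializing the draw calls. COM Release/ComPtr management is omitted for brevity.

Code:
#include <d3d12.h>
#include <thread>
#include <vector>

// Placeholder for whatever per-thread scene recording an engine would do.
void RecordDrawCalls(ID3D12GraphicsCommandList* /*cmdList*/) { /* draws */ }

// Hypothetical parallel command recording: one allocator + command list per
// worker thread, recorded concurrently, then submitted in one batch.
void BuildFrameInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                          unsigned workerCount)
{
    std::vector<ID3D12CommandAllocator*> allocators(workerCount);
    std::vector<ID3D12GraphicsCommandList*> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i], nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each thread records independently -- no global driver lock as in DX11.
        workers.emplace_back([&lists, i] {
            RecordDrawCalls(lists[i]);
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // One submission for all the work the threads produced.
    std::vector<ID3D12CommandList*> submit(lists.begin(), lists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
}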