ComputerBase Ashes of the Singularity Beta 1 DirectX 12 Benchmarks


Pottuvoi

Senior member
Apr 16, 2012
416
2
81
I wonder how much performance this new rendering model, normally used in the movie industry (object space rendering), sacrifices compared to a basic rendering model? Of course we can't make a direct comparison, as there is no version out with the basic model.

But the game appears to be heavily compute based, as there is a very low performance penalty when upping the resolution.

I get:
55 fps on Ultra + 8x MSAA at 1080p
37 fps on Ultra + 8x MSAA at 2160p

So roughly a 33% performance penalty for rendering 4x the pixels.
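
Just as a back-of-envelope sanity check of my own (nothing from Oxide, and splitting the frame into a fixed cost plus a per-pixel cost is only a crude model): those two numbers suggest roughly 84% of the 1080p frame time doesn't scale with resolution at all.

```cpp
// Two-point fit: frame time = a (resolution-independent) + b * pixels,
// using 55 fps at 1080p and 37 fps at 2160p (4x the pixels).
// Purely illustrative; "a" and "b" are my own model parameters.
#include <cstdio>

int main()
{
    const double t1080 = 1000.0 / 55.0;  // ms per frame at 1080p
    const double t2160 = 1000.0 / 37.0;  // ms per frame at 2160p

    // Solve a + 1*b = t1080 and a + 4*b = t2160.
    const double b = (t2160 - t1080) / 3.0;  // cost per "one 1080p worth" of pixels
    const double a = t1080 - b;              // fixed, resolution-independent cost

    std::printf("fixed cost: %.1f ms (%.0f%% of the 1080p frame)\n", a, 100.0 * a / t1080);
    std::printf("per-1080p-of-pixels cost: %.1f ms\n", b);
    return 0;
}
```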

So I would call this game performing very well; dual cards are getting over 60 fps at 4K with everything maxed.

How is this any different from any other new game running maxed at 4K?

Shaders during the rasterization pass should be very simple, so the small performance drop during rasterization is not a surprise. (All lighting is done before rasterization.)

It could also be interesting to reduce the shading resolution while still rendering at 4K.
The tech could work very well with virtual texturing, although it seems they use an object-based approach. (Not sure how much they reuse shading between frames.)
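
To make the idea concrete, here is a toy sketch of how I picture object space shading (entirely my own illustration; all the names are made up and Oxide's actual pipeline will differ): the expensive lighting runs once per texel of an object's shading texture, and the rasterization pass only looks up those pre-shaded texels, so only the cheap pass grows when the output resolution goes up.

```cpp
// Toy model of decoupled (object space) shading. Hypothetical code, not Oxide's.
#include <cstdio>
#include <vector>

struct Texel { float r, g, b; };

// "Expensive" lighting, evaluated in object space at a fixed shading resolution.
std::vector<Texel> ShadeObjectSpace(int shadingRes)
{
    std::vector<Texel> shaded(static_cast<size_t>(shadingRes) * shadingRes);
    for (Texel& t : shaded)
        t = {1.0f, 0.5f, 0.25f};  // stand-in for the real lighting math
    return shaded;
}

// "Cheap" rasterization pass: each screen pixel just samples a pre-shaded texel.
void Rasterize(const std::vector<Texel>& shaded, int shadingRes, int w, int h)
{
    double sum = 0.0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            const int u = x * shadingRes / w;
            const int v = y * shadingRes / h;
            sum += shaded[static_cast<size_t>(v) * shadingRes + u].r;  // stand-in for writing the pixel
        }
    std::printf("%dx%d: %d cheap lookups, shading cost unchanged (checksum %.0f)\n",
                w, h, w * h, sum);
}

int main()
{
    const int shadingRes = 1024;                 // fixed, independent of output resolution
    auto shaded = ShadeObjectSpace(shadingRes);  // the heavy work happens once
    Rasterize(shaded, shadingRes, 1920, 1080);
    Rasterize(shaded, shadingRes, 3840, 2160);   // 4x the pixels, only the cheap pass grows
}
```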

Cannot wait to see their GDC presentation, should be fun.
 

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Rather than debating why NV under-performs in a DX12 game using async compute (because their hardware is incapable, /end), you should ask why NV is under-performing in games they actually sponsor with GameWorks, like The Division, where the gap is even bigger than in Ashes.

Or games from their long-time partner like Far Cry Primal. These are all DX11 too, where NV supposedly has better drivers. -_-

I mean, Hitman I can understand; it's AMD-sponsored, so it runs better... but the others?

Even in Rise of the Tomb Raider, which NV sponsors, why is the 390 so much faster than the 970 in the release build?!

http://www.pcper.com/reviews/Graphi...Performance-Results/Adding-GTX-970-and-R9-390

Maybe because they are not using DX11 command lists (deferred contexts), since AMD and Intel drivers don't implement them, and as a result Nvidia is in the minority of the market that supports that feature?
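
For reference, DX11 command lists here means deferred contexts: worker threads record draw calls into command lists, and the render thread replays them on the immediate context. The runtime also has a query that tells you whether the driver actually implements command lists or just emulates them, which is the AMD/Intel situation I mean. A rough sketch (simplified, no error handling; the helper names are my own, the D3D11 calls are the real API):

```cpp
// Sketch of DX11 deferred contexts / command lists. DriverSupportsCommandLists
// and RecordAndSubmit are made-up helper names; device/immediateCtx are assumed
// to have been created elsewhere.
#include <d3d11.h>

bool DriverSupportsCommandLists(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING threading = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                           &threading, sizeof(threading))))
        return false;
    // FALSE means the runtime emulates command lists in software instead of
    // the driver handling them, so multithreaded submission gains little.
    return threading.DriverCommandLists == TRUE;
}

void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediateCtx)
{
    // A worker thread records into a deferred context...
    ID3D11DeviceContext* deferredCtx = nullptr;
    device->CreateDeferredContext(0, &deferredCtx);

    // ... draw calls would be issued on deferredCtx here ...

    ID3D11CommandList* cmdList = nullptr;
    deferredCtx->FinishCommandList(FALSE, &cmdList);

    // ... and the render thread replays them on the immediate context.
    immediateCtx->ExecuteCommandList(cmdList, FALSE);

    cmdList->Release();
    deferredCtx->Release();
}
```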
 

Dygaza

Member
Oct 16, 2015
176
34
101
Maybe because they are not using DX11 command lists (deferred contexts), since AMD and Intel drivers don't implement them, and as a result Nvidia is in the minority of the market that supports that feature?

So you're saying nvidia is getting cpu bound now, instead of AMD...
 

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Oh,

That's rich. So devs shouldn't include Gameworks and PhysX effects that can't run on AMD GPUs?

I never said that, but you should remember that GameWorks and PhysX are present on the PS4, so they work on AMD hardware as well. And PhysX is mostly CPU-based on every platform.

The major problem with GameWorks is dev laziness. The appealing part of GameWorks is reducing development time by offering a library that has all the features already built in, so you have the ones getting paid by Nvidia and the ones that are just lazy. That tends to end badly, because a pre-built lib doesn't offer the best solution for every scenario.

If AMD offered an open library that does the same as GameWorks, its usage would likely be reduced.
 

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
We have a quotable quantity. 100% of market.

Am I to assume that when Nvidia leads in a game from now on, you will say that the developer is biased towards Nvidia and unfair?

If a dev defends its game running better on Nvidia "just because"? Yes. And still, Nvidia has the majority of the market, so that at least makes more sense; a dev does not make a game to sell it to Nvidia or AMD users, they make the game to sell it to everyone.
 

coercitiv

Diamond Member
Jan 24, 2014
7,225
16,982
136
The appealing part of GameWorks is reducing development time by offering a library that has all the features already built in, so you have the ones getting paid by Nvidia and the ones that are just lazy
Are you saying only sub-par developers would consider using GW without financial incentives?
 

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
So you think it's the devs' fault AMD is faster? OK! Whatever?

They wasted money and time on features that run on what, 15-20% of the market? The W10 share is about 37%, and Nvidia has about 70% of the market... so users running AMD GCN cards on W10 are a very small market, and most of them likely come from GCN 1.0, since they rebranded the same cards so many times.

Instead of doing that, they could have put that money into making a better game for all their customers, so it is strange that a dev would defend that.

It's likely AMD did pay them; they did the same with Star Swarm... so what? If you believe that Nvidia pays devs to use GameWorks, you must believe that AMD paid Oxide to use their features as well.
 

Shivansps

Diamond Member
Sep 11, 2013
3,916
1,570
136
Are you saying only sub-par developers would consider using GW without financial incentives?

Money is always the problem: using GW saves development time, but a pre-built lib tends not to be the best solution in terms of features/performance.

So it could be sub-par devs, or just ones looking to save some money to spend on other things.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Are you saying only sub-par developers would consider using GW without financial incentives?

All of them are sub-par developers; they all just buy the rights to a game engine and do only level design and mission scripting. Some of them slap some GameWorks on top of that and others don't.
Even if the engine is owned by the company that makes the game, like EA owning Frostbite and producing Battlefield, the game designers (DICE) most of the time have absolutely nothing to do with the game code or optimizations.

GW lets you add more stuff at the same price, so of course a lot of devs are using it.
 

maddie

Diamond Member
Jul 18, 2010
5,147
5,523
136
I never said that, but you should remember that GameWorks and PhysX are present on the PS4, so they work on AMD hardware as well. And PhysX is mostly CPU-based on every platform.

The major problem with GameWorks is dev laziness. The appealing part of GameWorks is reducing development time by offering a library that has all the features already built in, so you have the ones getting paid by Nvidia and the ones that are just lazy. That tends to end badly, because a pre-built lib doesn't offer the best solution for every scenario.

If AMD offered an open library that does the same as GameWorks, its usage would likely be reduced.
You really believe this?

I would think it's more a monetary reason. Remember that the top executives for the studios might be mainly financially focused. The employees might be gaming enthusiasts, but the bean counters call the shots. Gameworks is lower cost initially, but as some are probably beginning to realize, very costly in the longer view as sales are diminished.
 

Leadbox

Senior member
Oct 25, 2010
744
63
91
They wasted money and time on features that run on what, 15-20% of the market? The W10 share is about 37%, and Nvidia has about 70% of the market... so users running AMD GCN cards on W10 are a very small market, and most of them likely come from GCN 1.0, since they rebranded the same cards so many times.

Instead of doing that, they could have put that money into making a better game for all their customers, so it is strange that a dev would defend that.

It's likely AMD did pay them; they did the same with Star Swarm... so what? If you believe that Nvidia pays devs to use GameWorks, you must believe that AMD paid Oxide to use their features as well.

I'm sure if Nvidia hadn't lied about their level of support for that feature, different decisions might have been made. It didn't take Oxide long to implement async compute, and Nvidia aren't exactly a million miles behind on performance without it. They (Nvidia) promised a driver and claim they're yet to enable it, so what's a developer to do? Are you just upset Nvidia aren't the fastest in this game and feel they should be because of their market share? Ashes is easily the most optimized DX12 anything on all compatible hardware.
 

maddie

Diamond Member
Jul 18, 2010
5,147
5,523
136
I'm sure if Nvidia hadn't lied about their level of support for that feature, different decisions might have been made. It didn't take Oxide long to implement async compute, and Nvidia aren't exactly a million miles behind on performance without it. They (Nvidia) promised a driver and claim they're yet to enable it, so what's a developer to do? Are you just upset Nvidia aren't the fastest in this game and feel they should be because of their market share? Ashes is easily the most optimized DX12 anything on all compatible hardware.
I think this is key for many supporters.

True believers can't imagine a loss in any scenario. The AMD-leaning crowd tends to admit AMD's faults and flaws while advocating its strengths. When I read the most vocal Nvidia supporters, I'm reminded of Apple and its cult.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
So you're saying nvidia is getting cpu bound now, instead of AMD...
Of course they do; they don't have any way to offload anything parallel (DX12-wise) onto the GPU. We already know they throw most of the async compute workload onto the CPU.
 

Dygaza

Member
Oct 16, 2015
176
34
101
Of course they do; they don't have any way to offload anything parallel (DX12-wise) onto the GPU. We already know they throw most of the async compute workload onto the CPU.

Heh, the talk was about recently released DX11 titles where AMD has been doing really well, not about DX12. And he claimed that the lack of DCLs used in games would be the reason. (Even without DCLs, Nvidia has better DX11 draw call performance than AMD.)
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
They wasted money and time on features that run on what, 15-20% of the market? The W10 share is about 37%, and Nvidia has about 70% of the market... so users running AMD GCN cards on W10 are a very small market, and most of them likely come from GCN 1.0, since they rebranded the same cards so many times.

Instead of doing that, they could have put that money into making a better game for all their customers, so it is strange that a dev would defend that.

It's likely AMD did pay them; they did the same with Star Swarm... so what? If you believe that Nvidia pays devs to use GameWorks, you must believe that AMD paid Oxide to use their features as well.

15-20% of the market now. That work also got them a game that should age magnificently, plus a lot of PR: because they're breaking new ground, all the tech press interested in what that new ground means is covering their game (the marketing alone is probably worth the cost of the work). Most importantly, that work doesn't cost them performance on other cards.

It's kind of amazing and adorable that for some reason the real NV diehards in this thread feel entitled to better performance when there's no sign that it's even possible. At a certain point the only way to get much more performance is to actually have more capable hardware. Optimizing only removes obstacles; there's a lot of essential work getting done. If you want more performance past that point, well, that's why settings exist. The ability of other cards to do better isn't actually costing you any performance; game development isn't some zero-sum game of optimization. It's just that users with more hardware get more.
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
Now show me some game benches please.


Lmao. You just don't stop, do you? A 770 run by a professional overclocker on LN2 (LN2! LN2!) is about equal to my 7970 on water, and yet you're still spinning the same tune? You realize the 770 is even faster than the 680, right? And that 770 is the world record winner. Lmao...

Here's a technical apples-to-apples test done years ago. It seems you missed that period, when the 680 was clock-for-clock much slower than Tahiti.


12.11 vs 310.33
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Lmao. You just don't stop, do you? A 770 run by a professional overclocker on LN2 (LN2! LN2!) is about equal to my 7970 on water, and yet you're still spinning the same tune? You realize the 770 is even faster than the 680, right? And that 770 is the world record winner. Lmao...

Here's a technical apples-to-apples test done years ago. It seems you missed that period, when the 680 was clock-for-clock much slower than Tahiti.


12.11 vs 310.33

Dude please read the [H] review it is getting kinda painful.
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
Dude please read the [H] review it is getting kinda painful.


Why do I need to read [H] when they don't even know that running a GPU in a PCIe slot hanging off the southbridge is a terribad idea? You keep holding onto this [H] review like it's some lifeline?

Why do I need to read another review when I owned the gear myself first hand and ran benches the community agreed on? People were free to critique my methods. How to tell when someone is clutching at straws... are you ignoring everything posted?
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
Dude please read the [H] review it is getting kinda painful.

Your continuing on about one very particular model of card in a blanket comparison is frankly bizarre, unless you actually are trying to argue that the Lightning is representative of all models.
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
@guskline
Looks zero difference, even slightly slower.

NV loves to shout "Game Ready" on all their drivers (latest for The Division, Hitman, Ashes etc), but... no difference. Interesting.
I did not expect to see much difference. The biggest difference so far was when AotS began supporting multiple GPUs. My 290s in CF look a lot better. :)
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Why do I need to read [H] when they don't even know that running a GPU in a PCIe slot hanging off the southbridge is a terribad idea? You keep holding onto this [H] review like it's some lifeline?

Why do I need to read another review when I owned the gear myself first hand and ran benches the community agreed on? People were free to critique my methods. How to tell when someone is clutching at straws... are you ignoring everything posted?

Actually, they know a lot more than you when it comes to GPU reviews, and I tend to trust them more than some random dude on a forum, thanks.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
Your continuing on about one very particular model of card in a blanket comparison is frankly bizarre, unless you actually are trying to argue that the Lightning is representative of all models.

It was said many times that the 7970, when OCed, destroys a 680; I just showed the reverse, and with a card you could actually buy.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
Actually, they know a lot more than you when it comes to GPU reviews, and I tend to trust them more than some random dude on a forum, thanks.

And yet basic fundamental failures of methodology!

It was said many times that the 7970, when OCed, destroys a 680; I just showed the reverse, and with a card you could actually buy.

When could you buy it and for what price? Again, you keep bringing this up; at least make a full case for it. The original context was whether it made sense to buy it at launch. Either the Lightning is a representative sample, or it was available close to launch for a reasonable price. Establish one of those two and you'll start having a serious argument.
 

Jaydip

Diamond Member
Mar 29, 2010
3,691
21
81
And yet basic fundamental failures of methodology!



When could you buy it and for what price? Again, you keep bringing this up; at least make a full case for it. The original context was whether it made sense to buy it at launch. Either the Lightning is a representative sample, or it was available close to launch for a reasonable price. Establish one of those two and you'll start having a serious argument.

People make mistakes, and if we discarded a site for every mistake, you probably couldn't go to any site on the web.

It was available three months after the 680 debuted, which is the same for all Lightning cards, and the price was $580.
 

thesmokingman

Platinum Member
May 6, 2010
2,302
231
106
And yet basic fundamental failures of methodology!



When could you buy it and for what price? Again, you keep bringing this up; at least make a full case for it. The original context was whether it made sense to buy it at launch. Either the Lightning is a representative sample, or it was available close to launch for a reasonable price. Establish one of those two and you'll start having a serious argument.


The community tore them up over that fiasco with the triple 580s vs the 6990+6970 comparo. They tanked the triple 580s and didn't know why, lol. It was the community that revealed the why.

Oh, and then there's the PhysX mod fiasco. They didn't know that the mod is actually a memory hack, which obviously looks viral in nature, as do all game trainers and similar hacks. It got flagged as a false positive and they freaked out, banning the mod developer's website and everything related to it. When members pointed this out, they punished the members by yanking their news forum privileges or handing out outright bans.

Yo [H] is the source, they are super smart!