Ashes of the Singularity User Benchmarks Thread


VR Enthusiast

Member
Jul 5, 2015
133
1
0
The main difference this time around is that AMD's new async shaders are free performance.

It's like the Oxide guy said, Nvidia tried to make them drop async shaders to harm AMD's performance. That's the difference between this and something like GameWorks, which has to be deliberately added.

Any game that doesn't drop async shaders will have a free benefit for AMD, while Nvidia will have to pay the penalty of the older path. If Nvidia attempts to pay developers to drop async shaders, they are acting illegally.
 
Feb 19, 2009
10,457
10
76
@railven
Xbone's weaker hardware means that AAA games wanting to push it to the max HAVE to shift as many workloads as they can onto Async Shaders, so the 2 ACEs can be kept busy and used to their fullest potential.

So a traditional game engine with dynamic lighting will have to shift that towards Async Shaders, or else performance on Xbone will be horrible.

Basically, devs need to extract every bit of power from that flawed console, and in doing so they will have to heavily leverage async shaders. The PS4 version seems to be just higher res or higher frame rate. No extra eye candy: too much effort, and MS won't like that, unless it's a Sony-exclusive title.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
@railven
Xbone's weaker hardware means that AAA games wanting to push it to the max HAVE to shift as many workloads as they can onto Async Shaders, so the 2 ACEs can be kept busy and used to their fullest potential.

So a traditional game engine with dynamic lighting will have to shift that towards Async Shaders, or else performance on Xbone will be horrible.

Basically, devs need to extract every bit of power from that flawed console, and in doing so they will have to heavily leverage async shaders. The PS4 version seems to be just higher res or higher frame rate. No extra eye candy: too much effort, and MS won't like that, unless it's a Sony-exclusive title.

We must play different games. Devs didn't put in the extra effort to bring PS3 games even close to parity with the 360 for a long time. This time around, devs are intentionally hindering their games to maintain visual parity (on performance, some are actually giving MSFT the finger).

PS3 games had content removed to "run on PS3"; PS4 games are having content not coded or included at all, to "have parity with Xbone."

How this will translate to ports is something we'll have to wait and see. Because so far, all those games ported over that were Xbone-exclusive are not destroying Nvidia cards.
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
How this will translate to ports is something we'll have to wait and see. Because so far, all those games ported over that were Xbone-exclusive are not destroying Nvidia cards.

None of them are using async shaders yet.

Again I want to point out that the AMD cards improve across the board with DX12 and Ashes - even the R7 370 with only 2 ACEs. This isn't a case of bigger numbers helping out, it's a case of doing something right (AMD) and wrong (Nvidia).

I noted that earlier in the thread and now the Oxide guy has confirmed it.

The way to look at this is that through async shader usage, AMD will gain a good benefit no matter what. The benefit can only increase by adding more ACEs (I doubt it will be a linear increase or anywhere near it.)
 
Last edited:

stuff_me_good

Senior member
Nov 2, 2013
206
35
91
AFAIU Nvidia uses CCs (shaders) for tessellation. AMD designed their GPUs differently and use separate transistors (a tessellation engine). That has worked better for Nvidia in raw tessellation power: as the CCs increase, so does tessellation performance. Now, AMD did it the way they did because they believe that doing it the way Nvidia does will cause issues if the shaders become bottlenecked by the tessellation workload. So far that hasn't proven true. With DX12 feeding more information to the GPU, that might change. We'll see.

You took it too literally. What I meant was that, whatever new important features DX12 brings, I hope AMD goes overboard with them just like Nvidia did with tessellation from the get-go. It was AMD's mistake to bring tessellation first and then settle for mediocre performance when the market has clearly been wanting serious triangle throughput for a long time.
 

littleg

Senior member
Jul 9, 2015
355
38
91
I'm not arguing that, but Nvidia has optimized DX11 for 3-4 years now while async has been optimized for 0 years. If the old way works better for now, then why not use it for now?

That way there's no progress. With that mindset we'd never have gotten AMD64 or Core 2, we'd be sitting here trying to mop up the sweat from our Prescott7 8.6GHz CPUs.

It doesn't matter which company is innovating, only one needs to in order to drive the technology forward. The 6800GT, the X800 line, the monster 8800GTX, each time driving innovation forward and pushing the competition to improve.

Nvidia had the crown this go round, maybe they'll keep it with Pascal, maybe not but the quest for the crown is what gets us good technology and an evolving market.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
That way there's no progress. With that mindset we'd never have gotten AMD64 or Core 2, we'd be sitting here trying to mop up the sweat from our Prescott7 8.6GHz CPUs.
How is there no progress? It's left in for AMD, where it makes a big difference, and taken out for Nvidia for now until they figure out what's what = win-win for every gamer.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Think about it. There used to be a console market and a PC market. The two are now converging. Going forward Microsoft has made cross platform gaming a central part of their strategy. We are entering a new era. A single gaming market.
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
Think about it. There used to be a console market and a PC market. The two are now converging. Going forward Microsoft has made cross platform gaming a central part of their strategy. We are entering a new era. A single gaming market.

One that will probably be dominated by AMD.

Now that the developers have had a taste of what it's like to have easy cross-platform development, they'll never want to let it go. It depends on what Sony, Microsoft and Nintendo want to build with their consoles, of course, but AMD has a captive audience with them too.

With VR being the "next big thing" and AMD having an architecture much better suited to it, I find it hard to believe AMD doesn't already have the next generation of consoles locked up.

But let's say I'm wrong and Nvidia somehow convinces Sony, MS and Nintendo to go with their hardware instead next gen - how do they do this? By offering cheaper chips? They can't, because then their GPU profits would tank, and their PC GPU market with them. AMD can build console chips at GlobalFoundries and still have PC GPU chips made in Taiwan.

I doubt anybody believes that Nvidia stands a chance or can even afford to be in the consoles. AMD is taking control of the gaming market through a series of long-term plays. Genius.

Unless they go bust before it plays out.
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
On a sidenote....

We, as a PC gaming community, really need to fight all of this partisanship. We ought to encourage critical thinking rather than accept marketing claims by the large tech corporations. We should encourage research and scientific inquiry rather than bash one another over Green vs. Red.

Most of the time people who aren't biased get trolled until they stop coming. Many of the trolls here are rather sophisticated and they are able to skirt the rules well. If anything, this forum breeds skilled trolls.

In any case, I find this actual discourse on DX12 rather enlightening. I appreciate your contributions. It's seeming more and more like DX12 is the biggest change in graphics since the unified shaders of Xenos and G80.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I doubt anybody believes that Nvidia stands a chance or can even afford to be in the consoles. AMD is taking control of the gaming market through a series of long-term plays. Genius.

Nvidia could win the consoles if they were willing to cut the price enough, I'm sure; they probably just don't want to win at that price...

Especially when you consider that a high-clocked 8-core A57 would end up being as fast as or faster than the current console CPUs.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
NVIDIA lacks an x86 license and architecture. With the HSA capabilities of AMD's APUs, I think it's safe to say that AMD has the better console-geared product.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
IMO price is a very big factor, since nobody wants to lose money on consoles anymore. ARM is cheap, so if Nvidia cut a deal I could see it, especially since Nvidia wants to sell some Denver cores. Also, I don't think x86 is as big a deal on consoles. ARM is a very popular instruction set as well, and if it is powerful enough (and apparently the bar is as low as Jaguar cores...) then it could happen. I don't think it's probable, but it's certainly possible.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
IMO price is a very big factor, since nobody wants to lose money on consoles anymore. ARM is cheap, so if Nvidia cut a deal I could see it, especially since Nvidia wants to sell some Denver cores. Also, I don't think x86 is as big a deal on consoles. ARM is a very popular instruction set as well, and if it is powerful enough (and apparently the bar is as low as Jaguar cores...) then it could happen. I don't think it's probable, but it's certainly possible.

Good point.
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
IMO price is a very big factor, since nobody wants to lose money on consoles anymore. ARM is cheap, so if Nvidia cut a deal I could see it, especially since Nvidia wants to sell some Denver cores. Also, I don't think x86 is as big a deal on consoles. ARM is a very popular instruction set as well, and if it is powerful enough (and apparently the bar is as low as Jaguar cores...) then it could happen. I don't think it's probable, but it's certainly possible.

You are right that cost is key but there are other factors too.

x86 leaves the door open to straightforward cross-porting to PCs. Not only that, but backward compatibility is a big thing in consoles as well. That is valuable over time, even if it costs a little more upfront.

AMD has already shown that they are willing to accept worse margins for the market-share gain. They had no other choice, but from their perspective it's a win. Can you imagine what would happen if Nvidia had the entire console market? AMD would be dead and buried within a handful of years.

So what I'm saying is: there are benefits to x86, and AMD is willing to eat margins (they can use GlobalFoundries to mitigate this) while Nvidia isn't. AMD already seems to have the more forward-looking hardware for VR, too.

I just can't see any chance of Nvidia getting near a console just because they might be able to compete with ARM CPUs. Who would manufacture almost 40 million big console chips every year for them anyway?

Maybe the next Wii, or maybe they'll try their own. They have Shield, so why not?
 
Last edited:

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
I agree; if they do it, it would almost certainly have to be something non-traditional, like their Shield console or something similar.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
@railven
Xbone's weaker hardware means that AAA games wanting to push it to the max HAVE to shift as many workloads as they can onto Async Shaders, so the 2 ACEs can be kept busy and used to their fullest potential.

So a traditional game engine with dynamic lighting will have to shift that towards Async Shaders, or else performance on Xbone will be horrible.

Basically, devs need to extract every bit of power from that flawed console, and in doing so they will have to heavily leverage async shaders. The PS4 version seems to be just higher res or higher frame rate. No extra eye candy: too much effort, and MS won't like that, unless it's a Sony-exclusive title.


We must play different games. Devs didn't put in the extra effort to bring PS3 games even close to parity with the 360 for a long time. This time around, devs are intentionally hindering their games to maintain visual parity (on performance, some are actually giving MSFT the finger).

PS3 games had content removed to "run on PS3"; PS4 games are having content not coded or included at all, to "have parity with Xbone."

How this will translate to ports is something we'll have to wait and see. Because so far, all those games ported over that were Xbone-exclusive are not destroying Nvidia cards.

Xbone has the same async capabilities as GCN 1.0 cards. Here you can see the scaling of the 270X (aka 7870) and 280X (aka 7970):
http://www.computerbase.de/2015-08/directx-12-benchmarks-ashes-of-the-singularity-unterschiede-amd-nvidia/2/#diagramm-ashes-of-the-singularity-3840-x-2160

The 270X gets around a 15% boost and the 280X gets about 30%. Faster cards show more benefit because the DX11 CPU bottleneck kicks in.
I guess that 15% with the 270X is thanks to async shaders for the most part; there shouldn't be much of a CPU bottleneck with such a card/CPU combo.
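To put the CPU-bottleneck point in made-up numbers (only the ~15% and ~30% figures come from the ComputerBase link, the fps values here are invented for illustration): say the DX11 driver overhead caps the test rig at about 45 fps. A 280X that could otherwise push ~58 fps gains roughly 30% just from lifting that cap (45 -> 58), while a 270X that tops out around 47 fps anyway gains only a few percent from the cap, so most of its ~15% has to come from the async shaders themselves.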

So I guess devs will use as much async as the Xbone (or any GCN 1.0 card) can handle (and then some if they aim for the PC crowd).

Now the only question is when we will see next-gen GPUs from AMD and NV. The tables may turn in the next round.

Everyone with a 7970 (280X) is a winner. Yet another Never Settle performance boost. Lucky me! ;)
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
Want to know what I find suspect?

None of the big tech publications have commented on any of this so far.

So it's ok to quote false marketing but not ok to question the big tech companies...

Hurray for PR! Down with journalism!

Pardon me for sounding so negative but this is a big reason why I am working to create a new website. So many tech publications have turned into nothing more than 3rd party Public Relations firms.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
None of them are using async shaders yet.

Again I want to point out that the AMD cards improve across the board with DX12 and Ashes - even the R7 370 with only 2 ACEs. This isn't a case of bigger numbers helping out, it's a case of doing something right (AMD) and wrong (Nvidia).

I noted that earlier in the thread and now the Oxide guy has confirmed it.

The way to look at this is that through async shader usage, AMD will gain a good benefit no matter what. The benefit can only increase by adding more ACEs (I doubt it will be a linear increase or anywhere near it.)

None of whom? Console devs? Or the Xbone-exclusive console devs? Around here, I've been reading how PS4 devs have been using ACE and loving it.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
You post a lot of misinformation.

They never said reviewers should not use MSAA, nvidia did. They did not say they hadn't optimized their engine for MSAA AFAIK.

Your English is good, so I don't get why you seem to be shifting sentences and meanings around. They said their DX12 MSAA was standard and ran the same on all hardware. They said that some modifications Nvidia might have made in the DX11 driver might not have carried over, and that they are willing to make those driver-like changes in their own game FOR NVIDIA.

They did:
Here is what the developers had to say about the Ashes of the Singularity benchmark:
[...]
MSAA is implemented differently on DirectX 12 than DirectX 11. Because it is so new, it has not been optimized yet by us or by the graphics vendors. During benchmarking, we recommend disabling MSAA until we (Oxide/Nvidia/AMD/Microsoft) have had more time to assess best use cases.
Read more at http://www.legitreviews.com/ashes-o...chmark-performance_170787#gYdgBq2mZY3KilBp.99
http://www.legitreviews.com/ashes-o...12-vs-directx-11-benchmark-performance_170787



Microsoft did not demonstrate asynchronous compute on Nvidia hardware AFAIK. If you are talking about the Fable Legends demo you or someone else posted, that was demonstrating typed UAVs or something of the sort. It was not async compute.
They did:
Around 43mins: https://channel9.msdn.com/Events/GDC/GDC-2015/Advanced-DirectX12-Graphics-and-Performance

I have yet to see anyone claiming Maxwell 2 has it, much less does well at it. A game having those effects does not mean they are being done asynchronously. You can do those things the normal way, but AMD having async means they can do them faster, separately from the graphics pipeline, or do more. They can enable it in drivers through software, but clearly the hardware chokes.
I can run the Multi-Engine sample, which uses a graphics queue and a compute queue at the same time, just fine on my GTX 980 Ti:
https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Samples/D3D12nBodyGravity
https://msdn.microsoft.com/en-us/library/windows/desktop/mt186620(v=vs.85).aspx
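For context, "Multi-Engine" at the API level just means creating more than one command queue on the same device. A rough sketch of the relevant calls (my own simplified C++, not the sample's exact code, error handling omitted):

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// One direct (graphics) queue plus one compute-only queue on the same device:
// the basic setup the Multi-Engine samples build on.
void CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& graphicsQueue,
                                    ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // accepts graphics, compute and copy work
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // accepts compute and copy work only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}

D3D12 will happily create the compute queue on any hardware; whether work on the two queues actually overlaps on the GPU is up to the driver and the chip, which is exactly what is being argued about here.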
 
Last edited:

Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
Pardon me for sounding so negative but this is a big reason why I am working to create a new website. So many tech publications have turned into nothing more than 3rd party Public Relations firms.

Welcome to Anandtech Forums
You certainly bring good points to the table; it's refreshing to read your posts. About the quoted part, I'm afraid you'll surrender to the pressure many sponsors put on tech websites. The monetary and new-tech incentives might just be enough to convince you, especially since you're just starting up your project, but I really wish you good luck.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0

A GTX 980 Ti can handle both compute and graphics commands in parallel. What it cannot handle is asynchronous compute, that is to say, the ability for independent units (ACEs in GCN and AWSs in Maxwell/2) to function out of order while handling error correction.

It's quite simple if you look at the block diagrams of the two architectures. The ACEs reside outside of the Shader Engines. They have access to the Global Data Share cache, the L2 R/W cache pools in front of each quad of CUs, as well as the HBM/GDDR5 memory, in order to fetch commands, send commands, perform error checking or synchronize for dependencies.

The AWSs in Maxwell/2 reside within their respective SMMs. They may have the ability to issue commands to the CUDA cores residing within their respective SMMs, but communicating or issuing commands outside of their respective SMMs would demand sharing a single L2 cache pool. That cache pool has neither the space (sizing) nor the bandwidth to function in this manner.

Therefore, enabling Async Shading results in a noticeable drop in performance, so noticeable that Oxide disabled the feature and worked with NVIDIA to get the most out of Maxwell/2 through shader optimizations.

It's architectural. Maxwell/2 will NEVER have this capability.
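To put that in software terms, here is a rough sketch (simplified C++ of my own, not Oxide's code, error handling omitted) of what two independent queues with a dependency look like through the D3D12 API. The calls are identical on GCN and Maxwell/2; the difference I'm describing is in how the hardware behind them schedules that work.

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Submit one compute command list and one graphics command list on separate queues,
// with a fence so the graphics work only starts once the compute result is ready.
// The Wait() is resolved on the GPU; the CPU is not blocked here.
void SubmitWithDependency(ID3D12Device* device,
                          ID3D12CommandQueue* computeQueue,
                          ID3D12CommandQueue* graphicsQueue,
                          ID3D12CommandList* computeList,
                          ID3D12CommandList* graphicsList)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence.Get(), 1);        // mark the compute work as done

    graphicsQueue->Wait(fence.Get(), 1);         // graphics queue holds until the fence hits 1
    graphicsQueue->ExecuteCommandLists(1, &graphicsList);
}

How much of the work around that fence actually runs concurrently is decided entirely by the GPU's scheduling hardware, not by anything in this code.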
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
Because so far, all those games ported over that were Xbone-exclusive are not destroying Nvidia cards.

None of whom? Console devs? Or the Xbone-exclusive console devs? Around here, I've been reading how PS4 devs have been using ACE and loving it.

You said Xbone-exclusive; there are no Xbone games currently making use of ACEs. That is changing soon with Tomb Raider: http://gearnuke.com/rise-of-the-tom...breathtaking-volumetric-lighting-on-xbox-one/

There have only been a handful of PS4 games using async compute, none of which use async compute on PC.

There are no current PC games (except for Ashes of the Singularity) using async compute. This is because async compute needs DX12, Vulkan or Mantle (or a console API).

That's why there were no Xbone-exclusive games ported over to the PC destroying Nvidia cards: they don't exist yet.
 

Mahigan

Senior member
Aug 22, 2015
573
0
0
I'm sorry if this pains you to hear, but I didn't mislead you. I'm not the one you should be angry with.
 

VR Enthusiast

Member
Jul 5, 2015
133
1
0
I don't get what all the fuss over MSAA is about. Nvidia got its worst results when MSAA was disabled, in the Ars Technica tests; that's where the 980 Ti had a small loss to the 290X, without MSAA.