ComputerBase: Ashes of the Singularity Beta 1 DirectX 12 Benchmarks

csbin

Senior member
Feb 4, 2013
839
352
136
http://www.computerbase.de/2016-02/directx-12-benchmarks-ashes-of-the-singularity-beta/

AMD Crimson 16.1

Nvidia GeForce 361.75


[Benchmark charts from the linked article]
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
Interesting that the Fury X scales worse with resolution under DX12 than the 980 Ti does under either API, and that AMD sees huge DX12 gains while NV loses a bit. I think the gains AMD sees at lower resolutions explain the Fury X vs. 980 Ti result: DX12 is making good use of hardware that would otherwise sit unused.

Also feeling super good about trading my 970 for a 290. Seriously, wow.
 

Goatsecks

Senior member
May 7, 2012
210
7
76
These benchmarks are weird: (1) at 4K the 980 Ti does better than the Fury X, and this is flipped around at 1080p. Usually the Fury X does better at 4K, no? And (2) DX12 appears to be more expensive than DX11 for Nvidia?
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
These benchmarks are weird: (1) at 4K the 980 Ti does better than the Fury X, and this is flipped around at 1080p. Usually the Fury X does better at 4K, no? And (2) DX12 appears to be more expensive than DX11 for Nvidia?

Yeah. I'm surprised, but I think number one is explained by DX12 letting the card use hardware that would be underutilized in DX11 at lower resolutions. If you look at the steadily shrinking gap for the Fury X between DX11 and DX12 as resolution increases, I think that's the phenomenon bringing it down below the 980 Ti.
 

el etro

Golden Member
Jul 21, 2013
1,581
14
81
Cbase is the best hardware review site in the world. I trust every single one of their tests.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
Going by the chart, we really won't know where performance stands until Beta 2. They haven't focused on performance yet.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
I looked again: except for the GTX 980 Ti, all the AMD cards are now faster than the Nvidia cards if we look at DX12.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Interesting that Tahiti doesn't get the same boost Hawaii does. I thought I would regret "upgrading" from a 7970 but that might not be the case.

I feel like an idiot for getting a 970 over a clearance 290X if that holds for most DirectX 12 games, though.
 

Dygaza

Member
Oct 16, 2015
176
34
101
These benchmarks are weird: (1) at 4K the 980 Ti does better than the Fury X, and this is flipped around at 1080p. Usually the Fury X does better at 4K, no? And (2) DX12 appears to be more expensive than DX11 for Nvidia?

People in general still hold the odd belief that the Fury series' poor 1080p performance under DX11 comes down to the ROPs. As a Fury X owner, I can say games that reach 100% GPU usage at 1080p (vsync off) are very rare, and even at 1440p they're uncommon. AMD just can't keep its cards fed properly because of how their driver and the GCN architecture work together. Under DX12 they don't have this problem, thanks to much better CPU throughput.
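
That point can be pictured with a toy model (a minimal sketch; every number below is invented for illustration, not taken from any driver or benchmark): if a single submission thread cannot prepare draw calls as fast as the GPU consumes them, reported GPU usage stays below 100%, and spreading the submission work across cores, as DX12 allows, removes that ceiling.

```python
# Toy model (not real driver code): how much of the frame the GPU can be kept
# busy when every draw call must be prepared on one CPU thread (DX11-style)
# versus spread over several cores (DX12-style). All numbers are made up.

def gpu_utilization(draw_calls, cpu_us_per_call, submit_threads, gpu_frame_us):
    """Fraction of the frame the GPU actually spends working."""
    # CPU time needed to feed one frame's worth of draw calls
    cpu_frame_us = draw_calls * cpu_us_per_call / submit_threads
    # If feeding the GPU takes longer than the GPU needs, the GPU idles part of the time
    return gpu_frame_us / max(cpu_frame_us, gpu_frame_us)

# 10,000 draw calls, 3 us of CPU work per call, a GPU that could render the frame in 16 ms
print(gpu_utilization(10_000, 3.0, submit_threads=1, gpu_frame_us=16_000))  # ~0.53 -> GPU starved
print(gpu_utilization(10_000, 3.0, submit_threads=4, gpu_frame_us=16_000))  # 1.0  -> GPU fully fed
```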
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Welp, if anything has been beaten into me lately, it's that you have to look at the higher-resolution results to determine lasting power.

[Benchmark chart: higher-resolution results]

Guess I'm covered for the DX12 era. :D (Well, my GF is, because I'll be upgrading anyway :D)
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
These benchmarks are weird: (1) at 4K the 980 Ti does better than the Fury X, and this is flipped around at 1080p. Usually the Fury X does better at 4K, no? And (2) DX12 appears to be more expensive than DX11 for Nvidia?
Because DX11 is the bottleneck, and AMD decided a long time ago that rather than try to overcome the archaic API, they'd push past it with Mantle. Now we have DX12, Vulkan, etc. Hopefully DX11 dies sooner rather than later; it's been around way too many years now.
 

crisium

Platinum Member
Aug 19, 2001
2,643
615
136
I think Fiji may be encountering a ROP bottleneck here ...

Likely. Let's not forget that the 390 has a handful more ROPs than the 970, but the 980 Ti has a full 50% more than the Fury X. That's pixel-pushing power. That's a big reason (others being VRAM and OC headroom) why I think the Fury X can never catch a 980 Ti.

Even if you take the finest of the battle-tested Hawaii parts, the 390X, it on average trades blows with the 980, which is impressive given its older architecture and lower price. Essentially they are a wash. The 980 Ti has 37.5% more shaders, 50% more ROPs, 50% more bandwidth, and 50% more memory than the 980. It's a radically faster GPU in every way that counts. The Fury X offers 45% more shaders, 0% more ROPs, 33% more bandwidth, and HALF the memory compared to the 390X. What makes you think it should keep up with the 980 Ti when it cannot match the same gains over the lower card?
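
For anyone who wants to recompute those ratios, here is a quick sketch using the commonly published spec-sheet figures (treat the exact values as assumptions pulled from the usual spec tables, not from anything in this thread):

```python
# Quick check of the percentages above, using commonly published spec figures
# (shader count, ROPs, memory bandwidth in GB/s, VRAM in GB).
specs = {
    "GTX 980":    dict(shaders=2048, rops=64, bw=224, vram=4),
    "GTX 980 Ti": dict(shaders=2816, rops=96, bw=336, vram=6),
    "R9 390X":    dict(shaders=2816, rops=64, bw=384, vram=8),
    "Fury X":     dict(shaders=4096, rops=64, bw=512, vram=4),
}

def advantage(big, small):
    """Percentage lead of `big` over `small` in each spec."""
    return {k: round(100 * (specs[big][k] / specs[small][k] - 1), 1) for k in specs[big]}

print(advantage("GTX 980 Ti", "GTX 980"))  # {'shaders': 37.5, 'rops': 50.0, 'bw': 50.0, 'vram': 50.0}
print(advantage("Fury X", "R9 390X"))      # {'shaders': 45.5, 'rops': 0.0, 'bw': 33.3, 'vram': -50.0}
```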

It's common sense to see the 970, with its crippled ROPs, bandwidth, and memory, lose to the 390. It's not a stretch to see the 390X match the 980, since the 390X and 390 are far closer to each other than the 970 and 980. But it would take a miracle for the Fury X to really be faster than the 980 Ti. We'd need to see the 390 truly running over the 970, making it look like a value card, and we'd even need to see the 390X a full tier or more above the 980. It's asking too much.

And let's not forget that most Fury X vs. 980 Ti comparisons are stock against stock. Even if most people do not overclock, the numerous factory-OC versions, at almost no price premium, offer anywhere from 5% to nearly 25% extra performance, and the Fury X cannot close that gap (and if a consumer OC partially does, remember that even the factory-OC 980 Tis can be overclocked further).

The 980 Ti will always be the faster card, barring a true AMD driver miracle that makes Never Settle look meaningless. And that's OK. If your budget is $650+, you buy Nvidia right now, and there's not much reason not to for the average consumer, unless they also want to pair it with a cheaper FreeSync monitor.

It's the below-$600 field where AMD is looking fine, and that needs to be the headline here. In DX11 games, on average, we have already seen the 390 pull ahead of the 970, the 380 standing tall over the 960 (not to mention the 380X), the 390X fiercely battling the more expensive 980, and so on. The news should be that DX12 may extend this gradual lead even further. And since the bulk of Nvidia's dGPU profit over the past year and a half likely comes from 970 and 960 sales, this is what you need to stress to consumers if you want a return to at least the old 60-40 market split.
 
Aug 11, 2008
10,451
642
126
Because DX11 is the bottleneck and AMD decided a long time ago rather than try and overcome the archaic API they'd push past it with Mantle. Now we have DX12, Vulkan, etc... Hopefully DX11 dies sooner rather than later. It's been around way too many years now.

What died is AMD's market share. Maybe they should have tried to optimize better for DX11; after all, there are still no final DX12 games on the market.
 

Timmah!

Golden Member
Jul 24, 2010
1,429
657
136
I still don't understand how Nvidia cards are doing more or less as well under DX11 as they do under DX12, or as well as AMD cards do under DX12.

Wasn't DX12 supposed to unlock the use of all CPU cores, increasing performance several times over compared to DX11, where only one CPU core is usually taxed? But now I am seeing basically the same frame rates... can anyone elaborate, please?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Wasn't DX12 supposed to unlock the use of all CPU cores, increasing performance several times over compared to DX11, where only one CPU core is usually taxed? But now I am seeing basically the same frame rates... can anyone elaborate, please?

I think you had too much DX12 koolaid ;)

Their implementation is still broken. But you will never get what you expect.
 

TheELF

Diamond Member
Dec 22, 2012
3,973
731
126
Wasn't DX12 supposed to unlock the use of all CPU cores, increasing performance several times over compared to DX11, where only one CPU core is usually taxed? But now I am seeing basically the same frame rates... can anyone elaborate, please?
If you have a CPU where one core can drive the GPU to 100%, then nothing else matters: the GPU cannot work at more than 100% no matter how many cores try to drive it at once, and any modern desktop CPU can handle that.

The consoles, on the other hand, have 1.5 GHz (when turboed) Athlon 5350-class cores; those are too weak to drive a good GPU from a single core, so DX12/Mantle/whatever API the PS4 and Xbox One use helps a lot.
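
To make that concrete, here is a minimal frame-time sketch (timings invented purely for illustration) of why splitting submission across cores only shows up in the frame rate when the CPU side is the slow one:

```python
# Toy frame-time model (illustrative numbers only): the frame rate is set by
# whichever side finishes last, so multi-core draw-call submission only helps
# when the CPU side is the one holding the frame up.

def fps(cpu_ms_single_core, gpu_ms, cores_used=1):
    """Frames per second when the submission work is split across `cores_used` cores."""
    cpu_ms = cpu_ms_single_core / cores_used
    return 1000.0 / max(cpu_ms, gpu_ms)

# Fast desktop core: a single core already outruns the GPU, so extra threads change nothing.
print(fps(cpu_ms_single_core=8.0,  gpu_ms=16.0, cores_used=1))  # 62.5
print(fps(cpu_ms_single_core=8.0,  gpu_ms=16.0, cores_used=4))  # 62.5

# Weak console-class core: submission is the bottleneck, so threading helps a lot.
print(fps(cpu_ms_single_core=40.0, gpu_ms=16.0, cores_used=1))  # 25.0
print(fps(cpu_ms_single_core=40.0, gpu_ms=16.0, cores_used=4))  # 62.5
```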
 

Timmah!

Golden Member
Jul 24, 2010
1,429
657
136
I think you had too much DX12 koolaid ;)

Their implementation is still broken. But you will never get what you expect.

I don't know if that's true; I'm just telling you what I read. Perhaps those are just theoretical numbers and, as you say, it's never going to scale perfectly with the number of cores or whatever... but I still expected at least some improvement, I dunno, at least a 2x performance increase, not the same numbers as with DX11. So much talk about how superior and novel it is, and how DX11 is archaic and doesn't use your hardware fully, only to find out that your fully utilized hardware under DX12 produces the same number of frames...

So it's broken then? It seems broken on some deep level, given that these DX12-related technological improvements are sort of the main selling point of the game. After playing the game, I can tell the gameplay/game design certainly isn't (for now).
 

Timmah!

Golden Member
Jul 24, 2010
1,429
657
136
If you have a CPU where one core can drive the GPU to 100%, then nothing else matters: the GPU cannot work at more than 100% no matter how many cores try to drive it at once, and any modern desktop CPU can handle that.

The consoles, on the other hand, have 1.5 GHz (when turboed) Athlon 5350-class cores; those are too weak to drive a good GPU from a single core, so DX12/Mantle/whatever API the PS4 and Xbox One use helps a lot.

Disregarding consoles, who nowadays buys a low-end AMD CPU with a high-end GPU? People who have the money for a 980 Ti/Fury X are very likely to have the means to get an i7, which, if I understand correctly, is fast enough to feed the GPU using just a single one of its 4/6/8 cores... thus making the GPU the bottleneck, right? Is there any point to DX12 then, when it comes to PC gaming? Do we expect upcoming next-gen GPUs to become so powerful that they will be able to make use of multiple cores on upcoming (Intel) CPUs?
 

ThatBuzzkiller

Golden Member
Nov 14, 2014
1,120
260
136
I don't know if that's true; I'm just telling you what I read. Perhaps those are just theoretical numbers and, as you say, it's never going to scale perfectly with the number of cores or whatever... but I still expected at least some improvement, I dunno, at least a 2x performance increase, not the same numbers as with DX11. So much talk about how superior and novel it is, and how DX11 is archaic and doesn't use your hardware fully, only to find out that your fully utilized hardware under DX12 produces the same number of frames...

So it's broken then? It seems broken on some deep level, given that these DX12-related technological improvements are sort of the main selling point of the game. After playing the game, I can tell the gameplay/game design certainly isn't (for now).

Don't be led to believe that it's broken ...

The HLSL compiler compiles their shaders just fine, so it is very much a valid implementation ...

DX12 itself, on the other hand, is only meant to be cross-vendor compatible; Microsoft didn't have to make it cross-vendor friendly, considering that bindless resources and multiple queues are modelled after GCN ...