ComputerBase Ashes of the Singularity Beta 1 DirectX 12 Benchmarks


selni

Senior member
Oct 24, 2013
249
0
41
Oxide said there is no AMD-optimized code but there is nVidia-optimized code. I doubt AMD has done anything with the drivers to optimize it for DX11.

I'm always hearing how bad AMD's DX11 drivers are, but if you look at comparative benchmarks in reviews, which are all DX11, AMD does quite well. The 980 Ti and Titan X are the only cards that outperform AMD, and then usually not by much.

That's a misquote; the statement was that there's no AMD vendor-specific code. That reflects well on AMD (or on how closely DX12 matches their architecture), but it isn't the same thing.

Given how different GCN and Kepler/Maxwell are, even generic code (i.e. DX12 without vendor-specific extensions) is going to perform differently on the two architectures depending on how it's structured, which features are used, and so on. So it's entirely possible for code to be optimized for one vendor without being specific to it (see the sketch below).

Is that what's happened here? Probably not; it just looks like nvidia's current-generation DX11 performance is on par with its DX12 performance in this case.
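To make that distinction concrete, here is a hypothetical sketch (function and constant names invented for illustration): the dispatch uses only vendor-neutral D3D12 calls, yet the tuning choice of 64 threads per group maps onto exactly one GCN wavefront while needing two 32-wide warps on Kepler/Maxwell.

```cpp
// Hypothetical sketch: vendor-neutral D3D12 code whose tuning constant
// can still favor one architecture. GCN wavefronts are 64 threads wide;
// NVIDIA warps are 32 threads wide.
#include <windows.h>
#include <d3d12.h>

void DispatchWork(ID3D12GraphicsCommandList* cmdList, unsigned itemCount)
{
    // Matches a [numthreads(64,1,1)] declaration in the compute shader:
    // one full GCN wavefront per group, two NVIDIA warps per group.
    const unsigned kGroupSize = 64;
    const unsigned groups = (itemCount + kGroupSize - 1) / kGroupSize;
    cmdList->Dispatch(groups, 1, 1); // standard API call, no vendor extensions
}
```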
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
That's a misquote; the statement was that there's no AMD vendor-specific code. That reflects well on AMD (or on how closely DX12 matches their architecture), but it isn't the same thing.

Given how different GCN and Kepler/Maxwell are, even generic code (i.e. DX12 without vendor-specific extensions) is going to perform differently on the two architectures depending on how it's structured, which features are used, and so on. So it's entirely possible for code to be optimized for one vendor without being specific to it.

Is that what's happened here? Probably not; it just looks like nvidia's current-generation DX11 performance is on par with its DX12 performance in this case.

They said they just coded it the way MSFT said to and it worked. That's not vendor-optimized at all.

All I'm saying is that they did use vendor-"specific" (I said optimized, but you're really splitting hairs here, IMO) code for nVidia, which could be why their DX11 performance is so good. We know DX11 requires optimization from the IHVs to run well.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Is there any difference in quality between DX11 and 12 in this game?
 

zlatan

Senior member
Mar 15, 2011
580
291
136
Yes, but it doesn't explain why nvidia's DX11 results are still better than DX12. They're either not rendering the same thing in the same way (likely driver shader replacement, etc.) or the DX12 implementation in this particular game is less optimal for nvidia's hardware than what their DX11 driver does automatically.

Or, more likely, some combination of both; lower-level APIs aren't magic.
D3D12 doesn't have an IHV-specific kernel driver, and this is a huge difference compared to D3D11. Every job that used to be executed in the kernel driver must now be done in the graphics engine. The application must be tuned carefully for this, which is probably not the most important thing in a beta 1 phase, because the rendering optimization might change a lot over the next several months.
An IHV can help with this change by opening up their tools and documentation, so the devs can write efficient management for all GPU architectures. Without these, explicit access will be really hard. It is still possible, but the efficiency will be suboptimal. This is probably the main reason why NVIDIA doesn't get a performance boost in D3D12 compared to D3D11. But again, this is not a final build!
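To make "every job that ran in the kernel driver must now be done in the engine" concrete, here is a minimal illustrative sketch (names invented, error handling omitted) of the kind of submission and CPU/GPU synchronization a D3D12 engine spells out itself, where the D3D11 driver handled it implicitly:

```cpp
// Illustrative sketch: explicit submission and fencing under D3D12.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAndWait(ID3D12Device* device, ID3D12CommandQueue* queue,
                   ID3D12GraphicsCommandList* cmdList)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);  // explicit submission
    queue->Signal(fence.Get(), 1);         // explicit sync point

    if (fence->GetCompletedValue() < 1) {  // explicit CPU-side wait
        fence->SetEventOnCompletion(1, done);
        WaitForSingleObject(done, INFINITE);
    }
    CloseHandle(done);
}
```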
 
Last edited:

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Your 980 Ti performs about the same in DX11 and DX12 simply because nvidia's DX11 drivers are very fast. The biggest benefit of D3D12 comes from the reduction in driver overhead. If no performance increase is seen, then you can only assume that nvidia's driver overhead wasn't enough to bottleneck performance.
Thank you for the explanation, dogen1 :thumbsup: I'm interested in the performance of the 4790K with two R9 290s in CrossFire once the coders support CF/SLI GPUs.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,348
642
121
Thank you for the explanation, dogen1 :thumbsup: I'm interested in the performance of the 4790K with two R9 290s in CrossFire once the coders support CF/SLI GPUs.

And you have a VERY fast CPU. I'm sure the gains from DX11 to DX12 on the AMD side would be even bigger for you if you had a crappy CPU.

Also, my guess is that AMD's DX11 performance in this game isn't optimized. We've seen the work AMD has had to do to bring performance up to par in some games. My guess is AMD would need to do the same for the DX11 path of this game, but that's irrelevant, since if you use DX11 in a DX12 game you deserve 0 FPS.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
So, these highly skilled Oxide guys get beaten by nVidia with DX11.

What exactly was the point of their marketing war against high-level APIs when they can't create an efficient DX12 engine at all?
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
Except for the GTX 980 Ti, every AMD card is now faster than its rival. Look at the R9 390 and 280.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
So, these highly skilled Oxide guys get beaten by nVidia with DX11.

What exactly was the point of their marketing war against high-level APIs when they can't create an efficient DX12 engine at all?

???

Maybe Nvidia sucks at DX12?

It's beta... sling the mud later.

The game isn't even optimized for performance yet.
 

coercitiv

Diamond Member
Jan 24, 2014
7,394
17,539
136
So, these highly skilled Oxide guys get beaten by nVidia with DX11.

What exactly was the point of their marketing war against high-level APIs when they can't create an efficient DX12 engine at all?
Are you saying their DX11 engine is efficient?
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Are you saying their DX11 engine is efficient?

Look at AMD. Nope, their DX11 engine is not efficient. It's just nVidia's driver that makes their engine look good.

On the other hand, a low-level API can't beat a high-level API on hardware with 40% more compute performance, 33% more bandwidth and 33% more power consumption.

Except for the GTX 980 Ti, every AMD card is now faster than its rival. Look at the R9 390 and 280.

ComputerBase uses reference clocks on nVidia and AMD hardware.

If they used a custom GTX 980 Ti, the GTX 980 Ti with DX11 would be 15-20% faster... :\
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Look at AMD. Nope, their DX11 engine is not efficient. It's just nVidia's driver that makes their engine look good.

On the other hand, a low-level API can't beat a high-level API on hardware with 40% more compute performance, 33% more bandwidth and 33% more power consumption.



ComputerBase uses reference clocks on nVidia and AMD hardware.

If they used a custom GTX 980 Ti, the GTX 980 Ti with DX11 would be 15-20% faster... :\

What source could you --possibly-- have to support saying Oxide has an inefficient DX11 engine? They've gone on record repeatedly showing they have one of the most scalable DX11 RTS engines.

Do you have even the barest shred of evidence to support "Nope, their DX11 engine is not efficient. It's just nVidia's driver that makes their engine look good"?
 
Last edited:

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Is there any difference in quality between DX11 and 12 in this game?

I've been wondering about this. They did claim some things could not be done in DX11, specifically light sources. I also wonder whether there's a difference between DX12 on AMD and DX12 on nvidia in terms of how the graphics are rendered.
 

coercitiv

Diamond Member
Jan 24, 2014
7,394
17,539
136
Look at AMD. Nope, their DX11 engine is not efficient. It's just nVidia's driver that makes their engine look good.
What does AMD have to do with the efficiency of the DX11 engine? Do you reckon the Fury X should be equal to the 980 Ti in DX11 as well? I did not know you held Fury hardware in such high regard.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
ComputerBase uses reference clocks on nVidia and AMD hardware.

If they used a custom GTX 980 Ti, the GTX 980 Ti with DX11 would be 15-20% faster... :\

I said for DX12. AMD cards really shine due to the potential of GCN in DX12; Fable showed it. With DX12, the shaders in GCN would not be idle. I've even heard that half the shaders sit idle in DX11 mode. Also, even if you overclock a 970, you won't be able to beat a 390 (not the 390X).
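For reference, the D3D12 mechanism usually credited with keeping GCN's shader units busy is async compute: a second, compute-only queue running alongside the graphics queue. A minimal sketch (error handling omitted, not tied to this particular game):

```cpp
// Sketch: creating a compute-only queue that can run alongside the
// direct (graphics) queue, letting compute work fill idle shader units.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```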
 
Last edited:

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
What does AMD have to do with the efficiency of the DX11 engine? Do you reckon the Fury X should be equal to the 980 Ti in DX11 as well? I did not know you held Fury hardware in such high regard.

Is this not the point? If nVidia is much faster with DX11 than AMD is with DX11, then the DX11 engine is not efficient; nVidia just has a superior driver that makes the engine look good. Otherwise AMD would be on the same level.

So why would any sane developer take a low-level API and do much more work when they could call it a day and let nVidia and AMD do the job?

So the only reason Oxide is promoting "low level" is AMD. For their own engine they don't need it, because nVidia cards already get utilized to nearly 100% by the API and the driver.

Oh, and they still don't care to optimize their DX12 engine for nVidia. :\

I said for DX12. AMD cards really shine due to the potential of GCN in DX12; Fable showed it. With DX12, the shaders in GCN would not be idle. I've even heard that half the shaders sit idle in DX11 mode. Also, even if you overclock a 970, you won't be able to beat a 390 (not the 390X).

They don't need to beat AMD. If there is no difference, then nothing changes.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Is this not the point? If nVidia is much faster with DX11 than AMD is with DX11, then the DX11 engine is not efficient; nVidia just has a superior driver that makes the engine look good. Otherwise AMD would be on the same level.

So why would any sane developer take a low-level API and do much more work when they could call it a day and let nVidia and AMD do the job?

So the only reason Oxide is promoting "low level" is AMD. For their own engine they don't need it, because nVidia cards already get utilized to nearly 100% by the API and the driver.

Oh, and they still don't care to optimize their DX12 engine for nVidia. :\



They don't need to beat AMD. If there is no difference, then nothing changes.

I will just leave this one here; things will get interesting after Beta 2.

[attached benchmark chart]
 

guskline

Diamond Member
Apr 17, 2006
5,338
476
126
Behrouz, do you have the Fable beta? I thought I was signed up, but I noticed on my forum profile that I hadn't checked Win 10. I have it checked now and hope to get an invite in the next few weeks. I'd love to compare my R9 290s in CF to my GTX 980 Ti.
 

PhonakV30

Senior member
Oct 26, 2009
987
378
136
Sorry mate, I have the legendary AMD HD 5770 :). I will upgrade my entire PC to Polaris/Pascal when they arrive. I wish I could share results with you, but I don't have the Fable beta.
 

zlatan

Senior member
Mar 15, 2011
580
291
136
So why would any sane developer take a low-level API and do much more work when they could call it a day and let nVidia and AMD do the job?

Because their kernel drivers hurt parallelism. Even if I use a lot of rendering threads with D3D11, 70-80 percent of the CPU time is idle and unusable. No matter how optimized the engine is, the kernel driver's threads will steal the available resources and use them in a very inefficient way.
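A rough sketch of the D3D12 alternative being contrasted here (names invented, scene details omitted): each worker thread records into its own command allocator and list, with no driver lock serializing them, and the main thread submits everything in one call.

```cpp
// Sketch: multithreaded command-list recording under D3D12.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      unsigned numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < numThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // ... record this thread's slice of the scene here ...
            lists[i]->Close(); // per-thread recording, no shared lock
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists((UINT)raw.size(), raw.data()); // one submission
}
```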

Every engine programmer does the same job: write the code and test it. I can see a lot of stalls, but I'm unable to fix them, because the IHVs don't allow me or other devs to debug the kernel driver. We don't have the source, we don't have the tools, nothing. If I can find a fix for a stall in the engine, then I'm the luckiest man in the world; if not, I have to contact the IHVs. At this point the whole development gets very nasty. They may provide an updated driver which fixes the bug, but most of the time another problem comes up, which requires another driver fix, which raises another problem, and so on. If I'm lucky enough I may get to meet some IHV engineers to talk about the problem directly, and I've done this in the past. Every time I sat down with the IHVs, we just agreed that the API is the problem. One time I sat down with Microsoft and they told me the drivers are the problem. Sometimes I think that if the IHVs and Microsoft talked about this, they would probably agree that the devs are the problem. But in the end, these conversations don't make my code run faster, and that is really sad.
I don't know how to explain it to you without a lot of technical sentences. I think the easiest explanation is that the abstraction of D3D11 is at the wrong level: it is too high-level to be fast and too low-level to be easy to use. The new low-level APIs aren't really low level; they just put the abstraction at the "right" level. Nothing more, nothing less.
The devs can do a much better job if they can manage the memory explicitly. The only thing the IHVs must do is open up their tools and architecture documents. That's all we want.
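As one concrete example of that explicit memory management (a sketch under the usual D3D12 rules, not any shipping engine's actual code): the engine reserves a heap once and places resources into it at offsets it chooses itself, where D3D11 left those placement decisions to the driver.

```cpp
// Sketch: placing a buffer at an explicit offset in an app-managed heap.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> PlaceBuffer(ID3D12Device* device, ID3D12Heap* heap,
                                   UINT64 offset, UINT64 size)
{
    // offset must respect the resource's alignment (64 KB by default).
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = size;   // for buffers, Width is the byte size
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR; // required for buffers

    ComPtr<ID3D12Resource> buffer;
    device->CreatePlacedResource(heap, offset, &desc,
                                 D3D12_RESOURCE_STATE_COMMON,
                                 nullptr, IID_PPV_ARGS(&buffer));
    return buffer;
}
```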
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
The devs can do a much better job if they can manage the memory explicitly. The only thing the IHVs must do is open up their tools and architecture documents. That's all we want.

Well, that's also what every gamer should want (optimized games), but unfortunately it's not what every IHV wants.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
I don't know how to explain it to you without a lot of technical sentences. I think the easiest explanation is that the abstraction of D3D11 is at the wrong level: it is too high-level to be fast and too low-level to be easy to use. The new low-level APIs aren't really low level; they just put the abstraction at the "right" level. Nothing more, nothing less.
The devs can do a much better job if they can manage the memory explicitly. The only thing the IHVs must do is open up their tools and architecture documents. That's all we want.

I am with you. But the fact stands: DX12 is slower than DX11 in this "game". Which makes it clear that either Oxide has no clue what they are doing or they just don't care about nVidia.

There is no explanation for why DX12 should be slower on nVidia hardware.
 

Gikaseixas

Platinum Member
Jul 1, 2004
2,836
218
106
There is no explanation for why DX12 should be slower on nVidia hardware.

Nvidia could still be working on its DX12 drivers (for this particular title). Let's avoid blaming others, or Nvidia for that matter, as it is still a beta.