D3D12 is Coming! AMD Presentation


positivedoppler

Golden Member
Apr 30, 2012
1,148
256
136
Oh wow, Nvidia threw a complete gutter ball in DirectX 12. I wonder what went wrong? This won't be pretty if they don't fix it; there'll be a lot of pissed-off Maxwell owners.
 

MagickMan

Diamond Member
Aug 11, 2008
7,460
3
76
Actually, if you read Dan Baker's latest blog, he says that Nvidia, AMD, Microsoft, and Intel have had access to all AoTS source code for a year.


ExtremeTech has Fury X vs 980Ti AoTS benches.

http://www.extremetech.com/gaming/2...he-singularity-amd-and-nvidia-go-head-to-head

I wasn't talking about the game. AMD could move nearly all of their Mantle work over to DX12, and it was heavily influenced by XBone development. As nvidia's DX12 drivers improve, they'll see similar gains.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
Oh wow, Nvidia threw a complete gutter ball in DirectX 12. I wonder what went wrong? This won't be pretty if they don't fix it; there'll be a lot of pissed-off Maxwell owners.

They are still faster than AMD with DX12. :awe:
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Oh wow, Nvidia threw a complete gutter ball in DirectX 12. I wonder what went wrong? This won't be pretty if they don't fix it; there'll be a lot of pissed-off Maxwell owners.

It's an alpha demo. Look at Star Swarm and how crappy that was at launch, and how much Nvidia improved their score.

[chart: Star Swarm benchmark scores]


Probably see something like that again.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
A single, alpha benchmark and we've already made a conclusion???

http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

A *single benchmark. We do need other games.

How useful is the benchmark?

It should not be considered that because the game is not yet publicly out, it's not a legitimate test. While there are still optimizations to be had, Ashes of the Singularity in its pre-beta stage is as optimized as – or more optimized than – most released games. What's the point of optimizing code 6 months after a title is released, after all? Certainly, things will change a bit until release. But PC games with digital updates are always changing; we certainly won't hold back from making big changes post launch if we feel it makes the game better!

DirectX 11 vs. DirectX 12 performance

There may also be some cases where D3D11 is faster than D3D12 (it should be a relatively small amount). This may happen under lower CPU load conditions and does not surprise us. First, D3D11 has 5 years of optimizations where D3D12 is brand new. Second, D3D11 has more opportunities for driver intervention. The problem with this driver intervention is that it comes at the cost of extra CPU overhead, and can only be done by the hardware vendor’s driver teams. On a closed system, this may not be the best choice if you’re burning more power on the CPU to make the GPU faster. It can also lead to instability or visual corruption if the hardware vendor does not keep their optimizations in sync with a game’s updates.

While Oxide is showing off D3D12 support, Oxide also is very proud of its DX11 engine. As a team, we were one of the first groups to use DX11 during Sid Meier's Civilization V, so we've been using it longer than almost anyone and know exactly how to get the most performance out of it. However, it took 3 engines and 6 years to get to this point. We believe that Nitrous is one of the fastest, if not the fastest, DX11 engines ever made.

It would have been easy to engineer a game or benchmark that showed D3D12 simply destroying D3D11 in terms of performance, but the truth is that not all players will have access to D3D12, and this benchmark is about yielding real data so that the industry as a whole can learn. We've worked tirelessly over the last years with the IHVs and quite literally seen D3D11 performance more than double in just a few years' time. If you happen to have an older driver lying around, you'll see just that. Still, despite these huge gains in recent years, we're just about out of runway.

Unfortunately, our data is telling us that we are near the absolute limit of what it can do. What we are finding is that if the total dispatch overhead can fit within a single thread, D3D11 performance is solid. But eventually, one core is not enough to handle the rendering. Once that core is saturated, we get no more performance. Unfortunately, the constructs for threading in D3D11 turned out to be not viable. Thus, if we want to get beyond 4 core utilization, D3D12 is critical.

Gives some idea of why DX11 sometimes does better than DX12.
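
To put the threading point in concrete terms, here is a rough, generic sketch of the D3D12 pattern Oxide is describing: record command lists on several worker threads, then submit them all with one call. This is not Oxide's code; it assumes a device, queue, per-thread command allocators and a pipeline state already exist, and RecordDrawsForSlice is a made-up placeholder for per-thread scene traversal.

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: records this thread's slice of the frame's draw calls.
void RecordDrawsForSlice(ID3D12GraphicsCommandList* list, unsigned slice, unsigned sliceCount);

void RecordAndSubmitFrame(ID3D12Device* device,
                          ID3D12CommandQueue* queue,
                          std::vector<ComPtr<ID3D12CommandAllocator>>& allocators,
                          ID3D12PipelineState* pso,
                          unsigned threadCount)
{
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i) {
        // One command list per worker thread, each with its own allocator.
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso, IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i, threadCount] {
            // The expensive work (building the draw calls) happens in parallel here.
            RecordDrawsForSlice(lists[i].Get(), i, threadCount);
            lists[i]->Close();  // a list must be closed before it can be executed
        });
    }
    for (auto& t : workers) t.join();

    // In D3D11 all of the above funnels through one immediate context; in D3D12
    // only this final submission is serialized on the queue.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}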
 

SPBHM

Diamond Member
Sep 12, 2012
5,066
418
126
I think the DX11 results are pretty depressing for AMD; they show how bad the disadvantage to NV can actually be in heavy scenes. Now, while it's nice that they are doing so well with DX12, it's good to remember that most games until late next year will still be using DX11.

But if we get better-than-expected DX12 adoption, it's a great help for AMD GPUs. The CPUs didn't like it all that much in this game, though; the 8-core FX is still behind the i3, worse than in some DX11 games.
 

sontin

Diamond Member
Sep 12, 2011
3,273
149
106
http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

A *single benchmark. We do need other games.



Gives some idea of why DX11 sometimes does better than DX12.

No, it is not an argument.
nVidia didn't show negative scaling in the Star Swarm demo 6 months ago. Now we should believe that with better drivers, better API support, and a better understanding of DX12, the situation should be much worse for nVidia? ():)
 

AnandThenMan

Diamond Member
Nov 11, 2004
3,991
627
126
Sour sour grapes from Nvidia, they sure don't like it when they can't control the source code and prop up their hardware.
 

tential

Diamond Member
May 13, 2008
7,348
642
121
Lol at the consistent nvidia hate.
If you're drawing any conclusions from this I feel bad for you.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Ever tried turning shadows all the way up in Rome 2/Attila Total War? Doesn't matter what graphics card you have or how many CPU cores you have, it will cause PCs with mid-level clock speeds to chug.

To be perfectly honest I've never played any of the Total War games :awe:

So how are most modern shadow methods done these days? Is it just a matter of testing a bunch of rays for where a shadow pixel must land (or twice in the case of a self shadow) and rendering out the results as a texture? If that's the case and is based on vertex shader programs, I can see why it's so draw call heavy.
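
From what I understand, most engines still use shadow mapping rather than per-pixel ray tests: the scene's depth is rasterized again from each shadow-casting light's point of view, and the main pass then compares against that depth texture. That extra depth-only pass over the geometry is where the draw calls multiply. A very rough D3D11-flavored sketch, not from any real engine, with vertex/index buffer binding, constant-buffer updates and per-light shadow map textures all omitted or assumed to exist:

Code:
#include <d3d11.h>
#include <vector>

struct Object { UINT indexCount; /* vertex/index buffers omitted for brevity */ };

void RenderFrame(ID3D11DeviceContext* ctx,
                 const std::vector<Object>& scene,
                 size_t shadowCastingLights,
                 ID3D11DepthStencilView* shadowDSV,    // depth-only shadow map target
                 ID3D11ShaderResourceView* shadowSRV,  // same texture, readable in the PS
                 ID3D11RenderTargetView* backBuffer,
                 ID3D11DepthStencilView* sceneDSV)
{
    // Shadow pass(es): rasterize scene depth from the light's point of view.
    // Every shadow-casting light repeats the scene's draw calls.
    for (size_t light = 0; light < shadowCastingLights; ++light) {
        ctx->OMSetRenderTargets(0, nullptr, shadowDSV);
        ctx->ClearDepthStencilView(shadowDSV, D3D11_CLEAR_DEPTH, 1.0f, 0);
        for (const Object& o : scene)
            ctx->DrawIndexed(o.indexCount, 0, 0);      // one draw call per object, per light
    }

    // Main pass: draw the scene again; the pixel shader samples the shadow map and
    // compares depths to decide whether each pixel is lit or in shadow.
    ctx->OMSetRenderTargets(1, &backBuffer, sceneDSV);
    ctx->PSSetShaderResources(0, 1, &shadowSRV);
    for (const Object& o : scene)
        ctx->DrawIndexed(o.indexCount, 0, 0);
}

So it's mostly extra rasterization passes rather than ray tests, which is exactly the kind of thing that blows up the draw call count.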
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
It is strange they did so well in Star Swarm. But that was missing a lot of the graphics that are in this one: no smoke, terrain, etc.

It was probably more like the API overhead test.

Both companies may introduce improvements through driver updates, but I doubt the situation will change much for nvidia, apparently because their DX11 drivers were pretty good at this stuff already.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136
Lol at the consistent nvidia hate.
If you're drawing any conclusions from this I feel bad for you.

Drawing conclusions from a benchmark? Yah no one does that.

Nvidia released a specific driver for this benchmark. Trying to play this off as just another alpha doesn't really hold. The game is in alpha, yes, but the engine is not at all. It's the same engine Star Swarm uses, and how old is that?
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
[chart: ashes-gtx980.png – Ashes of the Singularity results on a GTX 980]



So much for DX12 being the panacea to AMD's low IPC CPUs.

It's only one benchmark. Let's not make sweeping conclusions yet. It's not making much of a difference on any CPU, at least in avg. FPS. It's actually performing slightly worse in most results with Intel.
 
Feb 19, 2009
10,457
10
76
I did say in the past that anything that makes AMD's weaker cores faster will also make Intel's stronger cores much faster.

They are not going to compete well in the CPU market until they improve their IPC. That's what everyone should understand. Also, Ashes seems to be limited to 4 cores on Intel, since their 4-core is faster than their 6-core CPU. But I think the bench is more than just draw calls; it's a lot of async compute too, as their devs have said.
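
For anyone wondering what "async compute" means in D3D12 terms: compute work is submitted on a second queue of type COMPUTE so it can overlap work on the graphics (DIRECT) queue. A minimal, generic sketch, not Oxide's code, assuming the device already exists; a fence (ID3D12Fence) would still be needed wherever the graphics work depends on the compute results.

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a compute queue that runs alongside the usual DIRECT (graphics) queue.
ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    return computeQueue;
}

Whether the two queues actually run concurrently is up to the GPU and its scheduler, which may be part of why the results differ so much between vendors here.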
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It's alright. The FX will be the best choice once DirectX 13 comes out. (/sarcasm from the CPU forums, where we're constantly told that once better threaded support is out, AMD processors will crush Intel ones. We obviously know better...)

Who is saying that the FX CPUs will destroy Intel? The best I've seen is FX 8*** CPUs being able to O/C to the hilt and almost keep up with stock 4-core i7s. We've had multi-thread/core software around to test on before DX12 (CPU render programs that can use as many cores/threads as are available have been around for years and years). FX hasn't been able to keep up in most of them, and these are highly optimized parallel render programs.

I think the DX11 results are pretty depressing for AMD; they show how bad the disadvantage to NV can actually be in heavy scenes. Now, while it's nice that they are doing so well with DX12, it's good to remember that most games until late next year will still be using DX11.

But if we get better-than-expected DX12 adoption, it's a great help for AMD GPUs. The CPUs didn't like it all that much in this game, though; the 8-core FX is still behind the i3, worse than in some DX11 games.

DX11 results show that AMD hasn't optimized at all. If you look at DX11 games they compete quite well. 390X and 980 trade blows in real life games.

Lol at the consistent nvidia hate.
If you're drawing any conclusions from this I feel bad for you.

"Posted on August 16, 2015 by Dan Baker
There are incorrect statements regarding issues with MSAA. Specifically, that the application has a bug in it which precludes the validity of the test. We assure everyone that is absolutely not the case. Our code has been reviewed by Nvidia, Microsoft, AMD and Intel. It has passed the very thorough D3D12 validation system provided by Microsoft specifically designed to validate against incorrect usages. All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months. Fundamentally, the MSAA path is essentially unchanged in DX11 and DX12. Any statement which says there is a bug in the application should be disregarded as inaccurate information."

Well according to the dev nVidia is lying. There is no MSAA bug.
 
Feb 19, 2009
10,457
10
76
All the IHVs had source code for over a year. That's more than what most devs would do, so definitely any poor performance is entirely the fault of the IHVs.

Also, I'm not impressed at all with the FX CPU results; in fact, they look horrible compared to Intel's.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
I think the other elephant in the room is that Oxide says they will scale well to 16 cores, yet that kind of scaling is completely absent in the benchmark. There really isn't any scaling over 4 cores.

Either Oxide is exaggerating or the engine is still in a very rough state, though at this point I would really have expected to see better scaling. If they still need to modify the engine to scale over 4x as many cores (which would be a nightmare to do), then the engine is currently unfinished.

All the IHVs had source code for over a year. That's more than what most devs would do, so definitely any poor performance is entirely the fault of the IHVs.

If you have ever coded anything, you would realize how easy it is to break something but still have it work.