D3D12 is Coming! AMD Presentation


ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Fury (non-X) really disappoints there against the GTX 980 and 390X. And the GTX 980 Ti is just the undisputed king of the hill.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
ashes-r9390x.png


Interestingly, it seems that AMD does much better in DX12 mode, largely catching up to the 980 with solid improvements across the board.

However, DX12 appears to be boosting the average across the board; hence Skylake demolishes the FX series.

It's obvious what the problem is: it's game logic. That's exactly why you see no difference between high and low on the low-end CPUs (game logic is bottlenecking one CPU core, meaning there is no penalty for moving from high to low), while the high-end CPUs show a drop in performance as you change the settings. The 6700K is also ahead of the 5960X.
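
To put that in code terms, here's a minimal sketch of what I mean (hypothetical functions, nothing from Oxide's actual engine): if the logic tick pins one core, the graphics preset barely moves the frame time on a slow CPU.

Code:
#include <chrono>
#include <thread>

// Hypothetical stand-ins, not Oxide's code.
void UpdateGameLogic()  { std::this_thread::sleep_for(std::chrono::milliseconds(20)); } // serial sim work
void SubmitRenderWork() { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }  // cheap at "low"

// If UpdateGameLogic() pins one core for ~20 ms a frame, the game is
// capped near ~45 fps whether render submission costs 2 ms ("low") or
// 8 ms ("high") -- so a slow CPU shows no gap between presets, while a
// fast CPU (logic at ~5 ms) exposes the render cost and drops at "high".
int main() {
    for (int frame = 0; frame < 10; ++frame) {
        auto t0 = std::chrono::steady_clock::now();
        UpdateGameLogic();   // one thread: the bottleneck on slow CPUs
        SubmitRenderWork();  // the part DX12 can spread across cores
        auto dt = std::chrono::steady_clock::now() - t0;
        (void)dt;            // frame time = logic + submit
    }
}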
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Heh, even today with DX11, the latest games that can take advantage of 4-6 threads show a significant performance increase on the 8-core FX CPUs against a Core i5. Do you really believe that DX12 games will not benefit more from higher core counts than DX11 games do?

So after the AOTS benchmarks, want to change your hopes? :awe:
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Interestingly, it seems that AMD does much better in DX12 mode, largely catching up to the 980 with solid improvements across the board.

I wonder how it would have looked if AMD had just made a multithreaded DX11 driver to begin with.
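
For reference, DX11 already had a multithreading path via deferred contexts; the catch is that the real win depends on the driver advertising native command-list support, which NVIDIA's DX11 driver does and AMD's never did. A rough sketch of the mechanism (assumes a valid device and immediate context; error handling omitted):

Code:
#include <d3d11.h>

// Sketch only. Record on a deferred context (worker thread), then play
// the command list back on the immediate context (render thread).
void RecordAndSubmit(ID3D11Device* device, ID3D11DeviceContext* immediateCtx)
{
    // Does the driver natively support command lists?
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));
    // caps.DriverCommandLists == FALSE means the runtime emulates them
    // and most of the multithreading benefit evaporates.

    ID3D11DeviceContext* deferredCtx = nullptr;
    device->CreateDeferredContext(0, &deferredCtx);

    // ... record draw calls on deferredCtx from a worker thread ...

    ID3D11CommandList* cmdList = nullptr;
    deferredCtx->FinishCommandList(FALSE, &cmdList);

    // Playback still funnels through the one immediate context.
    immediateCtx->ExecuteCommandList(cmdList, FALSE);

    cmdList->Release();
    deferredCtx->Release();
}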
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Those AMD chips must be working as quad cores most of the time or something. The bottleneck, I guess, or something's way off about the results. Usually the 8370 will do better than an i3, so it's strange that in DX12 it's not. That game might be showing a weakness in the design.

The below is interesting. While sites still use GameWorks games to test AMD hardware, NVIDIA gets to tell them not to turn on AA. I would love to see benchmarks with it on.

What's a new benchmark without some controversy?

Just a couple of days before publication of this article, NVIDIA sent out an information email to the media detailing its “perspective” on the Ashes of the Singularity benchmark. First, NVIDIA claims that the MSAA implementation in the game engine currently has an application-side bug that the developer is working to address, and thus any testing done with AA enabled was invalid. (I happened to get wind of this complaint early and did all testing without AA to avoid the complaints.) Oxide and Stardock dispute the characterization of this as a “game bug” and instead chalk it up to early drivers and a new API.

Secondly, and much more importantly, NVIDIA makes the claim that Ashes of the Singularity, in its current form, “is [not] a good indicator of overall DirectX 12 gaming performance.”

What’s odd about this claim is that NVIDIA is usually the one in the public forum talking about the benefits of real-world gaming testing and using actual applications and gaming scenarios for benchmarking and comparisons. Due to the results you’ll see in our story though, NVIDIA appears to be on the offensive, trying to dissuade media and gamers from viewing the Ashes test as indicative of future performance.

NVIDIA is correct in that the Ashes of the Singularity benchmark is “primarily useful to understand how your system runs a series of scenes from the alpha version of Ashes of Singularity” – but that is literally every game benchmark. The Metro: Last Light benchmark is only useful to tell you how well hardware performs on that game. The same is true of Grand Theft Auto V, Crysis 3, etc. Our job in the media is to take that information in aggregate and combine it with more data points to paint an overall picture of any new or existing product. It just happens this is the first DX12 game benchmark available and thus we have a data point of exactly one: and it’s potentially frightening for the company on the wrong side.

Do I believe that Ashes’ performance will tell you how the next DX12 game and the one after that will perform when comparing NVIDIA and AMD graphics hardware? I do not. But until we get Fable in our hands, and whatever comes after that, we are left with this single target for our testing.

As some would expect, AMD gains massively from DX12 over DX11; NVIDIA, not so much. It will be interesting to see how this plays out for other games in the future. In this one, the 390X comes out ahead of the 980 with DX12, even at 1080p. The DX11 performance is disappointing, so get to upgrading.

We keep hearing that excuse.

This is the game that can scale to 16 cores!

I wouldn't call them lazy. It does not seem to be scaling as we might expect, though. They really should have included i5 results; that would help in judging the impact of Hyper-Threading, as well as in seeing what's going on with the FX processors. They are not full 8-core or 6-core chips.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Looks like Nvidia's drivers have some sort of bug there.

Nvidia's numbers.

nvidia-test-results.jpg


DX12 is not scaling with core speed on Nvidia's drivers at the moment. DX11 is.

Edit: 2 core results

nvidia-test-results2.jpg


Improvements across the board, but you will notice there is little scaling at high settings between 2 and 6 cores. Looks like a driver bug.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
Looks like Nvidia's drivers have some sort of bug there.

Nvidia's numbers.

nvidia-test-results.jpg


DX12 is not scaling with core speed on Nvidia's drivers at the moment. DX11 is.

Edit: 2 core results

nvidia-test-results2.jpg


Improvements across the board, but you will notice there is little scaling at high settings between 2 and 6 cores. Looks like a driver bug.

Of course nVidia tries to deflect by blaming the game.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
I was going to say it looks like NVIDIA invested heavily in DX11 optimizations, and the DX12 driver may not have them or they may not apply. Seems Oxide thinks so:

So what is going on then? Our analysis indicates that any D3D12 problems are quite mundane. New API, new drivers. Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet. Oxide believes it has identified some of the issues with MSAA and is working to implement workarounds in our code. This in no way affects the validity of a DX12-to-DX12 test, as the exact same workload gets sent to everyone’s GPUs. This type of optimization is just the nature of brand-new APIs with immature drivers.

Read more at http://www.legitreviews.com/ashes-o...mark-performance_170787/2#KhiOKIXUzvuwxXIv.99

Fortunately for NVIDIA, this game is not released yet, so we can go on thinking it's early-code effects.

I'll say this somewhat confirms the claims being made about NVIDIA being better at DX11, and it could shed light on why Project CARS was slow on AMD cards, given what it was doing on the CPU. It should also support the claims by the PCars devs that a DX12 patch for that game should give huge gains (probably mostly for AMD).
 

MagickMan

Diamond Member
Aug 11, 2008
7,460
3
76
It shows that NVIDIA wasn't as closely involved in early development; the numbers will improve quite a lot for them as they catch up.
 

Despoiler

Golden Member
Nov 10, 2007
1,968
773
136

tential

Diamond Member
May 13, 2008
7,348
642
121
As expected, the i3 is again beating the FX series.
It's alright, the FX will be the best choice once DirectX 13 comes out. (/sarcasm from the CPU forums, where we're constantly told that once better threaded support arrives, AMD processors will crush Intel ones. We obviously know better...)
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
With DX12 being able to multithread much better, it certainly should help AMD CPU performance, but we should all know by now that whenever devs get more free performance, they find a way to use it. You should expect devs to find new ways to tax the CPU once DX12 eases up the current bottlenecks.

Having a faster CPU will remain an advantage, but it may be possible that new engines will be much better at using more cores, allowing added cores and threads to become a bigger advantage. If that proves true, we may finally start to see CPU performance grow much faster than it does now by putting more cores on the CPU.
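
For anyone wondering what "using more cores" looks like under DX12, here's a bare-bones sketch of the model: every worker thread records into its own command list (each with its own allocator) and the queue executes them in one batch, so the app, not the driver, decides the split. Assumes an existing device and queue; fences, PSOs, and error checks omitted.

Code:
#include <d3d12.h>
#include <thread>
#include <vector>

// Sketch of DX12 multithreaded recording; not from any shipping engine.
void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      unsigned numThreads)
{
    std::vector<ID3D12CommandAllocator*>    allocs(numThreads);
    std::vector<ID3D12GraphicsCommandList*> lists(numThreads);
    std::vector<std::thread>                workers;

    for (unsigned i = 0; i < numThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i], nullptr, IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&, i] {
            // ... each worker records its slice of the frame here ...
            lists[i]->Close();  // one allocator per thread, never shared
        });
    }
    for (auto& w : workers) w.join();

    // Single submission of everything the workers recorded.
    queue->ExecuteCommandLists(
        numThreads,
        reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}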
 

tential

Diamond Member
May 13, 2008
7,348
642
121
I have a feeling this is going to change after 2 years of optimizations. Their drivers appear to have some CPU bottlenecks.
In 2 years, not a single person will care about GTX 980 Ti and Fury X performance besides people who didn't purchase them theorycrafting.

Those who did purchase them have disposable income and won't sit around 2 years from now with cards that are being beaten by midrange products.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
In 2 years, not a single person will care about GTX 980 Ti and Fury X performance besides people who didn't purchase them theorycrafting.

Those who did purchase them have disposable income and won't sit around 2 years from now with cards that are being beaten by midrange products.

You're making assumptions about people and their money. Some people would be more inclined to keep a $600+ card beyond 2 years. The 780 Ti and the $649 780 still have users pissed about the current standings. The cards have to be used by somebody, and those somebodies care.

Actually, if you read Dan Baker's latest blog he says that Nvidia, AMD, Microsoft, and Intel have had access to all AoTS source code for a year.


ExtremeTech has Fury X vs 980Ti AoTS benches.

http://www.extremetech.com/gaming/2...he-singularity-amd-and-nvidia-go-head-to-head

That was done using NVIDIA's recent Ashes of the Singularity driver. Should we really expect better later?

DirectX 12, in contrast, gives the developer far more control over how resources are used and allocated. It offers vastly superior tools for monitoring CPU and GPU workloads, and allows for fine-tuning in ways that were simply impossible under DX11. It also puts Nvidia at a relative disadvantage. For a decade or more, Nvidia has done enormous amounts of work to improve performance in-driver. DirectX 12 makes much of that work obsolete. That doesn’t mean Nvidia won’t work with developers to improve performance or that the company can’t optimize its drivers for DX12, but the very nature of DirectX 12 precludes certain kinds of optimization and requires different techniques.
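
To make the "far more control" point concrete, here's a minimal sketch of an explicit DX12 resource transition, the kind of hazard tracking a DX11 driver used to do behind the app's back (assumes cmdList and texture already exist):

Code:
#include <d3d12.h>

// Under DX11 the driver inferred this; under DX12 the app must state it.
void TransitionForSampling(ID3D12GraphicsCommandList* cmdList,
                           ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
    // Getting StateBefore/StateAfter wrong is now an app bug the driver
    // no longer papers over -- the flip side of the lower CPU overhead.
}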
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Judging by how poorly NVIDIA did at the launch of Star Swarm and how well they do now, I have confidence that they will improve things. The poor core and clock-speed scaling indicates that they definitely have room to improve.

Star Swarm, when initially released, had a ton of problems, so saying this is exclusively NVIDIA's problem may be incorrect.
 

Azix

Golden Member
Apr 18, 2014
1,438
67
91
Well, this explains all.

original.jpg

Where is that from?

If you look at those benches, NVIDIA gets more out of DX11 than they do out of DX12 with the faster CPUs, sometimes more than AMD's DX12. I doubt they will improve on that performance; it's more likely they will get DX12 to match DX11 performance. As far as CPU overhead goes, they may be near the limit with DX11 already.

It doesn't look like this game does anything with DX12 beyond removing CPU limitations.
 

positivedoppler

Golden Member
Apr 30, 2012
1,148
256
136
Those numbers far exceed my expectations of DirectX 12... wow, just wow. AMD and Intel gain the most. Intel's stagnant PC sales may finally pick up.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
Those AMD chips must be working as quad cores most of the time or something. The bottleneck, I guess, or something's way off about the results. Usually the 8370 will do better than an i3, so it's strange that in DX12 it's not. That game might be showing a weakness in the design.

This game is a strategy game, and so it is single-threaded: no matter how heavily they multithread the graphics, the game itself (all the logic) will keep running on one thread.
The sad part is that it's the only genre where you can easily add units (turn one unit into hundreds of units without changing much of the code).

The rest is like others have already said: NVIDIA is very optimized for DX11, so they have smaller gains with DX12, and they also have fewer but stronger cores.
Sadly, none of the reviews show CPU loads or, even better, the actual threads from Process Explorer.
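
For what it's worth, you don't even need Process Explorer to count a game's threads; a quick Win32 sketch (pass the game's PID from Task Manager; this counts threads, not their load):

Code:
#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>

// Count the threads belonging to one process via a Toolhelp snapshot.
int CountThreads(DWORD pid)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return -1;

    int count = 0;
    THREADENTRY32 te = {};
    te.dwSize = sizeof(te);
    for (BOOL ok = Thread32First(snap, &te); ok; ok = Thread32Next(snap, &te))
        if (te.th32OwnerProcessID == pid) ++count;  // threads of our target

    CloseHandle(snap);
    return count;
}

int main()
{
    // Demo on ourselves; substitute the game's PID to check a real title.
    std::printf("threads: %d\n", CountThreads(GetCurrentProcessId()));
}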