[WCCFtech] AMD and NVIDIA DX12 big picture mode


antihelten

Golden Member
Feb 2, 2012
1,764
274
126
So, which would benefit AMD hardware more?

The answer is obvious, and I think at this point everyone realizes what I was trying to say. So, I'll just leave this dead horse at that.

Which what? From the scenario you outlined it would be impossible to tell.

If you want a useful scenario where it would actually be somewhat possible to predict the outcome, it would have to be something like this:

Game A: Uses x amount of DX12 features. Uses a not insignificant amount of compute shaders. Uses async compute.
Game B: Uses the exact same amount of DX12 features as A. Uses the exact same amount and type of compute shaders and graphics shaders as A. Does not use async compute.

In that case game A should be faster, although by how much depends upon the ratio of compute shaders to graphics shaders.
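For anyone wondering what "uses async compute" actually means at the API level, here is a minimal D3D12 sketch (my own illustration, error handling omitted): game B would push everything through the one direct queue, while game A also creates a compute queue so its compute work can overlap the graphics work, with fences for synchronization.

[CODE]
// Minimal sketch of async compute in D3D12 (illustration only, no error handling).
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // "Game B": graphics and compute command lists all go through one direct queue.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> directQueue;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

    // "Game A": an extra compute queue lets compute work be submitted separately,
    // so hardware with spare compute capacity (e.g. GCN's ACEs) can overlap it
    // with graphics. Whether it actually overlaps is up to the GPU/driver.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Cross-queue synchronization happens through fences where dependencies exist.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);  // compute queue marks its work done
    directQueue->Wait(fence.Get(), 1);     // graphics waits only at that point
    return 0;
}
[/CODE]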
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Okay, it seems you guys are just choosing to ignore my posts. So here, I'll let you answer it:

Which of the two would benefit AMD best?

A DX12+AC game

A DX12 without AC game

Simple enough.


The first one, but that is not what you said.

AMD suffers up to 30% (that's just using the number given, could be less, could be more) right off the bat because of this.

AMD will not suffer up to 30% if the game does not use Async Compute.
AMD will gain up to ANOTHER 30% from Async Compute ON TOP of the DX12 performance gains.

You just don't realize that GCN will gain directly from DX12 with or without the use of Async Compute.

So for example,

With a DX12 game the R9 390X will be close/equal to the GTX 980, but using Async Compute in the same game the R9 390X may be close/equal to the GTX 980 Ti.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
The first one, but that is not what you said.



AMD will not suffer up to 30% if the game does not use Async Compute.
AMD will gain up to ANOTHER 30% from Async Compute ON TOP of the DX12 performance gains.

You just don't realize that GCN will gain directly from DX12 with or without the use of Async Compute.

So for example,

With a DX12 game the R9 390X will be close/equal to the GTX 980, but using Async Compute in the same game the R9 390X may be close/equal to the GTX 980 Ti.

Which is what I said. I'm sure he'll dance around it some more though. :rolleyes:
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
AMD will not suffer up to 30% if the game does not use Async Compute.
AMD will gain up to ANOTHER 30% from Async Compute ON TOP of the DX12 performance gains.
Yes, but Nvidia will always have top performance because they improve their drivers until they work as well as they can.
As seen in Ashes, where they managed to get the same FPS in DX11 that GCN needed DX12 to achieve.
So whenever AMD does not gain 30% (or whatever the percentage turns out to be) from AC, they will suffer 30% compared to Nvidia.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Yes, but Nvidia will always have top performance because they improve their drivers until they work as well as they can.
As seen in Ashes, where they managed to get the same FPS in DX11 that GCN needed DX12 to achieve.
So whenever AMD does not gain 30% (or whatever the percentage turns out to be) from AC, they will suffer 30% compared to Nvidia.

They only gain from AC. They don't suffer when it's not there. And the gain in Ashes is nowhere near 30%. That's not at all what the dev said. That's just how you are trying to spin it.
 

antihelten

Golden Member
Feb 2, 2012
1,764
274
126
Yes, but Nvidia will always have top performance because they improve their drivers until they work as well as they can.
As seen in Ashes, where they managed to get the same FPS in DX11 that GCN needed DX12 to achieve.
So whenever AMD does not gain 30% (or whatever the percentage turns out to be) from AC, they will suffer 30% compared to Nvidia.

When comparing cards that are positioned against each other from either side (e.g. Fury X vs. 980 Ti, 390X vs. 980, 390 vs. 970, 380 vs. 960), Nvidia is ahead by roughly 5% on average. So if AMD gained 30% in a given game from async compute (and assuming that Nvidia doesn't fix their drivers as they have said they will), then AMD would be ahead by 20-25%.
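Rough arithmetic, just to illustrate: take the AMD card as the 100% baseline, so the comparable Nvidia card sits at ~105%. A 30% async gain puts AMD at 130%, and 130 / 105 ≈ 1.24, i.e. roughly 24% ahead, which is where that 20-25% figure comes from.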

Also the current Ashes benchmark is not really a good indicator of final performance, seeing as it's still in alpha and has vsync forced on for AMD.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Yes, but Nvidia will always have top performance because they improve their drivers until they work as well as they can.
As seen in Ashes, where they managed to get the same FPS in DX11 that GCN needed DX12 to achieve.
So whenever AMD does not gain 30% (or whatever the percentage turns out to be) from AC, they will suffer 30% compared to Nvidia.

It's possible when you have a layer of abstraction, like in DirectX 11. To get more performance in DX12, Nvidia has to get specific code added to the GAME ITSELF. In DX12 the API driver talks to the GPU and there is nothing else in between, so there is essentially no room to gain performance through drivers. You have to add specific code to the application, with different execution approaches, to gain performance.
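A minimal sketch of what that looks like from the application side (my own illustration, not code from any actual game): the game itself detects the GPU vendor and picks a tuned path, since under DX12 the driver won't restructure the work for you.

[CODE]
// Illustration only: a game-side vendor check used to pick a tuned render path.
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

enum class GpuPath { GenericDx12, AmdTuned, NvidiaTuned };

GpuPath PickRenderPath()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) != S_OK)
        return GpuPath::GenericDx12;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // PCI vendor IDs: 0x1002 = AMD, 0x10DE = Nvidia.
    if (desc.VendorId == 0x1002) return GpuPath::AmdTuned;     // e.g. lean on async compute
    if (desc.VendorId == 0x10DE) return GpuPath::NvidiaTuned;  // e.g. different batching, no AC
    return GpuPath::GenericDx12;
}
[/CODE]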
 

Krteq

Golden Member
May 22, 2015
1,007
719
136
Exactly. DX12 is an explicit API, which means the driver is playing "second fiddle".
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
It also means the developers have to code and optimize for every single uarch, and they have to follow up whenever a new uarch is released (which we already know won't happen). The only real safe bet in terms of a lazy Xbox One port is to use GCN 1.1. Even GCN 1.2 could run like crap.

It also makes performance cheating very easy from a developer standpoint.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
The first one, but that is not what you said.

Thank you!

AMD will not suffer up to 30% if the game does not use Async Compute.
AMD will gain up to ANOTHER 30% from Async Compute ON TOP of the DX12 performance gains.

You just don't realize that GCN will gain directly from DX12 with or without the use of Async Compute.

So for example,

With a DX12 game the R9 390X will be close/equal to the GTX 980, but using Async Compute in the same game the R9 390X may be close/equal to the GTX 980 Ti.

Holy crap, it's like none of you can read. That is exactly what I said. I didn't say it would be DX11 minus 30%, I said them not using it would SUCK for AMD (not the performance, the scenario). I.e. it feels like AMD is at the mercy of devs. If they choose not to use AC for whatever reason (the reason I specifically stated was Nvidia money-hatting them), it would suck for AMD (the scenario). Someone went on to argue "it would be just like now", to which I said that was exactly what I meant.

If DX12+no AC == 100%
DX12+AC == ~130%

But people hung on to me saying "it would suck for AMD" and then rolled that into "but DX12 is more than just AC!" No kidding!

Which is what I said. I'm sure he'll dance around it some more though. :rolleyes:

Yet you never answered my question. Had you done so, I would have said exactly what AtenRa said.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
Shintai, not exactly, however it is possible. The problem is this: it all comes down to the hardware you have and how well it handles the API you are working with. If you have extremely capable hardware for the API, you don't have to fiddle with it. If your hardware is rubbish for that API, you have to fiddle with it, and with the game itself.

I'm guessing you know which is true for GCN and which for Maxwell. Maxwell is rubbish for DX12. That's why there will be a lot to do in-game to optimize it for Nvidia hardware.

Compatibility with features does not mean anything in the case of DX12. It's all about how you use them, and that is a matter of hardware.
 

96Firebird

Diamond Member
Nov 8, 2010
5,738
334
126
Shintai, not exactly, however it is possible. The problem is this: it all comes down to the hardware you have and how well it handles the API you are working with. If you have extremely capable hardware for the API, you don't have to fiddle with it. If your hardware is rubbish for that API, you have to fiddle with it, and with the game itself.

I'm guessing you know which is true for GCN and which for Maxwell. Maxwell is rubbish for DX12. That's why there will be a lot to do in-game to optimize it for Nvidia hardware.

Compatibility with features does not mean anything in the case of DX12. It's all about how you use them, and that is a matter of hardware.

Do you think GCN 1.2 is not capable hardware for Mantle? Because it had, and probably still has (I haven't seen any updates on it), issues with Mantle. Which, as you know, is an API sponsored by AMD, for hardware designed by AMD.

Links to review sites talking about the issue here.

New hardware needs game updates with Mantle, and presumably DX12, since they are both "closer to metal" APIs. And, as we've seen with Mantle, that isn't as easy as some may hope.
 

Glo.

Diamond Member
Apr 25, 2015
5,930
4,991
136
I think the best answer would be testing the GCN 1.2 architecture with DX12, because it is based on Mantle. So far, from what we have seen, Fury gets a performance boost with DX12.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Shintai, not exactly, however it is possible. The problem is this: it all comes down to the hardware you have and how well it handles the API you are working with. If you have extremely capable hardware for the API, you don't have to fiddle with it. If your hardware is rubbish for that API, you have to fiddle with it, and with the game itself.

I'm guessing you know which is true for GCN and which for Maxwell. Maxwell is rubbish for DX12. That's why there will be a lot to do in-game to optimize it for Nvidia hardware.

Compatibility with features does not mean anything in the case of DX12. It's all about how you use them, and that is a matter of hardware.

I think the best answer would be testing the GCN 1.2 architecture with DX12, because it is based on Mantle. So far, from what we have seen, Fury gets a performance boost with DX12.

I don't think you understand the issue. The issue is exactly like BF4/Thief with GCN 1.2. Those games are pre-GCN 1.2, and since the developers didn't make an updated version, performance is subpar, even terrible, on GCN 1.2.

It's got nothing to do with how the hardware handles the API. What matters is whether the DX12/Vulkan code is written to support the hardware. This is the DX12/Vulkan/Mantle/libgcm etc. problem in a nutshell: it essentially requires a static hardware base.

As long as games have a DX11 path, they can at least run. But some developers would like DX12-only games, especially Xbox One developers (and they can, for that matter, drop any support besides GCN 1.1). That is where future GPU uarchs will suffer, and it could possibly break DX12 adoption completely on PC.

As a hypothetical example: who says that AOTS will even support anything newer than GCN 1.2, and maybe Maxwell v2, in DX12, if they ever get that game fixed for it? Pascal, GCN 1.3/2.0 or whatever may have to run it as DX11, either because they can't even run it as DX12, or because DX12 gives subpar performance and issues.
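To make the "written to support the hardware" point concrete, here is a hypothetical sketch (the names and the whitelist are made up, not from any real game): the DX12 path is only enabled for hardware the developers actually validated, and anything newer or unknown falls back to the DX11 path.

[CODE]
// Hypothetical fallback logic; the whitelist contents and names are invented.
#include <cstdint>
#include <unordered_set>

struct GpuId { uint32_t vendor; uint32_t device; };

// Filled at ship time with the hardware the DX12 path was actually written and
// tested against (say GCN 1.1/1.2 and Maxwell v2). A later uarch won't be listed.
static const std::unordered_set<uint64_t> kValidatedForDx12 = {
    // (uint64_t(vendorId) << 32) | deviceId entries go here
};

enum class Renderer { Dx12, Dx11 };

Renderer ChooseRenderer(const GpuId& gpu)
{
    const uint64_t key = (uint64_t(gpu.vendor) << 32) | gpu.device;
    // Unknown or newer hardware drops to DX11, where the driver does the heavy lifting.
    return kValidatedForDx12.count(key) ? Renderer::Dx12 : Renderer::Dx11;
}
[/CODE]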
 

parvadomus

Senior member
Dec 11, 2012
685
14
81
I don't think you understand the issue. The issue is exactly like BF4/Thief with GCN 1.2. Those games are pre-GCN 1.2, and since the developers didn't make an updated version, performance is subpar, even terrible, on GCN 1.2.

Is it a game issue, or just that AMD did not tune Mantle for GCN 1.2? I doubt the developers need to tune games for every arch at that level; it would be dumb and impossible to maintain.
I know there are some optimizations at the shader level, or optimized shaders for different architectures, but I think this time it goes much deeper than that.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Is it a game issue, or just that AMD did not tune Mantle for GCN 1.2? I doubt the developers need to tune games for every arch at that level; it would be dumb and impossible to maintain.
I know there are some optimizations at the shader level, or optimized shaders for different architectures, but I think this time it goes much deeper than that.

It's a game issue.

Well, this is what you get with "close to metal" coding and giving developers full control.
 

Sabrewings

Golden Member
Jun 27, 2015
1,942
35
51
It's a game issue.

Well, this is what you get with "close to metal" coding and giving developers full control.

Yep, I said it a while ago. It's putting more power in developers' hands, but it's also putting a lot more responsibility on them as well. It's not going to be the cakewalk many have made it out to be.

I hope more developers start to use the popular engines like CryEngine, UE, or Unity. The engine devs can keep on top of recompiling for new hardware, while the actual game devs can work on making good games.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
Are you suggesting that AMD and NVIDIA will not force developers to patch the games for the new GPU architectures ??

Also, I don't believe any developer would like their games to underperform on new hardware. They will be eaten alive by gamers, they will get negative comments from the press, they will get a bad reputation, and finally they will lose sales and thus money.

Games in the last 4-5 years have always required new patches for bug and performance fixes after the original release. I'm sure they will get patched for new GPU architectures as well.
 

Snafuh

Member
Mar 16, 2015
115
0
16
It's a game issue.

Well, this is what you get with "close to metal" coding and giving developers full control.

How do you know it's not a driver issue? AMD basically stopped developing and supporting Mantle. Even with the new "low level" APIs there is still a driver layer between the game and the hardware.
 

Sabrewings

Golden Member
Jun 27, 2015
1,942
35
51
Are you suggesting that AMD and NVIDIA will not force developers to patch the games for the new GPU architectures ??

Force how? Roll up in their offices at gunpoint?

Also, I don't believe any developer would like their games to underperform on new hardware. They will be eaten alive by gamers, they will get negative comments from the press, they will get a bad reputation, and finally they will lose sales and thus money.

Once a game starts to fall out of the public eye, they won't care and will be on to their next project.

This is why I suggested they use OTS engines. Let them (the engine developers) worry about updating it. With this kind of effort required, we might see a larger shift toward engine providers and game developers building on those engines. If a game uses an in-house engine, you might see less stellar performance on unsupported architectures versus an OTS engine game. It's also quite expensive to keep sending engineers back to the base code of a game you sold two years ago.

Games in the last 4-5 years have always required new patches for bug and performance fixes after the original release. I'm sure they will get patched for new GPU architectures as well.

There's a huge difference between fixing bugs and reevaluating engine base code for a newly launched architecture.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Are you suggesting that AMD and NVIDIA will not force developers to patch the games for the new GPU architectures ??

Also, I don't believe any developer would like their games to underperform on new hardware. They will be eaten alive by gamers, they will get negative comments from the press, they will get a bad reputation, and finally they will lose sales and thus money.

Games in the last 4-5 years have always required new patches for bug and performance fixes after the original release. I'm sure they will get patched for new GPU architectures as well.

Unless someone pays the developers, they won't. We have already seen it.

It's nothing but a utopian dream to think game developers will put any more money and effort into games that are no longer selling than they have to.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
The need to continually update games will probably reinforce the trend of 2+ years of $15-20 DLC on each major release. That seems to be the most obvious way of delivering updates while getting a return on the time it takes to make them. Bad news for people hoping to see the end of the questionable-value DLC trend. But DLC wasn't going away anyway.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
How do you know it's not a driver issue? AMD basically stopped developing and supporting Mantle. Even with the new "low level" APIs there is still a driver layer between the game and the hardware.

GCN 1.2 doesn't have issues in Mantle games that came out after GCN 1.2 was released.

But it shouldn't really be a surprise to anyone. This is how low-level access works.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The need to continually update games will probably reinforce the trend of 2+ years of $15-20 DLC on each major release. That seems to be the most obvious way of delivering updates while getting a return on the time it takes to make them. Bad news for people hoping to see the end of the questionable-value DLC trend. But DLC wasn't going away anyway.

I don't think we'll see DLC with such updates. I just think we'll see no updates, unless it's some kind of extremely popular game people continue to play in droves, like Dota, CS, TF etc., or something like MMOs.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Are you suggesting that AMD and NVIDIA will not force developers to patch the games for the new GPU architectures ??

How can they force devs? Right now, if a game is broken on some new hardware (not just performance, but visual errors and crashes too), they work around it with driver patches. It's very much in the GPU maker's interest to do this, as they want to sell those new cards. With DX12's direct-to-the-metal approach this is much harder, and we might need the dev's help.

However, the dev has nearly no incentive for many games: the game is sold, and they've moved on to the next game. The dev, or at least the dev team that made the game, might no longer exist.

Mantle showed exactly how this is going to work, with one of the biggest devs (who were very pro-Mantle) and one of the biggest games (BF4), where you'd expect that if they were ever going to make the effort, they would. They never did anything for new cards; you just had to fall back to DX11.

What I personally expect will happen is that they will code to the metal for PS4 and XB1 (as they always did for every PlayStation and Xbox) but won't bother for PC (DX12 doesn't force you to do this low-level coding). Nvidia/AMD will sponsor some titles, and for those they will put in faster to-the-metal code for some architecture, since it's effectively paid for by the GPU maker. By this I mean they'll make it run fast for the current range of cards they are selling; for the opposing GPU maker's cards, and any cards released after this, it won't apply and there will be some simpler fallback (i.e. exactly like what happened with Mantle).

I also expect that, as Nvidia is both richer and better at influencing devs, this will mostly be Nvidia with their MetalWorks(tm) or whatever they want to call it. Whichever camp is not supported (mostly AMD) will rage at the game maker and demand their game not be included in GPU testing. It'll be like GameWorks magnified. They'll provide a low-level interface, so for example Intel/AMD will use some standard DrawObject() call, but if you are running Nvidia you'll use the special MetalWorks::DrawObject() that is 50% faster. Nvidia will provide/maintain this in a way AMD can't hope to compete with (just like GameWorks).
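A tiny sketch of that hypothetical split (MetalWorks and DrawObject() are the made-up names from the paragraph above, not a real library): the engine routes draws through the vendor fast path when it exists and through its own generic path otherwise.

[CODE]
// Hypothetical illustration only: "MetalWorks" and DrawObject() are stand-ins
// from the post above, not a real vendor API.
struct Mesh {};

namespace engine     { void DrawObject(const Mesh&) { /* generic DX12 path, runs everywhere */ } }
namespace MetalWorks { void DrawObject(const Mesh&) { /* vendor-tuned path, Nvidia-only */ } }

void SubmitDraw(const Mesh& mesh, bool nvidiaFastPathAvailable)
{
    if (nvidiaFastPathAvailable)
        MetalWorks::DrawObject(mesh);   // the claimed ~50% faster special path
    else
        engine::DrawObject(mesh);       // Intel/AMD and anything unsupported
}
[/CODE]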
 