[WCCFtech] AMD and NVIDIA DX12 big picture mode


Sabrewings

Golden Member
Jun 27, 2015
Star Citizen may not be an exclusive GE game, but it will definitely use any GCN features that are available in DX12.

The person you quoted wasn't contesting which games will use DX12. He was contesting titles that are GE.

Yes, SC will probably use whatever it can, but that doesn't mean they'll put NV cards at a disadvantage either. It goes against CR's design philosophy. If they can get acceptable performance out of NV cards without AC, then they just won't use it on them.
 

AtenRa

Lifer
Feb 2, 2009
The person you quoted wasn't contesting which games will use DX12. He was contesting titles that are GE.

Yes, SC will probably use whatever it can, but that doesn't mean they'll put NV cards at a disadvantage either. It goes against CR's design philosophy. If they can get acceptable performance out of NV cards without AC, then they just won't use it on them.

Nobody said GE games put NV cards at a disadvantage. On the contrary, EVERY GE game out there is very well optimized for NV, unlike GW games.

Also, the context of the discussion is whether games will use Async Compute in DX12, which will benefit AMD GCN cards. Since SC will not be a GW game, and since they said they will use whatever benefits both AMD and NV, I'm sure Async Compute is a given for SC, and that means added performance for AMD GCN cards.

Every game using Async Compute, be it GE, GW, or neutral, will see performance gains on AMD GCN cards.
 

Sabrewings

Golden Member
Jun 27, 2015
Nobody said GE games put NV cards at a disadvantage. On the contrary, EVERY GE game out there is very well optimized for NV, unlike GW games.

You're right. Nobody (including myself) said that. Nor did I bring up GW.

Also, the context of the discussion is whether games will use Async Compute in DX12, which will benefit AMD GCN cards.

Regardless of thread context, you quoted someone disputing GE games and lumped SC in there. My entire point here is that SC is not a GE game and will not be. CR has said he will not partner with one brand and not the other. Paid advertising is different since it isn't affecting his code in his game.

Since SC will not be a GW game, and since they said they will use whatever benefits both AMD and NV, I'm sure Async Compute is a given for SC, and that means added performance for AMD GCN cards.

Again, not disputing anything. As I said, if they get gains out of AC for NV, they will use it. If they don't, they will disable it for NV. I expect them to follow that line of thinking for every advanced feature on both AMD and NV.

This is still all early speculation, since the jury is still out on Maxwell's ultimate AC capabilities. We have an alpha, AMD-sponsored game and some tests designed by folks at B3D, who turn right around and tell everyone not to read too much into them.
 

3DVagabond

Lifer
Aug 10, 2009
The person you quoted wasn't contesting which games will use DX12. He was contesting titles that are GE.

Yes, SC will probably use whatever it can, but that doesn't mean they'll put NV cards at a disadvantage either. It goes against CR's design philosophy. If they can get acceptable performance out of NV cards without AC, then they just won't use it on them.

GE doesn't put nVidia cards at a disadvantage. That's difficult to do when they post the source code for everyone to see. Not only does nVidia have access but so does everyone else to see exactly what they are doing. Devs are also free to change it however they want to as long as what they do doesn't hurt AMD's performance.

Every game using Async Compute, be it GE, GW, or neutral, will see performance gains on AMD GCN cards.

Unless they aren't allowed to implement it with the standard DX12 routines. If the path for AMD to use isn't there, then what happens? Remember that nVidia wanted it deactivated in the bench because they couldn't run it, but Baker refused. That's what started the war of words, with nVidia then taking the stance that the code was buggy rather than it being an issue on their end. What do you think would have happened if it was a GW or TWIMTBP title?
 

railven

Diamond Member
Mar 25, 2010
GE doesn't put nVidia cards at a disadvantage. That's difficult to do when they post the source code for everyone to see. Not only does nVidia have access but so does everyone else to see exactly what they are doing. Devs are also free to change it however they want to as long as what they do doesn't hurt AMD's performance.



Unless they aren't allowed to implement it with the standard DX12 routines. If the path for AMD to use isn't there, then what happens? Remember that nVidia wanted it deactivated in the bench because they couldn't run it, but Baker refused. That's what started the war of words, with nVidia then taking the stance that the code was buggy rather than it being an issue on their end. What do you think would have happened if it was a GW or TWIMTBP title?

From what I gathered from the Ashes thread, if they don't use AC, AMD performance takes a huge hit. Though I don't think I saw AMD DX12 performance without AC (if that is even possible).

Going from the little info made available, NV can ride the coattails of DX11 just a bit longer. AMD NEEDS DX12, specifically AC, to be used; otherwise it's gonna suck.

I hope all AMD users upgrade to Windows 10 too; it seems DX11 is not gonna get any love from AMD anymore.
 

3DVagabond

Lifer
Aug 10, 2009
From what I gathered from the Ashes thread, if they don't use AC, AMD performance takes a huge hit. Though I don't think I saw AMD DX12 performance without AC (if that is even possible).

Going from the little info made available, NV can ride the coattails of DX11 just a bit longer. AMD NEEDS DX12, specifically AC, to be used; otherwise it's gonna suck.

I hope all AMD users upgrade to Windows 10 too; it seems DX11 is not gonna get any love from AMD anymore.

I didn't get "AMD's performance sucks without AC" from anywhere??? I thought I had read up to ~20% improvement from it. It's bad in DX11, but I would imagine that they haven't done any optimizing there either. AMD does use dedicated hardware for a lot of things that nVidia simply uses their CUs for (tessellation, as another example). It makes sense that AMD performance will improve by utilizing this hardware.
 

AtenRa

Lifer
Feb 2, 2009
Unless they aren't allowed to implement it with the standard DX12 routines. If the path for AMD to use isn't there, then what happens? Remember that nVidia wanted it deactivated in the bench because they couldn't run it, but Baker refused. That's what started the war of words, with nVidia then taking the stance that the code was buggy rather than it being an issue on their end. What do you think would have happened if it was a GW or TWIMTBP title?

Yea, I mean if the game will allow the Async Compute to work for the Desktop version.
 

Azix

Golden Member
Apr 18, 2014
From what I gathered from the Ashes thread, if they don't use AC, AMD performance takes a huge hit. Though I don't think I saw AMD DX12 performance without AC (if that is even possible).

Going from the little info made available, NV can ride the coattails of DX11 just a bit longer. AMD NEEDS DX12, specifically AC, to be used; otherwise it's gonna suck.

I hope all AMD users upgrade to Windows 10 too; it seems DX11 is not gonna get any love from AMD anymore.

Asynchronous compute is not the lion's share of AMD's performance gain in Ashes. They said it was noticeable (noticeable can be <10 fps), but they also said they only kind of used it. It sounded like they just dumped a few already-existing compute tasks on there and that was that. Most of the gain was just from API improvements over DX11 and using the hardware better.

The below is what was said.

Wow, there are lots of posts here, so I'll only respond to the last one. The interest in this subject is higher than we thought. The primary evolution of the benchmark is for our own internal testing, so it's pretty important that it be representative of the gameplay. To keep things clean, I'm not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

Certainly I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD. ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with NVidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with Gameworks), you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal being an engine with many titles should give it particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that's a completely different topic. ;) )

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only vendor-specific code is for Nvidia, where we had to shut down async compute. By vendor-specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute, so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia falls into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor-specific path, as it's responding to capabilities the driver reports.

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is still far, far better on Nvidia, and we haven't even tuned it as much as DX11. The other surprise is the min frame times, with the 290X beating out the 980 Ti (as reported on Ars Technica). Unlike DX11, minimum frame times are mostly an application-controlled feature, so I was expecting them to be close to identical. This would appear to be GPU-side variance, rather than software variance. We'll have to dig into this one.

I suspect that one thing helping AMD on GPU performance is that D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic, where we just took a few compute tasks we were already doing and made them asynchronous; Ashes really isn't a poster child for advanced GCN features.

Our use of Async Compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% more GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and -optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait and see. Has anyone profiled Ark yet?

In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia even though the game had a marketing deal with them. They never once complained about it, and it certainly would have been within their rights to do so. (Complain, anyway; we would have still done it. ;) )

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.
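
To make the "vendor-specific path" part concrete, here is a minimal sketch of the kind of Vendor ID check Kollock describes (an illustration under an assumed policy, not Oxide's actual code): the adapter is identified through DXGI, and the async compute path is simply skipped on NVIDIA hardware.

Code:
#include <dxgi.h>

// PCI vendor IDs as commonly reported through DXGI.
constexpr UINT kVendorIdNvidia = 0x10DE;
constexpr UINT kVendorIdAmd    = 0x1002; // listed for reference

// Hypothetical helper: decide whether to take the async compute path based
// on the adapter's vendor ID, mirroring the policy described above where the
// path is shut off on NVIDIA hardware even if the driver reports the feature.
bool ShouldUseAsyncCompute(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    if (FAILED(adapter->GetDesc1(&desc)))
        return false; // be conservative if the adapter can't be identified

    return desc.VendorId != kVendorIdNvidia;
}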
 

Keysplayr

Elite Member
Jan 16, 2003
Asynchronous compute is not the lion's share of AMD's performance gain in Ashes. They said it was noticeable (noticeable can be <10 fps), but they also said they only kind of used it. It sounded like they just dumped a few already-existing compute tasks on there and that was that. Most of the gain was just from API improvements over DX11 and using the hardware better.

The below is what was said.

source?
 

railven

Diamond Member
Mar 25, 2010
I didn't get "AMD's performance sucks without AC" from anywhere??? I thought I had read up to ~20% improvement from it. It's bad in DX11, but I would imagine that they haven't done any optimizing there either. AMD does use dedicated hardware for a lot of things that nVidia simply uses their CUs for (tessellation, as another example). It makes sense that AMD performance will improve by utilizing this hardware.

Pretty much what I said. From what I've read, if they don't use AC, we're back in the DX11 generation where a big chunk of GCN hardware goes basically unused, which Nvidia can ride the coattails of (see NV's DX11 performance in Ashes as the example), while AMD needs AC used to get the best performance (i.e., I get the feeling AMD is not going to give DX11 much more love, or further optimizations such as the recent multi-threaded driver it sorely needed).

BUT, I base this on the little info I have (i.e., one game, which I'm not even sure is a good litmus test due to AMD's involvement; for all I know, Ashes could be the exception, not the rule).

EDIT:
Asynchronous compute is not the lion's share of AMD's performance gain in Ashes. They said it was noticeable (noticeable can be <10 fps), but they also said they only kind of used it. It sounded like they just dumped a few already-existing compute tasks on there and that was that. Most of the gain was just from API improvements over DX11 and using the hardware better.

The below is what was said.

Your source even supports my thoughts. Perhaps the word "sucks" is too strong for you, but losing up to 30% perf because a dev (for whatever reason) didn't use AC is definitely worthy of the word "sucks."
 

railven

Diamond Member
Mar 25, 2010
Let me try to rephrase my post because I think people are assuming I'm attacking AMD.

Game "Whatever" launches, it gets money-hatted by Nvidia, and they choose NOT to use AC. The game is pretty much a glorified DX12 game without AC.

AMD suffers up to 30% (that's just using the number given, could be less, could be more) right off the bat because of this.

Basically, Nvidia can repeat DX11-generation by securing just a handful of large titles. If the performance delta is that wide, it sort of hurts AMD in the end. And I'm sure Nvidia adding just some of their sauces wouldn't help either.

It feels like AMD is at the mercy of devs utilizing AC. If they don't (for whatever reason) it's gonna suck for AMD.
 

Azix

Golden Member
Apr 18, 2014
His source is AMD-sponsored Oxide ("Kollock"). Who else would it be?

Who put Nvidia code in their game and is currently working with them to fix issues on Nvidia's side.

Your source even supports my thoughts. Perhaps the word "sucks" is too strong for you, but losing up to 30% perf because a dev (for whatever reason) didn't use AC is definitely worthy of the word "sucks."

I thought you were talking about Ashes. Without async used, they'd just hold the same performance as they do now. If that sucks, then Nvidia would be putting out sucky perf as well. We should not forget that the current DX11 performance is competitive.

E.g., Kepler is done in by the most comparable GCN: the 380 beats the 960, the 390 beats the 970, Fury beats the 980. Others show similar performance, better performance, etc. So losing the generational leap provided by async compute just brings it down to Nvidia performance levels.


Heh. Ok. Nevermind Azix.

yeah. My source was the game developer talking about their game.
 

RussianSensation

Elite Member
Sep 5, 2003
Game "Whatever" launches, it gets money-hatted by Nvidia, and they choose NOT to use AC. The game is pretty much a glorified DX12 game without AC.

I would imagine most early DX12 games will be like that, whether sponsored by NV or not, because taking advantage of AC requires a complete rethinking of the game's engine. Chances are the DX12 games that will utilize AC will be designed from the ground up to use this hardware feature (i.e., most likely games that struggled on PS4/XB1, so the developer had to use all the features of the GCN architecture to maximize performance on those consoles). Then, when such a console game is ported to the PC, the underlying game engine already takes advantage of the ACEs in PS4/XB1. That's basically the only way AMD can bank on vast usage of ACEs in DX12 games without having to outright bribe developers to do it. The reason programmers/PC developers may want to use ACEs in games is that it makes sense (i.e., if you get 20-30% more performance by simply coding the game differently, then it's free performance). If, for their game, they are budget-constrained or don't have a modern engine designed to take full advantage of DX12+ACEs, they won't use the feature, since it will cost them too much time and money to utilize.

Since games take 2-3 years to make (or more), chances are most "DX12" games launching in 2016 started their design around the launch of PS4 (or even before that) and it's unlikely that developers were thinking that far ahead wrt to AC. Now, games that are going into design starting now and moving forward are likely to be coded to take advantage of AC if developers feel that AC will become a major feature-set of all future GPU architectures in 2018 and beyond (i.e., games that are being made starting Fall 2015 are likely to launch in 2017-2018 so developers are likely to anticipate these trends).

If AMD works directly with developers to help them re-code their DX12 game engine to use AC, then I can see how this feature might make it into some Gaming Evolved titles like Deus Ex Mankind Divided or Rise of the Tomb Raider. Even then, it's not a guarantee that developers will spend resources on performance optimizations that call for advanced features like ACEs since many developers couldn't care less about optimizations as they have deadlines and shareholder obligations to get the game out in a certain quarter -- we've seen this already with many horribly optimized and unfinished games that were rushed out the door such as Batman AK, AC Unity, Watch Dogs.

OTOH, we probably can't expect Gears of War Ultimate DX12 to use AC widely unless the XB1 version already does, or unless AMD works directly with MS to encourage them to use the feature to showcase DX12 and its max performance advantage over DX11. In that case, it absolutely makes sense for MS to spend the extra $$$, because the performance of DX12+ACEs will pummel DX11 performance in GoWU. Will MS spend the $ though? We don't know.

AMD suffers up to 30% (that's just using the number given, could be less, could be more) right off the bat because of this.

1. We can't accurately predict the specific magnitude of the impact of AC across many games from just one benchmark. AC could be a 5% benefit in some future DX12 game or 10%, or 35%.

2. Ashes uses DX12 + AC but DX12 itself provides major draw call benefits to AMD's graphics. Therefore, the move to DX12 should in theory provide AMD's GCN some benefit, even if the DX12 game doesn't use any AC. That is because AMD has a massive draw call bottleneck in their DX11 drivers.

http://www.eurogamer.net/articles/digitalfoundry-2015-why-directx-12-is-a-gamechanger

Basically, Nvidia can repeat DX11-generation by securing just a handful of large titles. If the performance delta is that wide, it sort of hurts AMD in the end. And I'm sure Nvidia adding just some of their sauces wouldn't help either.

Deus Ex:MD's release date is February 23, 2016.
Rise of the Tomb Raider's release date is "Early 2016"

By the time we start seeing DX12 games trickle, we should see Pascal in Q2-Q4 2016 and AMD's Arctic Islands too. What matters more is if Pascal has AC.

It feels like AMD is at the mercy of devs utilizing AC. If they don't (for whatever reason) it's gonna suck for AMD.

What makes you think AMD focused on the ACEs & the command processor underlying GCN thinking that most games would take advantage of these features? That's NOT the reason behind these features in GCN.

In Eric Demers' 1+ hour presentation on GCN, he even mentioned that VLIW is perfectly fine for graphics workloads. The reason AMD focused on compute is not for games but to have their graphics card be able to perform other tasks more efficiently, to make it into a more general-purpose product/device. For that reason, the ACEs/command processor(s) and the shaders/TMUs/ROPs/memory bandwidth that are needed for graphics horsepower are two distinct & separate strategies pursued by AMD. AMD wanted to make a card that's good for graphics plus compute/other things. It just happens that, because the ACE/command processor design provides so much more compute horsepower and flexibility, if you do use that capability for graphics, it's just a bonus.

It's amazing that after nearly 4 years, people still don't understand the fundamental reasons for AMD's GCN redesign over VLIW. It's not graphics, it was always about General Purpose Processing & Compute (i.e., let's make a product that can work for financial analysis, in the fields of geology/weather, natural disasters, etc.):

"Designed to push not only the boundaries of DirectX® 11 gaming, the GCN Architecture is also AMD's first design specifically engineered for general computing. Key industry standards, such as OpenCL™, DirectCompute and C++ AMP recently have made GPUs accessible to programmers. The challenge going forward is creating seamless heterogeneous computing solutions for mainstream applications. This entails enhancing performance and power efficiency, but also programmability and flexibility. Mainstream applications demand industry standards that are adapted to the modern ecosystem with both CPUs and GPUs and a wide range of form factors from tablets to supercomputers. AMD's Graphics Core Next (GCN) represents a fundamental shift for GPU hardware and is the architecture for future programmable and heterogeneous systems."

https://www.amd.com/Documents/GCN_Architecture_whitepaper.pdf

And if one designs a unified architecture that's flexible enough to be good at graphics and compute, and is scalable, it allows AMD to evolve that architecture, because they knew they didn't have the resources to do the 2-year new-GPU-architecture cadence that NV uses. It all makes sense if you pay attention to AMD's financial position and what they were trying to accomplish. As with many things in life, if you are a generalist (a general-purpose architecture), you also risk not being the best at any one particular thing. It's a risk AMD had to take since they can't do new GPU architectures every 2 years.

It doesn't get much clearer than that - AMD never designed GCN around ACEs/Command Processors specifically/mainly for graphics workloads. Their goal was to design the most powerful general purpose processing architecture that is scalable long-term and can handle many more tasks efficiently, with graphics just being a subset of those tasks. This was even covered in AT's original GCN architecture article. If AMD wanted to focus solely on graphics, they could have just made a scalar architecture for graphics with a focus on perf/watt, and kept improving TMUs, shader array, memory bandwidth, geometry engines. That's exactly what NV has done with Kepler and Maxwell and it paid off in many ways.

So no, AMD isn't screwed somehow if games don't use ACE because they will still be focusing on shaders, textures, memory bandwidth, perf/watt and IPC improvements with the next gen 16nm HBM2 node shrink. Why? Because that part of graphics is the backbone of graphics performance. If developers start using ACE, it's simply a bonus for GCN that has always been there since December 2011 but went unused for 4 years. It's not as if AMD has been sitting there all this time for 4 years and wondering why no one is taking advantage of ACEs on their December 2011 HD7970, because AMD knows that's not how game development works.

Again, for the vast majority of Maxwell and GCN 1.0-1.2 users, this likely won't even matter unless we start to see games using ACEs extensively in early 2016. We have to wait and see. Where ACE seems more important is for future generations of cards released in 2016-2019. It would benefit all PC gamers if AMD/NV went all in on this feature if there is more free performance to be had for graphics. If PS5/XB2's GPUs also have strong ACEs in 2019-2020, that would also be very good because who doesn't want more free performance from hardware features that already exist?

Since AMD already has ACEs in all of its major graphics cards going back to HD7000 series, they should be focused on perf/watt, going 8-16GB HBM2, increasing TMUs, SPs, ROPs, memory bandwidth to 1TB/sec+. It's NV that should be paying attention to ACEs, not AMD because AMD already has it in their design. AMD needs to focus on its weaknesses such as rasterization/polygon throughput, texture and fill-rate bottlenecks and geometry performance.
 

monstercameron

Diamond Member
Feb 12, 2013
Let me try to rephrase my post because I think people are assuming I'm attacking AMD.

Game "Whatever" launches, it gets money-hatted by Nvidia, and they choose NOT to use AC. The game is pretty much a glorified DX12 game without AC.

AMD suffers up to 30% (that's just using the number given, could be less, could be more) right off the bat because of this.

Basically, Nvidia can repeat DX11-generation by securing just a handful of large titles. If the performance delta is that wide, it sort of hurts AMD in the end. And I'm sure Nvidia adding just some of their sauces wouldn't help either.

It feels like AMD is at the mercy of devs utilizing AC. If they don't (for whatever reason) it's gonna suck for AMD.

AC isn't the only way to get max utilization out of GPU hardware. Remember that a big feature of DX12 is command queues/lists. https://msdn.microsoft.com/en-us/library/windows/desktop/dn859354(v=vs.85).aspx
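
To put that in concrete terms, here's a minimal sketch (assuming an already-created ID3D12Device; illustrative, not production code) of the two queue types involved: the regular "direct" graphics queue and the separate compute queue DX12 lets an application create, which is what async compute submissions go through.

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Illustrative helper: create a graphics ("direct") queue and a separate
// compute queue on the same device.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    // Direct queue: accepts draw, compute and copy work.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    // Dedicated compute queue: work submitted here may overlap with the
    // graphics queue if the hardware and driver support it.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}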
 

Carfax83

Diamond Member
Nov 1, 2010
It's NV that should be paying attention to ACEs, not AMD because AMD already has it in their design.

ACEs are not necessary to perform concurrent asynchronous compute. That's just AMD's method, and it likely suits their architecture because GCN has a high amount of idle shaders at any given time.

Doesn't mean it will work for NVidia if they implement something similar. Andrew Lauritzen commented on this in a post on beyond3d forums which I posted a few pages back:

Absolutely, and that's another point that people miss here. GPUs are *heavily* pipe-lined and already run many things at the same time. Every GPU I know of for quite a while can run many simultaneous and unique compute kernels at once. You do not need async compute "queues" to expose that - pipelining + appropriate barrier APIs already do that just fine and without adding heavy weight synchronization primitives that multiple queues typically require. Most DX11 drivers already make use of parallel hardware engines under the hood since they need to track dependencies anyways... in fact it would be sort of surprising if AMD was not taking advantage of "async compute" in DX11 as it is certainly quite possible with the API and extensions that they have.

Yes, the scheduling is non-trivial and not really something an application can do well either, but GCN tends to leave a lot of units idle from what I can tell, and thus it needs this sort of mechanism the most. I fully expect applications to tweak themselves for GCN/consoles and then basically have that all undone by the next architectures from each IHV that have different characteristics. If GCN wasn't in the consoles I wouldn't really expect ISVs to care about this very much. Suffice it to say I'm not convinced that it's a magical panacea of portable performance that has just been hiding and waiting for DX12 to expose it.

Source
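
For context on the "heavy weight synchronization primitives" he mentions: when work on a separate compute queue feeds the graphics queue, the application has to fence the two queues together explicitly. A minimal sketch (illustrative names, command lists assumed already recorded):

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Illustrative: run compute work on its own queue, then make the graphics
// queue wait (on the GPU timeline) for that work before consuming its results.
void SubmitWithDependency(ID3D12Device* device,
                          ID3D12CommandQueue* computeQueue,
                          ID3D12CommandQueue* graphicsQueue,
                          ID3D12CommandList* computeWork,
                          ID3D12CommandList* graphicsWork)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off the compute work and signal the fence when it completes.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence.Get(), 1);

    // The graphics queue stalls here until the fence reaches 1, then
    // executes the dependent graphics work.
    graphicsQueue->Wait(fence.Get(), 1);
    graphicsQueue->ExecuteCommandLists(1, &graphicsWork);
}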
 

3DVagabond

Lifer
Aug 10, 2009
Let me try to rephrase my post because I think people are assuming I'm attacking AMD.

Game "Whatever" launches, it gets money-hatted by Nvidia, and they choose NOT to use AC. The game is pretty much a glorified DX12 game without AC.

AMD suffers up to 30% (that's just using the number given, could be less, could be more) right off the bat because of this.

Basically, Nvidia can repeat DX11-generation by securing just a handful of large titles. If the performance delta is that wide, it sort of hurts AMD in the end. And I'm sure Nvidia adding just some of their sauces wouldn't help either.

It feels like AMD is at the mercy of devs utilizing AC. If they don't (for whatever reason) it's gonna suck for AMD.

DX12 is more than async compute. The biggest thing is that it allows the game to talk more directly with the hardware. It's also truly multi-core, not just multi-threaded. It's not DX11+AC like you are portraying it here.


[Image: DX11_funny.jpg]
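
On the "truly multi-core" part, a minimal sketch (illustrative only, assuming a device and a direct queue already exist) of what DX12 allows that DX11 effectively didn't: each CPU thread records its own command list, and everything is submitted together.

Code:
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Illustrative: record command lists on several threads, then submit them
// in one call on the graphics queue.
void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        // Each worker thread records its own slice of the frame.
        workers.emplace_back([&lists, i] {
            // ... record state setup and draw calls into lists[i] ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Submit all recorded lists at once.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}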
 

railven

Diamond Member
Mar 25, 2010
AC isn't the only way to get max utilization out of GPU hardware. Remember that a big feature of DX12 is command queues/lists. https://msdn.microsoft.com/en-us/library/windows/desktop/dn859354(v=vs.85).aspx

DX12 is more than async compute. The biggest thing is that it allows the game to talk more directly with the hardware. It's also truly multi-core, not just multi-threaded. It's not DX11+AC like you are portraying it here.


[Image: DX11_funny.jpg]

Okay, it seems you guys are just choosing to ignore my posts. So here, I'll let you answer it:

Which of the two would benefit AMD more?

A DX12 game with AC

A DX12 game without AC

Simple enough.
 

antihelten

Golden Member
Feb 2, 2012
Which of the two would benefit AMD more?

A DX12 game with AC

A DX12 game without AC

Simple enough.

It's not really that simple, since it depends entirely upon how much compute the game in question uses.

Secondly, one DX12 game is not necessarily equivalent to another DX12 game, since there are tons of DX12 features that you may or may not be utilizing (besides async compute).
 

railven

Diamond Member
Mar 25, 2010
It's not really that simple, since it depends entirely upon how much compute the game in question uses.

Secondly, one DX12 game is not necessarily equivalent to another DX12 game, since there are tons of DX12 features that you may or may not be utilizing (besides async compute).

So, which would benefit AMD hardware more?

The answer is obvious, and I think at this point everyone realizes what I was trying to say. So, I'll just leave this dead horse at that.
 

3DVagabond

Lifer
Aug 10, 2009
So, which would benefit AMD hardware more?

The answer is obvious, and I think at this point everyone realizes what I was trying to say. So, I'll just leave this dead horse at that.

You switch back and forth between benefiting AMD and hurting AMD. You make it seem like AMD will be hurt more than nVidia without AC, when really AMD will be helped more than nVidia with AC. AMD could very well be equal to or better than nVidia even before AC is used.
 

railven

Diamond Member
Mar 25, 2010
You switch back and forth between benefiting AMD and hurting AMD. You make it seem like AMD will be hurt more than nVidia without AC, when really AMD will be helped more than nVidia with AC. AMD could very well be equal to or better than nVidia even before AC is used.

I switched, where? You still never answered my question.
 