Game "Whatever" launches it gets money hatted by Nvidia and they choose NOT to use AC. Game is pretty much a glorified DX12 game without AC.
I would imagine most early DX12 games will be like that, whether sponsored by NV or not, because taking advantage of AC requires a complete rethinking of the game's engine. Chances are the DX12 games that do utilize AC will be designed from the ground up around this hardware feature (i.e., most likely games that struggled on PS4/XB1, where the developer had to squeeze every feature of the GCN architecture to hit acceptable performance on those consoles). Then, when such a console game is ported to the PC, the underlying engine already takes advantage of the ACEs in PS4/XB1. That's basically the only way AMD can bank on widespread use of ACEs in DX12 games without having to outright bribe developers to do it. The reason programmers/PC developers may want to use ACEs is because it makes sense (i.e., if you get 20-30% more performance simply by coding the game differently, then it's free performance). But if, for their particular game, they are budget-constrained or aren't working with a modern engine designed to take full advantage of DX12+ACEs, they won't use the feature, since it would cost them too much time and money.
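To make the "coding the game differently" part concrete, here's a minimal sketch (my own illustration, not from any shipping engine; only device and queue creation are shown) of the D3D12 mechanism that async compute builds on: the engine creates a separate COMPUTE command queue alongside the usual DIRECT (graphics) queue, and work submitted to the compute queue can be scheduled alongside graphics on hardware whose front-end supports it (e.g. GCN's ACEs).

```cpp
// Hedged sketch: create a graphics queue and a separate compute queue on a D3D12 device.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available");
        return 1;
    }

    // Graphics (DIRECT) queue: where normal rendering work is submitted.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate COMPUTE queue: work submitted here may overlap with graphics
    // on GPUs whose schedulers support concurrent execution (e.g. GCN's ACEs).
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> compQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

    std::puts("Created graphics + compute queues");
    return 0;
}
```

The hard part for developers isn't this boilerplate; it's restructuring the frame so there is actually independent compute work (lighting, post-processing, physics, etc.) ready to run while the graphics queue is busy.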
Since games take 2-3 years (or more) to make, chances are most "DX12" games launching in 2016 started their design around the launch of the PS4 (or even before that), and it's unlikely that developers were thinking that far ahead with regard to AC. Games going into design now and moving forward, however, are likely to be coded to take advantage of AC if developers feel that AC will become a major feature of all future GPU architectures in 2018 and beyond (i.e., games starting development in Fall 2015 are likely to launch in 2017-2018, so developers will try to anticipate these trends).
If AMD works directly with developers to help them re-code their DX12 game engines to use AC, then I can see this feature making it into some Gaming Evolved titles like Deus Ex: Mankind Divided or Rise of the Tomb Raider. Even then, it's no guarantee that developers will spend resources on performance optimizations that call for advanced features like ACEs; many developers couldn't care less about optimization when they have deadlines and shareholder obligations to ship the game in a certain quarter -- we've already seen this with many horribly optimized and unfinished games rushed out the door, such as Batman: Arkham Knight, AC Unity and Watch Dogs.
OTOH, we probably can't expect Gears of War Ultimate DX12 to use AC widely unless the XB1 version already did, or unless AMD works directly with MS to encourage them to use the feature to showcase DX12 and its maximum performance advantage over DX11. In that case, it absolutely makes sense for MS to spend the extra $$$ because DX12+ACEs performance would pummel DX11 performance in GoWU. Will MS spend the $ though? We don't know.
AMD suffers up to 30% (that's just using the number given, could be less, could be more) right off the bat because of this.
1. We can't accurately predict the specific magnitude of the impact of AC across many games from just one benchmark. AC could be a 5% benefit in some future DX12 game or 10%, or 35%.
2. Ashes uses DX12 + AC, but DX12 itself provides major draw call benefits to AMD's graphics. Therefore, the move to DX12 should in theory provide AMD's GCN some benefit even if the DX12 game doesn't use any AC, because AMD has a massive draw call bottleneck in their DX11 drivers.
http://www.eurogamer.net/articles/digitalfoundry-2015-why-directx-12-is-a-gamechanger
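For reference, the draw call argument above is mostly about submission overhead: in DX11 the driver funnels draws through a single immediate context, while DX12 lets the engine record command lists on several worker threads and submit them in one cheap batch. A rough, hypothetical sketch of that pattern (allocator-per-thread; the lists are left empty where a real engine would record its draws):

```cpp
// Hedged sketch of multithreaded D3D12 command list recording (illustrative only).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    D3D12_COMMAND_QUEUE_DESC qDesc = {};
    qDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    const int kThreads = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    std::vector<std::thread> workers;

    // One allocator + command list per worker thread.
    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
    }

    // Each worker records its own slice of the frame in parallel.
    for (int i = 0; i < kThreads; ++i) {
        workers.emplace_back([&, i] {
            // ... set pipeline state / root signature and issue draw calls here ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // A single submission for everything the workers recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```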
Basically, Nvidia can repeat the DX11 generation by securing just a handful of large titles. If the performance delta is that wide, it hurts AMD in the end. And I'm sure Nvidia adding some of their own sauce on top wouldn't help either.
Deus Ex: MD's release date is February 23, 2016.
Rise of the Tomb Raider's release date is "Early 2016".
By the time we start seeing DX12 games trickle out, we should see Pascal in Q2-Q4 2016 and AMD's Arctic Islands too. What matters more is whether Pascal has AC.
It feels like AMD is at the mercy of devs utilizing AC. If they don't (for whatever reason) it's gonna suck for AMD.
What makes you think AMD focused on the ACEs & command processor underlying GCN thinking that most games would take advantage of these features? That's NOT the reason these features are in GCN.
In Eric Demers' 1+ hour presentation on GCN, he even mentioned that VLIW is perfectly fine for graphics workloads. The reason AMD focused on compute was not games but making their graphics cards able to perform other tasks more efficiently, turning them into more general-purpose products/devices. For that reason, the ACEs/command processor on one hand and the shaders/TMUs/ROPs/memory bandwidth needed for raw graphics horsepower on the other are two distinct & separate strategies pursued by AMD. AMD wanted to make a card that's good at graphics + compute/other things. It just happens that, because the ACE/command processor design provides so much more compute horsepower and flexibility, if you do end up using that feature for graphics, it's simply a bonus.
It's amazing that after nearly 4 years, people still don't understand the fundamental reasons for AMD's GCN redesign over VLIW. It wasn't graphics; it was always about general-purpose processing & compute (i.e., let's make a product that can also work for financial analysis, geology/weather modelling, natural disasters, etc.):
"Designed to push not only the boundaries of DirectX® 11 gaming, the GCN Architecture is also AMD's first design specifically engineered for general computing. Key industry standards, such as OpenCL™, DirectCompute and C++ AMP recently have made GPUs accessible to programmers. The challenge going forward is creating seamless heterogeneous computing solutions for mainstream applications. This entails enhancing performance and power efficiency, but also programmability and flexibility. Mainstream applications demand industry standards that are adapted to the modern ecosystem with both CPUs and GPUs and a wide range of form factors from tablets to supercomputers. AMD’s Graphics Core Next (GCN) represents a fundamental shift for GPU hardware and is the architecture for future programmable and heterogeneous systems."
https://www.amd.com/Documents/GCN_Architecture_whitepaper.pdf
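For context on what "general computing" means in practice there, here's a toy example of the kind of non-graphics workload those standards (OpenCL etc.) target, a small kernel scaling a buffer on the GPU. It's purely illustrative; the kernel name, sizes and lack of error checking are my own shortcuts, not anything from the whitepaper.

```cpp
// Minimal OpenCL 1.2 host program (C API from C++): scale a float buffer on the GPU.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc =
    "__kernel void scale(__global float* v, float s) {"
    "  size_t i = get_global_id(0);"
    "  v[i] *= s;"
    "}";

int main() {
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

    std::vector<float> data(1024, 1.0f);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                data.size() * sizeof(float), data.data(), nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", nullptr);

    float s = 2.0f;
    size_t n = data.size();
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(k, 1, sizeof(float), &s);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), data.data(), 0, nullptr, nullptr);

    std::printf("data[0] = %f\n", data[0]);  // expect 2.0
    return 0;
}
```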
And if one designs a unified architecture that's flexible enough to be good at both graphics and compute, and is scalable, it allows AMD to evolve this one architecture over time, because they knew they didn't have the resources to match the 2-year new-GPU-architecture cadence that NV uses. It all makes sense if you pay attention to AMD's financial position and what they were trying to accomplish. As with many things in life, if you are a generalist (general-purpose architecture), you also risk not being the best at any one particular thing. It's a risk AMD had to take since they can't do new GPU architectures every 2 years.
It doesn't get much clearer than that - AMD never designed GCN around ACEs/command processors specifically or mainly for graphics workloads. Their goal was to design the most powerful general-purpose processing architecture that is scalable long-term and can handle many more tasks efficiently, with graphics just being a subset of those tasks. This was even covered in AT's original GCN architecture article. If AMD wanted to focus solely on graphics, they could have just made a scalar architecture for graphics with a focus on perf/watt, and kept improving TMUs, the shader array, memory bandwidth and geometry engines. That's exactly what NV has done with Kepler and Maxwell, and it paid off in many ways.
So no, AMD isn't somehow screwed if games don't use the ACEs, because they will still be focusing on shaders, textures, memory bandwidth, perf/watt and IPC improvements with the next-gen 16nm HBM2 node shrink. Why? Because that part is the backbone of graphics performance. If developers start using the ACEs, it's simply a bonus for GCN that has always been there since December 2011 but went unused for 4 years. It's not as if AMD has been sitting there all this time wondering why no one is taking advantage of the ACEs on their December 2011 HD 7970; AMD knows that's not how game development works.
Again, for the vast majority of Maxwell and GCN 1.0-1.2 users, this likely won't even matter unless we start to see games using ACEs extensively in early 2016. We have to wait and see. Where AC seems more important is for future generations of cards released in 2016-2019. It would benefit all PC gamers if AMD/NV went all in on this feature, given there is more free performance to be had for graphics. If PS5/XB2's GPUs also have strong ACEs in 2019-2020, that would also be very good, because who doesn't want more free performance from hardware features that already exist?
Since AMD already has ACEs in all of its major graphics cards going back to the HD 7000 series, they should be focused on perf/watt, moving to 8-16GB of HBM2, and increasing TMUs, SPs, ROPs and memory bandwidth to 1TB/sec+. It's NV that should be paying attention to ACEs, not AMD, because AMD already has them in its design. AMD needs to focus on its weaknesses, such as rasterization/polygon throughput, texture and fill-rate bottlenecks, and geometry performance.