When did AMD do any deluding? It was a benchmark released by a game dev that blew this open. Other than a tweet about it, I haven't seen AMD do anything.
This isn't just about the AotS benchmark; it's about the concept of asynchronous compute itself, and how it fits into the DX12 picture at large.
As I said before, asynchronous compute isn't a major feature of DX12, and NVidia had no obligation to support it in hardware. But is it a useful feature? Absolutely, and I can see it gaining a more prominent role in future games.
But the usefulness likely varies from one architecture to another. Just because GCN gains a significant performance increase doesn't mean that NVidia or Intel architectures will.
The whole point of asynchronous compute is to keep the GPU as occupied as possible. And when you look at the benchmarks for Fiji, it's obvious that it has problems with scaling and utilization.
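For concreteness, here's roughly what that looks like at the API level. This is a minimal D3D12 sketch (error handling and actual command lists omitted, and the queue/fence names are just illustrative): graphics and compute work go to separate queues, and hardware that can execute them concurrently, like GCN with its ACEs, can use the compute queue to fill shader units the graphics queue leaves idle. Hardware without that capability can simply serialize the two queues in the driver, which is exactly why the benefit varies by architecture.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The "direct" queue accepts graphics, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. Work submitted here is independent
    // of the graphics queue unless we synchronize the two explicitly,
    // so a capable GPU is free to run both streams at once.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Cross-queue dependencies are expressed with fences: the graphics
    // queue can be told to wait until the compute queue signals.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // computeQueue->ExecuteCommandLists(...)  // e.g. a lighting pass
    // computeQueue->Signal(fence.Get(), 1);
    // gfxQueue->Wait(fence.Get(), 1);         // gate only the dependent draws
    // gfxQueue->ExecuteCommandLists(...)      // independent work can overlap

    return 0;
}
```

Note that nothing in the API promises concurrent execution; it only makes the independence of the two streams visible to the driver, which is what makes "support" a hardware question rather than a checkbox.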
As Ryan said at the end of his Fury review:
Bringing this video card review to a close, we'll start off with how the R9 Fury compares to its bigger sibling, the R9 Fury X. Although looking at the bare specifications of the two cards would suggest they'd be fairly far apart in performance, this is not what we have found. Between 4K and 1440p the R9 Fury's performance deficit is only 7-8%, noticeably less than what we'd expect given the number of disabled CUs.
And here we see only a 20% increase in average performance between a 290X and a Fury X at 1440p, despite a significant 45% increase in shaders and texture units and the benefit of liquid cooling.
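To put rough numbers on that: with perfect scaling, 45% more units would buy 45% more performance, so a 20% gain means only 1.20/1.45 ≈ 83% of Hawaii's per-shader throughput is being realized, and less than half of the added hardware is actually showing up in frame rates. That's exactly the kind of idle capacity async compute is meant to soak up.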
