Let's not pretend to know as much as dedicated engineers at AMD. It was obviously their decision to go with a trade-off design, maybe it was quicker, cheaper to implement given the non-change of using Tonga's front-end.
Who knows, maybe this entire series is just a stop-gap, never intended to exist at all given the rush to HBM1 but they got caught with their pants down by Maxwell 2 on 28nm (rather than just another Kepler refresh to last to 16nm).
@Head1985
If they beefed up everything they would run out of die space, or the die would have grown beyond what manufacturing can handle and driven up TDP. The design is a trade-off; it's got a lot of potential with the 8 ACE engines that isn't utilized much in DX11 due to the non-asynchronous pipeline.
It scales well at 4K and should be great for DX12. At lower resolutions under DX11, keeping the 4K shaders efficiently occupied will be hard with the same front-end designed for Hawaii, and it shows in the poor improvement at 1080p vs. the 290X/390X.
AMD's GCN uarch can be considered "forward/future-proof" due to its design for asynchronous workloads, which sadly don't happen in DX11, so they made Mantle, and it eventually lives on in Vulkan/DX12. So we're seeing a uarch that is literally ahead of its time, designed for a more advanced API. Good decision or bad? I don't know, it's a tough call, because they will struggle more (be less efficient) at DX11 versus NV, and that's where the majority of gaming still is, likely for the next year or two before Win10 becomes "popular". Given history and current events, you could say it was a bad call. They should have designed hardware more efficient for DX11, then released a new uarch designed for Mantle/Vulkan/DX12 once those APIs went mainstream. But obviously that's a risk decision with its own pros and cons, and I'd imagine AMD's financials would not allow running multiple uarch designs in parallel.
There is a parallel to draw here with their FX CPU series, which went with more cores before the software was able to utilize them all. But luckily for them, DX12 will come a lot sooner than asking devs to make their Windows software truly multi-threaded.
Just to add: if you listen to the devs featured on AMD's E3 show, they showcased using the asynchronous compute engines to add effects and physics that are, in their words, basically "free" in terms of performance. It's a big deal because that's what GCN was designed for, but it hasn't been utilized.
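The "free" part can be shown with a toy scheduling model (all numbers are made up for illustration, not real GPU timings): if graphics work leaves the shader array partially idle each frame, an async compute queue can fill those gaps, so extra compute barely lengthens the frame.

```python
# Toy model of async compute, with hypothetical numbers.
# A frame's graphics work holds the shader array for 10 ms but only
# at 70% utilization; we want to add 3 ms worth of compute (physics/effects).

graphics_ms = 10.0
graphics_utilization = 0.7
compute_ms = 3.0

# Serial submission (DX11-style pipeline): compute waits for graphics.
serial_frame_ms = graphics_ms + compute_ms

# Async submission (DX12/Vulkan-style queues): compute soaks up the
# idle 30% of the array while graphics is still running.
idle_capacity_ms = graphics_ms * (1.0 - graphics_utilization)
overflow_ms = max(0.0, compute_ms - idle_capacity_ms)
async_frame_ms = graphics_ms + overflow_ms

print(serial_frame_ms)  # 13.0 ms per frame
print(async_frame_ms)   # 10.0 ms per frame: the compute came "free"
```

In this toy case the 3 ms of compute fits entirely into the idle shader capacity, which is the effect the devs were describing.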
Because it will (hopefully) trigger a price war and push down GTX 980 prices.
Just 2'ish weeks left now until release.
For those of us living in the EU, GPU prices here are bonkers. The 980 Ti costs the equivalent of 900 dollars.
I sold my two 290s in Crossfire - which worked great but having a single GPU is a blessing, too, since I don't have a private sauna in my living room anymore - and I am now trigger-happy on the buy-button. I've bought and cancelled the order twice now.
Using my trusty old 560 Ti is as fun as it sounds. July 14th can't come around fast enough.
Couldn't have said it better. This is why I think even the 7970's 2-ACE design is up for some renewed ass-kicking under DX12, even after almost four years, at least from the lower-overhead benefit of the API (though it doesn't support FL12.0). Hawaii gets most of the improvement with its 8 ACEs, and Fiji will be able to keep that huge shader array fed much better than it can now under DX11.
The problem is, as you've said, we're still in the DX11 era and this means trouble for AMD. On top of that DX12 games are going to get heavier on the GPU thanks to these new possibilities, but still, there should be an improvement over DX11 for AMD's hardware.
Time will tell.
The people who keep banging the drum of "AMD has more driver CPU overhead" should be the first ones to acknowledge that DX12 would therefore give a greater FPS uplift on AMD cards than on NVIDIA's on that point alone. But of course you don't see that acknowledged in reality.
In any case, it'll be a while before devs can release DX12 games much less get a lot of good experience with it.
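The overhead argument above is just arithmetic; with hypothetical per-frame costs (these numbers are invented for illustration), the vendor whose driver burns more CPU time per frame gains more FPS when a thinner API removes that cost.

```python
# Hypothetical per-frame CPU costs in a CPU-bound scenario:
# total frame time = game logic + driver overhead.

def fps(game_ms, driver_ms):
    """Frames per second when the CPU is the bottleneck."""
    return 1000.0 / (game_ms + driver_ms)

game_ms = 8.0
fps_heavy_driver = fps(game_ms, 4.0)  # vendor with more DX11 overhead
fps_light_driver = fps(game_ms, 2.0)  # vendor with less DX11 overhead
fps_dx12 = fps(game_ms, 1.0)          # thin DX12 driver for both

uplift_heavy = fps_dx12 / fps_heavy_driver - 1.0  # larger relative gain
uplift_light = fps_dx12 / fps_light_driver - 1.0  # smaller relative gain
```

With these invented numbers the heavy-overhead card gains about 33% while the light-overhead card gains about 11%, which is the asymmetry the post is pointing at.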
Fury? We're recycling names again. Shame.
