Do you really think AMD has that kind of supply chain clout?
With 25% of the discrete GPU market, this seems unlikely to me.
Hopefully the tech will be huge for AMD.
They've successfully moved Microsoft and the rest of the industry in the direction they want with APIs for their future product lines. They've successfully enlisted the assistance of one of the world's preeminent memory companies to help design the RAM they so desperately need. They found a way to squeeze in FreeSync, which I'm fairly certain was a complete afterthought. They've just got to get to that point in the future where it all comes together, and capitalize on it. Hopefully for them, in the next 12 months or so they'll have it all working together.
It will all come together when their APU is free from system memory bandwidth limits and when they improve CPU core IPC.
That's Zen + GCN + HBM APU.
It has the capacity to kill low-end and mid-range dGPUs altogether. With DX12's multi-GPU-aware API, imagine a 200W Zen/GCN APU with 8GB of HBM2; then you add a 200W-class high-end GCN HBM2 dGPU on top of that, and the system will use the iGPU to increase your performance in games. It's something even Intel has NOTHING to counter, because a Zen APU + GCN dGPU combo would slaughter an Intel + any-dGPU system.
I'm very optimistic about their future direction specifically due to HBM and DX12.
That's the way I see it. Keep in mind, though, that iGPU+dGPU could be something everyone uses with the new APIs. It's typical of AMD to allow others to use tech they've developed. I just wish people would realize the importance of hardware compatibility between brands and appreciate what AMD does instead of chastising them.
I don't think it would be simple for developers to implement DX12 multi-GPU across such different architectures; making Intel's iGPU cooperate with GCN or Maxwell/Pascal would not end well.
This is where I see AMD's advantage: they can have the same or a similar uarch in their APU and dGPU. With DX12, the iGPU would no longer sit idle and useless like in our current Intel i5/i7 + dGPU rigs.
DirectX abstracts away GPUs as it is; if DX12 multi-GPU does take off, then I expect it will work with mismatched ones just fine. (That's a big "if", of course...) I just hope someone builds GPU physics into their engine and runs it on the integrated graphics.
We already know you can match an Intel iGPU with AMD/Nvidia dGPUs (Ashes of the Singularity supports that), so other combinations should be possible as well unless either AMD or Nvidia does something to block it.
But if Intel's iGPU isn't FL12 compliant, it won't help where games use those features.
Even if they implement DX12 compute-based physics, Intel's iGPU can't run it due to its lack of asynchronous compute functions.
Unless Intel comes up with a new GPU uarch, they are not going to be relevant for cross-multi-GPU DX12.
@raghu78 I know some people here predict a looming death for the dGPU, but it won't happen. All that will happen is that the enthusiast market will have to pay much more for their dGPUs to make them worthwhile. It also won't ever die off if AMD has their way with APU + dGPU (or even multiple dGPUs), where it all adds up to greater performance.
I also believe software is going to gradually change and adapt to exploit the benefits of heterogeneous computing architectures with unified memory, where the CPU and GPU have low-latency, high-bandwidth communication and can share pointers. We are already on the first generation of unified-memory consoles. The next-gen consoles will be even better, with 16-32 GB of unified HBM. With AMD's Zen-based APUs also supporting a unified memory architecture with HBM, and Intel most likely developing their own, the vast majority of gaming devices will be based on unified memory architectures. So software developers are eventually going to have to improve their software design to get the best performance out of such architectures.
Ahh, but if they buy up all of it and use it, then others can't. For example, the guy who made Softsoap bought multiple years' production of the right pumps for the soap dispensers so the product couldn't be duplicated until it established itself.
Except the radiator, per the Fury manual, is recommended to be mounted higher than the GPU.
Guru3D has a teaser video up today on the Fury X:
http://www.guru3d.com/
The video doesn't show any performance but does give a good look at the card installed on their bench.
If AMD can buy the entire HBM2 supply, then yields must be horrible. It's feasible, but unlikely, that Nvidia could stick to GDDR5 for 2016 and go to HMC in 2017 (maybe late 2016).
That is a bit OT, though, so to get back on topic...
Does anyone really think there will be 8GB HBM1 variants of Fury? I just don't see it myself.