K, so obvious question is obvious....
Infinity Fabric link, CPU <-> GPU...
How exactly are they going to physically pull this off? Mobo level? A cable? What about the generic PCIe interface? Augmented? Some type of short-range connection? Something over PCIe 4.0?
What is the physical connection going to look like here?
For reference, here's what I'm looking to see from AMD (Nvidia's NVLink tech):
Have they detailed this yet? Is this going to flow down to consumer Zen 2, or be cut out and delayed? It seems like all the existing mobos would be incompatible if this is board-level?
I thought it used the PCIe wiring, but with upgraded signaling hardware it could exceed the PCIe spec? That would also be because it doesn't have to support all the same features PCIe does: when both ends have the right hardware, the software would put the link into that mode and unlock the higher performance. PCIe is capable of more; it's just that because it supports various other features (and targets a wide variety of systems), it doesn't push things as far as it could. In other words, they can use the same hardware but push it higher and achieve more, they just want things certified for it. It's kinda like how Ethernet and HDMI work, where they keep the same pin layout but require stricter cable/wire specs, as well as the proper signaling capability at both ends, to guarantee that performance level. I believe the shorter distance between the GPU slots is why it can run higher than the PCIe CPU-GPU links (although the talk about things being on a ring is interesting). PCIe 5.0 should offer around 128 GB/s at x16, counting both directions.
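The bandwidth numbers above are easy to sanity-check from the published per-lane transfer rates. A quick sketch (these are the generic PCIe spec figures, not anything AMD has confirmed about Infinity Fabric; note the commonly quoted "128 GB/s" for PCIe 5.0 is the raw bidirectional rate, and 128b/130b encoding shaves a little off):

```python
# Back-of-envelope PCIe bandwidth math from published spec numbers.

def pcie_bandwidth_gbs(transfer_rate_gt, lanes, encoding_efficiency):
    """Per-direction bandwidth in GB/s: GT/s per lane * lanes * encoding efficiency / 8 bits."""
    return transfer_rate_gt * lanes * encoding_efficiency / 8

# PCIe 3.0 / 4.0 / 5.0 run at 8 / 16 / 32 GT/s per lane with 128b/130b encoding.
eff = 128 / 130
for gen, rate in [("3.0", 8), ("4.0", 16), ("5.0", 32)]:
    per_dir = pcie_bandwidth_gbs(rate, 16, eff)
    print(f"PCIe {gen} x16: ~{per_dir:.1f} GB/s per direction, ~{2 * per_dir:.0f} GB/s bidirectional")
```

So a PCIe 5.0 x16 slot lands at roughly 63 GB/s each way, ~126 GB/s total after encoding overhead, which is where that ~128 GB/s figure comes from.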
I could be incredibly mistaken on this though, and I'd expect it won't be activated on consumer boards, as I'm not sure it has much benefit right now. I don't believe games are limited even by the PCIe 3.0 spec; most of the time there's very little if any improvement going from x8 to x16 in games. If they do try to make multiple GPUs function as a single monolithic one, it probably would help though. Will be interesting to see if Threadripper supports it, as those users should be able to make use of it for their tasks.
The other thing to take into account is latency, which I'm not sure is a huge issue, but I'd be curious what the latency actually is. I think the ring aspect might be there to partly manage that: in a system of 8 GPUs, the ring would try to have nearby GPUs communicate and be aware of how the system is set up to maximize performance. Or it might pair GPU 1 with 5, 2 with 6, 3 with 7, and 4 with 8, so that the latency between all the pairs is the same.
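That pairing idea can be sketched with a toy hop-count model: 8 GPUs on a ring, with hop count standing in for latency. This is purely illustrative and assumes shortest-path routing on the ring; it's my own model, not AMD's actual topology.

```python
# Toy model: hop count between GPUs on an 8-node ring as a proxy for latency.

def ring_hops(a, b, n=8):
    """Shortest hop count between nodes a and b (0-indexed) on an n-node ring."""
    d = abs(a - b) % n
    return min(d, n - d)

# Pair each GPU with the one directly opposite it (1&5, 2&6, 3&7, 4&8):
pairs = [(i, i + 4) for i in range(1, 5)]
print([ring_hops(a - 1, b - 1) for a, b in pairs])  # → [4, 4, 4, 4]
```

Every opposite pair is exactly 4 hops apart, so the latency between all the pairs is identical, which is the appeal of that arrangement. Pairing neighbors instead (1&2, 3&4, ...) would give 1-hop latency per pair but leave the pairs unevenly placed for any traffic that crosses pairs.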
AMD did explicitly say it doesn't require bridges or switches.