I thought the following info on Nvidia Pascal was rather interesting:
http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/
Based on that (and other info I have read about Pascal), NVLink would be a means of letting the CPU and GPU access each other's memory without PCIe 3.0 being a bottleneck.
This, combined with the unified memory feature of Pascal, sounds similar to how an AMD APU works (rough sketch of the unified memory side below).
Since I don't expect Intel to add NVLink to their future CPUs, I'm wondering what kind of timeline folks expect for Intel's answer to this technology?
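For anyone who hasn't played with it, here is a minimal sketch of what unified memory already looks like from the programmer's side using CUDA's managed memory API (cudaMallocManaged, available since CUDA 6); the kernel and sizes are just illustrative. As I understand it, Pascal and NVLink change how fast and how transparently the pages move, not this basic programming model.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Simple kernel: each thread increments one element in place.
__global__ void increment(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main()
{
    const int n = 1 << 20;   // 1M ints, size picked arbitrarily
    int *data = nullptr;

    // A single allocation visible to both CPU and GPU; the driver migrates
    // pages between system and GPU memory instead of explicit cudaMemcpy calls.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;       // CPU writes directly

    increment<<<(n + 255) / 256, 256>>>(data, n);  // GPU uses the same pointer
    cudaDeviceSynchronize();                       // finish before the CPU reads again

    printf("data[0]=%d data[n-1]=%d\n", data[0], data[n - 1]);
    cudaFree(data);
    return 0;
}

Today that migration runs over PCIe; the pitch in the article, as I read it, is that on Pascal the same model runs over NVLink at several times the bandwidth.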
Quoting from the article:
Outpacing PCI Express
Today a typical system has one or more GPUs connected to a CPU using PCI Express. Even at the fastest PCIe 3.0 speeds (8 Giga-transfers per second per lane) and with the widest supported links (16 lanes) the bandwidth provided over this link pales in comparison to the bandwidth available between the CPU and its system memory. In a multi-GPU system, the problem is compounded if a PCIe switch is used. With a switch, the limited PCIe bandwidth to the CPU memory is shared between the GPUs. The resource contention gets even worse when peer-to-peer GPU traffic is factored in.
NVLink addresses this problem by providing a more energy-efficient, high-bandwidth path between the GPU and the CPU at data rates 5 to 12 times that of the current PCIe Gen3. NVLink will provide between 80 and 200 GB/s of bandwidth, allowing the GPU full-bandwidth access to the CPU’s memory system.
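To put my own back-of-the-envelope numbers on that comparison (assuming PCIe 3.0's 128b/130b encoding and ignoring protocol overhead):

PCIe 3.0 x16, one direction: 8 GT/s x 16 lanes x (128/130) / 8 bits per byte ≈ 15.75 GB/s
NVLink at the quoted 80 GB/s: 80 / 15.75 ≈ 5x
NVLink at the quoted 200 GB/s: 200 / 15.75 ≈ 12.7x

which lines up with the "5 to 12 times" figure above.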