When will Intel develop a higher speed interconnect between CPU and GPU?

cbn

Lifer
Mar 27, 2009
12,968
221
106
I thought the following info on Nvidia Pascal was rather interesting:

http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/

Outpacing PCI Express

Today a typical system has one or more GPUs connected to a CPU using PCI Express. Even at the fastest PCIe 3.0 speeds (8 Giga-transfers per second per lane) and with the widest supported links (16 lanes) the bandwidth provided over this link pales in comparison to the bandwidth available between the CPU and its system memory. In a multi-GPU system, the problem is compounded if a PCIe switch is used. With a switch, the limited PCIe bandwidth to the CPU memory is shared between the GPUs. The resource contention gets even worse when peer-to-peer GPU traffic is factored in.

NVLink addresses this problem by providing a more energy-efficient, high-bandwidth path between the GPU and the CPU at data rates 5 to 12 times that of the current PCIe Gen3. NVLink will provide between 80 and 200 GB/s of bandwidth, allowing the GPU full-bandwidth access to the CPU’s memory system.
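For a rough sense of the gap being described, here is the back-of-the-envelope arithmetic behind those numbers (the 128b/130b factor is PCIe 3.0's line coding; the quad-channel DDR4-2133 figure is just an illustrative assumption for CPU memory bandwidth, not something from the article):

```cuda
// Back-of-the-envelope bandwidth comparison (my numbers, not NVIDIA's).
#include <cstdio>

int main() {
    const double gt_per_s  = 8e9;            // PCIe 3.0 transfer rate per lane
    const double lanes     = 16.0;           // widest commonly supported link
    const double encoding  = 128.0 / 130.0;  // 128b/130b line-coding overhead
    const double pcie3_x16 = gt_per_s * lanes * encoding / 8.0 / 1e9; // GB/s, per direction

    // Illustrative assumption: quad-channel DDR4-2133 system memory
    const double ddr4_quad = 4 * 2133e6 * 8.0 / 1e9; // GB/s

    printf("PCIe 3.0 x16 : ~%.1f GB/s per direction\n", pcie3_x16); // ~15.8
    printf("DDR4-2133 x4 : ~%.1f GB/s\n", ddr4_quad);               // ~68.3
    printf("NVLink claim : 80-200 GB/s (per the NVIDIA post)\n");
    return 0;
}
```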

Based on that (and other info I have read about Pascal), NVLink would be a means of allowing the CPU and GPU to access each other's memory without PCIe 3.0 being a bottleneck.

This, combined with the unified memory feature of Pascal, sounds similar to how an AMD APU works.
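For reference, CUDA already exposes a unified-memory programming model on current hardware via cudaMallocManaged (added in CUDA 6); Pascal and NVLink are pitched as making that shared address space fast enough to be practical. A minimal sketch of the idea (the kernel and array size are just illustrative):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: the GPU increments every element of an array it shares with the CPU.
__global__ void incr(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;

    // One allocation visible to both CPU and GPU; the runtime migrates pages on demand.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = float(i);  // CPU writes

    incr<<<(n + 255) / 256, 256>>>(data, n);         // GPU reads/writes the same pointer
    cudaDeviceSynchronize();                         // wait before the CPU touches it again

    printf("data[42] = %f\n", data[42]);             // CPU reads the GPU's result: 43.0
    cudaFree(data);
    return 0;
}
```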

Since I don't expect Intel to add NVLink to their future CPUs, I'm wondering what kind of timeline folks expect for Intel's answer to this technology?
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,024
136
Intel's answer to this technology is to get rid of dGPUs altogether. That's their endgame.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
PCIe 4.0 with 64 GB/s (2x 32 GB/s) is coming to Skylake-E.

Even PCIe 3.0 is far from being a bottleneck for GPUs.

As for NVLink, it's more of a QPI/HT/Omni-Path-style interface for GPUs.

[Attached image: NVLink configurability diagram (nvlink_configurability.png)]
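For what it's worth, the GPU-to-GPU traffic such a link would carry is what CUDA currently routes over PCIe via peer-to-peer access. A rough sketch of what that looks like on today's hardware (error handling omitted; assumes two P2P-capable GPUs in the box):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: direct GPU<->GPU copy using CUDA peer-to-peer access. Today this
// traffic goes over PCIe (possibly through a switch); NVLink is pitched as a
// wider, QPI/HT-style pipe for exactly this kind of transfer.
int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);  // can GPU 0 address GPU 1's memory?
    if (!canAccess) { printf("No P2P between GPU 0 and GPU 1\n"); return 0; }

    const size_t bytes = 256 << 20;             // 256 MB test buffer (arbitrary)
    float *src, *dst;

    cudaSetDevice(1); cudaMalloc(&src, bytes);  // buffer on GPU 1
    cudaSetDevice(0); cudaMalloc(&dst, bytes);  // buffer on GPU 0
    cudaDeviceEnablePeerAccess(1, 0);           // let GPU 0 access GPU 1 directly

    // Copy GPU 1 -> GPU 0 without bouncing through host memory.
    cudaMemcpyPeer(dst, 0, src, 1, bytes);
    cudaDeviceSynchronize();

    printf("Copied %zu MB GPU1 -> GPU0 peer-to-peer\n", bytes >> 20);
    cudaFree(dst); cudaSetDevice(1); cudaFree(src);
    return 0;
}
```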
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Intel's answer to this technology is to get rid of dGPUs altogether. That's their endgame.

Yes, I do expect that to happen.

Maybe the entire processor ends up being x86, with some type of adaptive core scheme where certain CPU cores take on a more GPU-like quality while others focus on single-threaded performance, essentially producing a custom CPU-to-GPU ratio for every task.

This would fix one complaint I have about AMD's current APU design, which makes do with a fixed CPU-to-GPU ratio.

However, it might be that having the entire processor be x86 is less efficient for repetitive tasks where the proper CPU-to-GPU ratio can be predicted ahead of time. The question in those cases then becomes "APU or CPU + dGPU?" And if CPU + dGPU, does Intel develop some kind of NVLink scheme?
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
If CPU + dGPU, does Intel develop some kind of NVLink scheme?
As others have already pointed out, they already have: QPI.

Between QPI and PCIe, that is all the chip-to-chip communication Intel plans on having. QPI for CPU to other CPU-like devices, and PCIe for peripherals. As for video cards, unless they're Intel they won't be using QPI any time soon.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Yes, I do expect that to happen.

Maybe the entire processor ends up being x86, with some type of adaptive core scheme where certain CPU cores take on a more GPU-like quality while others focus on single-threaded performance, essentially producing a custom CPU-to-GPU ratio for every task.

This would be interesting: have the "integrated GPU" be a bunch of small x86 cores. That, along with Xeon Phi, would get rid of GPGPU entirely.
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,024
136
Not sure how an x86 GPU would work. Theoretically they could have a hardware decoder/translator to convert x86 instructions into something the GPU could understand/use, which would replace software frameworks like HSA.
 

jpiniero

Lifer
Oct 1, 2010
16,830
7,279
136
This would be interesting: have the "integrated GPU" be a bunch of small x86 cores. That, along with Xeon Phi, would get rid of GPGPU entirely.

I imagine it would be more like a cluster of vector processors and not a full x86 core. It'd be like if they separated out the EUs from the rest of the GPU portion.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
Not sure how an x86 GPU would work. Theoretically they could have a hardware decoder/translator to convert x86 instructions into something the GPU could understand/use, which would replace software frameworks like HSA.

It would work like anything else: programs access the GPU via the driver. Take a look at the current Xeon Phi. It can run x86 programs directly, or it can be accessed via libraries, similar to GPU access. There's nothing technically stopping Intel from turning it into an x86 GPU.
 

SarahKerrigan

Senior member
Oct 12, 2014
735
2,036
136
It would work like anything else: programs access the GPU via the driver. Take a look at the current Xeon Phi. It can run x86 programs directly, or it can be accessed via libraries, similar to GPU access. There's nothing technically stopping Intel from turning it into an x86 GPU.

That's actually its origin - Xeon Phi is a direct descendant of the "Larrabee" x86-GPU project. It even has fused-off texture units on the chip.
 

NTMBK

Lifer
Nov 14, 2011
10,452
5,839
136
That's actually its origin - Xeon Phi is a direct descendant of the "Larrabee" x86-GPU project. It even has fused-off texture units on the chip.

Not any more. The old 32nm parts did, but they were only sampled, never fully released. The 22nm parts don't have the graphics hardware.
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,024
136
It would work like anything else: programs access the GPU via the driver. Take a look at the current Xeon Phi. It can run x86 programs directly, or it can be accessed via libraries, similar to GPU access. There's nothing technically stopping Intel from turning it into an x86 GPU.

Yes, but running a driver would sort of eliminate the benefit of having x86 cores take over the role of the GPU. Part of the beauty of Phi is that it just runs x86 code, period. Intel can't eliminate dGPUs or iGPUs that way since, to date, the raw processing power of GPUs is still greater than even the best Xeon Phi in those narrow applications for which one requires a GPU.

I'm fairly certain that we'll continue to see the development of HD graphics, and that Intel will try to use those iGPUs to marginalize dGPU manufacturers wherever possible.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I'm fairly certain that we'll continue to see the development of HD graphics, and that Intel will try to use those iGPUs to marginalize dGPU manufacturers wherever possible.

HD Graphics will continue, but I think eventually its growth (i.e., the increasing proportion of the die it occupies) will slow down dramatically, because power reductions from node shrinks are getting harder to come by. To keep TDP the same for a given die size on the new node (which now packs roughly twice the transistors), the power reduction would need to be ~50%, but it is usually around 30%, and TSMC 28nm to 20nm was only ~20%.

Instead, I think we will see low-power parts that used to sit on the motherboard get integrated into the mainstream die, with the IC designer then spending the remaining headroom on clock-speed increases for the CPU and GPU portions.

Now, regarding the E5 Xeon, I don't think we will ever see HD Graphics get integrated into that die. For those multi-socket Xeon processors, I'm thinking some kind of adaptive x86 core with both CPU-like and GPU-like qualities will come into being, and that core design will then get shared with the mainstream. The question then is how that x86 core design would affect the mainstream chip's iGPU focus. Maybe Intel can actually reduce the amount of HD Graphics and put a greater focus on x86 for mainstream?
 

videogames101

Diamond Member
Aug 24, 2005
6,783
27
91
Intel Omni-Path ?

Omni-Path is an optical interconnect between CPUs to form a cluster, I believe. It's supposed to be built into "Knight's Whatever Version Is Next", as well as future Xeons. It's not a CPU -> GPU link.

Someone correct me if I'm wrong though.
 

meloz

Senior member
Jul 8, 2008
320
0
76
Intel's answer to this technology is to get rid of dGPUs altogether. That's their endgame.

True, but they are moving so slowly.
dGPUs are much more powerful than iGPUs for now. Also, there is the thermal and die-size issue: if an iGPU is to be as powerful as current dGPUs, the die would have to be huge, and the combined CPU+iGPU thermal budget would have to be >250 watts.
 

Rakehellion

Lifer
Jan 15, 2013
12,181
35
91
I thought the following info on Nvidia Pascal was rather interesting:

http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/



Based on that (and other info I have read about Pascal), NVLink would be a means of allowing the CPU and GPU to access each other's memory without PCIe 3.0 being a bottleneck.

This, combined with the unified memory feature of Pascal, sounds similar to how an AMD APU works.

Since I don't expect Intel to add NVLink to their future CPUs, I'm wondering what kind of timeline folks expect for Intel's answer to this technology?

Technologies like DirectX 12 are making bandwidth less of a bottleneck.
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,024
136
True, but they are moving so slowly.
dGPUs are much more powerful than iGPUs for now. Also, there is the thermal and die-size issue: if an iGPU is to be as powerful as current dGPUs, the die would have to be huge, and the combined CPU+iGPU thermal budget would have to be >250 watts.

Think of it this way: Intel probably realizes that they can keep die-shrinking and reducing power for their iGPUs at least down to 7nm, and probably beyond. It might take a while, but it's going to get there eventually.

At some point, iGPUs are going to serve the needs of enough Intel users that 50% or more of end-users will consider their iGPUs "good enough", even for many games. dGPUs get pushed into a niche, making it difficult (if not impossible) to continue their development without a secondary market to prop them up.

Nvidia and AMD use "professional" buyers and GPGPU customers to prop up dGPU development already. So Intel attacks that market with Xeon Phi to peel off many of the GPGPU guys, leaving the pro buyers who might need/want the oomph from a pro card. Intel tries to peel away some of those guys with Iris Pro (albeit not that many, but anything they can get furthers their endgame).

All they have to do is damage enough of AMD's and Nvidia's dGPU marketshare to the point that they can comfortably EoL PCIe support on many of their platforms without instigating a buyer revolt. Or, they EoL full-size PCIe slots and include the smaller slots for controller cards and what have you.

After that, the market for dGPUs would dry up except for niche buyers that are not buying gamer cards . . . the people for whom Phi could (theoretically) never work but that need more than they're going to get from Iris Pro and the like. People doing 3D rendering on pro cards, or what have you. Who knows, maybe rendering will work out okay on Phi someday too, but I won't hold my breath waiting for the market to make that kind of migration anytime soon.

And that's their endgame. No more (or significantly less) PCIe, no more dGPUs for anything but some pro customers, and iGPUs everywhere. It wouldn't matter to Intel that their iGPUs can't replace consumer cards in terms of raw power. All they have to do is wreck the dGPU market and everything falls into place.

They might not succeed, but they're gonna try.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
At some point, iGPUs are going to serve the needs of enough Intel users that 50% or more of end-users will consider their iGPUs "good enough", even for many games. dGPUs get pushed into a niche, making it difficult (if not impossible) to continue their development without a secondary market to prop them up.

Again, this has already happened. Something like >90% of PCs have shipped with integrated graphics for 10+ years... :cool:
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,024
136
Yeah they're shipping with them. That is not really the point. Most people who try to do something graphically intensive (games, oftentimes) find those integrated solutions to be inadequate. There is still a large enough group of Intel users out there that use dGPUs - even budget products costing $100 or less - for Intel to continue to bring pressure to the market with superior integrated products.

What will be the bellwether for when that point finally arrives? Maybe it will be when someone finally admits that it is no longer cost-effective to use a small-iGPU Intel CPU + dGPU versus a similar Iris Pro product with no dGPU, at least for low-to-midrange gaming. When that happens, the "budget" dGPU market dies, and the dominoes start to fall.
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Yeah they're shipping with them. That is not really the point. Most people who try to do something graphically intensive (games, oftentimes) find those integrated solutions to be inadequate. There is still a large enough group of Intel users out there that use dGPUs - even budget products costing $100 or less - for Intel to continue to bring pressure to the market with superior integrated products.

On the contrary, in my experience most people are perfectly happy with something like an HD 4000. I even know a few who game on the things. They don't view them as "inadequate", not even for gaming.

Why? Because the game runs on the thing, and they simply do not know what they're missing. At least until I show them... :D

...and then they baulk at spending a 1000DKK ($166) on a graphics card... :rolleyes:

(~$166 is about what a 750/260X is going for here currently)

What will be the bellwether for when that point finally arrives? Maybe it will be when someone finally admits that it is no longer cost-effective to use a small-iGPU Intel CPU + dGPU versus a similar Iris Pro product with no dGPU, at least for low-to-midrange gaming. When that happens, the "budget" dGPU market dies, and the dominoes start to fall.

Again, it's already happened. Discrete GPUs have been a niche for a long time (since the early 2000s), and people have been predicting their end for just as long.
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,024
136
On the contrary, in my experience most people are perfectly happy with something like an HD 4000. I even know a few who game on the things. They don't view them as "inadequate", not even for gaming.

Again, it's already happened. Discrete GPUs have been a niche for a long time (since the early 2000s), and people have been predicting their end for just as long.

There aren't enough of those people yet, and the dGPU market is still big enough for Nvidia and AMD to continue marketing consumer cards in a wide range of prices. Intel's goal is to continue to marginalize more and more people until there isn't enough money in dGPU sales for anyone to continue making them for "non-professional" use.
 

MagnusTheBrewer

IN MEMORIAM
Jun 19, 2004
24,122
1,594
126
When will Intel develop a higher speed interconnect between CPU and GPU?
When there is a widespread need for one. What Nvidia doesn't say in the link you posted is that most users, in most cases, don't saturate the existing PCIe 3.0 x16 link. You can raise the speed limit on the highway to 200 mph, but very few would approach that speed.
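That claim is easy enough to sanity-check: time a large pinned-memory host-to-device copy and compare it with the ~15.75 GB/s a PCIe 3.0 x16 link can theoretically deliver per direction. A rough sketch (the 512 MB buffer size is arbitrary):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Quick-and-dirty measure of how much of the PCIe link a transfer actually uses:
// time one large pinned host->device copy and report the achieved bandwidth.
int main() {
    const size_t bytes = 512 << 20;     // 512 MB (arbitrary test size)
    float *h, *d;
    cudaMallocHost(&h, bytes);          // pinned host memory for full-speed DMA
    cudaMalloc(&d, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host->device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaFree(d); cudaFreeHost(h);
    return 0;
}
```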