When will Intel develop a higher speed interconnect between CPU and GPU?

Page 2 - AnandTech Forums

dealcorn

Senior member
May 28, 2011
247
4
76
And that's their endgame. No more (or significantly less) PCIe, no more dGPUs for anything but some pro customers, and iGPUs everywhere. It wouldn't matter to Intel that their iGPUs can't replace consumer cards in terms of raw power. All they have to do is wreck the dGPU market and everything falls into place.

One likely evolution in demand merits attention. Gamers appear to be the primary market segment that demands large numbers of PCI-e lanes on the desktop. However, as Intel IGDs improve, that demand gradually narrows to high-end gamers, who also value reasonably high IPC and a reasonable number of cores. Eventually, I expect Intel may reduce the number of PCI-e lanes on high-end desktop SoCs and migrate high-end gamers to the Xeon platform. Reducing the number of desktop PCI-e lanes makes desktop chips cheaper to produce. Xeon will continue to offer strong performance and plenty of PCI-e lanes. Xeon tends to be a higher-margin product, but high-end gamers may still save by not paying for an IGD they do not need. The primary driver of demand for increased IPC is migrating to Xeon customers. Unlike gamers, server customers are always ready to pay for stronger IPC. Conceptually, it makes sense to satisfy both segments with Xeon.
 

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
Does Thunderbolt have higher/wider bandwidth, or whatever, than PCIe? If so, I wonder if there would be some way to have on-motherboard Thunderbolt from the gfx card to the CPU?
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
One likely evolution in demand merits attention. Gamers appear to be the primary market segment that demands large numbers of PCI-e lanes on the desktop. However, as Intel IGDs improve, that demand gradually narrows to high-end gamers, who also value reasonably high IPC and a reasonable number of cores. Eventually, I expect Intel may reduce the number of PCI-e lanes on high-end desktop SoCs and migrate high-end gamers to the Xeon platform. Reducing the number of desktop PCI-e lanes makes desktop chips cheaper to produce. Xeon will continue to offer strong performance and plenty of PCI-e lanes. Xeon tends to be a higher-margin product, but high-end gamers may still save by not paying for an IGD they do not need. The primary driver of demand for increased IPC is migrating to Xeon customers. Unlike gamers, server customers are always ready to pay for stronger IPC. Conceptually, it makes sense to satisfy both segments with Xeon.

But.. all the Intel folks on this board kept assuring us that IGDs (or iGPUs) are not a useless part for enthusiasts, but a new CPU "feature" that comes free of charge, like a new instruction set.. Rofl.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Intel's goal is to continue to marginalize more and more people until there isn't enough money in dGPU sales for anyone to continue making them for "non-professional" use.

While Intel is trying to marginalize the dGPU by adding extra iGPU to its mainstream processors, another competitor could attempt to marginalize Intel's CPUs by not adding bonus amounts of iGPU (i.e., providing more CPU as a proportion of the die compared to Intel*).

So for that reason I don't think we will see dGPU die out the way some people think.

*Think four or more big ARM cores with a small iGPU (and mainstream desktop I/O) at a smaller die size than a 2C/2T Intel Celeron or 2C/2T Pentium.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Does Thunderbolt have higher/wider bandwidth, or whatever, than PCIe? If so, I wonder if there would be some way to have on-motherboard Thunderbolt from the gfx card to the CPU?

Not really, no. Thunderbolt is DisplayPort + PCIe, and Thunderbolt is currently 10Gbit or 20Gbit. PCIe 3.0 x16 is roughly 126Gbit of usable bandwidth.

Thunderbolt looks dead after USB3.1.
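To put numbers on that, here is a back-of-the-envelope sketch; the per-generation line rates and encoding efficiencies are the ones from the PCIe specs, and `pcie_bandwidth_gbps` is just an illustrative helper, not a real API:

```python
# Usable bandwidth of a PCIe link vs. Thunderbolt, in Gbit/s.
def pcie_bandwidth_gbps(gen, lanes):
    """Approximate usable bandwidth for a PCIe link of the given generation."""
    # (line rate in GT/s per lane, encoding efficiency)
    specs = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}
    rate, efficiency = specs[gen]
    return rate * efficiency * lanes

thunderbolt2_gbps = 20  # a Thunderbolt 2 channel
pcie3_x16 = pcie_bandwidth_gbps(3, 16)
print(f"PCIe 3.0 x16: ~{pcie3_x16:.0f} Gbit/s, about {pcie3_x16 / thunderbolt2_gbps:.0f}x Thunderbolt 2")
```

So a graphics link over Thunderbolt would be more than 6x narrower than the x16 slot a card normally sits in.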
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
When there is a widespread need for one. What Nvidia doesn't say in the link you posted is that most users, in most cases, don't saturate the existing PCIe 3.0 x16. You can raise the speed limit on the highway to 200 mph, but very few would approach that speed.

Yep. And when PCIe 3.0 one day becomes the limit, we will have PCIe 4.0 or higher.
 

DrMrLordX

Lifer
Apr 27, 2000
22,937
13,023
136
One likely evolution in demand merits attention. Gamers appear to be the primary market segment that demands large numbers of PCI-e lanes on the desktop. However, as Intel IGD's improve, the demand for large numbers of PCI-e lanes gradually migrates exclusively to high end gamers who also value reasonably high ipc and a reasonable number of cores. Eventually, I expect Intel may reduce the number of PCI-e lanes on high end desktop SoC's and migrate high end gamers to the Xeon platform. Reducing the number of desktop PCI-e lanes makes it cheaper to produce desktop chips. Xeon will continue to offer strong performance and offer many PCI-e lanes. Xeon tends to be a higher profit margin product but high end gamers may still save from no IGD they do not need. The primary driver of demand for increased ipc is migrating to Xeon customers. Unlike gamers, server customers are always ready to pay for stronger ipc. Conceptually, it makes sense to satisfy both segments with Xeon.

HEDT platforms aren't going anywhere, and yes, currently that's where Intel offers large numbers of PCIe lanes. That eventuality sort of plays into the "oh noes everything is going to be BGA" rumor that pops up every now and then. Maybe someday it'll be true: all BGA CPUs for non-HEDT platforms, with no appreciable number of PCIe lanes on those platforms.

While Intel is trying to marginalize the dGPU by adding extra iGPU to its mainstream processors, another competitor could attempt to marginalize Intel's CPUs by not adding bonus amounts of iGPU (i.e., providing more CPU as a proportion of the die compared to Intel*).

. . . no. Just, no.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
One likely evolution in demand merits attention. Gamers appear to be the primary market segment that demands large numbers of PCI-e lanes on the desktop. However, as Intel IGDs improve, that demand gradually narrows to high-end gamers, who also value reasonably high IPC and a reasonable number of cores. Eventually, I expect Intel may reduce the number of PCI-e lanes on high-end desktop SoCs and migrate high-end gamers to the Xeon platform. Reducing the number of desktop PCI-e lanes makes desktop chips cheaper to produce. Xeon will continue to offer strong performance and plenty of PCI-e lanes. Xeon tends to be a higher-margin product, but high-end gamers may still save by not paying for an IGD they do not need. The primary driver of demand for increased IPC is migrating to Xeon customers. Unlike gamers, server customers are always ready to pay for stronger IPC. Conceptually, it makes sense to satisfy both segments with Xeon.


HEDT platforms aren't going anywhere, and yes, currently that's where Intel offers large numbers of PCIe lanes. That eventuality sort of plays into the "oh noes everything is going to be BGA" rumor that pops up every now and then. Maybe someday it'll be true: all BGA CPUs for non-HEDT platforms, with no appreciable number of PCIe lanes on those platforms.

Having just the HEDT platform and some kind of BGA chip with reduced PCIe lanes and I/O for the desktop leaves too much of a gap between them.

And that large gap is something that could be exploited by one of many potential competitors.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
It's quite possible that the desktop will split into HEDT and mobile. Mobile is already eating a huge chunk of it, with NUCs, AIOs, etc. using mobile CPUs on the desktop.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
PCIe 2.0 is somewhat limiting in certain situations.
http://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/15.html

I wonder how fast a GPU needs to be for PCIe 3.0 to be a limit? Certainly much faster than a 980, and probably also faster than a Titan X.

Just look at the timeframe in which PCIe 1.0 became the limit and extrapolate; with PCIe 4.0 coming with Skylake-E, GPUs may need a node they will never reach before they become bottlenecked. PCIe 2.0 goes back to 2007. And PCIe 2.0 to 3.0 is only a 60% increase, while 3.0 to 4.0 is a 100% increase.

That's also why this thread is really trying to fix an issue that isn't there.
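A note on those percentages: 60% is the raw line-rate jump (5 to 8 GT/s). In usable bandwidth, 2.0 to 3.0 is closer to a 2x gain, because 3.0 also replaced 8b/10b encoding with 128b/130b. A quick sketch using the per-generation figures from the PCIe specs:

```python
# Per-lane usable bandwidth per PCIe generation, in Gbit/s.
# Both the line rate (GT/s) and the encoding scheme changed at 3.0.
specs = {
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding: 20% overhead
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b: ~1.5% overhead
    "4.0": (16.0, 128 / 130),
}
usable = {gen: rate * eff for gen, (rate, eff) in specs.items()}

# Raw line rate 2.0 -> 3.0 rises only 60% (5 -> 8 GT/s), but the better
# encoding pushes the usable gain to roughly 2x; 3.0 -> 4.0 is a clean 2x.
print(f'2.0 -> 3.0 line rate: +{(8 / 5 - 1):.0%}, usable: +{usable["3.0"] / usable["2.0"] - 1:.0%}')
print(f'3.0 -> 4.0 usable:    +{usable["4.0"] / usable["3.0"] - 1:.0%}')
```

Either way, the conclusion stands: each new generation moves the ceiling far faster than GPUs approach it.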
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
One likely evolution in demand merits attention. Gamers appear to be the primary market segment that demands large numbers of PCI-e lanes on the desktop. However, as Intel IGDs improve, that demand gradually narrows to high-end gamers, who also value reasonably high IPC and a reasonable number of cores. Eventually, I expect Intel may reduce the number of PCI-e lanes on high-end desktop SoCs and migrate high-end gamers to the Xeon platform. Reducing the number of desktop PCI-e lanes makes desktop chips cheaper to produce. Xeon will continue to offer strong performance and plenty of PCI-e lanes. Xeon tends to be a higher-margin product, but high-end gamers may still save by not paying for an IGD they do not need. The primary driver of demand for increased IPC is migrating to Xeon customers. Unlike gamers, server customers are always ready to pay for stronger IPC. Conceptually, it makes sense to satisfy both segments with Xeon.

Seeing as there is already a fair degree of overlap between Mobile/Desktop and Desktop/HEDT (Xeon), this could be the outcome. High-end "Mobile" SKUs would serve the bottom of the desktop market, while HEDT would perhaps move down a notch to serve the high-end desktop. There are already a couple of 4-core LGA 2011-v3 Xeon SKUs in the same pricing segment as the 4790K (E5-1620 v3 and 1630 v3).

I'm ready to move up to the HEDT platform with Skylake-E. The desktop variety of Skylake just has too many compromises for me, and too much focus on the IGP.
 

Dave2150

Senior member
Jan 20, 2015
639
178
116
Seeing as there is already a fair degree of overlap between Mobile/Desktop and Desktop/HEDT (Xeon), this could be the outcome. High-end "Mobile" SKUs would serve the bottom of the desktop market, while HEDT would perhaps move down a notch to serve the high-end desktop. There are already a couple of 4-core LGA 2011-v3 Xeon SKUs in the same pricing segment as the 4790K (E5-1620 v3 and 1630 v3).

I'm ready to move up to the HEDT platform with Skylake-E. The desktop variety of Skylake just has too many compromises for me, and too much focus on the IGP.

That's a long wait until late 2016 or early 2017 for Skylake-E.

What hardware are you making do with in the meantime?
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
That's a long wait until late 2016 or early 2017 for Skylake-E.

What hardware are you making do with in the meantime?

A non-K 3770 @ 4.3GHz in an Asus P8Z77-V, 16GB of 1866MHz memory, and a 970. The only current itch is a PCIe SSD, so I'm working on scratching that. With such an upgrade I'm good for a few more years; it's still got plenty of life left.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Seeing as there is already a fair degree of overlap between Mobile/Desktop and Desktop/HEDT (Xeon), this could be the outcome. High-end "Mobile" SKUs would serve the bottom of the desktop market, while HEDT would perhaps move down a notch to serve the high-end desktop. There are already a couple of 4-core LGA 2011-v3 Xeon SKUs in the same pricing segment as the 4790K (E5-1620 v3 and 1630 v3).

I'm ready to move up to the HEDT platform with Skylake-E. The desktop variety of Skylake just has too many compromises for me, and too much focus on the IGP.

If Intel moved HEDT more into the mainstream, they would have to offer some kind of feature-reduced chipset to get the price of the motherboards down.

Another option for Intel would be to offer a consumer version of Xeon-D with higher clocks and some features disabled:

http://www.anandtech.com/show/9070/intel-xeon-d-launched-14nm-broadwell-soc-for-enterprise

[Image: ASRock Rack Xeon-D motherboard (ASRock-Rack-D15400D4X)]
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
If Intel moved HEDT more into the mainstream, they would have to offer some kind of feature-reduced chipset to get the price of the motherboards down.

They could very well do just that. Every Intel PCH (even HEDT) uses DMI, so there is little stopping Intel from coupling an H81 analogue to a HEDT CPU. Other than marketing, of course... :D

Then there is the other part of the market. If we assume that the entry/mainstream desktop ends up as souped-up SoCs, then the PCH will already be integrated. The integrated PCH will probably contain enough I/O to keep a "normal" user happy.

Think about it; most users just need 1 or 2 SATA3 ports for their HDD/SSD/ODD. If we assume PCIe SSDs take off, then even that requirement will cease to be important.

Another option for Intel would be to offer a consumer version of Xeon-D with higher clocks and some features disabled:

http://www.anandtech.com/show/9070/intel-xeon-d-launched-14nm-broadwell-soc-for-enterprise

I would think the Xeon-D far too specialized to ever end up in the hands of consumers. But who knows what'll happen... :hmm:
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I would think the Xeon-D far too specialized to ever end up in the hands of consumers. But who knows what'll happen... :hmm:

Why do you think the die is too specialized? (Remember, consumer processors like the i7-5820K and i7-5960X are based on a Xeon E5 die; they just have various features from the Xeon disabled.)

Comparing Xeon-D to the eight core HEDT/Xeon E5:

1. The eight-core Xeon-D has 12 MB of L3 cache, while the eight-core LGA 2011-3 die has 20 MB of L3 cache.
2. Xeon-D has a dual-channel memory controller, while LGA 2011-3 has quad-channel.
3. Xeon-D has 24 PCIe 3.0 lanes, while LGA 2011-3 has 40.
4. Xeon-D has six SATA 6 Gbps ports, while LGA 2011-3 has ten.

So if you want to make a lower-cost platform from a higher-end part, it is probably a better place to start than LGA 2011-3.
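The four points above as a quick side-by-side; these are the figures quoted in this post, not independently verified:

```python
# Spec comparison, eight-core Xeon-D vs. eight-core LGA 2011-3 die,
# using the numbers from the list above.
xeon_d_8c = {"L3 cache (MB)": 12, "memory channels": 2,
             "PCIe 3.0 lanes": 24, "SATA 6Gbps ports": 6}
lga2011_3_8c = {"L3 cache (MB)": 20, "memory channels": 4,
                "PCIe 3.0 lanes": 40, "SATA 6Gbps ports": 10}

for spec, d_val in xeon_d_8c.items():
    e5_val = lga2011_3_8c[spec]
    print(f"{spec:18} Xeon-D: {d_val:>2}  LGA 2011-3: {e5_val:>2}  ({d_val / e5_val:.0%} of E5)")
```

Every interface is cut to roughly half to two-thirds of the E5 part, which is exactly the kind of trimming a cheaper consumer platform would want.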
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
According to S/A, the Xeon-D SoC has a die size of 160mm2:

https://semiaccurate.com/2015/03/09/intel-re-enters-1s-server-market-xeon-d-1500-line/

[Image: Intel Xeon D-1500 block diagram]


Compare that to a die size of 356mm2 for 22nm Haswell-E 8C (which doesn't include the I/O):

http://www.anandtech.com/show/8426/...review-core-i7-5960x-i7-5930k-i7-5820k-tested

(Not sure how big the X99 PCH is, but it is based on 32nm)

And a die size of 82mm2 for 14nm Broadwell 2C GT2 (which also doesn't include PCH)

http://anandtech.com/show/8355/intel-broadwell-architecture-preview/5
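A rough way to sanity-check those die sizes is a sketch that assumes ideal, node-name-literal area scaling, which real processes never quite achieve, so treat the result as a lower bound:

```python
# Die sizes quoted above, in mm^2.
haswell_e_8c_mm2 = 356   # 22nm, LGA 2011-3 (I/O lives in a separate PCH)
xeon_d_mm2 = 160         # 14nm SoC, PCH integrated on die

# If "22nm" and "14nm" scaled literally, area would shrink by (22/14)^2.
ideal_shrink = (22 / 14) ** 2
haswell_e_at_14nm = haswell_e_8c_mm2 / ideal_shrink

print(f"Haswell-E 8C ideally shrunk to 14nm: ~{haswell_e_at_14nm:.0f} mm^2")
# Next to that, Xeon-D's 160 mm^2 with integrated I/O looks plausible,
# given its smaller L3 and narrower memory/PCIe interfaces.
```

In other words, the 160mm2 figure is consistent with an eight-core Broadwell part that traded cache and interface width for an on-die PCH.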
 

Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,695
136
Why do you think the die is too specialized? (Remember, consumer processors like the i7-5820K and i7-5960X are based on a Xeon E5 die; they just have various features from the Xeon disabled.)

First off, it's an SoC, not a traditional CPU. Second, it appears to be targeted at stuff that needs a lot of multi-threaded performance on a tight power budget. Third, it doesn't seem geared for particularly high frequencies. After all, the D1520 runs at 2.0GHz, and the top model D1540 only at 2.2GHz.

Comparing Xeon-D to the eight core HEDT/Xeon E5:

1. The eight-core Xeon-D has 12 MB of L3 cache, while the eight-core LGA 2011-3 die has 20 MB of L3 cache.
2. Xeon-D has a dual-channel memory controller, while LGA 2011-3 has quad-channel.
3. Xeon-D has 24 PCIe 3.0 lanes, while LGA 2011-3 has 40.
4. Xeon-D has six SATA 6 Gbps ports, while LGA 2011-3 has ten.

So if you want to make a lower-cost platform from a higher-end part, it is probably a better place to start than LGA 2011-3.

Intel is quite happy to sell 4-core E5-16xx v3s (quad-channel memory controller, 40 PCIe lanes and all) at about the same price as a 4790K. That's what you get for losing the IGP. They just aren't that well known in the enthusiast community. If I had to build a new system right now, it'd probably be a Xeon E5-1630 v3 in a low-end X99 board. But everyone has different needs, of course.

Again, if we assume they hook such a CPU up to something like an H/Z97 PCH, which there is absolutely no technical reason they couldn't do, you'd get something that doesn't cost much more than what high-end LGA-1150 goes for currently, if there would even be a difference. The only "problem" would then be the quad-channel memory controller, but it doesn't require 4 DIMMs to work; two would work fine on a budget. Check the new ASRock X99 ITX board, for example: only two DIMM slots.

Anyway, this has turned into a really interesting discussion... :D
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Second, it appears to be targeted at stuff that needs a lot of multi-threaded performance on a tight power budget. Third, it doesn't seem geared for particularly high frequencies. After all, the D1520 runs at 2.0GHz, and the top model D1540 only at 2.2GHz.

There would be nothing to prevent Intel from clocking the cores up for a consumer version. (These are Broadwell cores, not Atom-based cores.)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Intel is quite happy to sell 4-core E5-16xx v3s (quad-channel memory controller, 40 PCIe lanes and all) at about the same price as a 4790K. That's what you get for losing the IGP. They just aren't that well known in the enthusiast community. If I had to build a new system right now, it'd probably be a Xeon E5-1630 v3 in a low-end X99 board. But everyone has different needs, of course.

Again, if we assume they hook such a CPU up to something like an H/Z97 PCH, which there is absolutely no technical reason they couldn't do, you'd get something that doesn't cost much more than what high-end LGA-1150 goes for currently, if there would even be a difference. The only "problem" would then be the quad-channel memory controller, but it doesn't require 4 DIMMs to work; two would work fine on a budget. Check the new ASRock X99 ITX board, for example: only two DIMM slots.

If you look at the Xeon-D's integrated I/O, it is very similar to an H97/Z97 (e.g. six SATA 6 Gbps ports).

Basically, a higher-clocked Xeon-D with four cores disabled would be very close to what you are describing above, just with less silicon needing to be fused off.
 

kimmel

Senior member
Mar 28, 2013
248
0
41
There would be nothing to prevent Intel from clocking the cores up for a consumer version. (These are Broadwell cores, not Atom-based cores.)

Why do you assume that the process flavor they are using is the same as the HEDT SKUs' and can actually clock that high?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Why do you assume that the process flavor they are using is the same as the HEDT SKUs' and can actually clock that high?

It is very unlikely Intel would use a special process for Xeon-D.

For example, this 65-watt 12-core Xeon E5 comes from the same die as the higher-clocked E5 Xeons.

And the ULV consumer chips come from the same die as the higher-clocked desktop chips.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Currently, the die variations for LGA 2011-3 (22nm) are 8C, 12C, and 18C.

If 14nm brings the core count of the smallest LGA 2011-3 die up to 12C (and/or beyond), I have to imagine that would be a good time frame for Intel to make some type of 8C consumer chip (either Broadwell- or Skylake-based) in an LGA version of the Xeon-D format.

An example would be an LGA Skylake-D (derived from a Skylake-based Xeon-D), etc.
 