Why hasn't Intel moved the PCH to the leading edge?

Mar 10, 2006
11,715
2,012
126
Here's a question that has bugged me for a while. Why doesn't Intel integrate the PCH of its PC platforms onto the chip directly?

Is this a risk management move? A way to fill old fabs? I can't imagine it's lower cost, especially given how much of a margin wonder Bay Trail-M/D have been (it's a single SoC).

Would love to hear any/all thoughts on this.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
It will happen eventually. BTW, it's already an MCP (multi-chip package), so the difference is pretty small.

From the AT Broadwell preview:

Meanwhile Broadwell-Y’s partner in crime, the on-package PCH, has received its own optimizations to reduce its impact on the SoC’s total power consumption. The PCH itself is not much of a power hog in the first place – it’s still made on Intel’s 32nm process for cost reasons – but with such a strong focus on power consumption every watt ends up counting. As a result the Broadwell PCH-LP has seen optimizations that cut its idle power consumption by 25% and its active power consumption by 20%.

I also remember another article from AT that covers this topic, but couldn't find it.
 

NTMBK

Lifer
Nov 14, 2011
10,232
5,013
136
It's all about fab utilization. Moving the PCH to the newest process increases the number of leading-edge fabs required, while reducing utilization of the N-1 fabs (of which there would be more to fill, due to the previously mentioned effect).
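To make that waterfall concrete, here's a toy back-of-the-envelope model. Every number in it (volumes, die sizes, the on-die PCH footprint) is invented for illustration, not an Intel figure:

```python
import math

# Toy model of fab loading: what happens to wafer demand per node when the
# PCH moves from the N-1 process onto the leading-edge CPU die.

WAFER_DIAMETER_MM = 300
WAFER_AREA_MM2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

def dies_per_wafer(die_area_mm2):
    """Crude gross die count: wafer area / die area (ignores edge losses)."""
    return WAFER_AREA_MM2 / die_area_mm2

UNITS = 1_000_000        # CPUs shipped, one PCH each (assumed volume)
CPU_AREA_MM2 = 100       # leading-edge CPU die (assumed)
PCH_AREA_MM2 = 50        # discrete PCH die on the N-1 node (assumed)
PCH_ON_DIE_MM2 = 15      # PCH footprint if redesigned on the leading edge (assumed)

# Scenario A: discrete PCH built on the N-1 node
leading_a = UNITS / dies_per_wafer(CPU_AREA_MM2)
n1_a = UNITS / dies_per_wafer(PCH_AREA_MM2)

# Scenario B: PCH integrated into the leading-edge CPU die
leading_b = UNITS / dies_per_wafer(CPU_AREA_MM2 + PCH_ON_DIE_MM2)

print(f"A: {leading_a:,.0f} leading-edge wafers + {n1_a:,.0f} N-1 wafers")
print(f"B: {leading_b:,.0f} leading-edge wafers + 0 N-1 wafers")
print(f"Leading-edge demand rises {leading_b / leading_a - 1:.0%}")
```

Even in this crude model, integration inflates leading-edge wafer demand while zeroing out the N-1 loading that used to soak up the old fabs.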
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Exactly. I found the article; it was about Iris Pro graphics, obviously. Here's the excerpt:

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel’s chipsets were always built on a n-1 or n-2 process. If Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect allowed Intel to help get more mileage out of its older fabs, which made the accountants at Intel quite happy as those $2 - $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die areas up once again. This time however, the problem wasn’t as easy to deal with as giving the graphics guys more die area to work with. Looking at demand for Intel chipsets, and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or Intel would have to integrate parts of the chipset into the CPU.

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested
 
Mar 10, 2006
11,715
2,012
126
It's all about fab utilization. Moving the PCH to the newest process increases the number of leading-edge fabs required, while reducing utilization of the N-1 fabs (of which there would be more to fill, due to the previously mentioned effect).

NTMBK,

Thanks for the response. That's what I was thinking, but I also have the following potential counterargument.

If Intel integrated the PCH onto the die of the main chip, then obviously the new chip would get a bit bigger, and you'd see power consumption improvements. Further, Intel could charge CPU + PCH prices for the new SoC.

In that case, Intel gets a shorter usable life from its old fabs, but since you would presumably need more leading-edge wafers per product generation, your fabs -- though they have a shorter effective life -- get depreciated faster.
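To sketch what I mean in numbers (the capex, capacity, utilization, and lifetime figures below are all made up for illustration):

```python
# Toy depreciation comparison: a fab's capex is fixed, so the capital cost
# carried per wafer depends on how many wafers it starts while the node is
# economically useful. All numbers are assumptions, not Intel figures.

FAB_CAPEX = 5_000_000_000    # $ to build and tool the fab (assumed)
CAPACITY_WPM = 40_000        # wafer starts per month at full load (assumed)

def capex_per_wafer(utilization, useful_years):
    wafers = CAPACITY_WPM * 12 * useful_years * utilization
    return FAB_CAPEX / wafers

# Keep the PCH discrete: the fab later trickles out chipsets for extra years.
print(f"6 years at 45% load:  ${capex_per_wafer(0.45, 6):,.0f}/wafer")
# Integrate the PCH: the fab runs flat out on bigger SoC dies, then retools.
print(f"3 years at 100% load: ${capex_per_wafer(1.00, 3):,.0f}/wafer")
```

Under these toy numbers the shorter but fully loaded life actually carries slightly less capex per wafer, which is the intuition I'm gesturing at.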
 

Abwx

Lifer
Apr 2, 2011
10,939
3,440
136
I can't imagine it's lower cost, especially given how much of a margin wonder Bay Trail-M/D have been (it's a single SoC).

On their early process this would roughly double the cost per die of their Core M line. Grossly estimated numbers suggest that they want the CPU die cost to be within $20-25/chip. At 32nm the PCH cost is negligible, something like $3, while integrating it into the CPU die would have skyrocketed the cost of the whole thing to $40-45/SKU.
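Putting those figures side by side (the 1.9x multiplier below is my assumption, picked to be consistent with the quoted $40-45 range, not a known number):

```python
# Rough cost ledger for the two options, using the dollar figures above plus
# an assumed yield-driven multiplier. Illustrative only, not actual Intel costs.

cpu_die_cost = 22.5        # $, midpoint of the $20-25/chip target above
pch_32nm_cost = 3.0        # $, the negligible 32nm PCH cost above

# On an immature leading-edge process, a ~20% bigger die costs far more than
# 20% extra once yield loss is included (see the yield sketch further down).
assumed_multiplier = 1.9   # assumption, chosen to land in the $40-45 range

print(f"CPU die + discrete PCH: ${cpu_die_cost + pch_32nm_cost:.2f}")
print(f"Integrated SoC die:     ${cpu_die_cost * assumed_multiplier:.2f}")
```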
 

meloz

Senior member
Jul 8, 2008
320
0
76
AFAIK all non-'S' Skylake CPUs will be SoCs: PCH on the same die as the CPU and iGPU. So they are getting there. Until now they could keep the PCH on an older node and make more profit, but the need to improve energy efficiency is forcing Intel to finally give us what we always wanted.
 
Mar 10, 2006
11,715
2,012
126
On their early process this would roughly double the cost per die of their Core M line. Grossly estimated numbers suggest that they want the CPU die cost to be within $20-25/chip. At 32nm the PCH cost is negligible, something like $3, while integrating it into the CPU die would have skyrocketed the cost of the whole thing to $40-45/SKU.

I'm curious, Abwx, why would it raise the cost that much? I quite honestly have no clue how big most of the PCH dies are (Haswell-ULT/Broadwell-ULT can be estimated from the various pictures of the die on AnandTech and elsewhere, but I doubt this applies to something like a Z97 or an X99).

Also, I'm wondering why Intel chose 32nm instead of 22nm for the Broadwell PCH. I would imagine 22nm is very mature at this point, and of course much lower leakage/active power.
 
Mar 10, 2006
11,715
2,012
126
AFAIK all non-'S' Skylake CPUs will be SoCs: PCH on the same die as the CPU and iGPU. So they are getting there. Until now they could keep the PCH on an older node and make more profit, but the need to improve energy efficiency is forcing Intel to finally give us what we always wanted.

That makes sense. Integrate it in the products that need to be the most power efficient, while keeping it separate for things like desktops that don't care so much.

I also believe Intel is integrating the PCH (along with a host of other stuff) into the Broadwell SoC for micro-servers.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
I'm curious, Abwx, why would it raise the cost that much? I quite honestly have no clue how big the PCH dies are.
http://www.anandtech.com/show/7322/a-closer-look-at-broadwell-its-new-small-form-factor-package

It would raise the cost because all fabs would become obsolete after just 2-3 years.

Also, I'm wondering why Intel chose 32nm instead of 22nm for the Broadwell PCH. I would imagine 22nm is very mature at this point, and of course much lower leakage/active power.
Read the first reply.
 

Abwx

Lifer
Apr 2, 2011
10,939
3,440
136
I'm curious, Abwx, why would it raise the cost that much? I quite honestly have no clue how big the PCH dies are.

Also, I'm wondering why Intel chose 32nm instead of 22nm for the Broadwell PCH. I would imagine 22nm is very mature at this point, and of course much lower leakage/active power.

On 32nm it is about 50mm². If integrated, its footprint in the die would be at least 15mm², increasing the die size by 20%. The defect rate on wafers rises exponentially with area, so this would increase the die cost by 40-50% on a pure area basis. It would also introduce more variability, as the PCH characteristics would have to be accounted for when binning the chips; this would reduce the negotiable price of the production and act as an added cost.

Now, they could have done it on 22nm, but for some reason they estimated that it would be neither cost- nor technically significantly more efficient. The PCH runs at low frequencies, so it is not as demanding with respect to transistor characteristics. Not to mention that going to 22nm would have required a complete redesign of the chip due to the different transistor geometry; a simple shrink would not have been possible. So they reused the previous design while waiting for yields good enough to allow integration into the CPU die.
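A minimal sketch of that area-vs-yield effect, using the classic Poisson yield model (the defect density below is an assumed value, not an Intel number):

```python
import math

# Why a ~20% area increase can mean a 40-50% cost increase on an immature
# process: Poisson yield model, Y = exp(-D0 * A).

D0 = 1.3 / 100           # defects per mm^2 (1.3/cm^2, assumed for an early node)
BASE_AREA = 75.0         # mm^2, assumed CPU die
PCH_FOOTPRINT = 15.0     # mm^2 added if the PCH is pulled on-die

def cost_per_good_die(area_mm2, d0=D0):
    """Relative cost: silicon area consumed divided by yield."""
    yield_fraction = math.exp(-d0 * area_mm2)
    return area_mm2 / yield_fraction

base = cost_per_good_die(BASE_AREA)
integrated = cost_per_good_die(BASE_AREA + PCH_FOOTPRINT)
print(f"Yield: {math.exp(-D0 * BASE_AREA):.1%} -> "
      f"{math.exp(-D0 * (BASE_AREA + PCH_FOOTPRINT)):.1%}")
print(f"Area +{PCH_FOOTPRINT / BASE_AREA:.0%}, "
      f"cost per good die +{integrated / base - 1:.0%}")
```

With ~1.3 defects/cm², the 20% area increase comes out to roughly +46% cost per good die, in line with the 40-50% figure above.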
 
Mar 10, 2006
11,715
2,012
126
http://www.anandtech.com/show/7322/a-closer-look-at-broadwell-its-new-small-form-factor-package

It would raise the cost because all fabs would become obsolete after just 2-3 years.


Read the first reply.

Well, it's not technically true that the fabs would become "obsolete."

Intel has stated in the past that ~80% of the equipment in an n-1 node fab gets reused for a leading-edge fab and that this is "done intentionally."

For example, during the financial crisis when demand dropped off sharply, instead of building inventory on older nodes that nobody would buy, they simply rolled entire fabs over to 32nm. This meant that when the world emerged from the crisis and was ready to buy chips, Intel could deliver in full force.
 
Mar 10, 2006
11,715
2,012
126
On 32nm it is about 50mm². If integrated, its footprint in the die would be at least 15mm², increasing the die size by 20%. The defect rate on wafers rises exponentially with area, so this would increase the die cost by 40-50% on a pure area basis. It would also introduce more variability, as the PCH characteristics would have to be accounted for when binning the chips; this would reduce the negotiable price of the production and act as an added cost.

Now, they could have done it on 22nm, but for some reason they estimated that it would be neither cost- nor technically significantly more efficient. The PCH runs at low frequencies, so it is not as demanding with respect to transistor characteristics. Not to mention that going to 22nm would have required a complete redesign of the chip due to the different transistor geometry; a simple shrink would not have been possible. So they reused the previous design while waiting for yields good enough to allow integration into the CPU die.

Thanks for the insight, Abwx.
 

Abwx

Lifer
Apr 2, 2011
10,939
3,440
136
Thanks for the insight, Abwx.

You're welcome.

Now, since you're generally well informed on the numbers, if I can ask a rhetorical question: what is the monthly 300mm wafer output of an average Intel leading-edge fab? ;)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Intel has stated in the past that ~80% of the equipment in an n-1 node fab gets reused for a leading-edge fab and that this is "done intentionally."

I wonder how much a transition from 300mm to 450mm wafers is going to change that figure of 80%.

(E.g., Fab 42 was built from the ground up as a 450mm 14nm fab, but how much of the existing equipment from previous nodes using 300mm wafers can be used in it?)
 

krumme

Diamond Member
Oct 9, 2009
5,952
1,585
136
We have seen Intel reducing capex over the last few years. Could keeping the PCH off-die as long as possible be part of that strategy?

It seems Intel is extremely good at optimizing everything across design, fab utilization, and marketing, coordinated perfectly, as long as it's within their firm control and they can set the rules. It's just a machine, and it shows in the solid profit each year.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Also, I'm wondering why Intel chose 32nm instead of 22nm for the Broadwell PCH. I would imagine 22nm is very mature at this point, and of course much lower leakage/active power.

At some point you become pad-limited and can't make the die any smaller, because you need a minimum amount of space for the pin connections. In such a case, using a better process will probably be more expensive.
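As a rough illustration of the pad-limit arithmetic (the pad count and pitch below are assumptions for a chipset-class die, not actual PCH specs):

```python
# Being pad-limited: with perimeter bond pads, the die edge must be long
# enough to fit all the I/O pads, no matter how small the logic shrinks.

N_PADS = 600        # I/O + power pads (assumed)
PAD_PITCH_UM = 60   # pad-to-pad spacing in microns (assumed)

side_mm = (N_PADS / 4) * PAD_PITCH_UM / 1000   # pads spread over 4 edges
pad_limited_area = side_mm ** 2
print(f"Minimum die: {side_mm:.1f} mm per side = {pad_limited_area:.0f} mm^2")
# ~81 mm^2 here: if the logic only needs ~40 mm^2 on a newer node, the
# shrink buys nothing -- you pay for the better process but not less silicon.
```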

Also, some technologies (like aspects of analog, RF, etc) don't scale nearly as well with smaller nodes. They could even have worse performance. There are also some cases where parts are better off being isolated. For example, nobody that I know of is integrating any kind of RF radios onto an apps processor SoC along with baseband processors.

Not sure how much any of this actually applies to PCH. What I do know is that Lynx Point has several SKUs (http://en.wikipedia.org/wiki/Platform_Controller_Hub#LYNX-POINT), meaning that if they integrate it onto the CPUs they either give up market segmentation or they explode the already high number of CPU SKU permutations.
 

dealcorn

Senior member
May 28, 2011
247
4
76
In that case, Intel gets a shorter usable life from its old fabs, but since you would presumably need more leading-edge wafers per product generation, your fabs -- though they have a shorter effective life -- get depreciated faster.

You use good words in a bad way. For depreciation purposes, I assume the estimated useful life of current-generation Intel fab equipment is based on economic obsolescence caused by newer nodes rather than physical wear. Integration of the PCH onto the SoC will not alter the estimated useful life of new fab equipment. It will increase total depreciation because more fab capacity is required. Because old fab equipment is already fully depreciated, discussion of old fab "usable life" is a red herring.

I recall, but have misplaced, an old SA article that suggested 60% of current-generation chip cost is depreciation. I interpret that as meaning the cost per transistor of n-2 capacity is less than current generation. If it has a plug, a separate PCH is cheaper. If there is no plug, a separate PCH is fatal.
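A quick sketch of that depreciation arithmetic (the 60% share is the figure recalled from the SA article; the wafer cost and die size are my assumptions), treating the PCH as an I/O-bound die whose area barely shrinks across nodes:

```python
import math

# Toy comparison of per-die cost on a leading-edge fab vs a fully depreciated
# n-2 fab. Assumes an I/O-bound die whose area is roughly node-independent.

LEADING_WAFER_COST = 10_000    # $/wafer, assumed
DEPRECIATION_SHARE = 0.60      # per the SA article figure recalled above

# On n-2 equipment that is fully written off, the depreciation component
# largely drops out of the wafer cost.
n2_wafer_cost = LEADING_WAFER_COST * (1 - DEPRECIATION_SHARE)

DIE_AREA_MM2 = 50              # chipset-class die, area roughly constant (assumed)
dies = math.pi * 150 ** 2 / DIE_AREA_MM2   # crude gross dies per 300mm wafer

print(f"Leading edge:     ${LEADING_WAFER_COST / dies:.2f}/die")
print(f"n-2, depreciated: ${n2_wafer_cost / dies:.2f}/die")
```

Under these assumptions the depreciated n-2 die comes out near the ~$3 PCH figure quoted earlier in the thread.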
 

dealcorn

Senior member
May 28, 2011
247
4
76
I can't imagine it's lower cost, especially given how much of a margin wonder Bay Trail-M/D have been (it's a single SoC).

Failure to integrate the PCH on the SoC killed the prior-generation Atom's mobile ambitions. A CPU too wimpy for Windows and marginally adequate graphics were not helpful on the desktop either. Silvermont fixed those three problems. Why is it surprising that a competent design built with the world's best transistors delivers reasonable margins? For whatever reason(s), it costs Intel boatloads of money to develop and validate a new design. As long as Intel had to design and validate an integrated PCH for the mobile side, it was probably cheaper and lower-risk to recycle the on-SoC design for desktop use. As a not-shocking side benefit, the small form factor desktops enabled by superior Silvermont efficiency have consumer appeal. It does not hurt that the competition in this space is nichey.
 

escrow4

Diamond Member
Feb 4, 2013
3,339
122
106
Fix the old-ass DMI 2.0 too. You can saturate it if you have something hammering USB 3 at the same time as an SSD or two. Yes, it's not usual, but fix it already. It's creaky.
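The back-of-the-envelope bandwidth math (the link widths and encodings are the public specs; the traffic mix is a hypothetical worst case):

```python
# Worst-case device traffic vs the DMI 2.0 ceiling.

def lane_MBps(gtps, encoding_efficiency):
    """Usable bandwidth of one serial lane: GT/s * efficiency / 8 bits."""
    return gtps * 1e3 * encoding_efficiency / 8   # MB/s

dmi2 = 4 * lane_MBps(5.0, 0.8)      # DMI 2.0 = 4x PCIe 2.0 lanes, 8b/10b
usb3 = lane_MBps(5.0, 0.8)          # one USB 3.0 device, 5 Gbps, 8b/10b
sata_ssd = lane_MBps(6.0, 0.8)      # one SATA III SSD, 6 Gbps, 8b/10b

demand = usb3 + 2 * sata_ssd        # USB 3 + two SSDs hammering at once
print(f"DMI 2.0 ceiling:   {dmi2:.0f} MB/s")
print(f"Concurrent demand: {demand:.0f} MB/s ({demand / dmi2:.0%} of the link)")
```

That's ~85% of the raw link before DMI protocol overhead, so a third device or a burst of other traffic pushes it over.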
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Fix the old-ass DMI 2.0 too. You can saturate it if you have something hammering USB 3 at the same time as an SSD or two. Yes, it's not usual, but fix it already. It's creaky.

There don't seem to be any changes to the DMI/A-Link interfaces in the near future. At best they'll move to a faster PCIe speed.
 

NTMBK

Lifer
Nov 14, 2011
10,232
5,013
136
Fix the old-ass DMI 2.0 too. You can saturate it if you have something hammering USB 3 at the same time as an SSD or two. Yes, it's not usual, but fix it already. It's creaky.

Why are your SSDs going over DMI? You should be using the PCIe links off the CPU, especially by the time Skylake comes around.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
I wonder how much a transition from 300mm to 450mm wafers is going to change that figure of 80%.

(E.g., Fab 42 was built from the ground up as a 450mm 14nm fab, but how much of the existing equipment from previous nodes using 300mm wafers can be used in it?)

Zero. But that is true for any new fab that is being tooled with new tools for the very first time.