
[Motley Fool] Intel should consider going fabless

x86 in phones died because OEMs didn't want to be tied to a single provider. All you have to do is look at computer OEMs to understand why.

x86 phones didn't die; they never lived. Neither AMD nor Intel had any kind of x86 processor that could be used in a phone when the smartphone revolution took off.

Apple actually wanted Intel in the running for the first iPhone chip:
http://www.iphoneforums.net/threads...-offer-to-make-first-iphones-processor.57153/

Once you miss the start of the market, and the standard is set, it's VERY hard to displace the incumbent (just like trying to displace x86 on the desktop). Intel literally paid to have a couple of x86 phones built, but the ship had already sailed by then.
 
While I'm sure there are some advantages to being fabless, you are also highly dependent on other companies to produce your product, and what happens if they don't want to?
 
While I'm sure there are some advantages to being fabless, you are also highly dependent on other companies to produce your product, and what happens if they don't want to?

What, turn away a massive contract and send it to your rival instead? That's not generally how you make money 😉
 
Yes, but what happens if the fab companies' fabs are completely full with orders? Like I said, I'd rather keep my own fabs so I'm not dependent on anyone else.
 
Probably not at this time, even though their fabs have been stumbling on the last couple of nodes. They are going to have a hard time as nodes get smaller and more expensive though. They might want to at least revisit the thought at 7 and 5 nm.
 
Probably not at this time, even though their fabs have been stumbling on the last couple of nodes. They are going to have a hard time as nodes get smaller and more expensive though. They might want to at least revisit the thought at 7 and 5 nm.

Stumbling compared to whom? Intel simply isn't under pressure to switch to 10nm until it is more profitable than their arguably superior 14nm process, which is more mature than any competitor's.

IMO there is ZERO chance Intel will spin off their fabs in the foreseeable future.


There are many advantages to having world-class captive fabs. Having a world-class fab and CPU design in house means they can drive the fabs in directions that suit CPU design and vice versa, they have total inside knowledge of all fab parameters to better plan CPU design, etc...

AMD only divested their fabs because they couldn't afford them.

When Intel starts losing money, we can start to seriously consider them spinning off their Fabs.
 
IMO there is ZERO chance Intel will spin off their fabs in the foreseeable future.

I will agree with this, as Intel is really defined by their process technology more than by their processors. From the beginning Intel has used manufacturing technology as their competitive advantage. In the latest iteration it was 'tick tock', using their size to manufacture the lowest-cost transistors. A node is a big advantage in the cost of transistors and allows for 'misses' in architecture.

If they won't spin off fabs then they may spin off X86. The care and feeding of X86 is expensive with Intel spending a lot on marketing for 'reference' designs and R&D to capture value for the architecture.

TSMC is getting close to 50% gross margins on manufacturing, with what Intel thinks is an inferior process. Intel is getting 60-65% gross margin from manufacturing plus x86. How much value is x86 adding to the manufacturing? Could Intel get 60% margins on wafers for third parties with their 'superior' transistors?
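To make the margin comparison concrete, here's a minimal sketch. Gross margin is (revenue - cost of goods sold) / revenue; the ~50% and ~63% figures below are just the post's rough estimates, not reported financials:

```python
# Gross margin = (revenue - cost of goods sold) / revenue.
def gross_margin(revenue, cogs):
    return (revenue - cogs) / revenue

# Illustrative numbers per $100 of revenue, using the post's rough estimates:
tsmc_like = gross_margin(100, 50)   # foundry-only business: ~50%
intel_like = gross_margin(100, 37)  # fabs + x86 design combined: ~63%

# The open question in the post: how much of the extra ~13 points
# comes from x86 rather than from the manufacturing itself?
print(f"foundry-only ~{tsmc_like:.0%}, IDM ~{intel_like:.0%}")
```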
 
Yes, but what happens if the fab companies' fabs are completely full with orders? Like I said, I'd rather keep my own fabs so I'm not dependent on anyone else.
Why are the fabs filled with orders? The answer is because they have priority customers (like Apple). Intel would be a priority customer. They would have fabs from other companies dedicated, and maybe even built specifically, to build Intel's products.
 
It is a semantics game if you are a shareholder, as you would get shares in each company. However, if you are the CEO or an employee, I will guess it might have more significance. Brian Krzanich came out of manufacturing, and that really is where the money is made. Robert Colwell was a chip architect at Intel in the early years and made a presentation at Hot Chips. Here is the algorithm from one of his slides:
  • Chip architect’s job was to remove obstacles from Moore’s Law silicon bounty

    – System perf = f(CPU clock, CPU uArch, sys)
    – From 1980 to 2010, clocks are 3,500x faster
    – What did Arch/uArch achieve beyond that? Maybe 50x?
If you could own the architecture or the manufacturing, which would you own?
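Taking the slide's numbers at face value, the decomposition is roughly multiplicative; a quick sketch (the 3,500x and ~50x figures come from Colwell's slide, everything else here is illustrative):

```python
# Colwell's rough 1980-2010 decomposition:
#   system perf = f(CPU clock, CPU uArch, sys), roughly multiplicative.
clock_gain = 3500   # clocks got ~3,500x faster (figure from the slide)
uarch_gain = 50     # arch/uarch contributed maybe ~50x on top (from the slide)

total = clock_gain * uarch_gain
print(f"combined speedup ~ {total:,}x, of which uarch is only {uarch_gain}x")
```

The point of the exercise: by this accounting, the manufacturing side (clocks, i.e. process) dwarfs the architecture side's contribution.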

You can argue that they shouldn't be divided; however, x86 is what is holding back the manufacturing side of Intel and allowing the foundries to catch up. Intel optimizes their processes for high-performance CPUs and the x86 architecture. They have been claiming that being an IDM, with architecture and process designed under the same roof, is a competitive advantage. But the customers, EDA tool vendors and fabs are working together more effectively than Intel expected.

The diversity of customers at the fabs and the incremental improvements being made on an annual basis in the foundries have allowed them to adapt more rapidly to the current market. The wafer volumes at the fabs have given them the capital to essentially kill the Intel 'tick tock' strategy of outrunning the competition with overwhelming R&D and capex. Intel's wafer volumes have stagnated at best or are shrinking, while the fab volumes have increased dramatically.
Intel has made half-hearted attempts at providing fab services for about four to five years. It appears that Intel is getting increasingly concerned about wafer volumes, and ARM and the EDA vendors are now on board. Intel still needs to figure out when third parties will get access to nodes and how many wafers will be set aside for foundry work. Manufacturing capacity and access will continue to be an issue even if the fabs are spun off. There would have to be a 'wafer supply agreement' between the split companies.
 
I don't think this will help either of the groups.

Currently, the architecture group is struggling to stay relevant. And the process group is struggling to maintain the lead (at this point they might even be in catch-up mode).

If you split the two now, you get two mediocre teams. Architecture group that can't compete, and process that nobody wants.

The problem is elsewhere, and they need to fix that without resorting to a road they can't turn back from, like the article suggests.
 
The problem is elsewhere, and they need to fix that without resorting to a road they can't turn back from, like the article suggests.
This I think we can both agree with. The time to allow 3rd-party manufacturing was when they were the top dog. Whether it be ARM CPUs or IBM's PowerPC lineup, there are guys out there that would have been willing to pay above the going rate to get Intel's superior process so they could have the best.

Now who is going to want to go to Intel to get basically the same work done, if they already have a great relationship with their current fabs? In reality it wouldn't even matter much. I don't know what Intel's capacity vs. use is, but I am guessing that Intel post fab sale would still be pretty much Intel-only. At most, in the near to mid-term future, it would allow Intel the opportunity to keep margins up if sales slump, since they wouldn't be tied to the fab costs and lose money on both lack of CPU sales and lack of CPU manufacturing. Once that happens, maybe the fab business could grow outside of Intel to do 3rd-party manufacturing. But that process would take years after the decline started.

Intel's best solution is probably to do better: find their brightest engineers, move them up the ladder, and right the ship. Work on more competitive designs and headhunt the best process guys. Do things they haven't done in well over 10 years.
 
I don't think this will help either of the groups.

Currently, the architecture group is struggling to stay relevant. And the process group is struggling to maintain the lead (at this point they might even be in catch-up mode).

I think many people are presenting an overly pessimistic view of the state of Intel.

The architecture group has been completely unchallenged on the desktop for a decade. Ryzen doesn't really exceed Skylake; it just finally creates a competitive product after so long of essentially being irrelevant. Go read any reviews for Ryzen, and it's usually behind Intel thread for thread, clock for clock. It is only because AMD ditched the IGP and included double the core count that Ryzen makes a dent.

The process group likewise has gone unchallenged for even longer. And it is really only the 10nm chips in the latest Apple products that show any sign of Intel not being ahead. Intel's process is denser, so when their 10nm ships they will regain the lead until someone ships 7nm...

Both of the above were likely driven by the complete dominance of desktop CPUs for a decade, which may have led to some complacency.

Ryzen is the best thing to happen in a decade, because it will shake up some of the complacency that has developed at Intel.
 
Except the problem is that Intel doesn't really have any real avenues of growth beyond servers, at a time when their bread and butter (i.e. PCs) is dying. And servers are even kind of dicey when you factor in the lurking ARM threat. ARM is so much better at IoT that Intel's basically given up already, like they did on mobile. They've done okay with HPC, but clearly they are far behind nVidia.

Ryzen is the best thing to happen in a decade, because it will shake up some of the complacency that has developed at Intel.

All Ryzen is going to do is hurt Intel's revenue. Part of the reason things weren't as bad for Intel as they looked was the inflated prices they were able to get from what's left of the PC market.
 
I think many people are presenting an overly pessimistic view of the state of Intel.

The architecture group has been completely unchallenged on the desktop for a decade. Ryzen doesn't really exceed Skylake; it just finally creates a competitive product after so long of essentially being irrelevant.

Ryzen can be compared to the original Athlon. It didn't have a drastic lead. And it didn't make a huge impact to marketshare. But it started the road to success for AMD. The peak of AMD was reached with Athlon 64, 4 years after the original Athlon. Success is never instant.

"because AMD ditched the IGP,"

This saying is one of my pet peeves on tech forums. People say this like the iGPU has any practical cost for the companies making them. The development of the iGPU has been paid for multiple times over by the people who use them, the 80% of the population. On the contrary, if AMD/Intel did not have the iGPU, they wouldn't be able to sell the CPU at all to the vast majority of the market. It would cost MORE to make a part that doesn't have an iGPU.

Which is why it won't be until Raven Ridge that AMD will start making a real impact. The rest of the market won't care until then.

The process group likewise has gone unchallenged for even longer. And it is really only the 10nm chips in the latest Apple products that show any sign of Intel not being ahead. Intel's process is denser, so when their 10nm ships they will regain the lead until someone ships 7nm...

I see the density argument come up every time. The question I want to ask is: Where does that density advantage manifest itself? Their cores are larger than ARM chips. What's the point of touting density when it only applies to their products, and the density gain isn't greater than historic gains?

I also have a feeling the reason they stumbled so much on 14nm is because of the focus on density. When you are on top and fundamental problems start cropping up, you'd hit that barrier ahead of anyone else. EUV has been delayed for over a decade, and Intel and others have had to resort to triple, or even quadruple, patterning techniques to get by. If you want to have the densest process, you'll hit the ceiling quite readily.

With 22nm Intel themselves admitted what many others had been suspecting for years: Intel's 22nm process is only 30% denser than the competitors' 28nm. With the competition's 20nm being basically marketing-driven and bringing no performance, and requiring "14nm" to bring the performance (but virtually no density gains), Intel finally gained the advantage. On top of that, with 14nm Intel focused on density.

But up until 22nm the density part wasn't that important, because Intel only makes chips for themselves. The big advantage of Intel's transistors was that they were extremely high performance. It did not matter if the competition went copper-first, or used new materials first; the transistor drive-current leader was Intel.

Now, what advantage do they have? Performance? That's not showing up in their products. Density? Again who cares? They don't make products that can directly compare.

Ryzen is the best thing to happen in a decade, because it will shake up some of the complacency that has developed at Intel.

We'll see better products from Intel if they live through this.
 
Except the problem is that Intel doesn't really have any real avenues of growth beyond servers, at a time when their bread and butter (i.e. PCs) is dying. And servers are even kind of dicey when you factor in the lurking ARM threat. ARM is so much better at IoT that Intel's basically given up already, like they did on mobile. They've done okay with HPC, but clearly they are far behind nVidia.

Intel is no doubt in a period of transition, trying to expand into multiple areas. This is their view of where they want to go:

[Image: Intel addressable-market slide (A1_Sermiconductors_INTC_Addressable-Market-2.png)]


I am not saying they will achieve this, but they are attempting to transition to a broader base of products, and either way nothing makes the case for going fabless.
 
I am not saying they will achieve this, but they are attempting to transition to a broader base of products, and either way nothing makes the case for going fabless.

I am not sure what the guys making these slides are thinking. They are separating 3D XPoint and Optane, which in reality should be the same. I guess they are looking at when the full promise is realized, but we may be talking about 15 years down the road.

I can't see how they are going to see any meaningful revenue from 3D NAND, or NAND in general. The NAND market is pricing itself to the bottom, and that doesn't fit with Intel. They gave up the idea of fabbing solar panels because they found it wouldn't be profitable for them. All the solar panel makers sell their products at break-even prices.

With 3D NAND, what's the limit for stacking? It's at 64+ layers already. It can't be far from the time when you can't stack more layers. Then they have to resort to shrinks, which are at their limits too. So they resort to tricks like TLC and QLC, which are going to be sold for bargain-basement prices.

Mobile premium modems? They seem to release products that are a year late compared to Qualcomm's. They needed class-leading modems yesterday, because manufacturers are increasingly opting to integrate them into the SoC.

I'm hoping they'll put their FULL weight behind 3D XPoint and successive Storage Class Memory technologies. It is often said that the reason Andy Grove was seen as a legendary leader is because he transitioned Intel from being a DRAM-focused company to an MPU-focused one. Can we at least see effort put in that validates their belief that 3D XPoint has a $100 billion TAM?

Apple actually wanted Intel in the running for the first iPhone chip:
http://www.iphoneforums.net/threads...-offer-to-make-first-iphones-processor.57153/

Apple would have gone to competitors the instant they felt Intel wouldn't deliver, even if the deal had gone through for the first iPhone chips. Apple is immensely flexible for a company of that size. Dual-sourcing fabs and switching fab suppliers almost every generation was unthinkable before Apple.
 
Ryzen can be compared to the original Athlon. It didn't have a drastic lead. And it didn't make a huge impact to marketshare. But it started the road to success for AMD. The peak of AMD was reached with Athlon 64, 4 years after the original Athlon. Success is never instant.

I disagree. Athlon came out definitively ahead, and followed up with the blockbuster Athlon 64.
http://www.anandtech.com/show/355/24

Ryzen is merely catching up to slightly behind; it's not nearly an Athlon-level win.

This saying is one of my pet peeves on tech forums. People say this like the iGPU has any practical cost for the companies making them. The development of the iGPU has been paid for multiple times over by the people who use them, the 80% of the population. On the contrary, if AMD/Intel did not have the iGPU, they wouldn't be able to sell the CPU at all to the vast majority of the market. It would cost MORE to make a part that doesn't have an iGPU.

Which is why it won't be until Raven Ridge that AMD will start making a real impact. The rest of the market won't care until then.

The point here is that the only advantage Ryzen really presents is more cores, and they can provide more cores because they skipped the iGPU and used the die area for more cores. The cost of the iGPU is the die-size penalty; something close to half the die is iGPU these days.

Raven Ridge is very important, but there they won't have more cores than Intel, because they are making the same trade-off as Intel, cutting off the cores to include the iGPU. Intel will likely have the CPU advantage, and AMD the GPU advantage.

I see the density argument come up every time. The question I want to ask is: Where does that density advantage manifest itself? Their cores are larger than ARM chips. What's the point of touting density when it only applies to their products, and the density gain isn't greater than historic gains?

If people are going to simplistically say Intel is behind because some competitor reached 12nm before Intel, then density should play into that argument, if their competitor's 12nm is actually the same density as Intel's 14nm...


Now, what advantage do they have? Performance? That's not showing up in their products. Density? Again who cares? They don't make products that can directly compare.

People are easily running Intel's unlocked chips at 5GHz, where AMD's are topping out at 4GHz. It's a very substantial difference in clock speed. How much of that is process vs. architecture?
 
Intel's problem isn't their fabs; it's the management, or better said the CEO. It's pretty simple to see when things started to go south:

- process delays
- lack of progress in CPU performance
- 3D XPoint: where is it?
- Skylake-X TIM fiasco
 
I disagree. Athlon came out definitively ahead, and followed up with the blockbuster Athlon 64.
http://www.anandtech.com/show/355/24

Ryzen is merely catching up to slightly behind; it's not nearly an Athlon-level win.
He means that Ryzen will have a similar impact for AMD, making it see profits again, like what happened with the Athlon.
The point here is that the only advantage Ryzen really presents is more cores, and they can provide more cores because they skipped the iGPU and used the die area for more cores. The cost of the iGPU is the die-size penalty; something close to half the die is iGPU these days.

Raven Ridge is very important, but there they won't have more cores than Intel, because they are making the same trade-off as Intel, cutting off the cores to include the iGPU. Intel will likely have the CPU advantage, and AMD the GPU advantage.
You're forgetting the cost advantage AMD has with its small-die strategy.
If people are going to simplistically say Intel is behind because some competitor reached 12nm before Intel, then density should play into that argument, if their competitor's 12nm is actually the same density as Intel's 14nm...
Intel will lose the density advantage once its competitors deliver on 7nm.
People are easily running Intel's unlocked chips at 5GHz, where AMD's are topping out at 4GHz. It's a very substantial difference in clock speed. How much of that is process vs. architecture?
Intel is going to spend four years at 14nm doing Process-Architecture-Optimization-Optimization, while GlobalFoundries is going to spend 2.5 years doing the same and then switch to 7nm.

How well did the i7 5775C overclock?
 
I disagree. Athlon came out definitively ahead, and followed up with the blockbuster Athlon 64.
http://www.anandtech.com/show/355/24

They did have a lead in the beginning, but you have to remember it was the golden age of CPUs, where advancements came quickly. They were leapfrogging each other pretty much until the 1.13GHz Pentium III, when Intel stalled.

Also, Athlon has the same feel as Ryzen in that both were seen as products that could usurp Intel after years of being far away from doing so.

Raven Ridge is very important, but there they won't have more cores than Intel, because they are making the same trade-off as Intel, cutting off the cores to include the iGPU. Intel will likely have the CPU advantage, and AMD the GPU advantage.

Nah, they are just using a different strategy. You shouldn't mistake not being willing/wanting to do so for a technical reason.

And tell me about this "cutting off the cores". The top-end configuration is 4C + GT2. They derive every consumer-line CPU from the top end. The top is not 6C with no iGPU. If you are talking about HEDT, it's a derivative of the server line.
 
They did have a lead in the beginning, but you have to remember it was the golden age of CPUs, where advancements came quickly. They were leapfrogging each other pretty much until the 1.13GHz Pentium III, when Intel stalled.

Also, Athlon has the same feel as Ryzen in that both were seen as products that could usurp Intel after years of being far away from doing so.

It might feel that way to you, but it doesn't to me. Ryzen is behind on IPC and behind on clock speed, and considering how poorly AMD evolved the Bulldozer family, I have ZERO expectations that AMD is going to start leapfrogging Intel.

Nah, they are just using a different strategy. You shouldn't mistake not being willing/wanting to do so for a technical reason.

Nah what? It's not a different strategy; Raven Ridge is the exact same strategy as Intel's: 4 cores + IGP for the mainstream market. It will be interesting to see it play out, but without the extra cores, it will be harder to show a competitive advantage. By then Intel will have Coffee Lake with 6C + iGPU. For RR I guess AMD will lean heavily on iGPU benchmarks.

And tell me about this "cutting off the cores". The top-end configuration is 4C + GT2. They derive every consumer-line CPU from the top end. The top is not 6C with no iGPU. If you are talking about HEDT, it's a derivative of the server line.

Tell you what? I didn't see a question in there.
 
Nah what? It's not a different strategy; Raven Ridge is the exact same strategy as Intel's: 4 cores + IGP for the mainstream market. It will be interesting to see it play out, but without the extra cores, it will be harder to show a competitive advantage. By then Intel will have Coffee Lake with 6C + iGPU. For RR I guess AMD will lean heavily on iGPU benchmarks.

Yes, AMD should have the better iGPU. But on CPU I do not think they are far behind. The biggest issue on desktop is clocks. But if you take out the 7600K and 7700K, then Ryzen looks pretty good in any aspect. And on mobile AMD might actually be ahead. As was shown, the sweet spot for Ryzen performance/watt is at lower clocks, around 2-3 GHz. There it's actually better than Intel! (Well, after all, GF 14nm is a low-power process.) I think the bigger issue here could actually be the Vega iGPU, taking Vega FE as an indication.
 