Discussion Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel) - [2020 - 2025]

Page 191 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

DisEnchantment

Golden Member
Mar 3, 2017
1,779
6,798
136
TSMC's N7 EUV is now in its second year of production and N5 is contributing to revenue for TSMC this quarter. N3 is scheduled for 2022 and I believe they have a good chance to reach that target.

N7 performance is more or less understood.

This year and next year TSMC is mainly increasing capacity to meet demands.

For Samsung the nodes are basically the same from 7LPP to 4LPE; they just add incremental scaling boosters while the bulk of the tech stays the same.

Samsung is already shipping 7LPP and will ship 6LPP in H2; hopefully they fix any remaining issues.
They have two more intermediate nodes before going to 3GAE: 5LPE will most likely ship next year, while 4LPE will probably land back to back with 3GAA, since 3GAA is a parallel development alongside the 7LPP-derived enhancements.



Samsung's 3GAA will most likely go to HVM in 2022, a similar timeframe to TSMC's N3.
There are major differences in how the transistor will be fabricated due to GAA, but on density Samsung will surely be behind N3.
There might be advantages for Samsung with regard to power and performance, though, so it may be better suited for some applications.
But for now we don't know how much of this is true, and we can only rely on the marketing material.

This year there should be a lot more wafers available due to a lack of demand from smartphone vendors and increased capacity from TSMC and Samsung.
Lots of SoCs that don't need to be top end will be fabbed on N7 or 7LPP/6LPP instead of N5, so there will be plenty of wafers around.

Most of the current 7nm designs are far from the advertised density from TSMC and Samsung. There is still potential for density increases compared to currently shipping products.
N5 is going to be the leading foundry node for the next couple of years.

For a lot of fabless companies out there, the processes and capacity available are quite good.

---------------------------------------------------------------------------------------------------------------------------------------------------


FEEL FREE TO CREATE A NEW THREAD FOR 2025+ OUTLOOK, I WILL LINK IT HERE
 

Doug S

Diamond Member
Feb 8, 2020
3,808
6,742
136
Note that TSMC is not rushing to high NA EUV without ensuring it is cost effective.

High NA is a real problem because the machines cost more than double standard EUV, yet even in the best case they only double throughput - and require a LOT more electricity. Setting aside the issue of the half-sized reticle (apparently there is work afoot to address that), the best case in its favor is that it allows wafers that would require double exposure in standard EUV to be processed with one. So maybe you can make the case to use it in a limited fashion for that, but again you're at best breaking even unless the double exposure is significantly hitting your yields. TSMC was doing quadruple DUV exposures on N7 at very high yields/throughput, and has undoubtedly experimented with EUV SAQP and is comfortable with it, hence the slow-walking of high NA EUV. They'd rather buy more (comparatively) cheap standard EUV scanners that they're very familiar/comfortable with, and let someone else work out all the issues with high NA EUV scanners.
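That break-even argument can be sketched numerically. A toy amortization model - every number here (tool price, throughput, tool life) is an illustrative assumption, not real pricing:

```python
# Toy cost-per-exposed-layer model: standard vs. high-NA EUV.
# All figures are illustrative assumptions, not published tool pricing.

def cost_per_layer(tool_cost, wafers_per_hour, exposures_needed,
                   tool_life_hours=5 * 365 * 24 * 0.8):
    """Amortized tool cost to finish one layer on one wafer.
    tool_life_hours: assumed productive hours over the tool's life (80% uptime)."""
    cost_per_wafer_pass = tool_cost / (tool_life_hours * wafers_per_hour)
    return cost_per_wafer_pass * exposures_needed

# Standard EUV: cheaper tool, but the tight layer needs double exposure.
std = cost_per_layer(tool_cost=200e6, wafers_per_hour=180, exposures_needed=2)
# High-NA: assumed ~2x tool cost, single exposure.
hna = cost_per_layer(tool_cost=400e6, wafers_per_hour=180, exposures_needed=1)
print(f"standard (2 exposures): ${std:.2f}/layer, high-NA (1 exposure): ${hna:.2f}/layer")
```

With these assumed numbers the two come out essentially even, which is exactly the point above: high-NA only wins if the double exposure is hurting your yield or cycle time, not on amortized tool cost alone.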

ASML is already working on hyper NA EUV to supplant high NA EUV which will further explode the cost and be even less likely to make economic sense. Their real problem is that the design they finally got working a decade ago is not pretty. It is an astounding technical achievement they made something so complex work as well as it does, Rube Goldberg would be proud! But they painted themselves into a corner. They can't justify R&D on new/better/cheaper methods because they have such a massive sunk cost in developing what they have now. No one else could justify the billions in R&D that would be required to unseat ASML, given the tiny potential customer base and the fact that customer base is all-in on ASML.

We all know the answer to that, and whether China achieves it before the end of the decade or in the middle of the next, it is only a matter of time. Not only are there better ways to generate the EUV light than ASML's tin droplet solution, a less powerful light source will suffice if they can reduce the number of mirrors the light bounces around on. Or they could make a total departure from the way things have always been done - 10-15 years ago there was a fair amount of academic research into using free electron lasers for EUV lithography. There's a rather large fixed cost in needing an on-site particle accelerator, but it could be shared by multiple fabs, and we tend to cluster a number of fabs on the same campus anyway. There are some people a lot more knowledgeable than me who think that could end up a far more cost effective solution, not only because you share the accelerator but because it would require far less power than ASML's scanners.
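The mirror-count point is quantitative: multilayer Mo/Si EUV mirrors reflect only about 70% per bounce, so delivered power falls off geometrically with the number of mirrors. A quick sketch (the mirror counts below are illustrative, not a specific scanner's optical train):

```python
# Fraction of source EUV power surviving n mirror bounces,
# assuming ~70% reflectivity per multilayer Mo/Si mirror (a commonly
# cited ballpark for 13.5 nm optics).
def power_at_wafer(source_watts, n_mirrors, reflectivity=0.70):
    return source_watts * reflectivity ** n_mirrors

# With ~10 reflective surfaces, only a few percent of source power survives:
ten_mirrors = power_at_wafer(250, 10)   # from a 250 W source
# Cutting the optical train to 6 mirrors roughly quadruples delivered power:
six_mirrors = power_at_wafer(250, 6)
print(f"10 mirrors: {ten_mirrors:.1f} W, 6 mirrors: {six_mirrors:.1f} W")
```

This is why a design with fewer mirrors can get away with a much weaker (and cheaper) light source for the same dose at the wafer.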

If I were China, I'd have been researching a "quick" plan (hoping for 2030) that basically does ASML's EUV on the cheap - fewer mirrors, and without the 50K tin droplets per second - and a "long term" plan (targeting 2035-2040) using free electron lasers, either in case the quick plan doesn't pay off or to continue advancing once the wavelengths get too extreme even for EUV.
 

DrMrLordX

Lifer
Apr 27, 2000
23,196
13,279
136
@511

All that engineering knowledge and yet the (former) Intel engineer still blames capitalism for his layoff. Meanwhile, capitalism created and funded every product that put Intel into a position of dominance years ago, as well as every product that threatens to unseat them now. Total lack of perspective.
 

Saylick

Diamond Member
Sep 10, 2012
4,115
9,620
136
@511

All that engineering knowledge and yet the (former) Intel engineer still blames capitalism for his layoff. Meanwhile, capitalism created and funded every product that put Intel into a position of dominance years ago, as well as every product that threatens to unseat them now. Total lack of perspective.
Fwiw, Intel in the past would take profit and reinvest it within the company, e.g. train employees and purchase capital, to keep their lead. That’s how capitalism should work. Modern capitalism is more about employees taking the back seat and using profits on stock buybacks. Stock buybacks weren’t even allowed until the 80s or so if I’m not mistaken. It also especially doesn’t help that CEOs are incentivized to buy back stock with earnings since the vast majority of their income is in the form of company stock and they usually have a pay structure where they make bonuses depending on if the stock price reaches a certain amount. None of this philosophy puts the employee first.
 

511

Diamond Member
Jul 12, 2024
5,382
4,792
106
@511

All that engineering knowledge and yet the (former) Intel engineer still blames capitalism for his layoff. Meanwhile, capitalism created and funded every product that put Intel into a position of dominance years ago, as well as every product that threatens to unseat them now. Total lack of perspective.
Yeah
Fwiw, Intel in the past would take profit and reinvest it within the company, e.g. train employees and purchase capital, to keep their lead. That’s how capitalism should work. Modern capitalism is more about employees taking the back seat and using profits on stock buybacks. Stock buybacks weren’t even allowed until the 80s or so if I’m not mistaken. It also especially doesn’t help that CEOs are incentivized to buy back stock with earnings since the vast majority of their income is in the form of company stock and they usually have a pay structure where they make bonuses depending on if the stock price reaches a certain amount. None of this philosophy puts the employee first.
Intel of the past was run by legendary guys like Moore, Noyce, and Grove, who knew the industry. Craig Barrett was fine as well - he led the Copy Exactly effort, and they maintained their process and tech lead - but after Otellini it has been a clown show, with three consecutive CEOs doing things without thinking. You can see the collapse starting with 14nm, and then it kept on, and here we are now.
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
I know that Nvidia hasn’t been first on a node for awhile but they are in the highest growth market, with the biggest need for more energy efficiency and have the highest margins to pay for the next node. At some point they may need to break to smaller multi-die modules for economics and energy efficiency.

Nvidia has deep pockets at a $4 trillion valuation.
But a "chiplet" design that just connects multiple GPU dies, each pushed right up to the size limit of such a stupidly big mask…
It's hard to think of that as a good chiplet...
 

Win2012R2

Golden Member
Dec 5, 2024
1,325
1,358
96
No one else could justify the billions in R&D that would be required to unseat ASML, given the tiny potential customer base and the fact that customer base is all-in on ASML.
Their unseating will come from the higher prominence of etching with new transistor types; Intel already seems to be having a lot of second thoughts regarding high NA.
 

511

Diamond Member
Jul 12, 2024
5,382
4,792
106
Their unseating will come from the higher prominence of etching with new transistor types; Intel already seems to be having a lot of second thoughts regarding high NA.
It's even more ironic that Intel funded the initial EUV R&D and then fumbled the ball so hard.
 

DrMrLordX

Lifer
Apr 27, 2000
23,196
13,279
136
It's even more ironic that Intel funded the initial EUV R&D and then fumbled the ball so hard.
Intel may have funded it, but their steadfast refusal to use it on their 10nm node was a choice. EUV in-and-of-itself probably didn't factor too heavily in all the delays associated with their first node to actually use it (Intel 4) except for the fact that they simply didn't order many EUV machines upfront while TSMC did.

edit: The main drivers of 10nm's delays were (apparently) quad-patterning and the extensive use of cobalt metal layers. Using EUV would only have alleviated the need for quad-patterning, and waiting for EUV equipment to be available might have made things worse (or just as bad).
 

511

Diamond Member
Jul 12, 2024
5,382
4,792
106
TSMC used quad patterning for N7, which was on a similar timeframe to Intel's late 10nm, but they were smart enough to insert EUV into it. Intel should have done the same - inserting EUV into an already defined process should have alleviated the cost issues they are facing with the node.
 

LightningZ71

Platinum Member
Mar 10, 2017
2,673
3,372
136
Base N7 was supposedly a fully DUV node. EUV didn't appear in the stack until N7+ and N6 (though I don't see any products that actually used N7+). Even then, EUV usage in N6 was very light.
 

511

Diamond Member
Jul 12, 2024
5,382
4,792
106
Base N7 was supposedly a fully DUV node. EUV didn't appear in the stack until N7+ and N6 (though I don't see any products that actually used N7+). Even then, EUV usage in N6 was very light.
Yeah, but that was part of the learning curve, and I think the EUV version ended up cheaper as well.
 

Doug S

Diamond Member
Feb 8, 2020
3,808
6,742
136
TSMC used quad patterning for N7, which was on a similar timeframe to Intel's late 10nm, but they were smart enough to insert EUV into it. Intel should have done the same - inserting EUV into an already defined process should have alleviated the cost issues they are facing with the node.

They didn't insert EUV into N7+ because they needed to; they inserted it in the critical layers to gain experience so they could have a smooth rollout of N5. It probably did save a little money, since replacing four steps with one is an obvious win, plus it reduces wafer completion time, which customers like.
 

Saylick

Diamond Member
Sep 10, 2012
4,115
9,620
136
View attachment 127151

Looks like N2 is in full swing. Apple and Qualcomm have a huge chunk. The mobile market is so big lol
Interesting to see Intel use more N2 than AMD. So much for 18A...

Also, no N2 use from Nvidia, eh? I have to wonder what's going on with their chiplet development plans. At some point they cannot keep slapping 600mm2 dies together and calling it a "chiplet" design when it's really an MCM. That, and the reticle limit gets cut in half when High-NA EUV becomes the norm.
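The reticle math behind that concern is simple: the standard full-field reticle is 26 × 33 mm, and high-NA halves the scan direction to 26 × 16.5 mm.

```python
# Maximum exposure field (and thus monolithic die size) under
# standard vs. high-NA EUV reticles.
std_field = 26 * 33      # mm^2: standard full field
hna_field = 26 * 16.5    # mm^2: high-NA half field
print(f"standard field: {std_field} mm^2, high-NA field: {hna_field} mm^2")

# A ~600 mm^2 GPU die fits a standard field but not a high-NA one,
# so on high-NA tools it would have to be split into smaller dies
# (or field-stitched, which adds its own complexity).
print(600 <= std_field, 600 <= hna_field)
```

That's why "keep slapping 600mm2 dies together" stops being an option once high-NA becomes the norm for the critical layers.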
 

johnsonwax

Senior member
Jun 27, 2024
469
674
96
Moore’s Law is mainly about economics and cost. Reducing the cost of ops or flops. If costs aren’t going down the market isn’t expanding and the cost of fabs can’t be covered.
At the end of the day, Moore's law is entirely about economics and cost, and it's not really about reducing the cost of ops - that's an assumption built into the law, not a conclusion derived from it. Moore's law doesn't stand in opposition to economics; it assumes that density will increase because the economic problem will always have been solved.

Moore's law has great faith in the engineers. It assumes that whatever wall compute hits, the engineers will chart a path through it. And that's proven true. The law also assumes that the marginal cost of ops will always decline. That's axiomatic if the assumption about the engineers is correct. The problem is the fixed cost. That's the thing that will scale in some way with performance and the thing that needs to be controlled - something Moore understood. Because you need to move quickly, you're going to have to take those costs as they come, and the only sustainable solution is scale. You spread those fixed costs across more compute demand, and the assumption is there will always be demand for more compute (the law of accelerating returns, on which Moore's law is anchored).
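The fixed-cost point can be made concrete with a toy amortization model (every number here is invented purely for illustration):

```python
# Toy model: cost per million transistors = (amortized fab capex per wafer
# + per-wafer variable cost) spread over the transistors a wafer carries.
# All inputs are illustrative, not real fab economics.
def cost_per_mtransistor(fab_capex, wafers_over_life, wafer_variable_cost,
                         density_mtr_mm2, wafer_area_mm2=70_000):
    fixed_per_wafer = fab_capex / wafers_over_life
    mtr_per_wafer = density_mtr_mm2 * wafer_area_mm2
    return (fixed_per_wafer + wafer_variable_cost) / mtr_per_wafer

# Node N: cheaper fab, lower density, high lifetime volume.
old = cost_per_mtransistor(10e9, 5_000_000, 3000, density_mtr_mm2=90)
# Node N+1: 2x density, but fab capex doubled. At the SAME volume,
# cost per transistor still falls...
new_big = cost_per_mtransistor(20e9, 5_000_000, 4000, density_mtr_mm2=180)
# ...but at a fifth of the volume, the fixed cost eats the density win.
new_small = cost_per_mtransistor(20e9, 1_000_000, 4000, density_mtr_mm2=180)
print(old, new_big, new_small)
```

Under these assumptions the high-volume player gets cheaper transistors from the new node while the low-volume player gets more expensive ones from the very same technology - which is the whole argument about scale and the fixed-cost wall.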

Where Moore's law intersects with a business is that it doesn't promise this will be distributed evenly. If Intel isn't expanding its markets to keep pace with the growth in fixed costs relative to its competitors (you could argue they could go upmarket instead and get by with higher prices but fewer units, but per the law, you only get to do that so many times before the market catches up with you), then it's going to fail. Now there's another way to succeed, and that's to violate Moore's law in a different way by differentiating compute. If a market needs x86 and you are the only supplier of it, you are somewhat exempt from that law so long as that need doesn't change. I think everyone here has an intuition of how far you can protect a business behind proprietariness.

Intel's double-moat strategy was that they were, indeed, a limited supplier of proprietary x86, which captured a certain market regardless of whether the world around them was moving faster. That was the first moat. The second was that, because x86 was the majority of the market, they had the scale, and therefore the ability, to sustain a process advantage. 1) x86 was needed for Windows, and 2) x86 was fastest as a byproduct of 1), which made Windows the thing to want, which funded the process, allowing it to remain the thing to want.

There are two key events here:
1) The iPhone in 2008 kicks the mobile compute market into high gear, threatening to overtake Intel's traditional market. That's where the future volume will be. Intel rejects making those processors, likely recognizing that would risk their x86 moat, and decides to try winning mobile by scaling x86 down. This doesn't work. Moore's law's faith in the engineers doesn't apply to each individual company.
2) Apple goes with ARM, which bypasses the proprietary x86 market in favor of a more open architecture (opening the proprietary lock), and goes with foundries, which allow them to pool resources with other companies (opening the volume lock). At first that's Samsung (a competitor with the industry on devices), which allows TSMC (not a competitor on devices) to build a larger consortium of customers. GF is spun out, Apple signs on with TSMC around 2012, and the coalition is formed.

1) was the opportunity (and signal) for Intel to open up the business, which 17 years later they're finally trying to do, and 2) was the moment the clock really started ticking (clip of Marisa Tomei stomping her foot on the porch), because that was the moment Intel became the small fish in the big pond. Same for Samsung. By 2014 or so, the writing was on the wall for everyone but TSMC. TSMC could always screw up, but if they didn't, they were going to win this - Moore's law says so. Even if Intel had nailed 10nm, they would have still landed here, just a little later. Otellini (or the board) is the one who forgot the consequence of Moore's law, and every CEO after that (again, or the board) failed to correct the error. They deserve some lenience for not seeing it right away (most people didn't), and the failure on 10nm meant they weren't in a position to open the business because they were uncompetitive, but still. There are a lot of signs they understand what needs to happen, but we're hitting a point where only governments, not even tech giants, have the resources to do it.

It doesn't entirely matter what the yields are on 18A; the business isn't stable enough for someone to take the risk without a real discount, and their volume isn't going to be high enough to get an Apple here even if they could. Yeah, it's good news for Intel processors, but that's not a big market any longer. Intel's total revenue last year was only 50% higher than Apple's revenue from just Watch and AirPods. Like, I think there is a nostalgia for Intel being a juggernaut, and they've really been run over quite badly by mobile and now AI.

Ultimately Moore's law is a law of monopolization, because any effort to fragment the market will inevitably happen unevenly, and the smaller party will be unable to cover their fixed costs and will become uncompetitive (this was easier to avoid when those fixed costs were smaller, but now they're massive and still growing). You can force a split and subsidize the effort, but the cost of that subsidy will forever increase. To hold Moore's law, the industry will force itself into consolidation. If it doesn't, Moore's law breaks - at least to some degree. And maybe slower technological advancement is something we're willing to trade, but that's the trade. There is another fragmentation at work here - the political isolation of China from the industry and their potential to rise and overtake TSMC - but it's hard to imagine a scenario where they can overcome that kind of advantage without a significant global realignment, which given the state of the US I guess isn't totally out of the cards - things are b-u-s-t-e-d.
 

Joe NYC

Diamond Member
Jun 26, 2021
4,171
5,715
136
Interesting to see Intel use more N2 than AMD. So much for 18A...

Also, no N2 use from Nvidia, eh? I have to wonder what's going on with their chiplet development plans. At some point they cannot keep slapping 600mm2 dies together and calling it a "chiplet" design when it's really an MCM. That, and the reticle limit gets cut in half when High-NA EUV becomes the norm.

Nvidia will be using N3P for Vera Rubin in 2026, while AMD will be on N2P with MI400.
 