Discussion Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel) - [2020 - 2025]

Page 142 - AnandTech Forums

DisEnchantment

Golden Member
Mar 3, 2017
TSMC's N7 EUV is now in its second year of production, and N5 is contributing to TSMC's revenue this quarter. N3 is scheduled for 2022, and I believe they have a good chance of reaching that target.

N7 performance is more or less understood.

This year and next year TSMC is mainly increasing capacity to meet demands.

For Samsung the nodes are basically the same from 7LPP to 4LPE; they just add incremental scaling boosters while the bulk of the tech stays the same.

Samsung is already shipping 7LPP and will ship 6LPP in H2; hopefully they fix any outstanding issues.
They have two more intermediate nodes in between before going to 3GAE. Most likely 5LPE will ship next year, but 4LPE will probably land back to back with 3GAA, since 3GAA is a parallel development alongside the 7LPP enhancements.



Samsung's 3GAA will most likely go for HVM in 2022, a similar timeframe to TSMC's N3.
There are major differences in how the transistor will be fabricated due to GAA, but on density Samsung will surely be behind N3.
There might be advantages for Samsung with regard to power and performance, though, so it may be better suited for some applications.
For now we don't know how much of this is true, and we can only rely on the marketing material.

This year there should be a lot more wafers available due to reduced demand from smartphone vendors and increased capacity at TSMC and Samsung.
Lots of SoCs that don't need to be top end will be fabbed on N7 or 7LPP/6LPP instead of N5, so there will be plenty of wafers to go around.

Most of the current 7nm designs are far from the advertised density of TSMC and Samsung, so there is still potential for density increases compared to currently shipping products.
N5 is going to be the leading foundry node for the next couple of years.

For a lot of fabless companies out there, the processes and capacity available are quite good.

---------------------------------------------------------------------------------------------------------------------------------------------------


FEEL FREE TO CREATE A NEW THREAD FOR 2025+ OUTLOOK, I WILL LINK IT HERE
 

NostaSeronx

Diamond Member
Sep 18, 2011
Didn't they go to "thems larger transistors" because of power? 32 GHz sounds great until you calculate how much power it will draw.
They went with FinFETs for the higher current at a specific Vnom. However, with SOI and sSOI the cost is decreasing and reliability is increasing. At 64CPP and below, an unstrained-wafer planar FD-SOI device has higher current than a FinFET (at 56CPP and below for GAAFETs), whereas at 100CPP and below a strained-wafer planar FD-SOI device has higher current than FinFETs/GAAFETs.

DARPA continuation of the OHPC/UHPC stuff:
Group A: 130nm/90nm with a single-core 32-bit RISC at ~30 GHz, with their current 2022+ work shifted to ~50 GHz FPGAs on 45nm PDSOI/22nm FDSOI.
Group B: target plan is an octo-core 32-bit RISC at ~50 GHz on 22FDX/12FDX.
FDSOI substrate bias showed the Ic/Vbe curve to be shifted, but they ran the benchmarks at a standard 2.5V~3.3V rather than pushing Vbe=Vt, where they could lower Vdd and get the boost. A bulk vertical-HBT ~30+ GHz superscalar MPU is 1600W, while the FDSOI lateral-HBT ~30+ GHz superscalar MPU target is 16W.

The ones we will probably see way before the above are standard FD-SOI CMOS at 3-5 GHz in the 0.3V~0.6V range, since that will be possible once Advanced SOI/eSoC3+ is available for 22FDX/18FDS chips. Specifically, we also have to watch for smaller Lg at the same CPP, since planar can be asymmetrical to the BEOL. Really watch out for next-gen 22FDX having a 7nm FEOL process with that 22nm BEOL process. This is when striped devices become present, acting like a fin pitch: "Also, in embodiments, the optimized slice width and smaller space make the slotted active region possible for logic/SRAM design using FDSOI technology" - https://patents.google.com/patent/US10497576B1 - which would be visible in standard cells as a true 5T~6T track height.

Single-issue OoO 3 GHz~5 GHz RVA23 multi-core solution can be smaller than a Multi-issue InO 1.6 GHz~2 GHz RVA23 multi-core solution.
 

Doug S

Diamond Member
Feb 8, 2020
You're talking about some pretty old research. 130nm/90nm was back when Intel thought they'd take P4 to 10 GHz in a few years, until they ran headfirst into the power wall. DARPA funding some research claiming "we think this is possible" and engineers actually trying to make it work in the real world are two very different things.
 

NostaSeronx

Diamond Member
Sep 18, 2011
No need to argue with Nosta.

He will still be talking about imminent Bulldozer derivatives with CMT on FD-SOI processes long after everybody has moved to RISC-V (or whatever) in 2040+
It is labeled under ultra low power and is still connected to the FDSOI team at AMD, still being a "Client Ultra Low Power Product" from StoneyX till now. There is some overlap with the x86->RISC-V hires at the Mile High/Boston Design Center -> Hudson Valley Design Center, as well as some dead ends leading to the Semi-Custom Business Unit, so it might not be AMD doing it by themselves; the Steam Deck + Magic Leap 2 compute circle-thing, for example.

There are repeated instances of it being targeted on 12FDX from 2018 to 2022, no indication of it changing back to some other node.
Excavator -> Zen4 -> Post-XV/Zn4 core/StoneyX product for Ultra-low-power client laptop/tablet/embedded/automotive.
You're talking about some pretty old research. 130nm/90nm was back when Intel thought they'd take P4 to 10 GHz in a few years, until they ran headfirst into the power wall. DARPA funding some research claiming "we think this is possible" and engineers actually trying to make it work in the real world are two very different things.
The research for 130nm/90nm chips spans 2008-2014, as 2014 was the date for IBM's SiGe 8XP (130nm) and SiGe 9HP (90nm). Most of the demonstrated chips were aimed at 9HP or 8XP for 2015-2018. Plus, GF/IHP 9HP+ SiGe is a 2024 product.

While the 45nm PDSOI/22FDX/future 12FDX chips span 2018-2022+.
45nm date = 05-08 December 2021; 22nm date = 23-24 September 2024; the LSiGe FDSOI from STM = 09 March 2023 -> May 2023, which should be close to the US R&D version for GloFo's FDX-LSiGe HBT.
 

NostaSeronx

Diamond Member
Sep 18, 2011
These are still a few years away, right?
10nm/7nm FDSOI for Europe should be done before this range of dates; May 2027 ~ Dec 2028.

With GlobalFoundries, it is Malta that needs to be watched for FDSOI. Basically, soon after 12FDX at Malta should come 7FDX. We should hear more about it during the March 2025 and/or June 2025 conferences. Expect the FEOL process to be Gen 1 (10nm/7nm) for e.SoC3, as they deleted the 14nm node in the newer roadmaps:
[attachment: GlobalFoundries FDSOI roadmap]
 

lakedude

Platinum Member
Mar 14, 2009
No need to argue with Nosta.

He will still be talking about imminent Bulldozer derivatives with CMT on FD-SOI processes long after everybody has moved to RISC-V (or whatever) in 2040+
Speaking of Bulldozer I have a question about it but don't want to be OT so I started a new post. Y'all are so smart, maybe you could check it out and voice your opinions?

 

RTX

Member
Nov 5, 2020
10nm/7nm FDSOI for Europe should be done before this range of dates; May 2027 ~ Dec 2028.

With GlobalFoundries, it is Malta that needs to be watched for FDSOI. Basically, soon after 12FDX at Malta should come 7FDX. We should hear more about it during the March 2025 and/or June 2025 conferences. Expect the FEOL process to be Gen 1 (10nm/7nm) for e.SoC3, as they deleted the 14nm node in the newer roadmaps:
[attachment: GlobalFoundries FDSOI roadmap]
Still the same fully strained CMOS in the above pic?
 

OneEng2

Senior member
Sep 19, 2022
I already replied to you in another thread, but it has long been known that Moore's Law would end due to economics. I think you are underestimating when that will happen. It won't be in the next couple decades, IMHO.
Transistor density is NOT doubling every 18 months. Each node improvement costs more in NRE and more per wafer (exponentially so), and each node improvement provides less PPA gain than the node before it.

This is the trend I see ..... which is proof that Moore's Law (or observation ;) ), is no longer even close to true ..... and even if it were, the economics no longer work even today.

Current Zen 5 parts (desktop and laptop) are on N4P (basically a tweaked N5 node). The price of an N2 wafer is double that of N5. Indeed, AMD is staying away from N2 except for DC, where they can justify the price. Even in DC, there will soon come a point where the exponentially higher cost of a new node doesn't make financial sense.
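To put rough numbers on why that matters (every figure below is an illustrative assumption, not actual TSMC pricing or density data): if wafer cost doubles but density improves by well under 2x, cost per transistor rises even though the node is "better".

```python
# Back-of-the-envelope cost per transistor. All numbers are illustrative
# assumptions for the sake of the argument, not published pricing or density.

def cost_per_mtr(wafer_cost_usd, logic_density_mtr_mm2, usable_mm2=60_000):
    """Wafer cost divided by total million-transistor count on a fully utilized wafer."""
    return wafer_cost_usd / (logic_density_mtr_mm2 * usable_mm2)

# Assumed: older node at $17k/wafer, newer node at 2x the wafer cost
# but less than 2x the logic density.
old_node = cost_per_mtr(wafer_cost_usd=17_000, logic_density_mtr_mm2=140)
new_node = cost_per_mtr(wafer_cost_usd=34_000, logic_density_mtr_mm2=250)

print(f"cost/MTr ratio, new vs old: {new_node / old_node:.2f}")  # > 1.0: cost per transistor went up
```

The exact ratio depends entirely on the assumed numbers, but the shape of the argument is the point: once the wafer-cost multiplier exceeds the density multiplier, cost per transistor stops falling.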
 

NostaSeronx

Diamond Member
Sep 18, 2011
Still the same fully strained CMOS in the above pic?
Gen 1 shouldn't be strained-wafer CMOS. While Gen 2 should be strained-wafer CMOS.

22FDX/22FDX+ ~ uniformity thickness control at ±4 Å and a roughness < 8 Å.
12FDX (L-sSOI)/10nm FDSOI (L-sSOI) ~ the eSoC.3 platform will go further on those metrics and reach control at ±3.5 Å with a homogeneous roughness < 6 Å.
12FDX (+/HP/smaller CPP) (wafer sSOI)/7nm FDSOI (wafer sSOI) ~ the eSoC3+ (now Advanced SOI) booster technology is also considered in the FDSOI substrate roadmap:
specific SOI substrates (sSOI) able to generate global strain on the NMOS devices, coupled with a study to relax the strain locally for the PMOS devices.
7nm FDSOI has the option of doing Gen 1 after Gen 2 for a lower-cost NFET/wafer.
 

maddie

Diamond Member
Jul 18, 2010
I think it's 24 months actually. Not sure where this 18 months thing started, but I see lots of people quote 18 instead of 24. ;)

Edit: Moore's Law has been dead for a long time now. It has already deviated/slowed down. Time for everyone to stop worrying about Moore's Law. It's time for Jensen's Law, Lisa's Law & Ex-Pat's Law.
I think that's missing the forest for the trees.
 

OneEng2

Senior member
Sep 19, 2022
I think it's 24 months actually. Not sure where this 18 months thing started, but I see lots of people quote 18 instead of 24. ;)

Edit: Moore's Law has been dead for a long time now. It has already deviated/slowed down. Time for everyone to stop worrying about Moore's Law. It's time for Jensen's Law, Lisa's Law & Ex-Pat's Law.
Granted. I thought the quote was 18-24 months. Still, this is no longer true, and it becomes more untrue with every new node release.
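Whether the period is 18 or 24 months makes a huge difference once you compound it; a quick sketch (pure arithmetic, no process data involved):

```python
# How much the assumed doubling period matters when compounded over a decade.

def density_multiplier(years, months_per_doubling):
    """Expected transistor-count multiplier after `years` of steady doubling."""
    return 2 ** (years * 12 / months_per_doubling)

print(density_multiplier(10, 18))  # ~102x over 10 years at 18-month doublings
print(density_multiplier(10, 24))  # 32x over 10 years at 24-month doublings
```

Over one decade the two readings of the "law" already diverge by more than 3x, which is why pinning down the cadence matters before arguing about whether it still holds.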

The real metric that is missed in all this is cost per wafer. When cost per wafer barely moved from one gen to the next, things were good. Now that cost per wafer is skyrocketing, we have a big problem.

Furthermore, the NRE cost of developing a new node (both time and equipment) is skyrocketing. This means that, spread over the same number of wafers, the more expensive equipment puts more dollars onto each wafer.

In order to keep COGS at the same level, the more expensive equipment has to be paid off over more wafers; i.e., the same process needs to stick around longer.

None of this helps with the doubling of the number of masks and scans performed to make a wafer. That is a forever cost: double the time = double the cost.
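The amortization point can be sketched the same way (NRE and volume figures below are invented purely to illustrate the relationship, not estimates of any real node):

```python
# Spreading node NRE across lifetime wafer output. All numbers are invented
# for illustration only.

def nre_per_wafer(nre_usd, wafers_per_month, months_in_production):
    """NRE dollars that must be recovered from each wafer."""
    return nre_usd / (wafers_per_month * months_in_production)

base = nre_per_wafer(5e9, 100_000, 48)      # assumed older node: $5B NRE, 48 months
doubled = nre_per_wafer(10e9, 100_000, 48)  # assumed new node: $10B NRE, same volume

print(doubled / base)  # 2.0: doubling NRE at fixed volume doubles the per-wafer burden
# Holding per-wafer NRE flat means running the new node twice as long:
print(nre_per_wafer(10e9, 100_000, 96) / base)  # 1.0
```

Which is exactly the "same process needs to stick around longer" conclusion: the only lever that offsets doubled NRE at fixed monthly volume is a longer production life.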
I think that's missing the forest for the trees.
Yes, exactly.
 

Doug S

Diamond Member
Feb 8, 2020
Transistor density is NOT doubling every 18 months. Each node improvement costs more in NRE and more per wafer (exponentially so), and each node improvement provides less PPA gain than the node before it.

This is the trend I see ..... which is proof that Moore's Law (or observation ;) ), is no longer even close to true ..... and even if it were, the economics no longer work even today.

Current Zen 5 parts (desktop and laptop) are on N4P (basically a tweaked N5 node). The price of an N2 wafer is double that of N5. Indeed, AMD is staying away from N2 except for DC, where they can justify the price. Even in DC, there will soon come a point where the exponentially higher cost of a new node doesn't make financial sense.


Moore's observation was actually about COST per transistor, not transistor density.

Anyway, we all know Moore's Law ended like 15 years ago, so I'm using the modern definition: that you can still make a node that improves performance/power in a meaningful way. Even if cost per transistor begins to rise, there will still be demand for new nodes; there will be fewer interested customers as a result, but not everyone is trying to win on razor-thin margins, and those who aren't can afford the cost.

Consider that the last Snapdragon with ARM designed cores cost $240, and they'll reportedly be over $300 with the one coming out next year. Apple is paying less than a quarter of that. If those Android OEMs are making it work (and Samsung is quite profitable on their high end smartphones, just not Apple profitable) there's plenty of room to run. I stick by my assessment that things won't peter out until about 2040. Once there are not enough customers to amortize the development of a new node and the new semi tools required to make it, that's when the party is over.

Though I expect "something" will eventually save us well before that date: nanoimprint, free-electron lasers, or whatever will replace the ever-climbing cost of manufacturing with EUV. China is apparently going into FEL rather than trying to replicate the moonshot effort it took to make EUV work; if we aren't looking at the same thing, we may be the ones left behind in a decade.
 

desrever

Senior member
Nov 6, 2021
Consider that the last Snapdragon with ARM designed cores cost $240, and they'll reportedly be over $300 with the one coming out next year. Apple is paying less than a quarter of that. If those Android OEMs are making it work (and Samsung is quite profitable on their high end smartphones, just not Apple profitable) there's plenty of room to run. I stick by my assessment that things won't peter out until about 2040.
That's not considering that the smartphone market is saturated and performance is "good enough" now. I expect it to implode if they keep trying to push the cost up.
 

desrever

Senior member
Nov 6, 2021
Nope. Check the article again. It says:

“… logic density is more important than HDC SRAM density. For now, we cannot compare this metric for Intel's 18A and TSMC's N2. Furthermore, logic density is hard to estimate …”

Both 18A & N2 are expected to have logic densities in excess of 200 MTr/mm2, with N2 being ~15% denser than 18A (say ~230 & ~260 respectively, for example). But 18A is expected to have more performance due to BSPDN.

18A is shaping up to be an excellent node. At this point, the only question that remains is, whether it’ll hit yield & volume on time.
lol if you say so. I love how confident you are every time Intel has anything in the pipeline.
 

ajsdkflsdjfio

Member
Nov 20, 2024
lol if you say so. I love how confident you are every time Intel has anything in the pipeline.
Why's everything gotta be black and white in your head? Intel isn't doing great and is unlikely to be suddenly saved by 18A, but at the same time 18A is technologically impressive. Saying it is in no way going to beat N3 (you say N3E, which is even more absurd) or even match it is just massive cope. At the end of the day most of us have zero inside information and therefore cannot double down on anything, but most of the rumors and publicly known information point to 18A matching N3 at the very least, and beating it in some areas in a best-case scenario. If you just look at advertised SRAM scaling, we already know from Intel's own IEDM presentation notes that 18A matches N3 in SRAM scaling. That alone suggests 18A is likely competitive with N3. The discussion of 18A's specifications has no relation to Intel's future success in the foundry or otherwise.

But sure, put on your blinders, Intel sucks and is doing bad therefore anything Intel is likely to release in the future is also shit!!

"skymont isn't more efficient than lions cove tho"

"His actions of tweeting Bible verses publicly as Intel's CEO is worthy of criticism. Which successful CEO does this?"

"PTL is going to be more like RPL and will use way more power to win 5% vs AMD."

"18A will not match N3E in density or performance/watt"

FK intel amirite?!?

Also, it's funny that in your criticism of Pat you mention how he was the one who missed the AI train. Are you some kind of dullard, to expect Pat to take advantage of an AI train that hit the industry only one year into his tenure as CEO, with Intel having comparatively zero experience in AI versus Nvidia and even AMD? Intel's own bread-and-butter CPU design teams have been struggling for decades, and somehow Pat was expected to magically create AI products that could rival Nvidia's in one year, when most product design cycles are 5+ years (from which you can also extrapolate that many of the products released under Pat were designed long before he took the position of CEO). On the contrary, if you read about Pat's involvement in the Larrabee project, you would realize Pat was actually one of the people trying to advance Intel's GPGPU efforts all the way back in 2009. The upper management back then underfunded and eventually gave up on Pat's Larrabee project because it couldn't produce an Nvidia/AMD competitor within one generation (remember this concept). Much of the same upper management (Yeary, for example) later decided Pat was to blame for not being able to recreate Nvidia's decades-long experience in GPGPU in only three years. Pretty ironic. It's arguable that he didn't make the best decisions for Intel's AI roadmap while he was CEO, but at the same time Intel was trying to step into the AI game a day late and a dollar short.
 

OneEng2

Senior member
Sep 19, 2022
Why's everything gotta be black and white in your head? Intel isn't doing great and is unlikely to be suddenly saved by 18A, but at the same time 18A is technologically impressive. Saying it is in no way going to beat N3 (you say N3E, which is even more absurd) or even match it is just massive cope. At the end of the day most of us have zero inside information and therefore cannot double down on anything, but most of the rumors and publicly known information point to 18A matching N3 at the very least, and beating it in some areas in a best-case scenario. If you just look at advertised SRAM scaling, we already know from Intel's own IEDM presentation notes that 18A matches N3 in SRAM scaling. That alone suggests 18A is likely competitive with N3. The discussion of 18A's specifications has no relation to Intel's future success in the foundry or otherwise.

But sure, put on your blinders, Intel sucks and is doing bad therefore anything Intel is likely to release in the future is also shit!!

"skymont isn't more efficient than lions cove tho"

"His actions of tweeting Bible verses publicly as Intel's CEO is worthy of criticism. Which successful CEO does this?"

"PTL is going to be more like RPL and will use way more power to win 5% vs AMD."

"18A will not match N3E in density or performance/watt"

FK intel amirite?!?
I generally agree with your sentiment.

Intel isn't all bad all the time and in fact, has been outright brilliant on many occasions.

I am increasingly not convinced that 18A is one of them though. Let me elaborate.

I believe that for some applications, 18A will be best-in-class by some margin. It will be almost like the days of old at Intel with a 1 to 2 node advantage for these applications.

The process guys over at SemiWiki seem to think that BSPD is more expensive than FSPD and that it can require up to 30 degrees lower ambient temperature to achieve the same hot-spot temperatures inside the die. Now, I don't know exactly what that means for which applications will work better than others, but it sure sounds like BSPD has some technical issues that need solving.

A quick search on the estimated cost of 18A to Intel gives a general consensus of around $10 billion. Another $3 billion and Intel could have bought a brand-new Ford-class aircraft carrier!

This is important because this cost needs to be shared by a great number of Intel fab customers over many years to make Intel profitable again. I am concerned that 18A is not "customer friendly" enough to win new designs, as the cost of implementation and production will be prohibitive. Further, unlike TSMC, which is opting for a less dense implementation of BSPD with A16 that can support both BSPD and FSPD libraries, Intel is reaping the entire benefit of BSPD in 18A (which is a good thing) but making it harder for customers to buy in.

The prevailing thought about BSPD is that it is needed for the next step in process tech (CFET). TSMC may offer their combination BSPD/FSPD nodes all the way down to A10, which would then be BSPD-only. When I read this, I thought about just how customer-centric and risk-averse that approach really is compared to the more no-holds-barred "Regain Process Leadership" approach Intel is using.

I think that by the end of 2025 the story should be much clearer :)