Discussion Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel) - [2020 - 2025]


DisEnchantment

Golden Member
Mar 3, 2017
1,779
6,798
136
TSMC's N7 EUV is now in its second year of production and N5 is contributing to revenue for TSMC this quarter. N3 is scheduled for 2022 and I believe they have a good chance to reach that target.

N7 performance is more or less understood.

This year and next year TSMC is mainly increasing capacity to meet demands.

For Samsung, the nodes are basically the same from 7LPP through 4LPE; they just add incremental scaling boosters while the bulk of the tech stays the same.

Samsung is already shipping 7LPP and will ship 6LPP in H2. Hopefully they fix any remaining issues.
They have two more intermediate nodes before going to 3GAE: 5LPE will most likely ship next year, but 4LPE will probably land back to back with 3GAA, since 3GAA is a parallel development alongside the 7LPP-derived enhancements.



Samsung's 3GAA will most likely go for HVM in 2022, a similar timeframe to TSMC's N3.
There are major differences in how the transistors will be fabricated due to GAA, but in terms of density Samsung will certainly be behind N3.
There might be advantages for Samsung with regards to power and performance, though, so it may be better suited for some applications.
For now we don't know how much of this is true; we can only rely on the marketing material.

This year there should be a lot more wafers available, due to weaker demand from smartphone vendors and increased capacity from TSMC and Samsung.
Lots of SoCs that don't need to be top end will be fabbed on N7 or 7LPP/6LPP instead of N5, so there will be plenty of wafers to go around.

Most of the current 7nm designs are far from the advertised density of TSMC's and Samsung's nodes, so there is still potential for density gains compared to currently shipping products.
N5 is going to be the leading foundry node for the next couple of years.
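
As a rough back-of-envelope check on that gap (the figures below are approximate, commonly cited numbers used only for illustration, not vendor data):

```python
# Compare a shipping design's achieved density against a node's quoted peak.
# Example inputs: an N7 chiplet of roughly 74 mm^2 with ~3.9B transistors,
# versus the often-quoted ~91 MTr/mm^2 peak for N7 high-density cells.

def density_mtr_per_mm2(transistors_billions: float, die_area_mm2: float) -> float:
    """Transistor density in millions of transistors per mm^2."""
    return transistors_billions * 1000 / die_area_mm2

achieved = density_mtr_per_mm2(3.9, 74)   # ~53 MTr/mm^2
advertised_peak = 91.2                    # MTr/mm^2, commonly quoted HD-cell figure

print(f"Achieved: {achieved:.0f} MTr/mm^2")
print(f"Advertised peak: {advertised_peak:.0f} MTr/mm^2")
print(f"Utilization of the marketing number: {achieved / advertised_peak:.0%}")
# Analog/IO blocks and routing- or power-limited logic keep real designs
# well below the peak marketing density, which is the headroom noted above.
```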

For a lot of fabless companies out there, the processes and capacity available are quite good.

---------------------------------------------------------------------------------------------------------------------------------------------------


FEEL FREE TO CREATE A NEW THREAD FOR 2025+ OUTLOOK, I WILL LINK IT HERE
 
Last edited:

Thunder 57

Diamond Member
Aug 19, 2007
4,294
7,099
136
They need to make money somehow, or they'll end up like Anandtech 😐

Anandtech had more than money problems. They were mismanaged for a while before giving up. If you watch some videos from Dr. Cutress from around that time, he pretty much says the same thing in a slightly nicer way.
 

511

Diamond Member
Jul 12, 2024
5,452
4,879
106
I'm still not clear about the reasons why. Supposedly it is a lot more difficult for it to shed heat, so yeah, let's not put it in a phone where it is dissipating a single-digit number of watts, and most of the time milliwatts; instead let's put it in an AI server on a reticle-sized die burning 1000W+ nonstop. Someone please make that make sense!

Now I could accept financial arguments that the marginal benefit for a phone SoC in terms of smaller die and reduced power consumption simply doesn't justify the added cost. Meanwhile packing more transistors into an AI server's reticle sized die to increase the available performance, and offering just a few percent more computation per watt would easily repay the added cost on a TCO basis at the insane power draw and duty cycle of Nvidia AI servers.

But the cooling argument alone just makes no sense. I get that having wires on both sides on a BSPDN die traps the heat and makes it more difficult to dissipate, but that sure sounds like a MUCH bigger problem when you have 2-3 orders of magnitude more heat getting trapped in the AI servers and not so much of a problem for the passively cooled device in my hand that isn't even warm to the touch unless I'm really pushing it hard.
Simple reason: cooling and cost. Mobile devices don't have active cooling, and BSPDN adds cost. Intel's 18A process is unique in that they added BSPDN as a way of cutting cost; their non-BSPDN flow is expensive.
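
To put rough numbers on the TCO argument quoted above (all figures hypothetical: assumed electricity rate, duty cycles and lifetimes, not measured data):

```python
# Back-of-envelope: why a few percent of perf/W matters for an always-on
# accelerator but barely registers for a phone SoC. All numbers are made up.

HOURS_PER_YEAR = 24 * 365
ENERGY_COST = 0.12   # $/kWh, assumed rate including cooling overhead

def lifetime_energy_cost(watts: float, duty_cycle: float, years: float) -> float:
    """Electricity cost of running a chip at `watts` for `years`."""
    kwh = watts * duty_cycle * HOURS_PER_YEAR * years / 1000
    return kwh * ENERGY_COST

server = lifetime_energy_cost(1000, 0.9, 5)  # ~1 kW die, ~90% duty, 5 years
phone = lifetime_energy_cost(1, 0.2, 3)      # ~1 W average, ~20% duty, 3 years

print(f"Server lifetime energy cost: ${server:,.0f}")    # roughly $4,700
print(f"Phone lifetime energy cost:  ${phone:.2f}")      # well under a dollar
print(f"Value of a 5% efficiency gain: ${0.05 * server:,.0f} vs ${0.05 * phone:.2f}")
# A node premium of a couple hundred dollars per package can pay for itself
# on the server (more so if the gain is spent on extra compute per rack),
# while on the phone the energy saving alone is pocket change.
```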
 

maddie

Diamond Member
Jul 18, 2010
5,204
5,613
136
I'm still not clear about the reasons why. Supposedly it is a lot more difficult for it to shed heat, so yeah, let's not put it in a phone where it is dissipating a single-digit number of watts, and most of the time milliwatts; instead let's put it in an AI server on a reticle-sized die burning 1000W+ nonstop. Someone please make that make sense!

Now I could accept financial arguments that the marginal benefit for a phone SoC in terms of smaller die and reduced power consumption simply doesn't justify the added cost. Meanwhile packing more transistors into an AI server's reticle sized die to increase the available performance, and offering just a few percent more computation per watt would easily repay the added cost on a TCO basis at the insane power draw and duty cycle of Nvidia AI servers.

But the cooling argument alone just makes no sense. I get that having wires on both sides on a BSPDN die traps the heat and makes it more difficult to dissipate, but that sure sounds like a MUCH bigger problem when you have 2-3 orders of magnitude more heat getting trapped in the AI servers and not so much of a problem for the passively cooled device in my hand that isn't even warm to the touch unless I'm really pushing it hard.
Besides the density improvement, BSPD allows a smaller V-droop and thus higher actual operating voltages. That equates to higher performance and rapidly higher heat output (V^2). The increased thermal resistance compounds the problem.

Mobile parts don't operate at max V, but they do run with minimal cooling (minimal size and cooling power), so they don't need/want the extra few mV available through (more expensive) BSPD.
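
A toy sketch of that V^2 effect (the capacitance, frequency, droop and thermal-resistance values below are made up for illustration):

```python
# Toy model: recovered V-droop spent on a higher effective voltage raises
# dynamic power quadratically (P ~ C*V^2*f), and the extra heat then sees a
# higher thermal resistance with backside metal. All numbers are illustrative.

def dynamic_power(c_eff: float, v: float, f: float) -> float:
    """Classic CMOS dynamic power estimate: P = C_eff * V^2 * f."""
    return c_eff * v**2 * f

C_EFF = 20e-9   # effective switched capacitance [F], made up
FREQ = 4e9      # clock frequency [Hz]
V_SET = 0.80    # regulator setpoint [V]

droop_frontside = 0.060   # assumed droop with a frontside PDN [V]
droop_backside = 0.020    # assumed smaller droop with BSPDN [V]

p_front = dynamic_power(C_EFF, V_SET - droop_frontside, FREQ)
p_back = dynamic_power(C_EFF, V_SET - droop_backside, FREQ)
print(f"{p_front:.1f} W at 0.74 V vs {p_back:.1f} W at 0.78 V "
      f"(+{100 * (p_back / p_front - 1):.1f}% power)")

# The extra watts then meet an (assumed) higher junction thermal resistance:
R_TH_FRONT, R_TH_BACK = 0.20, 0.25   # [K/W], made up
print(f"Delta-T: {p_front * R_TH_FRONT:.1f} K vs {p_back * R_TH_BACK:.1f} K")
```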
 

Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
Besides the density improvement, BSPD allows a smaller V-droop and thus higher actual operating voltages. That equates to higher performance and rapidly higher heat output (V^2). The increased thermal resistance compounds the problem.

Mobile parts don't operate at max V, but they do run with minimal cooling (minimal size and cooling power), so they don't need/want the extra few mV available through (more expensive) BSPD.

It's more about signal integrity than higher operating voltages. You could always push voltages higher without it but your signal won't be "clean" and you'll get more overshoot leading to other issues. In terms of power, the parasitic resistance improvements lead to less power loss in the bias network and overall better chip efficiency.
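
A minimal sketch of that parasitic-resistance point, treating the PDN as a single lumped series resistance (the resistance and current values are assumed, not measured):

```python
# Lumped PDN model: lower grid resistance means less I*R droop at the
# transistors and less I^2*R burned in the delivery network itself.
# All values are assumed for illustration.

def pdn_budget(i_load: float, r_pdn: float, v_supply: float):
    droop = i_load * r_pdn                     # voltage lost in the grid [V]
    grid_loss = i_load ** 2 * r_pdn            # power dissipated in the grid [W]
    delivered = (v_supply - droop) * i_load    # power reaching the logic [W]
    return droop, grid_loss, grid_loss / (delivered + grid_loss)

I_LOAD = 100.0        # core current [A]
V_SUPPLY = 0.8        # supply at the bumps [V]
R_FRONT = 0.5e-3      # assumed effective frontside PDN resistance [ohm]
R_BACK = 0.2e-3       # assumed resistance with thick backside metal [ohm]

for name, r in (("frontside", R_FRONT), ("backside ", R_BACK)):
    droop, loss, frac = pdn_budget(I_LOAD, r, V_SUPPLY)
    print(f"{name}: droop {droop * 1000:.0f} mV, "
          f"grid loss {loss:.1f} W ({frac:.1%} of input power)")
```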
 

Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
Simple reason: cooling and cost. Mobile devices don't have active cooling, and BSPDN adds cost. Intel's 18A process is unique in that they added BSPDN as a way of cutting cost; their non-BSPDN flow is expensive.

BSPDN tech adds cost...
 

Hitman928

Diamond Member
Apr 15, 2012
6,754
12,500
136
Intel actually implemented it in a way that it's not a cost adder, because it allowed them to relax the pitches.

Sounds like a marketing twist.

You could argue that BSPDN allows for a cheaper path to higher density than without it, but no one sane would try to match BSPDN density, all else being equal, without a BSPDN network. You would take a little less density and have a cheaper process.
 

Geddagod

Golden Member
Dec 28, 2021
1,681
1,720
136
Intel actually implemented it in a way that it's not a cost adder, because it allowed them to relax the pitches.
Idk if that's what this slide is saying, but I haven't seen it contextualized
[attached slide]
You could argue that BSPDN allows for a cheaper path to higher density than without it, but no one sane would try to match BSPDN density, all else being equal, without a BSPDN network. You would take a little less density and have a cheaper process.
TBH Idek if this is true...
 

511

Diamond Member
Jul 12, 2024
5,452
4,879
106
Sounds like a marketing twist.

You could argue that BSPDN allows for a cheaper path to higher density than without it, but no one sane would try to match BSPDN density, all else being equal, without a BSPDN network. You would take a little less density and have a cheaper process.
You are right; I can't believe the cost comparison is with BSPDN vs power via vs standard flow, and it looks like I mixed up M0-M2 with the entire process.
[attached image: cost comparison chart]
 

dangerman1337

Senior member
Sep 16, 2010
440
77
91
I know Intel's 18A has issues, but 18A-P apparently seems to have "solved" the BSPDN issues, since HPC and similar products will be using it, especially with Crescent Island.
 

adamge

Member
Aug 15, 2022
128
249
86

Doug S

Diamond Member
Feb 8, 2020
3,837
6,787
136
That BSPDN would be more cost effective in increasing density than shrinking the node in more traditional ways.

Actually shrinking physical transistors is running out of gas. They used to be single story buildings spread out over an area and over time we've been making them more vertical until with CFET they will become skyscrapers. It will be pretty hard to shrink any more at that point, at least not without abandoning the materials everyone has become comfortable with for many years.

The other knob we can turn to increase density is reducing the spacing, and the underlying wiring heavily influences that spacing. From what I've been led to understand (someone who knows this stuff at a deeper level, please correct me if I'm wrong), BSPDN is sort of a gift that keeps on giving: as we shrink transistors further, those wiring-induced spacing limitations dominate to a greater degree. It's sort of like FinFET in that respect: at first the benefit in terms of leakage reduction was moderate, but as processes shrank it became a bigger deal, to the point where you wouldn't have been able to live without it even if you could achieve the same densities with planar.

BSPDN is the same way. It is sort of an optional boost that for now you can choose to live without, but when we move on to CFET in the next decade it will be pretty much mandatory, as the scaling difference between a process with and without it will be much larger than it is currently. As a "bonus" (depending on how you look at it, lol), because per-wafer costs will continue to increase rapidly (especially if we're stuck with EUV), the contribution of BSPDN to the total cost will become smaller over time.
 
  • Like
Reactions: maddie

Geddagod

Golden Member
Dec 28, 2021
1,681
1,720
136
Actually shrinking physical transistors is running out of gas. They used to be single story buildings spread out over an area and over time we've been making them more vertical until with CFET they will become skyscrapers. It will be pretty hard to shrink any more at that point, at least not without abandoning the materials everyone has become comfortable with for many years.

The other knob we can turn to increase density is reducing the spacing, and the underlying wiring heavily influences that spacing. From what I've been led to understand (someone who knows this stuff at a deeper level, please correct me if I'm wrong), BSPDN is sort of a gift that keeps on giving: as we shrink transistors further, those wiring-induced spacing limitations dominate to a greater degree. It's sort of like FinFET in that respect: at first the benefit in terms of leakage reduction was moderate, but as processes shrank it became a bigger deal, to the point where you wouldn't have been able to live without it even if you could achieve the same densities with planar.

BSPDN is the same way. It is sort of an optional boost that for now you can choose to live without, but when we move on to CFET in the next decade it will be pretty much mandatory, as the scaling difference between a process with and without it will be much larger than it is currently. As a "bonus" (depending on how you look at it, lol), because per-wafer costs will continue to increase rapidly (especially if we're stuck with EUV), the contribution of BSPDN to the total cost will become smaller over time.
Maybe that inflection point will come later, but I don't think BSPDN being a cost-effective way to scale density will really be signaled until the rest of the foundries also start using BSPDN as the standard for their nodes.
It also seems like BSPDN so far is very design specific, at least going by what TSMC is claiming about A16 being a very HPC-specific node, with designs that have complex routing benefiting the most and mobile less so.
I am actually interested to see if TSMC is going to talk about A14 vs A16 cost, though, because that by itself could help answer whether BSPDN is a cheaper route to added density than just shrinking traditionally. A14 still seems to be a good bit better than A16 even for HPC-oriented products, but again, if TSMC says something along the lines of A14 not being too much more expensive to manufacture than A16...
 

511

Diamond Member
Jul 12, 2024
5,452
4,879
106
I am actually interested to see if TSMC is going to talk about A14 vs A16 cost, though, because that by itself could help answer whether BSPDN is a cheaper route to added density than just shrinking traditionally. A14 still seems to be a good bit better than A16 even for HPC-oriented products, but again, if TSMC says something along the lines of A14 not being too much more expensive to manufacture than A16...
Not happening... Also, the design rules are entirely different for A16 vs A14; A16's are akin to N2's.
 

Doug S

Diamond Member
Feb 8, 2020
3,837
6,787
136
Maybe that inflection point will come later, but I don't think BSPDN being a cost-effective way to scale density will really be signaled until the rest of the foundries also start using BSPDN as the standard for their nodes.
It also seems like BSPDN so far is very design specific, at least going by what TSMC is claiming about A16 being a very HPC-specific node, with designs that have complex routing benefiting the most and mobile less so.
I am actually interested to see if TSMC is going to talk about A14 vs A16 cost, though, because that by itself could help answer whether BSPDN is a cheaper route to added density than just shrinking traditionally. A14 still seems to be a good bit better than A16 even for HPC-oriented products, but again, if TSMC says something along the lines of A14 not being too much more expensive to manufacture than A16...


That's sort of my point in comparing it to FinFET, where the benefits are marginal/niche with the N2 node family, but increase with each process generation. If only HPC can get a good cost/benefit return from BSPDN in the N2 generation it makes sense to offer it as a two track proposition, where A16 is basically "N2P + BSPDN" and the customers who benefit less or perhaps not at all will use N2P with ordinary frontside power. As you say we don't know their plans yet for A14 and A10, but if the relative benefits of BSPDN increase with each generation then at some point (either A14 or A10) those two tracks merge back to one where everyone gets BSPDN.

Intel is going the opposite way of TSMC, making PowerVIA part of the standard/only process flow with 18A - though it is a "lesser" version of BSPDN and they don't deliver the full treatment until 14A. It isn't yet clear which direction Samsung will go.
 
  • Like
Reactions: Geddagod

regen1

Senior member
Aug 28, 2025
363
456
96
That's sort of my point in comparing it to FinFET, where the benefits are marginal/niche with the N2 node family, but increase with each process generation. If only HPC can get a good cost/benefit return from BSPDN in the N2 generation it makes sense to offer it as a two track proposition, where A16 is basically "N2P + BSPDN" and the customers who benefit less or perhaps not at all will use N2P with ordinary frontside power. As you say we don't know their plans yet for A14 and A10, but if the relative benefits of BSPDN increase with each generation then at some point (either A14 or A10) those two tracks merge back to one where everyone gets BSPDN.

Intel is going the opposite way of TSMC, making PowerVIA part of the standard/only process flow with 18A - though it is a "lesser" version of BSPDN and they don't deliver the full treatment until 14A. It isn't yet clear which direction Samsung will go.
Initially TSMC was going to introduce BSPDN at N2P, but their plans changed. Presently A14 will have two versions: the one without BSPDN comes out first, then later the one with BSPDN.
Samsung's last public roadmap (June 2024) had BSPDN (the "Backside Contact" version, also to be used in TSMC's A16 and Intel's 14A) for SF2Z (in 2027), though from the same presentation it seems SF1.4 might not have it(?), not sure.
Intel's situation is different from TSMC's and Samsung's: they have not specifically developed for smartphone chips (for a long time) and lack IP there as well. "PowerVIA" seems a good choice for 18A's timing (a trade-off of easier implementation for lesser scaling relative to "Backside Contact").
For mobile phone chips, the cost-benefit ratio and complexity of BSPDN are not worth it, at least initially, though they might eventually transition to it (or have to).
 
Last edited:
  • Like
Reactions: Geddagod

regen1

Senior member
Aug 28, 2025
363
456
96
Yeah, most likely most smartphone chip vendors using TSMC will transition from TSMC N2 variants to A14 (non-BSPDN) for their mobile phone chips.
A14 (non-BSPDN) should be quite fine (on paper) for other use cases as well.
 
Last edited:

regen1

Senior member
Aug 28, 2025
363
456
96
As far as I can tell, the only thing that changed was the name. A16 is N2P with BSPDN.
That's mostly correct, except that N2P is now an optimized N2 instead of what it was originally planned to be. Now A16 is more like N2P + BSPDN plus some optimizations.

No, the technique was changed as well; it was not SPR with N2.
Hmm, did TSMC ever publicly say anything about using something other than "Backside Contact" aka "Super Rail" on N2 (back then) or A16? There was speculation back in 2022 and so on as to what they would use, but I guess nothing concrete was known until they revealed the "Backside Contact" approach for A16 at their 2024 Tech Symposium.
 
  • Like
Reactions: 511