Discussion Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel) - [2020 - 2025]


DisEnchantment

Golden Member
Mar 3, 2017
1,779
6,798
136
TSMC's N7 EUV is now in its second year of production and N5 is contributing to revenue for TSMC this quarter. N3 is scheduled for 2022 and I believe they have a good chance to reach that target.

N7 performance is more or less understood.

This year and next year TSMC is mainly increasing capacity to meet demands.

For Samsung the nodes are basically the same from 7LPP to 4LPE; they just add incremental scaling boosters while the bulk of the tech stays the same.

Samsung is already shipping 7LPP and will ship 6LPP in H2. Hopefully they fix any remaining issues.
They have two more intermediate nodes before 3GAE: 5LPE will most likely ship next year, but 4LPE will probably land back to back with 3GAA, since 3GAA is a parallel development alongside the 7LPP enhancements.



Samsung's 3GAA will most likely go for HVM in 2022, a similar timeframe to TSMC's N3.
There are major differences in how the transistor will be fabricated due to GAA, but on density Samsung will for sure be behind N3.
There might be advantages for Samsung in power and performance, though, so it may be better suited for some applications.
For now we don't know how much of this is true, and we can only rely on the marketing material.

This year there should be a lot more wafers available, due to weak demand from smartphone vendors and increased capacity at TSMC and Samsung.
Lots of SoCs that don't need to be top end will be fabbed on N7 or 7LPP/6LPP instead of N5, so there will be plenty of wafers around.

Most current 7nm designs are far from the density advertised by TSMC and Samsung. There is still potential for density increases compared to currently shipping products.
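As a rough sanity check on that gap, here's a quick script comparing a few shipping 7nm-class chips against TSMC's commonly quoted peak density. The transistor counts, die areas, and the ~91 MTr/mm² figure are approximate public numbers, so treat the ratios as ballpark only:

```python
# Rough check of achieved vs. advertised 7nm-class density.
# Transistor counts / die areas are approximate public figures;
# the advertised peak density for TSMC N7 is commonly quoted near ~91 MTr/mm^2.
ADVERTISED_N7_MTR_PER_MM2 = 91.2  # assumption: commonly cited peak figure

chips = {
    # name: (transistors_in_billions, die_area_mm2)
    "Apple A13 (N7P)": (8.5, 98.5),
    "AMD Navi 10 (N7)": (10.3, 251.0),
    "AMD Zen 2 CCD (N7)": (3.9, 74.0),
}

for name, (btr, area) in chips.items():
    density = btr * 1000 / area  # MTr/mm^2
    frac = density / ADVERTISED_N7_MTR_PER_MM2
    print(f"{name}: {density:.0f} MTr/mm^2 ({frac:.0%} of advertised peak)")
```

A dense mobile SoC gets fairly close to the quoted peak, while GPUs and desktop CPU chiplets sit far below it, which is the headroom being described above.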
N5 is going to be the leading foundry node for the next couple of years.

For a lot of fabless companies out there, the processes and capacity available are quite good.

---------------------------------------------------------------------------------------------------------------------------------------------------


FEEL FREE TO CREATE A NEW THREAD FOR 2025+ OUTLOOK, I WILL LINK IT HERE
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
As long as companies do business with open source, in exchange they contribute back to open source.
However commercial the motive, a win-win relationship gets established to some extent.
The problem is free-riding on open source:
companies that tout their activities by open-sourcing only their own projects while contributing nothing back to the open source they consume.
But I've gotten a little off topic.
 

Thibsie

Golden Member
Apr 25, 2017
1,175
1,383
136
ARM can be licensed. What's an x86 license?

That's my point. If anyone wants to build ARM, they can. If anyone wants to build x86, they can't. There's a reason why ARM has won out.

In theory, an x86 license for a competitor is absolutely possible.
In practice, highly doubtful, as everybody knows. Remember VIA still has one, but what exactly it covers, I don't know.

I believe AMD and Intel cross-licensed just about everything, to the point that a potential customer might have access to (basic) x86 and easily every expired patent, but would be blocked from the modern extensions.

I guess an x86 competitor should in fact be possible; all those old patents expired a long time ago. That wouldn't stop Intel's legal team from taking action, though.
 

johnsonwax

Senior member
Jun 27, 2024
469
674
96
That's why I'm saying the architecture isn't what it's tied to.
ARM alone didn't move Moore's law forward.
There are multiple factors; Moore's law has moved forward through them.
Moore's law never moved forward by one factor;
we've made progress through a wide variety of causal relationships. You said earlier that the GPU is not a big factor, but the GPU is a fine contributor, regardless of whose GPU it is. It's even in the SoC.

It's not that one thing moved it alone; the industry itself is moving.

The question is what you want to make, not a matter of what you make;
what's good is having a chip there to make it with.
I think you're completely missing the context. Keep in mind, my whole comment was in relation to Intel from ~2006-2015.

From Intel's perspective, Apple comes to them, asks them to make ARM processors as a foundry, and Intel says no. Intel wants to protect x86, and they move x86 downmarket to compete. There is no flexibility within Intel on architecture - it's x86 or the highway. They reject volume in order to protect x86. Then 10nm falls apart, and x86's ability to contribute to Moore's law ends, because Intel's engineers didn't solve the problem but Samsung's and TSMC's engineers did, and Intel is trapped inside their own fabs.

From Apple's perspective, they don't want to be locked into x86, because their interest in additional compute is in low power and x86 sucks ass at that, so they go back to ARM. They ask Intel to fab, Intel says no, but Apple's not blocked - they go to Samsung, and later to TSMC. They chase whoever's engineers do solve the problem. And when Intel's x86 no longer meets their compute needs because it's garbage on laptops, Apple jumps to their own ARM designs, because ARM is available for that job. x86 never was, despite Apple asking.

Intel failed at all of the steps (not all at the same time) they needed to stay on top of Moore's law - they were oppositional to new architectures that the industry needed to meet its demand, and to operating a foundry business, and their engineers blew it on 10nm, which left their own designs trapped. In the process, their contribution to the industry declined - a LOT. And now they find themselves with the same revenue as they had in 2012 and a fab cost which has scaled somewhat with Moore's law.

I'm not saying ARM is uniquely instrumental to the success of Moore's law, but accessible architectures like ARM are, as it was for the entire mobile industry and for Apple to leave Intel. There are multiple factors behind why Moore's law has moved forward, but the one you can't put aside is the economics. You still gotta pay for the damn thing, and the cost consistently climbs.

The contributors have changed, but at the start of this story, ~2006, Intel WAS able to carry it pretty much solo, because x86 was so dominant and because Intel had sufficient revenue to meet the economic need. They no longer do, and haven't for a LONG time now. Their place in the economic landscape is quite limited because the business was so closed off. AMD spun off GF, which liberated AMD to chase the engineers; GF is likewise liberated to fab whatever comes along, just like TSMC. Intel is the only player that wasn't doing that - they were closed on architecture, they were closed to customers, and they trapped themselves.

If they really understood that the underlying economic condition was key to Moore's law functioning, they should have realized ages ago that they were screwed by staying that course: the overall industry could keep up via all these market elements, but Intel couldn't, and they really only allowed themselves one avenue to keep up, an ever-growing PC industry. So even if they hadn't screwed up 10nm, they couldn't have kept up. No matter how hard the engineers worked, they were going to lose, because the board, the CEOs, etc. either didn't understand their own law, or they chose to ignore it.

Who understands Moore's law best? Apple (though it seems like pretty much everyone else is on the same page now). They detached themselves from specific architectures. For all we know Apple is going to switch to RISC-V next year, and not a soul here would question that they could have their entire lineup running on RISC-V, because Apple has made so many architecture changes and abstracted their software from the architecture to such a degree that they make it look easy. So if ARM doesn't allow for the next form of compute, they'll just pick the one that does. And they're doing their own GPU and AI silicon for precisely that reason. They shipped some of the first NPUs in the industry - they can create the compute form that they need. And they aren't beholden to a single foundry; they can switch back to Samsung or Intel whenever they want. So they have control of the form the compute will come in, through their own design team and detachment from specific architectures, and they have control of which foundry to choose. And they have a greater ability to solve the economic problem than anyone else. In theory, there's no reason why Apple shouldn't be either the party moving the law forward or right in that leading group. They have no structural deficiencies, though of course they can make mistakes and blow that position - and some day they probably will.
 

johnsonwax

Senior member
Jun 27, 2024
469
674
96
In theory, an x86 license for a competitor is absolutely possible.
In practice, highly doubtful, as everybody knows. Remember VIA still has one, but what exactly it covers, I don't know.

I believe AMD and Intel cross-licensed just about everything, to the point that a potential customer might have access to (basic) x86 and easily every expired patent, but would be blocked from the modern extensions.

I guess an x86 competitor should in fact be possible; all those old patents expired a long time ago. That wouldn't stop Intel's legal team from taking action, though.
Exactly, and why jump through those hoops when ARM is sitting right there with this lovely menu of cores you can choose from and an architectural license you can use to design your own. Why fight through the thicket of the last 30 years of legal infighting to keep x86 as proprietary as possible when ARM will put their arm around your shoulder, hand you a latte, and say 'just sign right here'.

It doesn't matter if x86 can be licensed in theory, when in practice we all know it's functionally impossible. It doesn't matter that ARM is proprietary if anyone with a checkbook can get in on it. And yes, there will need to be a space free of ARM's license as well - RISC-V, etc. There's a lot of stuff already rattling around, just nothing that throws off enough money to be a big factor here. But that may change.

But why even create an x86 competitor? Apple has shown ARM can outperform it. Linux has run on ARM forever. Microsoft has already gotten Qualcomm to give them an ARM/Windows avenue. Like, why even deal with Intel/AMD's garbage? Just run past them. That's already happening in the datacenter. Compute has been shifting into GPUs and other asymmetrical compute for a while; at some point swapping out the x86 element will be pretty easy. x86 is like that Star Trek episode with the black/white guys fighting each other forever on an empty planet while the Enterprise just flies away to do their own thing.
 

jpiniero

Lifer
Oct 1, 2010
17,146
7,533
136
And when Intel's x86 no longer meets their compute needs because it's garbage on laptops, Apple jumps to their own ARM designs, because ARM is available for that job. x86 never was, despite Apple asking.

Apple's kind of always been a proprietary company... and using their own processors in their PCs was the logical next step from the custom phone SoC development. The 10 nm disaster made the decision easy but they were always dumping Intel.
 

poke01

Diamond Member
Mar 8, 2022
4,802
6,128
106
It's not just lip service to earn goodwill.
In fact, open source systems are widely used in the server and embedded markets, for example.
To get users to choose their own products, companies make those products compatible with open source; they can also contribute to the project itself, beyond the scope of their own products.
As long as you are doing business with open source, you are obliged to contribute back to it.
And in fact, thanks to the resources of large companies, the developers get relief and can get past things they couldn't handle on their own.
The developer environment is also better for it, so it's hard to call it a bad thing.
That's corporate contribution to open source.

It's the same everywhere, regardless of Intel or AMD.

Even Microsoft, which was conspicuously hostile to open source in the past, has been contributing since around 2010,
and developers' attitudes toward it have softened to some extent.
Again, these corps support open source for their own benefit.
Exactly, and why jump through those hoops when ARM is sitting right there with this lovely menu of cores you can choose from and an architectural license you can use to design your own. Why fight through the thicket of the last 30 years of legal infighting to keep x86 as proprietary as possible when ARM will put their arm around your shoulder, hand you a latte, and say 'just sign right here'.
Even better than ARM, if you are a newcomer: RISC-V.
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
Again, these corps support open source for their own benefit.
Even better than ARM, if you are a newcomer: RISC-V.

You see… do you understand what I'm saying?
As I said earlier, there is no problem with a company being involved in the open source community.
Even if you use open source software to run a business, that's fine as long as you pay the price (your contribution back to the open source software).
Corporate players are not necessarily enemies;
the idea that companies are the enemy of open source is an old way of thinking.

Anyone who can contribute, in any form, is welcome.

For example, even though the RISC-V ISA is open, you can use it to do business.
There is no obligation or mandate to publish anything built on RISC-V as open source;
there is no problem with keeping it proprietary.
You are free to use it however you like.

If you want to continue this, let's take it to the software thread.
 
  • Like
Reactions: Tlh97 and poke01

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
Exactly, and why jump through those hoops when ARM is sitting right there with this lovely menu of cores you can choose from and an architectural license you can use to design your own. Why fight through the thicket of the last 30 years of legal infighting to keep x86 as proprietary as possible when ARM will put their arm around your shoulder, hand you a latte, and say 'just sign right here'.

It doesn't matter if x86 can be licensed in theory, when in practice we all know it's functionally impossible. It doesn't matter that ARM is proprietary if anyone with a checkbook can get in on it. And yes, there will need to be a space free of ARM's license as well - RISC-V, etc. There's a lot of stuff already rattling around, just nothing that throws off enough money to be a big factor here. But that may change.

But why even create an x86 competitor? Apple has shown ARM can outperform it. Linux has run on ARM forever. Microsoft has already gotten Qualcomm to give them an ARM/Windows avenue. Like, why even deal with Intel/AMD's garbage? Just run past them. That's already happening in the datacenter. Compute has been shifting into GPUs and other asymmetrical compute for a while; at some point swapping out the x86 element will be pretty easy. x86 is like that Star Trek episode with the black/white guys fighting each other forever on an empty planet while the Enterprise just flies away to do their own thing.

Basically, you can't get an ARM ISA license easily; the number of companies that hold one is quite small.

To be honest, if ARM handed out ISA licenses freely...
the ecosystem might expand…
but for an IP business like ARM, that's not a good way to earn money.
The contract structure for an ISA license is also different from that for core IP.
 

johnsonwax

Senior member
Jun 27, 2024
469
674
96
Apple's kind of always been a proprietary company... and using their own processors in their PCs was the logical next step from the custom phone SoC development. The 10 nm disaster made the decision easy but they were always dumping Intel.
Sort of. Apple is pretty predictable about where they will do something proprietary and where they won't. They go proprietary where they can get a strategic IP advantage that they can hold for a long period of time. If it only confers a short-term advantage, they typically go commodity and use prepayment agreements to lock up the market for 2-3 years, after which everyone catches up and they rotate onto the next thing. They go open source/commodity when there are network effects they can benefit from. Note that Apple Silicon is a little of each - ARM compatibility means they get the benefit of the open source community maintaining code that they rely on, while the proprietary GPU, etc. give them control over what direction they take their platforms.

I think there's something poetic in the fact that Apple co-founded ARM with this structure and then, decades later, relied on it to break out of the trajectory the industry was trying to remain on.
 

oak8292

Member
Sep 14, 2016
199
215
116
2008 calls and it takes 17 years for people to pick up.
Here is TSMC talking about AI on the leading node, A16, because of power issues.

“Jeff Su

Okay. His question -- okay, maybe let me rephrase it. I think I understand better. His question is really about AI adoption of the leading-edge node, the N node. We see smartphone, we see HPC. This question very specifically, how do we see the AI adoption of the most leading node for TSMC. He observed in the past, it has generally been one node behind. So how do we see that going forward with things such as A16?

C. C. Wei

Well, Mehdi, you are right. Usually, the HPC's customers are always one step behind using N+1 or N+2 technologies. Now because of AI demand is so strong, that's one thing. But the most important thing is we need some kind of performance, but the power consumption is very, very important. And when we talk about A16, we have another power efficiency improvement close to 20%. That's a big value for all the AI data center applications. So that help my customer moving faster because of -- every time when we talk about the AI data center, if you notice that the first thing they talk about is power supply, electricity, right? So they did not tell you say the power efficiency is very important, but they tell you that we have to build a very big electricity power plant to support the AI data centers. So that tells you how important it is. And TSMC is the technology, by the way. A16 is a further improvement of the N2 node. So it's not a surprise for TSMC to expect for those people in AI data centers industry, they want to use in A16.”

 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
Here is TSMC talking about AI on the leading node, A16, because of power issues.

“Jeff Su

Okay. His question -- okay, maybe let me rephrase it. I think I understand better. His question is really about AI adoption of the leading-edge node, the N node. We see smartphone, we see HPC. This question very specifically, how do we see the AI adoption of the most leading node for TSMC. He observed in the past, it has generally been one node behind. So how do we see that going forward with things such as A16?

C. C. Wei

Well, Mehdi, you are right. Usually, the HPC's customers are always one step behind using N+1 or N+2 technologies. Now because of AI demand is so strong, that's one thing. But the most important thing is we need some kind of performance, but the power consumption is very, very important. And when we talk about A16, we have another power efficiency improvement close to 20%. That's a big value for all the AI data center applications. So that help my customer moving faster because of -- every time when we talk about the AI data center, if you notice that the first thing they talk about is power supply, electricity, right? So they did not tell you say the power efficiency is very important, but they tell you that we have to build a very big electricity power plant to support the AI data centers. So that tells you how important it is. And TSMC is the technology, by the way. A16 is a further improvement of the N2 node. So it's not a surprise for TSMC to expect for those people in AI data centers industry, they want to use in A16.”


Well, when the latest node is first released, production volume is limited.

HPC chips tend to have relatively large dies, which inevitably means fewer chips per wafer, so there are few customers who jump in immediately after a new process is released.

Basically, A16 is not a brand-new process node:
the name has changed, but it is classified within the N2 process family.
It's essentially N2P + BSPDN.

C. C. Wei argues otherwise, but I don't think HPC customers will jump to the leading node right away.
At most, the timeline until the transition will shrink. Production volume is also important.
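To put numbers on the die-size point, here's the classic gross-dies-per-wafer approximation. The die areas are purely illustrative, and the formula ignores scribe lines and yield:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic gross-die estimate: wafer area divided by die area,
    minus an edge-loss correction term. Ignores scribe lines and yield."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# A big ~800 mm^2 HPC die vs. a ~100 mm^2 phone SoC (sizes are illustrative):
for area in (800, 100):
    print(area, "mm^2 ->", gross_dies_per_wafer(area), "gross dies")
```

An 800 mm² die gets only a few dozen candidates per 300mm wafer versus several hundred for a phone-sized SoC, before yield is even considered, which is why scarce early wafer supply hits HPC customers hardest.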
 

Doug S

Diamond Member
Feb 8, 2020
3,812
6,747
136
Apple's kind of always been a proprietary company... and using their own processors in their PCs was the logical next step from the custom phone SoC development. The 10 nm disaster made the decision easy but they were always dumping Intel.

There's a difference between being a proprietary company (which I interpret as a company that specifically seeks solutions because they are proprietary) and a company that's interested in control - control of the features and specs of all the components that go into their products to meet their standards. Apple is the latter, not the former.

For example, Apple didn't design their own cores because they wanted proprietary cores others couldn't buy from ARM. They did it because ARM's cores were not that good, and it wasn't until recently that ARM finally decided to invest the resources into actually trying to compete. Though I think that may have been more due to fear of being outclassed by Qualcomm's cores than anything to do with Apple, since they'd been outclassed by Apple's cores since A7 and didn't seem overly bothered by that.
 

Io Magnesso

Senior member
Jun 12, 2025
578
165
71
Rapidus seems to have prototyped its 2nm process and confirmed that it works.
 

oak8292

Member
Sep 14, 2016
199
215
116
There's a difference between being a proprietary company (which I interpret as a company that specifically seeks solutions because they are proprietary) and a company that's interested in control - control of the features and specs of all the components that go into their products to meet their standards. Apple is the latter, not the former.

For example, Apple didn't design their own cores because they wanted proprietary cores others couldn't buy from ARM. They did it because ARM's cores were not that good, and it wasn't until recently that ARM finally decided to invest the resources into actually trying to compete. Though I think that may have been more due to fear of being outclassed by Qualcomm's cores than anything to do with Apple, since they'd been outclassed by Apple's cores since A7 and didn't seem overly bothered by that.
I think this has more to do with economics and volume, or the amount of engineering you think you can afford. Apple was selling 200 million+ processors into ‘hero’ phones, and an architecture license might have been cheaper than per-core royalties.

In the fourth quarter of 2024, about 7 billion ARM-based processors were sold, generating about $900 million of revenue for ARM. That is about $0.13 per processor (not per core). ARM had less than a billion dollars a year of revenue back when Apple started designing cores. Apple could easily afford the engineering and did not need to share that IP with its competition.

Neither Samsung nor Qualcomm has the volume of ‘hero’ phones that Apple has, and their processors are typically cheaper, with lower royalties. Both started down the path of architectural design but didn't have the same economics to make it work. The IP for ARM-designed cores is really inexpensive; if you have access to a foundry and even minimal volume, it can pay to use ARM IP.

I doubt RISC-V is actually much cheaper. Design houses like SiFive still need to pay for engineering and need to develop a customer base. About 50% of ARM's revenue is from designs over 10 years old. In-house design needs volume like Apple's.
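Quick back-of-envelope for that per-unit royalty figure, using the two approximate numbers quoted above:

```python
# Back-of-envelope check of the per-unit royalty figure:
# ~7 billion ARM-based chips shipped in Q4 2024 vs. ~$900M of ARM royalty revenue.
units = 7.0e9
royalty_revenue = 900e6  # USD, approximate

per_unit = royalty_revenue / units
print(f"~${per_unit:.3f} per processor")  # roughly $0.13
```

That ~13 cents per chip is what makes ARM's off-the-shelf IP so hard to beat on cost unless, like Apple, you ship enormous volumes of high-margin parts.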
 
  • Like
Reactions: Tlh97
Jul 27, 2020
28,173
19,210
146
Rapidus seems to have prototyped its 2nm process and confirmed that it works.
Hey, you can get a job there and fill us in on everything that goes on there :p
 

oak8292

Member
Sep 14, 2016
199
215
116
Well, when the latest node is first released, production volume is limited.

HPC chips tend to have relatively large dies, which inevitably means fewer chips per wafer, so there are few customers who jump in immediately after a new process is released.

Basically, A16 is not a brand-new process node:
the name has changed, but it is classified within the N2 process family.
It's essentially N2P + BSPDN.

C. C. Wei argues otherwise, but I don't think HPC customers will jump to the leading node right away.
At most, the timeline until the transition will shrink. Production volume is also important.
I agree that A16 is an N2 variant, and it will be interesting to see who uses it. The next node, A14, will also have a version without BSPDN. Who are the BSPDN processes designed for? They are more expensive and probably lower volume. The question makes it sound like maybe it is for AI. Nvidia's margins would not be hit too hard by a more expensive wafer if it yields for them.
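On the yield point, the standard Poisson defect model shows why reticle-class dies are so much more sensitive to defect density. The D0 value below is purely illustrative, not a real process number:

```python
import math

def poisson_yield(die_area_cm2: float, d0_defects_per_cm2: float) -> float:
    """Standard Poisson yield model: Y = exp(-A * D0).
    D0 here is illustrative, not an actual foundry figure."""
    return math.exp(-die_area_cm2 * d0_defects_per_cm2)

# Why big dies are yield-sensitive: compare a ~8 cm^2 reticle-class die
# with a ~1 cm^2 die at an assumed D0 of 0.1 defects/cm^2.
for area in (8.0, 1.0):
    print(f"{area} cm^2 die: {poisson_yield(area, 0.1):.0%} yield")
```

At the same defect density, the small die yields around 90% while the reticle-class die yields under half, so a pricier wafer only pencils out for customers (like Nvidia) whose margins absorb the loss.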
 

511

Diamond Member
Jul 12, 2024
5,394
4,816
106
In theory, an x86 license for a competitor is absolutely possible.
In practice, highly doubtful, as everybody knows. Remember VIA still has one, but what exactly it covers, I don't know.

I believe AMD and Intel cross-licensed just about everything, to the point that a potential customer might have access to (basic) x86 and easily every expired patent, but would be blocked from the modern extensions.

I guess an x86 competitor should in fact be possible; all those old patents expired a long time ago. That wouldn't stop Intel's legal team from taking action, though.
IIRC, x86 is copyrighted, not patented, and copyright lasts for the author's life plus 50 years.
 

511

Diamond Member
Jul 12, 2024
5,394
4,816
106
Rapidus seems to have prototyped its 2nm process and confirmed that it works.
Sorry to say, but it is based on IBM's 2nm. When was the last time you saw IBM make a design on their own process? They make all of their own processors at external fabs, and an IBM process has never been in HVM.
 

511

Diamond Member
Jul 12, 2024
5,394
4,816
106
I agree that A16 is an N2 variant, and it will be interesting to see who uses it. The next node, A14, will also have a version without BSPDN. Who are the BSPDN processes designed for? They are more expensive and probably lower volume. The question makes it sound like maybe it is for AI. Nvidia's margins would not be hit too hard by a more expensive wafer if it yields for them.
I think it depends on how BSPDN is being done. We should find out soon with 18A Panther Lake. With the hotspots and all that stuff, I can't comment unless we see real examples in the wild.
 
  • Like
Reactions: Io Magnesso