Discussion Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel) - [2020 - 2025]

Page 240 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

DisEnchantment

Golden Member
Mar 3, 2017
TSMC's N7 EUV is now in its second year of production and N5 is contributing to revenue for TSMC this quarter. N3 is scheduled for 2022 and I believe they have a good chance to reach that target.

N7 performance is more or less understood.

This year and next year TSMC is mainly increasing capacity to meet demands.

For Samsung, the nodes from 7LPP to 4LPE are basically the same; they just add incremental scaling boosters while the bulk of the tech stays the same.

Samsung is already shipping 7LPP and will ship 6LPP in H2; hopefully they fix any remaining issues.
They have two more intermediate nodes in between before going to 3GAE. Most likely 5LPE will ship next year, but 4LPE will probably arrive back to back with 3GAA, since 3GAA is a parallel development alongside the 7LPP enhancements.



Samsung's 3GAA will most likely go for HVM in 2022, a similar timeframe to TSMC's N3.
There are major differences in how the transistors will be fabricated due to GAA, but in density Samsung will surely be behind N3.
There might be advantages for Samsung with regard to power and performance, though, so it may be better suited for some applications.
For now we don't know how much of this is true, and we can only rely on the marketing material.

This year there should be a lot more wafers available due to the lack of demand from smartphone vendors and the increased capacity from TSMC and Samsung.
Lots of SoCs which don't need to be top end will be fabbed on N7 or 7LPP/6LPP instead of N5, so there will be plenty of wafers around.

Most of the current 7nm designs are far from the advertised density from TSMC and Samsung. There is still potential for density increases compared to currently shipping products.
N5 is going to be the leading foundry node for the next couple of years.
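To put rough numbers on that gap, here is a minimal sketch comparing a design's achieved logic density against a node's advertised peak. All figures (transistor count, die area, peak density) are illustrative assumptions, not vendor data.

```python
def density_mtr_per_mm2(transistors_billions: float, die_area_mm2: float) -> float:
    """Achieved density in millions of transistors per mm^2."""
    return transistors_billions * 1e3 / die_area_mm2

# Hypothetical 7nm-class SoC: 8B transistors on a 150 mm^2 die (assumed numbers).
achieved = density_mtr_per_mm2(8.0, 150.0)   # ~53 MTr/mm^2

# Assumed advertised peak logic density for the node, for illustration only.
advertised_peak = 95.0  # MTr/mm^2

print(f"achieved {achieved:.1f} MTr/mm^2 = {achieved / advertised_peak:.0%} of peak")
```

Real chips mix dense logic with SRAM, analog, and I/O, which is one reason shipping products land well below the peak marketing number.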

For a lot of fabless companies out there, the processes and capacity available are quite good.

---------------------------------------------------------------------------------------------------------------------------------------------------


FEEL FREE TO CREATE A NEW THREAD FOR 2025+ OUTLOOK, I WILL LINK IT HERE
 

Thunder 57

Diamond Member
Aug 19, 2007
There seems to be a lot of people with the sentiment that all China can do is copy. I remember the same being said about Japan when I was a kid in the 70s. America ignored their engineering progress, at the eventual cost of the then-dominance of US manufacturing.

China has way more people than Japan, way more engineers, so it'll be a multiple of that scenario. They are doing plenty of original research; they "copy" because, from the standpoint of individual products, that can be the fastest way to get to market. But they are fully capable of developing what they are denied. The strategy of denying them the best tech is penny wise and pound foolish: it helps the West in the short run by protecting our companies, but in the long run (say a decade or so) they're likely going to end up with technology that's superior and much cheaper per wafer than EUV. How are the Apples and Nvidias going to cope when Chinese companies are able to ship chips that are roughly on par (we'll assume Apple/Nvidia still beat them in design, but if the process they have access to is behind, it's basically a wash) for a fraction of the price?

That's a pretty well earned reputation.

They've had plenty of people and engineers for a while now and haven't really done anything too surprising. Their home-grown GPU is a turd, and "their" x86 CPU is crap. I guess their fabs have made some progress. It would be foolish to discount them, but I don't see them leading the world in the semiconductor industry anytime soon.

And for a country that relies on stealing IP or artificially selling products on the cheap to attempt to kill off the competition (we're seeing this with memory), I doubt they would be so generous as to allow the West to use their advanced fabs (should they come to exist) to compete with them. China is all in on becoming the world's only superpower. Don't forget that.
 

desrever

Senior member
Nov 6, 2021
If given the opportunity. I don’t think they’ll get that opportunity. You basically have a dichotomy of a lot of Chinese engineers for Chinese companies against a lot of various ethnicities (including Chinese folks) in Western companies. Just because China has 1.3 billion people doesn’t guarantee Xiaomi wins in the end. Combine the US, Europe, Japan, South Korea, India, and there are more people to choose from.

And that Xiaomi chip is not even used in their newly released flagship phones. It's been surpassed by the Elite Gen 5. Maybe when the Xiaomi 17 Ultra comes out they'll use the next-gen XRing and it may be competitive again, but even that is still half a year behind. It's a good chip, I'll give them that, especially for a first product release, but it doesn't guarantee the next version has a lot of low-hanging fruit left to gobble up to match Apple, for example.
It's true that China might not be able to compete with everyone else combined, but "the West" is not a single entity. The countries you listed all have varied opinions on doing business with China, and if forced, they might not all align with American interests. Also, the "best of the best" in the West are not doing semiconductor design; very few actually are, since they are all lured either to finance or to big tech with minimal overlap with hardware engineering. China actually has way more talent graduating than all of them combined at the current time.

The Xiaomi chip is pretty much their first real chip, which is already pretty impressive. It's not unlikely they will be able to improve significantly, and they are also just one company in China. There is competition brewing in every corner imo.
 

poke01

Diamond Member
Mar 8, 2022
There is competition brewing in every corner imo
China likes competition when it’s behind and when it’s not behind it will kill any competitor that tries to catch up.

The U.S. also does this, but I'd rather the US do it since, you know, I can at least talk crap about the president
 

marees

Platinum Member
Apr 28, 2024
The A16 process, scheduled for mass production in 2027, is TSMC’s first 2-nanometer node to incorporate backside power delivery network (BSPDN) technology — one of the most advanced innovations in semiconductor manufacturing.

BSPDN is a groundbreaking process technology with no commercial precedent. Traditionally, both power and signal interconnects are placed on the front side of a chip. However, as circuit dimensions shrink, interference increases, complicating design and fabrication. BSPDN flips this structure by routing the power network on the backside and the signal network on the front, thereby alleviating interconnect bottlenecks and improving power efficiency.

Samsung Electronics and Intel are also preparing BSPDN adoption, and industry consensus expects both companies to implement it at the 2-nanometer node as well.

NVIDIA’s GPU roadmap follows the sequence Hopper → Blackwell → Rubin → Feynman. The Blackwell series is currently in shipment, with Rubin expected next year. The Feynman GPU, planned for release in 2028, is believed to be the first to use TSMC’s A16 process. Although the product launch is slated for 2028, production using A16 will likely begin in the second half of 2027, allowing about a year for ramp-up to improve yield and productivity.
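The interconnect-bottleneck point above can be illustrated with a toy IR-drop estimate: moving power to dedicated thick backside metal lowers the effective resistance of the delivery path. All resistance and current values below are invented for illustration, not real node data.

```python
def ir_drop_mv(current_a: float, rail_resistance_mohm: float) -> float:
    """IR drop in millivolts across a power rail (V = I * R; A * mOhm = mV)."""
    return current_a * rail_resistance_mohm

core_current = 5.0  # amps drawn by a hypothetical core cluster

# Front-side delivery: power shares thin upper-metal layers with signal
# routing, so the effective path resistance is higher (assumed value).
frontside_drop = ir_drop_mv(core_current, 12.0)

# Backside delivery: dedicated thick backside metal and a shorter path
# to the transistors (assumed lower effective resistance).
backside_drop = ir_drop_mv(core_current, 4.0)

print(f"front-side: {frontside_drop:.0f} mV, backside: {backside_drop:.0f} mV")
```

A smaller IR drop means less supply voltage guard-band is wasted, which is where the claimed power-efficiency gain comes from.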




 

Joe NYC

Diamond Member
Jun 26, 2021
There seems to be a lot of people with the sentiment that all China can do is copy. I remember the same being said about Japan when I was a kid in the 70s. America ignored their engineering progress, at the eventual cost of the then-dominance of US manufacturing.

China has way more people than Japan, way more engineers, so it'll be a multiple of that scenario. They are doing plenty of original research; they "copy" because, from the standpoint of individual products, that can be the fastest way to get to market. But they are fully capable of developing what they are denied. The strategy of denying them the best tech is penny wise and pound foolish: it helps the West in the short run by protecting our companies, but in the long run (say a decade or so) they're likely going to end up with technology that's superior and much cheaper per wafer than EUV. How are the Apples and Nvidias going to cope when Chinese companies are able to ship chips that are roughly on par (we'll assume Apple/Nvidia still beat them in design, but if the process they have access to is behind, it's basically a wash) for a fraction of the price?

Good points. An additional factor is that China has a domestic market that Japan did not have. The size of the Chinese market allows Chinese home-grown companies to continue to operate and catch up independently, even after being cut off from the West.

Being cut off from the West actually adds to the profits of these home grown companies, so that they can hire more engineers and catch up faster.

The two most glaring examples of the West cutting off China, only to see it backfire, are:
1. Datacenter GPUs, where China is catching up rapidly, at the cost only of power efficiency (which is not a problem in China). The sanctions seem to have completely collapsed Western companies' dominance in the Chinese market, which is a sizeable market.

David Sacks of the Trump administration has finally convinced enough people (the so-called "hawks" in the administration) that losing the Chinese market has been a disastrous course of action. But the late reversal, partial as it is, is too little too late. The damage is done; the market is lost.

2. Fab equipment, where the lead is greater, but there is already an ecosystem of home-grown companies cutting into that lead. Far more money is flowing into these companies as a result of the sanctions.
 

Doug S

Diamond Member
Feb 8, 2020
Wasn't there just a massive nationwide protest against the prez... in the US? Seems pretty safe to me.

Trump and others in his administration have been talking about designating them as domestic terrorists. They are no doubt unhappy at how peaceful the recent "No Kings" protests were, but that's easily fixed next time by mixing in a few agents provocateurs to commit a few token acts of violence, give the authorities an excuse to bring out the tear gas and rubber bullets, and run the videos on Fox News nonstop for a few weeks.
 

Doug S

Diamond Member
Feb 8, 2020
Yeahh that’s the problem putting all of one’s eggs into a single basket. Monopolies suck.

They definitely do have the potential funding for such an endeavor. They would be foolish if they weren't planning for something past Hyper-NA; it would be rank incompetence if they didn't. But I do get the incentive to milk EUV and its iterations for as long as they can.

I'm skeptical hyper NA ever sees the light of day. The economics of high NA suck: TSMC sounds like it's going to push high NA off in A10, stick with multipatterning, and not introduce it until A7. Who knows, maybe it gets pushed back even further? The problem is that if you pay 2x as much for a high-NA machine that cuts the number of patterning steps in half, you aren't saving any money. You only switch when the problems you have from multipatterning carry enough cost of their own.
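The "2x tool cost, half the passes" argument works out to a wash in a back-of-the-envelope model. Tool prices and lifetime throughput below are round illustrative assumptions, not ASML figures.

```python
def litho_cost_per_layer(tool_cost: float, passes_per_layer: int,
                         passes_amortized: float) -> float:
    """Tool-cost share per wafer layer: each exposure pass consumes tool time,
    so the tool's price is amortized over its lifetime pass count."""
    cost_per_pass = tool_cost / passes_amortized
    return cost_per_pass * passes_per_layer

LIFETIME_PASSES = 1e6  # assumed total exposure passes over the tool's life

# Low-NA EUV with double patterning (2 passes per layer) vs a high-NA tool
# at twice the price doing the same layer in a single exposure.
low_na = litho_cost_per_layer(tool_cost=200e6, passes_per_layer=2,
                              passes_amortized=LIFETIME_PASSES)
high_na = litho_cost_per_layer(tool_cost=400e6, passes_per_layer=1,
                               passes_amortized=LIFETIME_PASSES)

print(f"low-NA double patterning: ${low_na:.0f}/layer, "
      f"high-NA single exposure: ${high_na:.0f}/layer")
```

The crossover only comes from costs this model leaves out: extra masks, etch/deposition steps, cycle time, and yield loss from overlay errors in multipatterning.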

But let's say TSMC has no choice but to insert high NA in A7. That would keep them going for several generations, and we're probably out to nearly 2040 before high NA runs out of gas and they'd be forced to consider hyper NA. And costs will have been increasing with every node, not just from all the multipatterning but all the etch/deposition steps as we move to CFET in the 2030s. Companies with thinner profit margins are going to be forced to jump off the treadmill, and with fewer wafers to amortize their fixed costs TSMC will have to charge the remaining customers more so even deep pocketed customers like Apple may cry uncle at some point.

So no, we never see hyper NA in production use. They have to get off the EUV treadmill long before we get there. And if WE don't, China will.
 

Hitman928

Diamond Member
Apr 15, 2012
The A16 process, scheduled for mass production in 2027, is TSMC’s first 2-nanometer node to incorporate backside power delivery network (BSPDN) technology — one of the most advanced innovations in semiconductor manufacturing.

BSPDN is a groundbreaking process technology with no commercial precedent. Traditionally, both power and signal interconnects are placed on the front side of a chip. However, as circuit dimensions shrink, interference increases, complicating design and fabrication. BSPDN flips this structure by routing the power network on the backside and the signal network on the front, thereby alleviating interconnect bottlenecks and improving power efficiency.

Samsung Electronics and Intel are also preparing BSPDN adoption, and industry consensus expects both companies to implement it at the 2-nanometer node as well.

NVIDIA’s GPU roadmap follows the sequence Hopper → Blackwell → Rubin → Feynman. The Blackwell series is currently in shipment, with Rubin expected next year. The Feynman GPU, planned for release in 2028, is believed to be the first to use TSMC’s A16 process. Although the product launch is slated for 2028, production using A16 will likely begin in the second half of 2027, allowing about a year for ramp-up to improve yield and productivity.





This is full of inaccuracies. For instance, Blackwell isn’t on a 3 nm node, it’s on a 4 nm node. Also, Intel isn’t looking to bring BSPDN to a 2 nm node, they are releasing 18a with BSPDN before TSMC will release A16. I can also guarantee NV isn’t the first and only customer to engage TSMC about A16.
 

dangerman1337

Senior member
Sep 16, 2010
This is full of inaccuracies. For instance, Blackwell isn’t on a 3 nm node, it’s on a 4 nm node. Also, Intel isn’t looking to bring BSPDN to a 2 nm node, they are releasing 18a with BSPDN before TSMC will release A16. I can also guarantee NV isn’t the first and only customer to engage TSMC about A16.
I wouldn't be surprised if AMD has been engaging with TSMC but hasn't announced A16 products yet. If Zen 6 CCDs are on N2P as rumoured, then I wouldn't be surprised if Zen 7, coming around Razor Lake, goes for A16. Zen CCDs are way smaller than HPC Nvidia products.
 

marees

Platinum Member
Apr 28, 2024
I wouldn't be surprised if AMD has been engaging with TSMC but hasn't announced A16 products yet. If Zen 6 CCDs are on N2P as rumoured, then I wouldn't be surprised if Zen 7, coming around Razor Lake, goes for A16. Zen CCDs are way smaller than HPC Nvidia products.
What is the expected timeline for TSMC's A16 & A14?
 

Doug S

Diamond Member
Feb 8, 2020
Well, BSPDN is for HPC customers, not for mobile

I'm still not clear on the reasons why. Supposedly it is a lot more difficult for the die to shed heat, so, yeah, let's not put it in a phone where it dissipates a single-digit number of watts (and most of the time milliwatts), and instead let's put it in an AI server on a reticle-sized die burning 1000W+ nonstop. Someone please make that make sense!

Now I could accept financial arguments that the marginal benefit for a phone SoC in terms of smaller die and reduced power consumption simply doesn't justify the added cost. Meanwhile packing more transistors into an AI server's reticle sized die to increase the available performance, and offering just a few percent more computation per watt would easily repay the added cost on a TCO basis at the insane power draw and duty cycle of Nvidia AI servers.

But the cooling argument alone just makes no sense. I get that having wires on both sides on a BSPDN die traps the heat and makes it more difficult to dissipate, but that sure sounds like a MUCH bigger problem when you have 2-3 orders of magnitude more heat getting trapped in the AI servers and not so much of a problem for the passively cooled device in my hand that isn't even warm to the touch unless I'm really pushing it hard.
 

LightningZ71

Platinum Member
Mar 10, 2017
It's not cooling the whole package, it's about cooling individual transistors and functional units.

BSPDN type solutions allow a big increase in density at the transistor level. You get to pack notably more of them in a unit of space, meaning that the tiny bit of heat that they all generate is all highly concentrated inside of each functional group. The problem is exacerbated by the fact that silicon has limited thermal conductivity, meaning that it's slow to move that thermal energy from that group of transistors out to the surface. So, while the processor package may be rated for 1000 Watts, and the cooling system able to move 1000 watts, you run into a roadblock trying to get the heat away from the individual groups of transistors to that monster cooling apparatus.
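The local-hotspot point can be sketched with a toy geometric spreading model: assume heat from a small functional block spreads at roughly 45 degrees as it travels through the silicon, so the footprint at the heat-removal surface grows by the die thickness on each side. All dimensions and flux values are illustrative assumptions.

```python
def surface_flux_w_per_cm2(hotspot_flux: float, hotspot_side_um: float,
                           die_thickness_um: float) -> float:
    """Heat flux arriving at the die surface after 45-degree spreading:
    the same power is distributed over a larger square footprint."""
    spread_side = hotspot_side_um + 2 * die_thickness_um
    return hotspot_flux * (hotspot_side_um / spread_side) ** 2

hotspot = 500.0  # W/cm^2 at a 100 um functional block (assumed)

thick_die = surface_flux_w_per_cm2(hotspot, 100.0, 300.0)  # conventional die
thin_die = surface_flux_w_per_cm2(hotspot, 100.0, 20.0)    # thinned BSPDN die

print(f"conventional: {thick_die:.0f} W/cm^2, thinned: {thin_die:.0f} W/cm^2")
```

In this crude model the thinned die delivers the hotspot's flux to the surface far more concentrated, which is exactly the problem the cooling apparatus then has to deal with, no matter how many total watts it is rated for.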

Why is this better for DC/cloud servers? They are typically populated with processors that have CPU cores that don't clock very high INDIVIDUALLY. This means that the individual transistor groups don't oversaturate their little sections of silicon with heat, even though they are packed in super tight. You can have even denser processors, giving more throughput for the same physical space.

Why does this suck for desktop and high-end mobile devices? They like to boost to high frequencies to accomplish more work quickly. This means that they will rapidly supersaturate their transistors with heat and more rapidly cause thermal throttling. BSPDN will favor lower-clocking, wider brainiac designs over narrower speed demons. Apple will likely do well with it, but unless Intel and AMD tear up a lot of their cores, or just create a whole new one around the wide brainiac design philosophy, they will have difficulty scaling clocks on BSPDN nodes. This is one of the reasons that TSMC cited when they split N2 into BSPDN and non-BSPDN versions.
 

Hitman928

Diamond Member
Apr 15, 2012
It's not cooling the whole package, it's about cooling individual transistors and functional units.

BSPDN type solutions allow a big increase in density at the transistor level. You get to pack notably more of them in a unit of space, meaning that the tiny bit of heat that they all generate is all highly concentrated inside of each functional group. The problem is exacerbated by the fact that silicon has limited thermal conductivity, meaning that it's slow to move that thermal energy from that group of transistors out to the surface. So, while the processor package may be rated for 1000 Watts, and the cooling system able to move 1000 watts, you run into a roadblock trying to get the heat away from the individual groups of transistors to that monster cooling apparatus.

Why is this better for DC/cloud servers? They are typically populated with processors that have CPU cores that don't clock very high INDIVIDUALLY. This means that the individual transistor groups don't oversaturate their little sections of silicon with heat, even though they are packed in super tight. You can have even denser processors, giving more throughput for the same physical space.

Why does this suck for desktop and high-end mobile devices? They like to boost to high frequencies to accomplish more work quickly. This means that they will rapidly supersaturate their transistors with heat and more rapidly cause thermal throttling. BSPDN will favor lower-clocking, wider brainiac designs over narrower speed demons. Apple will likely do well with it, but unless Intel and AMD tear up a lot of their cores, or just create a whole new one around the wide brainiac design philosophy, they will have difficulty scaling clocks on BSPDN nodes. This is one of the reasons that TSMC cited when they split N2 into BSPDN and non-BSPDN versions.

Silicon itself is not a bad thermal conductor; not as good as metal, but way better than anything else involved. The problem is that you go from the device silicon being on top, nearest the heat sink, to the silicon being on the bottom, nearest the socket/board, while being thinned to a very small height, so you lose the ability of the heat to spread locally in an isotropic way.

There are ways to help mitigate this to some degree, namely by adding a bunch of otherwise unnecessary metal, but thus far Intel is the only one saying they can mitigate it completely.
 