Discussion: Leading Edge Foundry Node advances (TSMC, Samsung Foundry, Intel)


uzzi38

Platinum Member
Oct 16, 2019
2,117
4,296
116
So you're saying AMD is using some kind of node with the performance of N5 HPC but without the leakage?
They already use their own custom cells - nobody else does what they do with Ryzen (Radeon used standard N7P for RDNA, not sure about RDNA2). I'm saying that'll continue on in the future.
 
  • Like
Reactions: Tlh97 and Saylick

NTMBK

Diamond Member
Nov 14, 2011
9,688
3,511
136
Maybe, but I don't think it's likely. If Nvidia tries to step back into the consumer CPU space, I think it's much more likely to start in the mobile or laptop spaces rather than jump straight into a market where ARM is non-existent. Maybe a super high performance server CPU to go along with their HPC GPU offerings, but again, I find that highly doubtful given the current lack of an ARM ecosystem in that space, as well as all the baggage that comes with trying to start a server line of processors.
A desktop CPU is just a cut down server CPU with the clocks cranked up. Just look at Ryzen!

Nvidia are already building a server CPU, with their Grace CPU. I would not be surprised to see them reuse some of that R&D to launch an ARM desktop, once the Windows for ARM exclusivity period runs out for Qualcomm.
 

Hitman928

Diamond Member
Apr 15, 2012
4,046
4,750
136
A desktop CPU is just a cut down server CPU with the clocks cranked up. Just look at Ryzen!
This doesn't hold true for many of the server CPUs in the past. This is basically what AMD did with the Zen architecture, but I would say most server CPUs in the past have distinct differences between themselves and the desktop CPUs offered by the same company (where applicable). Different cache structures, ring/mesh topologies, feature support, etc.

Nvidia are already building a server CPU, with their Grace CPU. I would not be surprised to see them reuse some of that R&D to launch an ARM desktop, once the Windows for ARM exclusivity period runs out for Qualcomm.
I forgot about Grace, but it seems to be a CPU meant to be highly efficient with lots of memory bandwidth, probably with some special-purpose instruction support for AI training to support their GPU/DL chips, and strictly meant to be sold as a pre-packaged deal in the DGX unit or whatever they call their server offering. Not really a good fit for this type of node.
 

jpiniero

Lifer
Oct 1, 2010
11,307
3,048
136
I think Grace is just the beginning of nVidia's CPU ambitions. Furthering those ambitions is why they tried to buy ARM in the first place.
 

igor_kavinski

Platinum Member
Jul 27, 2020
2,503
1,280
96
What makes you think they're going to just stop with the DTCO?
AMD doesn't want higher frequencies with more leakage. But Intel's "success" with Alder Lake might convince them to turn the dial up to 11 and release a 300W absolute monster using N4X. I'm just surprised that 11900K and 12900K aren't a complete market flop. If there were dozens of players in the x86 ecosystem, Intel would have inspired them to get into the den heater race.
 
  • Like
Reactions: Tlh97 and Thibsie

Ajay

Lifer
Jan 8, 2001
11,218
5,031
136
AMD doesn't want higher frequencies with more leakage. But Intel's "success" with Alder Lake might convince them to turn the dial up to 11 and release a 300W absolute monster using N4X. I'm just surprised that 11900K and 12900K aren't a complete market flop. If there were dozens of players in the x86 ecosystem, Intel would have inspired them to get into the den heater race.
I would be *very* surprised if AMD pushed Ryzen that hard, especially with the way the CCDs have hot spots. Besides that, I think, but do not know, that Zen4 is going to be dropping some very compelling performance numbers. IIRC, AMD has said +25% perf/watt due to process improvements alone!
 

Saylick

Golden Member
Sep 10, 2012
1,585
1,930
136
I would be *very* surprised if AMD pushed Ryzen that hard, especially with the way the CCDs have hot spots. Besides that, I think, but do not know, that Zen4 is going to be dropping some very compelling performance numbers. IIRC, AMD has said +25% perf/watt due to process improvements alone!
Technically, it was just >25% performance at iso-node. However, they also said it was 2x the power efficiency, so I interpret that as 2x perf/W. Since we know Genoa has 50% more cores, that implies that each core needs to have 35% more performance at the same power to hit 2x perf/W. The bulk of that is rumored to come from IPC gains, so I am not expecting the clocks to go up that much, especially not for high end desktop.
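That estimate can be sketched with quick arithmetic (a back-of-envelope check in Python; the iso socket power assumption and the 64 vs 96 core counts are taken from the discussion above, not official AMD disclosures):

```python
# Sketch of the per-core uplift implied by AMD's claims.
# Assumptions (community estimates, not official figures): iso socket
# power, 2x perf/W at the socket level, 64 -> 96 cores (1.5x).

perf_per_watt_gain = 2.0    # claimed socket-level efficiency gain
core_count_ratio = 96 / 64  # Genoa vs Milan core count

# At the same socket power, 2x perf/W means 2x total throughput,
# spread across 1.5x as many cores.
per_core_gain = perf_per_watt_gain / core_count_ratio

print(f"Per-core uplift at iso power: +{per_core_gain - 1:.0%}")  # +33%
```

This lands at ~33%, in the same ballpark as the ~35% figure above once you allow for rounding in the original claims.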
 
  • Like
Reactions: Tlh97 and Ajay

Ajay

Lifer
Jan 8, 2001
11,218
5,031
136
Technically, it was just >25% performance at iso-node. However, they also said it was 2x the power efficiency, so I interpret that as 2x perf/W. Since we know Genoa has 50% more cores, that implies that each core needs to have 35% more performance at the same power to hit 2x perf/W. The bulk of that is rumored to come from IPC gains, so I am not expecting the clocks to go up that much, especially not for high end desktop.
Yes, my bad. Whacks self in forehead. So, it can't actually be 'iso-node'; it has to be based on some actual electrostatic data points: vs power, vs voltage, etc. So I would guess that it's +25% xtor switching speed at iso power for the *node* only. If AMD is able to further get the power consumption down, then that's amazing. There is a slide about this posted amongst the CPU threads here somewhere - can't find it ATM.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,161
3,432
136
AMD advertised these values for the process only, not architectural IPC or CPU performance. It is written on the slide itself (small letters at the bottom) and they reiterated that when asked by AT:
When asked if this was a specific statement about core performance, AMD said that it wasn’t, and just a comment on the process node technologies. It is worth noting that 2x efficiency is quite a substantial claim based on metrics provided by TSMC on its N7 -> N5 disclosures.

First slide is from the Accelerated Data Center event: N7 -> N5, 2x density, 2x power efficiency and 1.25x perf

Second slide is from the Zen 2 EPYC launch: 14LPP -> N7, 2x density, 0.5x power at iso perf or 1.25x perf at iso power


What we know is that AMD's numbers are in line for N7. For instance, Zen 1 achieved ~23 MTr/mm2 in desktop Ryzen 1000/2000 and ~26 MTr/mm2 in Ryzen Mobile 2000, whereas the Zen 2 CCD achieved ~52 MTr/mm2 density.
Excluding the highly pushed SKUs, going from Zen [1700/2700] to Zen 2 [3700 Pro] at iso power (65W) there is 400 MHz higher base and boost, with the latter obviously expending more power due to more active transistors but still maintaining iso power. The Ryzen 5000 series traded some base MHz from Zen 2 (due to AVX256) but has much higher boost.

Zen 2 achieved >2x density and ~1.12x perf (boosts are much higher, but efficiency did not improve as much due to opting for perf instead).

Zen 4 will not have to trade perf for efficiency given what the process offers: 2x density gain, 2x efficiency and 1.25x perf.
Just for comparison's sake:
Zen 2 achieved 15% IPC from roughly ~16% more MTr in Core+L2 (1.35x more MTr when comparing at the CCD level, but the bulk of the increase relates to the doubled L3 and the added GMI2+SMU and other complexities introduced by the chiplet design)
Zen 3 achieved 19% IPC from 9% more MTr

The Zen 4 CCD will have >1.7x the MTr of the Zen 3 CCD.
So it is not unreasonable to be optimistic about Zen 4 perf, given the device perf and efficiency improvements and massive gain in the MTr count.
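The density and IPC-per-transistor figures above can be sanity-checked with some quick arithmetic (all inputs are the rough community estimates from this post, not official AMD numbers):

```python
# Density: Zen 1 desktop (~23 MTr/mm2) vs Zen 2 CCD (~52 MTr/mm2).
zen1_density = 23.0
zen2_density = 52.0
print(f"Zen 1 -> Zen 2 density gain: {zen2_density / zen1_density:.2f}x")

# IPC gained per percent of core+L2 transistor growth.
zen2_ratio = 0.15 / 0.16  # +15% IPC from ~16% more MTr
zen3_ratio = 0.19 / 0.09  # +19% IPC from ~9% more MTr
print(f"Zen 2: {zen2_ratio:.2f} IPC% per MTr%")
print(f"Zen 3: {zen3_ratio:.2f} IPC% per MTr%")
```

The density ratio works out to ~2.26x, matching the ">2x" claim, and Zen 3's roughly 2x better IPC-per-transistor ratio over Zen 2 is part of why a >1.7x transistor budget invites optimism, even if much of that budget likely goes to cache rather than IPC.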
 

uzzi38

Platinum Member
Oct 16, 2019
2,117
4,296
116
AMD doesn't want higher frequencies with more leakage. But Intel's "success" with Alder Lake might convince them to turn the dial up to 11 and release a 300W absolute monster using N4X. I'm just surprised that 11900K and 12900K aren't a complete market flop. If there were dozens of players in the x86 ecosystem, Intel would have inspired them to get into the den heater race.
You do realise that N4X isn't ready for HVM until H1 2024, right? It's far too late to even matter at that point.
 

igor_kavinski

Platinum Member
Jul 27, 2020
2,503
1,280
96

N4X might give AMD the highest performing gaming CPU ever, just like the 5000 series.
 

uzzi38

Platinum Member
Oct 16, 2019
2,117
4,296
116
N4X might give AMD the highest performing gaming CPU ever, just like the 5000 series.
So now you're also assuming AMD would be using standard N3 libs.

By the time you get to N3 DTCO is almost mandatory if you want to get a reasonable performance uplift. There's no point in utilising the vastly more expensive node over N4P/N4X otherwise. You should really assume that anyone looking at the nodes will be doing some level of DTCO to make the most of their investment.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,098
207
106
(this is a joking response)

Intel wants TSMC to have a leading edge process that may have absolute numbers high with frequency, but with massive leakage.

In order to make Intel 10SF => Intel 7 => Intel 4 look good in comparison when Intel does their "slide magic" =P

-----

Even though most TSMC customers (including Intel) will be willing to surrender 10% of performance in order to get much lower voltages and thus better performance per watt.
 

Roland00Address

Platinum Member
Dec 17, 2008
2,098
207
106
Historically, Intel actually prioritized the opposite. Performance comparisons were made at Vmax, not Vmin.
Yes.

But my argument is that slides are marketing, and you may make bad comparisons in slides,

only to want different priorities when actually selling to customers who know more about the products, since it is their actual business from which they make their money, aka server customers. (Metaphor time) Why do you want the iPhone 14? Well, the iPhone 12 is so old school, even though by most measures the 12 is 80% as good as the iPhone 14. But on camera and battery life we post the most flattering numbers, because we are trying to upsell customers based on hype.

Of course, I started the previous post as joking, for while I would believe Intel (and many other device makers) would make misleading marketing slides, I have no clue what the trade-offs would be, cost-wise in money and engineering talent, to make full-octane SoCs while most SoCs will be chasing performance per watt and not absolute performance.
 

Ajay

Lifer
Jan 8, 2001
11,218
5,031
136
So now you're also assuming AMD would be using standard N3 libs.

By the time you get to N3 DTCO is almost mandatory if you want to get a reasonable performance uplift. There's no point in utilising the vastly more expensive node over N4P/N4X otherwise. You should really assume that anyone looking at the nodes will be doing some level of DTCO to make the most of their investment.
Thanks for the reminder, I keep forgetting about DTCO. Duh.
 

igor_kavinski

Platinum Member
Jul 27, 2020
2,503
1,280
96
Well when its in India or other markets with hundreds of millions of people that don't exactly have $500+ USD to throw around, not sure why that's such a bad thing.
It's not about the money. Nokia also used to make a lot of cheap phones. I had one: a flip phone, Java-based. It should have been hell to use and slow, but it wasn't. It was remarkably able to do the things it was supposed to do, and it never froze in the middle of a call or stopped responding. And I can bet the CPU it used would now be 10 times slower, if not more, than the crappiest CPU MediaTek makes now, yet the user experience is nowhere near as smooth. Of course, that's more due to forcing a full-featured Android OS down the throat of an anemic CPU. Anyway, MediaTek and Rockchip and others like them make money off of giving bad user experiences, which then increases the market value of Samsung and Apple. Everybody gets rich, except the poor guy. What's not to hate?
 
  • Like
Reactions: hemedans

naukkis

Senior member
Jun 5, 2002
538
399
136
MediaTek chips aren't any worse than any others; heck, they are pretty much exactly the same ARM designs. For a given price range, MediaTek chips usually offer more performance than those from Qualcomm.

And now the fastest Android SoC is MediaTek-based...
 

Ajay

Lifer
Jan 8, 2001
11,218
5,031
136
MediaTek chips aren't any worse than any others; heck, they are pretty much exactly the same ARM designs. For a given price range, MediaTek chips usually offer more performance than those from Qualcomm.

And now the fastest Android SoC is MediaTek-based...
Interesting. So what is Qualcomm doing? I would have thought that they, or Samsung, would have the fastest ARM SoC for Android.
 
