Discussion Zen 4 vs Raptor Lake (Profit and Volume)


OneEng2

Junior Member
Sep 19, 2022
13
29
51
I know we are all zeroed in on IPC, clock rate, ST and MT score leaks, etc., but I was thinking maybe some time should be given to how these two architectures compare from a manufacturing standpoint. We engineers call that "Design for Manufacturing".

It is my thought that AMD's Zen 4 architecture holds a significant advantage over Intel's Raptor Lake architecture because of its fundamental "tile" design vs Raptor Lake's monolithic design. Let me explain.

AMD has the ability to produce ONLY the core CPU portion of the processor on the latest process node. The I/O die is done on TSMC's N6 process, which costs less and puts less demand on the newest equipment. That alone makes Zen 4 less expensive to produce than Raptor Lake, but it doesn't stop there.

AMD's Zen 4 design also splits the CPU cores themselves across chiplets (8 cores per core chiplet). This makes AMD's core die much, much smaller than Raptor Lake's monolithic die. Because the Zen 4 core chiplet is so much smaller, yields will be higher and the cost to produce lower.
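To put rough numbers on that, here is a minimal sketch using the classic Poisson yield model, Y = exp(-D x A); the die areas and defect density below are assumed ballpark figures for illustration, not actual TSMC or Intel data:

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.1  # defects per cm^2 -- assumed, for illustration only

chiplet_yield  = poisson_yield(70.0, D0)   # ballpark Zen 4 CCD-sized die (assumed)
monolith_yield = poisson_yield(250.0, D0)  # ballpark large monolithic client die (assumed)

print(f"small chiplet yield:  {chiplet_yield:.1%}")   # ~93%
print(f"monolithic die yield: {monolith_yield:.1%}")  # ~78%
```

And that understates the gap: a defect on the monolithic die scraps cores, GPU, and I/O together, while AMD only scraps one small chiplet.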

Finally, AMD's chiplet design will allow them to quickly move to a 12-chiplet design with a central I/O die for their Zen 4-based server chip, putting 96 cores in a single socket.

It seems to me that AMD's biggest advancement in the last 7 years wasn't necessarily the Zen architecture, but rather their chiplet design methodology.

Thoughts?
 
  • Like
Reactions: alexruiz

maddie

Diamond Member
Jul 18, 2010
4,644
4,462
136
Revenue doesn't cover any cost; you have to get to keep some of the revenue money to cover anything.

That would make some amount of sense if a company only sold things it made in that quarter and also sold 100% of everything it produced.
You know very well that's not what happens.
COGS is everything they produce in that quarter, and it's impossible to draw any parallel to what they actually sell in that quarter; they could be producing months in advance or still be producing things they presold.
I guess we should have a new acronym, COGP, according to your redefinition. Quickly now, spread the word.
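For anyone following along, here is a toy quarter with made-up numbers showing how standard accounting draws that line (cost of goods *sold* vs cost of goods produced):

```python
# Made-up numbers for illustration. Production cost flows into inventory;
# only the cost of the units actually SOLD is recognized as COGS.
unit_cost, unit_price = 5.0, 10.0
produced, sold = 100, 60

revenue         = sold * unit_price              # 600.0
cogs            = sold * unit_cost               # 300.0 (goods SOLD, not produced)
inventory_delta = (produced - sold) * unit_cost  # 200.0 stays on the balance sheet
gross_profit    = revenue - cogs                 # 300.0

print(revenue, cogs, inventory_delta, gross_profit)
```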
 

DrMrLordX

Lifer
Apr 27, 2000
21,082
10,245
136
if you make a billion in revenue but only put 10 dollars in your pocket, you are worse off than somebody who makes $100 but gets to keep it all...

That is ridiculous. Capital expenditures can help with future growth. You ought to know that; you keep claiming all of Intel's losses on paper are from building fabs (hint: that isn't true either).
 

OneEng2

Junior Member
Sep 19, 2022
13
29
51
First, their anti-competitive practices were caught and they were called out on it. And as you said, they have had failures before. My post is not meant to say they are a goner.

BUT, for the first time in decades (or ever), they are losing market share and competitive edge, and they face a real possibility of a 50/50 market within 10 years. I highly doubt they have 50 times the resources; maybe 2-3 times, and even that is changing. THIS is the reality that I see. Even a 60/40 split would benefit the industry. I think Intel has even admitted that they see 70/30 within a year, and several years before they can claw ANYTHING back. The question is whether that settles at 60/40 or 50/50 (and these are just ballpark estimates).
I believe they did lose market share in the P4 vs K8 days, in both desktop and server markets. It wasn't a great deal of market share (5-10% IIRC), but enough for the trend line to be disturbing for Intel. They made a very (VERY) quick pivot to the Conroe architecture and dumped Itanium, which returned them to market dominance until just recently (read: the Ryzen generation starting mid-2017).

The difference between the past and the present loss of market share is that this time there isn't a Conroe architecture waiting in the wings, or a 1-2 node process advantage to exploit.

I don't believe that the Alder Lake processor architecture is a bad one. It is actually quite good; however, it isn't fundamentally superior to Ryzen. Adding to that (and the point of this thread), AMD's chiplet design gives them a fundamental advantage in cost and design flexibility.

I could see 70/30 in a year (or two). I can't see beyond that, because my concern is that AMD will be constrained by TSMC capacity, and I can see Intel making a sweetheart deal with TSMC if for no other reason than that it limits AMD's market share gains by limiting their production capacity.

Process technology is also worth analyzing through the same lens: efficiency of design with respect to manufacturing cost.

As for the discussion of the relative size of TSMC vs Intel, I thought this was a really good read: https://www.electropages.com/blog/2021/04/place-your-bets-intel-vs-tsmc

Another concern of mine is that while TSMC makes 54% of the world's semiconductor chips, ASML makes 100% of the world's EUV lithography equipment! I love that the US has implemented the "CHIPS" act and is feverishly building chip factories in the US, but where are all the EUV machines going to come from? I believe that ASML capacity may well be a problem in the next couple of years: https://seekingalpha.com/article/4502962-asml-q1-review-capacity-expansion-will-drive-growth

Both AMD and Intel (and others) are all lined up for GAA process nodes (every company seems to have its own name for the same thing), but AFAIK they all rely on EUV equipment from ASML. Does that sound right to everyone?
 

DrMrLordX

Lifer
Apr 27, 2000
21,082
10,245
136
Both AMD and Intel (and others) are all lined up for GAA process nodes (every company seems to have its own name for the same thing), but AFAIK they all rely on EUV equipment from ASML. Does that sound right to everyone?

Essentially, yes. Though I think Intel is more interested in MBCFet?
 

OneEng2

Junior Member
Sep 19, 2022
13
29
51
Essentially, yes. Though I think Intel is more interested in MBCFet?

Had to look it up :).

MBCFET is still GAA, but it uses stacked nanosheets rather than nanowires.

I believe Intel's version of the same idea is called "RibbonFET".

It will be interesting to see which of these technologies gets out the door best. Certainly Samsung has gotten MBCFET out first; the jury is still out on whether it is best :).
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,579
2,712
136
I don't believe that the Alder Lake processor architecture is a bad one. It is actually quite good; however, it isn't fundamentally superior to Ryzen. Adding to that (and the point of this thread), AMD's chiplet design gives them a fundamental advantage in cost and design flexibility.

I could see 70/30 in a year (or two). I can't see beyond that, because my concern is that AMD will be constrained by TSMC capacity, and I can see Intel making a sweetheart deal with TSMC if for no other reason than that it limits AMD's market share gains by limiting their production capacity.
I think you are being really kind to Intel here: 48 KB L1D, 512-entry ROB, 6-wide decode, 5 integer execution units, 5 AGUs, stupidly big load and store queues, and for all that massive amount of extra resource, what's the performance-per-clock advantage over Zen 4? Nothing.

GC is an A14-A16 class core in resources and die size, with Zen 4-like perf per clock.
 

DrMrLordX

Lifer
Apr 27, 2000
21,082
10,245
136
I thought Samsung was on GAAFET with 3GAE?

A little off-topic, mind you, since neither one will affect Zen 4 or Raptor Lake directly, but rather their successors. And really, TSMC will be the last off FinFETs from what I can tell.
 
  • Like
Reactions: Geddagod

zir_blazer

Golden Member
Jun 6, 2013
1,148
385
136
I guess my only counter to this (and arguably it may simply be from so many years and decades of empirical evidence) is that Intel still has 50 times the resources that AMD has. They have more monopoly power in the industry (although it is fading quickly), and have come back from seriously fatal design decisions in the past (read: Pentium 4 and Itanium). My decades of following this dance just won't allow me to count them out :).
Pentium 4 and Itanic being failures is arguable. As far as I remember, the Pentium 4 was great from a marketing standpoint due to the high clock speeds, so Intel could show more "progress" even if the actual performance wasn't that impressive, and back then, even when the Athlon XP was superior before Northwood C, AMD's market share was below 20%. AMD did slightly better before the P4.
And Itanium I have seen described as a "super effective vaporware attack": the rest of the big enterprise RISC CPU vendors panicked just from seeing Intel slides claiming ridiculous performance, in a way that discouraged further development because no one thought Intel could fail to deliver, so they dropped out of even ATTEMPTING to compete. Then Itanium was 2-3 years later than expected and its performance was mediocre, yet the Intel announcements were enough to fatally wound its entire competition, since no one was ready to resume operations when everyone realized that Itanic was a flop. x86-64 eventually trickling upwards may be thanks to that power vacuum.
 

OneEng2

Junior Member
Sep 19, 2022
13
29
51
It is strange to hear anyone accuse me of being kind to Intel ;)

This article seems to lend credence to my assertion that Intel is being pinched financially by the current Raptor Lake vs Zen 4 matchup: https://www.notebookcheck.net/Rapto...rrible-sales-for-AMD-Zen-4-CPUs.658572.0.html

I doubt there is any way on God's green earth that we will ever know the actual cost of production for each of these chips; however, I think it may become clear that AMD has an advantage if Intel starts raising prices... or the channel does.

I have to wonder though. Intel's strategy seems to have some holes in it.

  1. As mentioned, the monolithic die is costly to produce
  2. Intel 7's transistor density is ~106 million transistors/mm² vs ~173 million/mm² on TSMC's N5 (AMD)
It is clear that Intel has pulled out all the stops on power in order to keep performance parity with AMD. I have to admit that they have done a good job in this respect, considering their current considerable process disadvantage.

Based on the article I reference above, it seems like this comes at a cost though (as I had thought it might).

The other issue I see with this approach is that the desktop is a rapidly shrinking market. If you can't compete in the laptop segment, you are in serious trouble, IMO. Will Intel be able to re-jig the Raptor Lake processors to perform well in a laptop, where they will be limited to MUCH lower power draws?

The other issue is that Intel adding lots of little cores may help in desktop/laptop markets, but it really isn't a smart use of die space (IMO) for the lucrative server market. Again, AMD's design choices (Infinity Fabric, chiplets, cache, etc.) will allow them to continue to devastate Intel in the very, very lucrative server market.

All of this (to me) doesn't bode well for Intel over the next couple of years.

Can they get their 7nm die shrink out the door? That will go a long way if they do. The transistor density of the Intel 7nm process is reportedly ~300 million transistors/mm², roughly a 1.7x density advantage over TSMC's 173 million/mm². The Meteor Lake processor will reportedly use tiles as well. If all of this shakes out for Intel, then it would appear that by the beginning of 2024, Intel will be back to parity on process. One can only imagine that Meteor Lake will have serious performance improvements, having the ability to nearly triple the transistor budget over Raptor Lake.
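Running the density figures quoted above through the math (the figures are the reported numbers, not something I can verify):

```python
# Density figures quoted above, in millions of transistors per mm^2
intel_7   = 106.0  # current Intel 7 node
tsmc_n5   = 173.0  # TSMC node used for Zen 4
intel_7nm = 300.0  # reported figure for Intel's upcoming "7nm"

print(f"{tsmc_n5 / intel_7:.2f}x")    # ~1.63x: TSMC's current density lead
print(f"{intel_7nm / tsmc_n5:.2f}x")  # ~1.73x: Intel's lead if 300 pans out
print(f"{intel_7nm / intel_7:.2f}x")  # ~2.83x: the near-tripled budget for Meteor Lake
```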

I have to imagine that there is some penalty to using tiles (width of connections, latency of connections, etc.). I suspect that the core design must be modified to take proper advantage of this kind of architecture.
 

beginner99

Diamond Member
Jun 2, 2009
5,177
1,558
136
The other issue is that Intel adding lots of little cores may help in desktop/laptop markets, but it really isn't a smart use of die space (IMO) for the lucrative server market.

I actually disagree. Clients like laptops/desktops would profit much more from a few very fast cores, e.g. exactly what Apple does for getting fast responses to user input; the average user doesn't do heavy multithreaded stuff. In contrast, server/cloud could really benefit from chips with many "slow" cores. Why exactly are ARM servers "booming" and a thing? If you have to process tens of thousands of web requests per second while each request is very light, all the AVX-512 and ML extensions are useless and waste die space and power. A server chip with many e-cores would actually likely be a good thing for hosters and cloud providers, or even company-internal virtual machine servers.
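To make the trade-off concrete, here is a back-of-the-envelope sketch; the area and per-core throughput ratios are assumptions for illustration, not measured e-core/P-core figures:

```python
# Assumed for illustration: a small core takes ~1/4 the area of a big core
# but still delivers ~1/2 the throughput on light, scale-out requests.
die_budget_mm2 = 100.0
big_area,   big_rps   = 8.0, 1000.0  # mm^2 per core, requests/s (assumed)
small_area, small_rps = 2.0,  500.0

big_cores   = int(die_budget_mm2 // big_area)    # 12 cores
small_cores = int(die_budget_mm2 // small_area)  # 50 cores

print(f"big-core chip:   {big_cores * big_rps:,.0f} req/s")     # 12,000
print(f"small-core chip: {small_cores * small_rps:,.0f} req/s") # 25,000
```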
 

VirtualLarry

No Lifer
Aug 25, 2001
55,875
9,797
126
I actually disagree. Clients like laptops/desktops would profit much more from a few very fast cores, e.g. exactly what Apple does for getting fast responses to user input; the average user doesn't do heavy multithreaded stuff. In contrast, server/cloud could really benefit from chips with many "slow" cores. Why exactly are ARM servers "booming" and a thing? If you have to process tens of thousands of web requests per second while each request is very light, all the AVX-512 and ML extensions are useless and waste die space and power. A server chip with many e-cores would actually likely be a good thing for hosters and cloud providers, or even company-internal virtual machine servers.
Isn't that exactly where Bergamo fits in?
 

itsmydamnation

Platinum Member
Feb 6, 2011
2,579
2,712
136
I was thinking about Bergamo too, but how much has it actually removed? We don't really have details (or I missed them), but yes, you could be right. I still think it will have bigger cores than needed for just hosting web apps.
Err, web apps are one of the biggest beneficiaries of wide vector instructions: TLS end-to-end costs.

VM farms are almost always limited by I/O and memory before CPU; the ratio of memory bandwidth and memory capacity to clock cycles is generally what matters most.

CPUs make up so little of TCO that what matters is performance: performance per core, performance per socket.

I do lots of NFV; I always buy the highest-clocking / highest-performance mid-core-count CPUs, because that's what gives the most throughput/performance, as licensing is almost always per core.
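The per-core licensing point is easy to see with made-up numbers: license cost per unit of throughput reduces to price-per-core divided by perf-per-core, so per-core performance is the only lever.

```python
# Made-up numbers: per-core licensing makes $/throughput independent of
# core count and inversely proportional to per-core performance.
license_per_core = 1000.0  # $/core/year, assumed

def license_cost_per_perf(cores: int, perf_per_core: float) -> float:
    return (cores * license_per_core) / (cores * perf_per_core)

print(license_cost_per_perf(32, 1.3))  # ~769: fewer, faster cores
print(license_cost_per_perf(64, 1.0))  # 1000: more, slower cores -- worse $/perf
```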
 

OneEng2

Junior Member
Sep 19, 2022
13
29
51
Err, web apps are one of the biggest beneficiaries of wide vector instructions: TLS end-to-end costs.

VM farms are almost always limited by I/O and memory before CPU; the ratio of memory bandwidth and memory capacity to clock cycles is generally what matters most.

CPUs make up so little of TCO that what matters is performance: performance per core, performance per socket.

I do lots of NFV; I always buy the highest-clocking / highest-performance mid-core-count CPUs, because that's what gives the most throughput/performance, as licensing is almost always per core.
You beat me to it ;).

Additionally, DB servers get licensed based on sockets and cores. I am not certain whether a weak core and a very strong core are priced differently.

As you say, the processor is certainly not the biggest cost, and higher-performance cores appear to be the way things are going.

Note how Intel's new server processors are devoid of little cores as well. Only Golden Cove cores.

I believe that Intel's earnings will be out soon. It will be interesting to see how things are going. Note, I still think it will be hard to tell whether the cost of the silicon is the problem, even if earnings are off.
 
