Discussion Zen 4 vs Raptor Lake (Profit and Volume)

OneEng2

Junior Member
Sep 19, 2022
13
29
51
I know we are all zeroed in on IPC, clock rate, ST and MT score leaks, etc. I was thinking maybe some time should be given to how these two architectures compare from a manufacturing standpoint. We engineers call that "Design for Manufacturing".

It is my view that AMD's Zen 4 architecture holds a significant advantage over Intel's Raptor Lake architecture because of its fundamental chiplet design versus Raptor Lake's monolithic design. Let me explain.

AMD has the ability to produce only the core CPU portion of the processor on the latest process node. The I/O die is made on TSMC's N6 process, which costs less and puts less demand on leading-edge equipment. That alone makes Zen 4 less expensive to produce than Raptor Lake, but it doesn't stop there.

AMD's Zen 4 design also divides the CPU cores themselves into chiplets (8 cores per core chiplet). This makes for a much smaller core chiplet die for AMD than Raptor Lake's monolithic die. Because the Zen 4 core chiplet is so much smaller, yields will be higher and the cost to produce lower.
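The yield claim is easy to sketch with a textbook Poisson defect model. The die areas and defect density below are illustrative assumptions on my part, not published figures for either product:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies expected to be defect-free under a Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.1  # assumed defect density in defects/cm^2 (made up for illustration)

small = poisson_yield(70, D0)    # Zen 4-style core chiplet, ~70 mm^2 (assumed)
large = poisson_yield(250, D0)   # large monolithic die, ~250 mm^2 (assumed)

print(f"small die yield: {small:.1%}")   # ~93%
print(f"large die yield: {large:.1%}")   # ~78%
```

Smaller dies also pack the wafer edge more efficiently, which this simple model ignores, so the real gap favors the chiplet even more.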

Finally, AMD's chiplet design will allow AMD to quickly scale to a 12-chiplet design with a central I/O die for their Zen 4 based server chip, reaching 96 cores in a single socket.

It seems to me that AMD's biggest advancement in the last 7 years wasn't necessarily the Zen architecture, but rather their chiplet design methodology.

Thoughts?
 
  • Like
Reactions: alexruiz

Hitman928

Diamond Member
Apr 15, 2012
5,324
8,015
136
Don't you know that cost of goods sold increases as you sell more? This shows Intel pretty much stagnating as AMD grows quickly. You, instead, see this as a negative sign. Amazing.

Exactly. I made two quick charts to illustrate. Obviously, as you sell more and more product, your cost of goods goes up because you have to buy more components / pay for manufacturing the additional volume. The problem for Intel that @Tuna-Fish was pointing out is that Intel's COGS has actually gone up slightly while their volume and revenue have dropped significantly. I don't expect the severity of this trend to continue very long, but Intel's ability to get back to their traditional margins and profitability is very much in question. If they can't, then their ability to maintain their own fabs also comes into serious question, unless they can start to get significant traction with their foundry services, which so far they have completely failed to do (maybe this time they'll figure it out; yet to be seen).

[Attached charts: 1664037255762.png, 1664037281287.png — quarterly revenue and COGS for Intel and AMD]
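To put rough numbers on the squeeze being described: gross margin is just (revenue − COGS) / revenue, so flat COGS against falling revenue compresses margin fast. The figures below are round illustrations, not exact filings:

```python
def gross_margin(revenue_b, cogs_b):
    """Gross margin as a fraction of revenue (inputs in $B)."""
    return (revenue_b - cogs_b) / revenue_b

# Illustrative round numbers: revenue drops >20% while COGS stays roughly flat.
before = gross_margin(19.6, 8.4)
after = gross_margin(15.3, 8.8)

print(f"margin before: {before:.0%}")  # ~57%
print(f"margin after:  {after:.0%}")   # ~42%
```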


Retained earnings is a pointless comparison, as it inherently has years of financial performance baked in. If you look at where they are today, Intel has more debt than cash on hand, versus AMD having more cash than debt. AMD has rising gross margins and profit while Intel's are falling. AMD's gross margin was actually higher than Intel's last quarter and is projected to be again next quarter as well. Both Intel and AMD are increasing their R&D spend, but AMD's goes directly into product development while Intel is spending more and more on fab R&D. This wouldn't matter so much, except that AMD's gross margins are now higher than Intel's, so, in essence, Intel is paying more for their fab development than AMD is.

None of this is to paint Intel as a doomed company, but they do have an uphill battle at this point with a lot of question marks. I've said for a few years now that AMD's real moment will come when they can start to sell both 5 nm and 7 nm devices at the same time to increase the volume of products that are either industry leading or very competitive in the marketplace. Having product lines on both nodes with competitive products will let them move more volume than ever before. We'll see how things play out, but I don't think Intel has seen the bottom yet and by their own admission, they don't even have a shot at regaining leadership until 2025, so it's going to be a rough few years at minimum.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,643
10,860
136
Without inside information, it's not really possible to know the precise cost per package for AMD or Intel. We also know nothing about yields on 10ESF/Intel 7. There is some published data that hints at yield rates for N5-family nodes, but for Intel 7? Not so much.
 
  • Like
Reactions: Kaluan

TheELF

Diamond Member
Dec 22, 2012
3,973
731
126
We engineers call that "Design for Manufacturing".
What do you engineers call "economies of scale" ?
Intel has all of their own fabs, so they aren't paying someone else for manufacturing, and they produce on the order of ten times as many CPUs as AMD does.

Waste is a different matter; Intel may be losing many more dies, or whatever, but I would bet that each completed CPU costs Intel much less than it costs AMD, and that is including the iGPU for Intel.
 
  • Like
Reactions: Hotrod2go

Asterox

Golden Member
May 15, 2012
1,026
1,775
136
What do you engineers call "economies of scale" ?
Intel has all of their own fabs, so they aren't paying someone else for manufacturing, and they produce on the order of ten times as many CPUs as AMD does.

Waste is a different matter; Intel may be losing many more dies, or whatever, but I would bet that each completed CPU costs Intel much less than it costs AMD, and that is including the iGPU for Intel.

You're forgetting a couple of facts, and in the end they really add up to a big total cost of business. AMD is in a much better position, and will continue to be, as long as TSMC is the manufacturing leader.

- Fabs are very expensive to maintain, regardless of whether they put out a great product or suffer very poor yields

- Workers still have to be paid, even if the CPU yield is a desperate 10%. We all know what happened with the original Intel 10nm: a very expensive shot in the fog

- That is no good for Intel, who still has to pay TSMC for products on TSMC's expensive leading nodes
 

zir_blazer

Golden Member
Jun 6, 2013
1,166
408
136
The irony is that, technically, it should be far easier (cheaper) to package a monolithic die than AMD's chiplet approach. You're transferring the complexity from big dies to overcomplicated package technology.
AMD's advantage is that chiplets allow it to reuse the same CPU chiplets across several lines (consumer and server) and potentially mix and match I/O dies without changing the CPU chiplet. They may stock CPU chiplets with some headroom to shift I/O die production based on demand.
 

BorisTheBlade82

Senior member
May 1, 2020
664
1,015
106
The irony is that, technically, it should be far easier (cheaper) to package a monolithic die than AMD's chiplet approach. You're transferring the complexity from big dies to overcomplicated package technology.
Their current packaging should be dead cheap. I'd be rather shocked if it costs them more than 1 USD per unit, and supposedly much less.
Intel is going chiplet with Arrow Lake.
Already with Meteor Lake. Their costs for packaging and the interposer will be much higher; I would guess maybe in the range of 5 to 10 USD per unit in the beginning.
 

Kocicak

Senior member
Jan 17, 2019
982
973
136
I would say the PCB is 8-20 USD and the heatspreader 1-5 USD. Keep in mind that each individual PCB has microscopic connections for the silicon, and each electrical connection needs to be tested for continuity and isolation. I have no idea if any RF testing is performed too.

If the electrical testing is not done in the end, then each step of production must be extremely carefully optically monitored.

Now I start to think that the PCB may be even pricier than I estimated. Those large PCBs for server CPUs may be even over 100 USD each.
 
Last edited:
  • Like
Reactions: Tlh97 and Kaluan

Tuna-Fish

Golden Member
Mar 4, 2011
1,355
1,550
136
Meanwhile back in reality...
Intel is increasing their profits marginally while amd is decreasing by 9%

Firstly, Intel isn't increasing their profits; they posted a loss. Their profits spiked during the pandemic because demand skyrocketed while supply was constrained, so they could sell more, especially in the low-end/netbook segment, at inflated ASPs. Those conditions are not going to repeat. When you use TTM figures to compare a company that did well earlier but is now seeing its sales fall against a company that is growing (and very recently was growing very rapidly), you get very distorted results. Let's look at pure quarterly revenue: a year or more ago, Intel pulled in a consistent ~19-20B a quarter. AMD was rapidly growing, but in 21Q2 they pulled in 3.85B; that is, they were about 1/5th the size of Intel. Last quarter, Intel pulled in 15B while AMD got 6.5B; that is, AMD is now ~42% the size of Intel. If you look at their public expectations, and both hit exactly where they are guiding, AMD will be >45% of the size of Intel this quarter. When do you expect this trend to stop?
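The ratios above are simple to check from the quarterly revenue figures quoted (in $B; Intel's "15B" last quarter is taken as 15.3, which is where the ~42% comes from):

```python
# Quarterly revenue as quoted above, in $B.
intel_2021q2, amd_2021q2 = 19.6, 3.85
intel_last, amd_last = 15.3, 6.5   # "15B" quoted; 15.3 matches the ~42% figure

print(f"2021Q2: AMD at {amd_2021q2 / intel_2021q2:.0%} of Intel")  # ~20%, i.e. ~1/5th
print(f"last Q: AMD at {amd_last / intel_last:.0%} of Intel")      # ~42%
```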

Secondly, the deeper trends are really worrying. Intel's COGS is actually growing: they spent more on it last quarter than in any of their previous quarters. This is explained by cratering prices, especially in the server market, and there is no sign of them recovering. Even Intel themselves do not expect next quarter to be good for them; in July, Gelsinger claimed that Q3 will be "the bottom". While this is hopeful in that it expects Q4 to be better, given that Q2 already posted a significant loss, just how bad will Q3 be for them? And what is supposed to happen in Q4 to improve the situation? Since Gelsinger said that, the Sapphire Rapids release date has been pushed back yet again.

As to profit, it's a good idea to look at where the money is going. AMD has doubled their quarterly R&D spend in a year. Had AMD not done this, their net profit would be ~2.5 times higher than it was. I doubt many investors mind that they are spending their profits to improve their future. In contrast, Intel's R&D has grown only 15%, and while the 4.4B looks large compared to AMD's, Intel is also doing all of their fab R&D themselves, while for AMD this is done by TSMC, and TSMC alone outspent Intel.

tl;dr: your use of trailing figures and your overconcentration on profit give you a distorted picture of the financial health of the two companies. I expect that Intel's Q3 will be worse than their Q2, and that AMD's Q3 will be better than their Q2. I do not expect this trend to stop. We'll know more in about a month.
 
Last edited:

maddie

Diamond Member
Jul 18, 2010
4,749
4,691
136
That's extremely different from horrendous though.
This is what I answered to.



Revenue means nothing...
if you make a billion in revenue but only put 10 dollars in your pocket you are worse off than somebody that makes $100 but gets to keep it all...

AMD's COGS more than tripled in the last two years...
Intel went from 9.2bil to 9.7bil whoop-de-doo.



So what is your point here ???
AMD spending money on research=good, intel spending money on research=bad??
For AMD it's going to make them money in the future but for intel it's not?

That is an excellent idea.
Intel lost about 2 bil last quarter which isn't even the biggest loss in the last two years, while AMD paid off 440 mill in debt, with a bit of luck AMD is going to become profitable in this 3rd quarter....
Don't you know that cost of goods sold increases as you sell more? This shows Intel pretty much stagnating as AMD grows quickly. You, instead, see this as a negative sign. Amazing.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,573
14,526
136
I don't know why this is not clear from the last 4 Intel and AMD quarterly results. Intel has gotten worse each quarter, and AMD has done better each quarter. And if you read the last paragraph in Hitman928's post above, it's clear that even Intel knows at this point they are facing an uphill battle.

The only other take is when somebody cherry-picks a certain stat and tries to use it to cast Intel in a good light; otherwise there is (currently) no good light.
 
  • Like
Reactions: Tlh97 and scineram

OneEng2

Junior Member
Sep 19, 2022
13
29
51
Lots to reply to here....

@TheELF
What do you engineers call "economies of scale" ?
Intel has all of their own fabs, so they aren't paying someone else for manufacturing, and they produce on the order of ten times as many CPUs as AMD does.

Waste is a different matter; Intel may be losing many more dies, or whatever, but I would bet that each completed CPU costs Intel much less than it costs AMD, and that is including the iGPU for Intel.

As pointed out, fabs cost an enormous amount of money just to build and maintain. Adding new capabilities for leading nodes is beyond expensive. In this specific discussion, we are not pitting AMD against Intel (at least not since AMD went fabless); we are pitting Intel against TSMC. While I agree that such a discussion is worthy (and interesting), it really isn't the point I was trying to make.

What do you think that happened with 10nm?!

Again, off topic, but interesting. I would say that Intel's total project cost for 10nm has been vastly more expensive per chip than the profit that TSMC has gotten from AMD per chip.

For decades (I have been doing this for a while), I have argued that Intel's greatest strength was that they were able to maintain a 1 to 2 process node lead over the competition. Not only is this not the case today, it is not likely to be the case in the next 10 years.

Soooo. This is why I believe that architecture is so important NOW. Intel can clearly no longer rely on maintaining market dominance based on process node alone (not that they haven't had great architectures in the past; it just wasn't the main reason they were dominant, IMO).

The irony is that, technically, it should be far easier (cheaper) to package a monolithic die than AMD's chiplet approach. You're transferring the complexity from big dies to overcomplicated package technology.
AMD's advantage is that chiplets allow it to reuse the same CPU chiplets across several lines (consumer and server) and potentially mix and match I/O dies without changing the CPU chiplet. They may stock CPU chiplets with some headroom to shift I/O die production based on demand.

If it is, it is only marginally so (again, a great discussion topic). A monolithic die still has to have interconnects to the board and maintain high-speed trace integrity. And while I agree that AMD has transferred some of the complexity from the big die to the package technology (and that this is NOT an easy task), it is, in fact, that very change that I am saying is paying off in spades for AMD. Yes, I agree it is likely difficult to design. I simply point out that the total cost is greatly reduced, and, as you point out, the ability of the AMD architecture to scale up is nearly unlimited. If Intel attempted to create a 96-core monolithic die that matched AMD's design, I think it would be incredibly cost prohibitive.
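A rough sketch of why a 96-core monolithic die would be cost-prohibitive: under a simple Poisson yield model, wafer area spent per good part diverges quickly with die size. All areas and the defect density here are invented for illustration (a 12×70 mm² die would also brush up against the ~858 mm² reticle limit):

```python
import math

def defect_free_fraction(area_mm2, d0_per_cm2):
    """Poisson model: probability a die of the given area has zero defects."""
    return math.exp(-d0_per_cm2 * area_mm2 / 100.0)

D0 = 0.1                      # assumed defects/cm^2 (illustrative)
CHIPLET_MM2 = 70              # assumed 8-core chiplet area
MONO_MM2 = 12 * CHIPLET_MM2   # hypothetical 96-core monolithic die

# Wafer area consumed per *good* die = die area / yield.
chiplet_silicon = 12 * CHIPLET_MM2 / defect_free_fraction(CHIPLET_MM2, D0)
mono_silicon = MONO_MM2 / defect_free_fraction(MONO_MM2, D0)

print(f"12 chiplets: {chiplet_silicon:.0f} mm^2 of wafer per good 96-core part")
print(f"monolithic:  {mono_silicon:.0f} mm^2 of wafer per good 96-core part")
```

At these assumptions the monolithic part burns roughly twice the silicon per sellable unit, before even counting the packaging cost on either side.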

Why do you think that a miniature high precision multilayer PCB costs just 1 USD?

Because at the end of the day, it is simply materials cost plus manufacturing cost, with materials being the lion's share per unit. Yes, these boards are much more expensive to design and validate, but that cost pales beside the savings from using smaller chips, and chips on different process nodes. Why do you believe these boards are so expensive to produce? After all, it isn't like a monolithic die doesn't need an interface board; the only difference comes down to size. So interface board cost really comes down to raw materials and the process time to produce, and both of those are light years below the cost of cutting-edge silicon production.

Intel is going chiplet with arrow lake.

Indeed. This shows not only that it is likely a good design, but that Intel realizes it is a major contributor to a more profitable process. My suspicion is that a monolithic design is likely a boon for IPC, as I can't imagine signaling through an interconnect board could possibly be as efficient as doing so within the die. Moving from a monolithic design to a chiplet design likely involves some shuffling and design considerations for the narrower and slower connections.

To address the actual CPU core architecture, I believe the jury is still out on Intel's big-little design. Non-symmetric processing isn't a new idea; Sony's asymmetric PlayStation design ended up being replaced by an AMD processor, after all. One would think a gaming machine, with its much more rigid demands, would be the ideal place for such a design. Still, Intel's Alder Lake processor shows some pretty good results, including some benchmarks that utilize those little cores well. The bigger question to ask, IMO, is whether, from an overall design and sales perspective, it is still a good idea.

The big-little concept doesn't really appear to work as well for most server loads. AMD has been eating into Intel's VERY lucrative server market for the last few years with the EPYC processor and their chiplet design. The question I would pose is whether it makes sense to go through all the trouble of having little cores when you are making so much more profit on big cores in servers.

As I said, I believe the jury is still out on this one. Still, my point was that Intel is a very innovative company. If I went back in time through all of their first-in-market architectural designs, the list would far outstrip AMD's.

But that was then and this is now. Intel is playing from a deficit IMO. Their design is not as production friendly as AMD's and their previous ability to stay 1 to 2 process nodes ahead of the rest of the industry is clearly at an end.
 

OneEng2

Junior Member
Sep 19, 2022
13
29
51
I don't know why this is not clear from the last 4 Intel and AMD quarterly results. Intel has gotten worse each quarter, and AMD has done better each quarter. And if you read the last paragraph in Hitman928's post above, it's clear that even Intel knows at this point they are facing an uphill battle.

The only other take is when somebody cherry-picks a certain stat and tries to use it to cast Intel in a good light; otherwise there is (currently) no good light.
I guess my only counter to this (and arguably it may simply come from so many years and decades of empirical evidence) is that Intel still has 50 times the resources that AMD has. They have more monopoly power in the industry (although it is fading quickly), and they have come back from seriously fatal design decisions in the past (read: Pentium 4 and Itanium). My decades of following this dance just won't allow me to count them out :).
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,355
1,550
136
Revenue means nothing...
if you make a billion in revenue but only put 10 dollars in your pocket you are worse off than somebody that makes $100 but gets to keep it all...
Revenue means plenty of things. Specifically, for companies with massive NRE, like foundries or CPU or GPU makers, it's critical to have sufficient revenue to cover all those costs. Especially when you are used to having a higher revenue and suddenly you don't.

AMD's COGS more than tripled in the last two years...
Intel went from 9.2bil to 9.7bil whoop-de-doo.
Are you for real? COGS is, by definition, the part of costs that grows linearly with the amount of goods sold. COGS going up when you sell more stuff is entirely normal. What's not normal, or okay, is when your revenue falls by a quarter and your COGS still goes up. This implies that their ASPs are cratering.
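One way to see the ASP implication: if unit volume is roughly flat, the average selling price tracks revenue, while gross profit absorbs the whole squeeze. Everything below is hypothetical arithmetic; the unit count is made up, and the revenue/COGS figures are the round 9.2B/9.7B numbers quoted above:

```python
units = 50e6                # hypothetical units sold per quarter (made up)

rev_before, cogs_before = 19.6e9, 9.2e9
rev_after, cogs_after = 15.0e9, 9.7e9

# Same unit volume, lower revenue, higher COGS.
asp_before = rev_before / units
asp_after = rev_after / units
profit_before = rev_before - cogs_before
profit_after = rev_after - cogs_after

print(f"implied ASP:  ${asp_before:.0f} -> ${asp_after:.0f}")
print(f"gross profit: ${profit_before / 1e9:.1f}B -> ${profit_after / 1e9:.1f}B")
```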

So what is your point here ???
AMD spending money on research=good, intel spending money on research=bad??
For AMD it's going to make them money in the future but for intel it's not?
No. They both need to spend money. The problem is that AMD is now almost certainly outspending Intel in CPU and GPU R&D, and TSMC is definitely outspending Intel in process R&D.

That is an excellent idea.
Intel lost about 2 bil last quarter which isn't even the biggest loss in the last two years, while AMD paid off 440 mill in debt, with a bit of luck AMD is going to become profitable in this 3rd quarter....
I criticise you for using an excessively lagging indicator and you switch to one that has accumulated over 12 years. Seriously? Do you not understand that past performance is irrelevant to the current and future health of the company? That it's bad when a company that has made >$9B of gross profit per quarter since 2016 suddenly has a $5.5B quarter, and then issues guidance that they expect things to get worse before they get better?
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,573
14,526
136
I guess my only counter to this (and arguably it may simply come from so many years and decades of empirical evidence) is that Intel still has 50 times the resources that AMD has. They have more monopoly power in the industry (although it is fading quickly), and they have come back from seriously fatal design decisions in the past (read: Pentium 4 and Itanium). My decades of following this dance just won't allow me to count them out :).
First, their anti-competitive practices were caught and they were called on it. And, as you said, they have had failures before. My post is not meant to say they are a goner.

BUT, for the first time in decades (or ever), they are losing market share and competitive edge, and they face a real possibility of a 50/50 market within 10 years. I highly doubt they have 50 times the resources; maybe 2-3 times, but that is changing as well. THIS is the reality that I see. Even a 60/40 split would benefit the industry. I think Intel has even admitted that they see 70/30 in a year, and that they see several years before they can get ANYTHING back. The question is, will that be at 60/40 or 50/50? (These are just ballpark estimates.)
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,355
1,550
136
COGS is everything they produce in that quarter and it's impossible to draw any parallel to what they actually sell in that quarter, they could be producing for months in advance or still be producing things they presold.

This is incorrect. COGS is specifically the costs directly associated with products sold during a specific period of time.
 

maddie

Diamond Member
Jul 18, 2010
4,749
4,691
136
Revenue doesn't cover any cost, you have to get to keep some of the revenue money to cover anything.

That would make some amount of sense if a company would only sell things they made in that quarter and also they sold 100% everything they produced.
You know very well that's not what happens.
COGS is everything they produce in that quarter and it's impossible to draw any parallel to what they actually sell in that quarter, they could be producing for months in advance or still be producing things they presold.
I guess we should have a new acronym, COGP, according to your redefinition. Quickly now, spread the word.
 

DrMrLordX

Lifer
Apr 27, 2000
21,643
10,860
136
if you make a billion in revenue but only put 10 dollars in your pocket you are worse off than somebody that makes $100 but gets to keep it all...

That is ridiculous. Capital expenditures can help with future growth. You ought to know that; you keep claiming all of Intel's losses on paper are from building fabs (hint: that isn't true either).
 

OneEng2

Junior Member
Sep 19, 2022
13
29
51
First, their anti-competitive practices were caught and they were called on it. And, as you said, they have had failures before. My post is not meant to say they are a goner.

BUT, for the first time in decades (or ever), they are losing market share and competitive edge, and they face a real possibility of a 50/50 market within 10 years. I highly doubt they have 50 times the resources; maybe 2-3 times, but that is changing as well. THIS is the reality that I see. Even a 60/40 split would benefit the industry. I think Intel has even admitted that they see 70/30 in a year, and that they see several years before they can get ANYTHING back. The question is, will that be at 60/40 or 50/50? (These are just ballpark estimates.)
I believe they did lose market share in the P4 vs K8 days, in both desktop and server markets. It wasn't a great deal of market share (5-10%, IIRC), but enough for the trend line to be disturbing for Intel. They made a very (VERY) quick pivot to the Conroe architecture and dumped Itanium, which returned them to market dominance until just recently (read: the Ryzen generation starting mid-2017).

The difference between the past and the present loss of market share is that this time there isn't a Conroe architecture waiting in the wings, or a 1-2 node process lead to fall back on.

I don't believe that the Alder Lake processor architecture is a bad one. It is actually quite good; however, it isn't fundamentally superior to Ryzen. Adding to that (and the point of this thread), AMD's chiplet design gives them a fundamental advantage in cost and design flexibility.

I could see 70/30 in a year (or two). I can't see beyond that, because my concern is that AMD will be constrained by TSMC capacity, and I can see Intel making a sweetheart deal with TSMC, if for no other reason than that it limits AMD's market share gains by limiting their production capacity.

The discussion on process technology is also worth analyzing in the same scope of efficiency of design with respect to manufacturing cost.

As for the discussion of the relative size of TSMC vs Intel, I thought this was a really good read: https://www.electropages.com/blog/2021/04/place-your-bets-intel-vs-tsmc

Another concern of mine is that while TSMC makes 54% of the world's semiconductor chips, ASML makes 100% of the world's EUV lithography equipment! I love that the US has implemented the CHIPS Act and is feverishly building chip factories in the US, but where are all the EUV machines going to come from? I believe ASML capacity may well be a problem in the next couple of years: https://seekingalpha.com/article/4502962-asml-q1-review-capacity-expansion-will-drive-growth

Both AMD and Intel (and others) are all lined up for GAA process nodes (every company seems to have its own name for the same thing), but AFAIK they all rely on EUV equipment from ASML. Does that sound right to everyone?
 

DrMrLordX

Lifer
Apr 27, 2000
21,643
10,860
136
Both AMD and Intel (and others) are all lined up for GAA process nodes (every company seems to have its own name for the same thing), but AFAIK they all rely on EUV equipment from ASML. Does that sound right to everyone?

Essentially, yes. Though I think Intel's own flavor of GAA is RibbonFET?