Lots to reply to here....
@TheELF
What do you engineers call "economies of scale"?
Intel has all of its own fabs, so it isn't paying someone else for manufacturing, and it is producing roughly ten times as many CPUs as AMD is.
Waste is a different matter; Intel may be losing far more dies to defects, but I would bet that each completed CPU costs Intel much less than AMD's cost it, and that is including the iGPU for Intel.
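To make the waste point concrete, here's a back-of-the-envelope sketch using the textbook Poisson yield model, where the fraction of good dies is roughly e^(−D0·A). The defect density and die areas below are purely illustrative assumptions, not real Intel or AMD figures:

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of dies expected to be defect-free under a Poisson defect model."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

D0 = 0.001  # illustrative: 0.1 defects per cm^2

# A large monolithic die vs. a small chiplet (areas are made-up examples)
print(f"600 mm^2 die yield: {poisson_yield(600, D0):.1%}")  # roughly 55%
print(f" 75 mm^2 die yield: {poisson_yield(75, D0):.1%}")   # roughly 93%
```

The bigger the die, the more likely a random defect lands inside it, so large monolithic parts throw away disproportionately more wafer area.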
As pointed out, fabs cost an enormous amount of money just to build and maintain. Adding new capabilities for leading-edge nodes is .... beyond expensive. In this specific discussion, we are not pitting AMD against Intel (at least not since AMD went fabless); we are pitting Intel against TSMC. While I agree that such a discussion is worthwhile (and interesting), it really isn't the point I was trying to make.
What do you think happened with 10nm?!
Again, off topic, but interesting. I would guess that Intel's total 10nm project cost, amortized per chip, has been vastly higher than the per-chip profit TSMC has made from AMD.
For decades (I have been doing this for a while), I have argued that Intel's greatest strength was that it was able to maintain a one-to-two process-node lead over the competition. Not only is this not the case today, it is not likely to be the case in the next 10 years.
Soooo. This is why I believe that architecture is so important NOW. Intel clearly can no longer rely on process node alone to maintain market dominance (not that they haven't had great architectures in the past; it just wasn't the main reason they were dominant, IMO).
The irony is that, technically, it should be far easier (cheaper) to package a monolithic die than AMD's chiplet approach. You're transferring the complexity from big dies to overcomplicated packaging technology.
AMD's advantage is that chiplets allow it to reuse the same CPU chiplets across several lines (consumer and server) and potentially mix and match IO chiplets without changing the CPU chiplet. They may stock CPU chiplets with some headroom so they can shift IO-chiplet production based on demand.
If it is, then it is only marginally so (again, a great discussion topic). A monolithic die still has to have interconnects to the board and maintain high-speed trace integrity. And while I agree that AMD has transferred some of the complexity from the big die to the package technology (and that this is NOT an easy task), it is, in fact, that very change that I am saying is paying off in spades for AMD. Yes, I agree it is likely difficult to design. I simply point out that the total cost is greatly reduced, and, as you point out, the AMD architecture's ability to scale up is nearly unlimited. If Intel attempted to create a 96-core monolithic die that matched AMD's design, I think it would be incredibly cost-prohibitive.
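As a rough illustration of why a 96-core monolithic part would be cost-prohibitive, here's a sketch comparing silicon cost per good die for one huge die versus twelve small chiplets, using a Poisson yield model. All the numbers (wafer cost, usable area, defect density, die sizes) are illustrative assumptions, and the model deliberately ignores packaging and IO-die costs, which would work in the monolithic design's favor:

```python
import math

WAFER_COST = 15000.0  # assumed leading-edge wafer cost, USD (illustrative)
WAFER_AREA = 70000.0  # rough usable area of a 300 mm wafer, mm^2
D0 = 0.001            # assumed defect density, defects per mm^2

def cost_per_good_die(die_area_mm2):
    """Silicon cost per defect-free die (Poisson yield, ignoring edge losses)."""
    dies_per_wafer = WAFER_AREA / die_area_mm2
    good_fraction = math.exp(-D0 * die_area_mm2)
    return WAFER_COST / (dies_per_wafer * good_fraction)

# Hypothetical 96-core part: one 800 mm^2 monolithic die
# vs. twelve 70 mm^2 8-core chiplets.
print(f"monolithic silicon:  ${cost_per_good_die(800):,.0f}")
print(f"12-chiplet silicon:  ${12 * cost_per_good_die(70):,.0f}")
```

Under these made-up assumptions, the chiplet version's silicon costs roughly half as much, and the gap widens as defect density rises or the monolithic die grows.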
Why do you think that a miniature high-precision multilayer PCB costs just 1 USD?
Because at the end of the day, it is simply materials cost plus manufacturing cost, with materials being the lion's share of each unit. Yes, these boards are much more expensive to design and validate, but that cost pales beside the savings from using smaller chips, and chips on different process nodes. Why do you believe they are so expensive to produce? After all, it isn't as if a monolithic die doesn't need an interface board too; the only cost difference comes down to size. So interface-board cost really comes down to raw materials and process time, and both of those are light-years below the cost of cutting-edge silicon production.
Intel is going chiplet with Arrow Lake.
Indeed. This shows not only that chiplets are likely a good design, but that Intel realizes they are a major contributor to a more profitable process. My suspicion is that a monolithic design is likely a boon for IPC, as I can't imagine signaling through an interconnect board could possibly be as efficient as doing so within the die. Moving from a monolithic design to a chiplet design likely requires some shuffling and design compromises to accommodate the narrower, slower connections.
To address the actual CPU core architecture: I believe the jury is still out on Intel's "big-little" design. Non-symmetric processing isn't a new idea; Sony's PlayStation 3, with its asymmetric Cell design, ended up being replaced by an AMD processor, after all. One would think that a gaming console, with its much more rigid demands, would be the ideal place for such a design. Still, Intel's Alder Lake processors show some pretty good numbers, with some benchmarks that utilize those little cores well. The bigger question to ask, IMO, is whether this design is still a good idea from an overall design and sales perspective.
The big-little concept doesn't really appear to work as well for most server loads. AMD has been eating into Intel's VERY lucrative server market for the last few years with its EPYC processors and their chiplet design. The question I would pose is whether it makes sense to go through all the trouble of having little cores when you are making so much more profit on big cores in servers.
As I said, I believe the jury is still out on this one. Still, my point was supposed to be that Intel is a very innovative company. If I went back in time and counted up all their first-to-market architectural designs, the list would far outstrip AMD's.
But that was then and this is now. Intel is playing from a deficit, IMO. Their design is not as production-friendly as AMD's, and their previous ability to stay one to two process nodes ahead of the rest of the industry is clearly at an end.