That's only if you require perfect dies, which no one does. Say half of those defects are recoverable; then you're at 95% vs. 80% (rounding for laziness). In other words, roughly 20% more good dies per wafer, and that gap shrinks by the month. You seriously think a company can't ship a product with those numbers?
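For a rough sanity check on those percentages, here's a minimal sketch using the classic Poisson yield model, Y = exp(-A * D0). The die area, the two defect densities, and the "half recoverable" knob are all assumptions for illustration, not anyone's real numbers:

```python
import math

def die_yield(area_cm2: float, d0_per_cm2: float, recoverable_frac: float = 0.0) -> float:
    """Poisson yield model: Y = exp(-A * D0).

    recoverable_frac is a hypothetical share of defects that on-die
    redundancy can repair, which effectively lowers the killer defect density.
    """
    effective_d0 = d0_per_cm2 * (1.0 - recoverable_frac)
    return math.exp(-area_cm2 * effective_d0)

# Assumed: a 1 cm^2 die on a mature process (D0 ~ 0.1/cm^2)
# vs. a struggling one (D0 ~ 0.5/cm^2).
for d0 in (0.1, 0.5):
    raw = die_yield(1.0, d0)
    repaired = die_yield(1.0, d0, recoverable_frac=0.5)
    print(f"D0={d0:.1f}/cm^2: raw {raw:.0%}, with half of defects recoverable {repaired:.0%}")
```

With those assumed inputs you land close to the figures above: roughly 95% vs. 78%.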
What do you mean by recoverable? Do you mean due to built-in redundancy? What type of design are you talking about, and how much area do you have to add to hit that recovery rate on average?
And TSMC has ~50% gross margins. You don't think there's room for transient differences to be priced in? And of course, that assumes two entirely equivalent nodes.
Given the ever-increasing costs and difficulties of node progressions, I'd say there's probably not that much room there, no. Gross margin doesn't account for the research and node-development costs, the administrative overhead, customer engineering support, etc. Those are big-time costs.
Where are you getting that number from? Why, as a customer, would you even care what the fab's yields are so long as they can get you the number of good dies promised?
Because a fab sells wafers, not dies, and the purchase agreement for those wafers is negotiated based on the fab's yield data.
Even if we assume that doesn't matter, say the fab even covers the remaining yield difference financially at an average yield per design of 95% vs. 65%. At HVM of even 20K WSPM and $8,000 per wafer, the fab would be covering roughly $48M in the first month alone.
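For what it's worth, the arithmetic behind that figure, using only the assumed numbers from this thread (20K WSPM, $8,000 per wafer, a 30-point yield gap):

```python
# Back-of-the-envelope for the yield-coverage figure above.
# All inputs are the assumed numbers from the thread, not real contract terms.
wafers_per_month = 20_000   # 20K WSPM at HVM
price_per_wafer = 8_000     # USD
yield_gap = 0.95 - 0.65     # 30 percentage points

monthly_wafer_spend = wafers_per_month * price_per_wafer   # $160M
coverage = monthly_wafer_spend * yield_gap                  # $48M
print(f"Monthly wafer spend: ${monthly_wafer_spend/1e6:.0f}M, "
      f"yield gap the fab would eat: ${coverage/1e6:.0f}M")
```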
And since cache alone is about half of a typical die, that's huge. It's also easy to add redundancy to the synthesized arrays, and that's another huge chunk right there (probably a good half or more of the remainder). Again, redundancy is the norm these days.
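To make the "half the die is repairable arrays" idea concrete, here's a toy split-die version of the same Poisson sketch; the 50/50 area split and the repair effectiveness are assumptions for illustration, not numbers from any real design:

```python
import math

def split_die_yield(area_cm2: float, d0_per_cm2: float,
                    repairable_frac: float, repair_effectiveness: float) -> float:
    """Toy model: treat the die as a repairable region (cache/arrays with
    spare rows/columns) and a non-repairable logic region, each yielding
    independently under Y = exp(-A * D0)."""
    repairable_area = area_cm2 * repairable_frac
    logic_area = area_cm2 * (1.0 - repairable_frac)
    # Redundancy only helps in the repairable region.
    y_repairable = math.exp(-repairable_area * d0_per_cm2 * (1.0 - repair_effectiveness))
    y_logic = math.exp(-logic_area * d0_per_cm2)
    return y_repairable * y_logic

# Assumed: 1 cm^2 die, ~50% cache/arrays, redundancy repairs ~90% of array defects.
for d0 in (0.1, 0.5):
    y = split_die_yield(1.0, d0, repairable_frac=0.5, repair_effectiveness=0.9)
    print(f"D0={d0:.1f}/cm^2: yield with array repair {y:.0%}")
```

With these assumed inputs, array repair mostly neutralizes defects in the cache half, while the non-repairable logic half still yields at the raw D0.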
Fabs tape out way more than cutting-edge CPUs and GPUs, but even ignoring that, you still have to account for the cost of all that redundancy. It is not free, and no amount of redundancy will make a 0.5 D0 process attractive versus a process with industry-standard defect rates.
Amortized yields are factored into the price customers pay. Again, no one would be willing to be first to a node if they had to absorb all the cost and risk. Apple's certainly not paying for TSMC's N3 slips.
Which is why they work hand in hand, and Apple aligns its schedules as closely as possible with actual HVM-ready nodes from TSMC.
Yes, none of this is free, but the worst of the costs are absorbed by the fab, and on the design side it's already the default practice to have mitigations in place. And again, this is a "problem" that gets better by the month. I think you're taking some very reasonable economic considerations and blowing them up into existential issues.
Default practice is to have mitigations for industry-standard defect rates, yes. But if you are dealing with a foundry with 5x the defect density you are used to, you're stepping outside of default practice. And again, how much redundancy is needed and how effective it can be is design-dependent. Not everyone tapes out with half or more of the die being large arrays that are easy to make redundant.
Believe what you want; I don't know what else to say. All I can tell you is that any fab trying to win customers with that defect rate is not going to have many of them, and it is not going to be competitive long term with that kind of business model. If your argument is that you could do it, well, yeah, you could. You can do all kinds of things. That doesn't mean it is a winning strategy.