Intel 3 will probably have a pretty long life. If nothing else, they need a reasonably mature node for their 2024 server products, and clearly neither 20A nor 18A fits that bill. But I assume they'll at least use it for IO dies and the like going forward, even if IFS never really picks it up.

If Intel 18A is that great and can be pushed that far, then why waste your time with TWO short-lived nodes ahead of it? ... So why not skip 20A entirely and use Intel 3 for a year?
As for 20A vs Intel 3, 20A will presumably be better for anything that can afford the schedule risk and lower yield, i.e. client, not server. Which is why they just have a single, relatively small die planned for it.
I think it's more useful to think of Intel 4 and 20A as subsets of Intel 3 and 18A respectively. So they're not really different nodes so much as early versions that get expanded/refined in their successors. Given Intel's product mix, it's probably not a terrible idea.

Now sure, 20A can make a good "stepping stone" to 18A as the first node to use the new transistors and PowerVia, but if it is only going to last six months, they might have what, one product able to use it? Which would presumably be replaced/updated quickly to use 18A?
Though I think you do have a point in that if Intel wasn't an IDM, stuff like ARL 20A wouldn't make any sense. That appears to be an attempt to force the design teams to provide the ramp vehicle the foundry needs to get the node working. Wonder how that works with their new accounting scheme...
I doubt they're actually designing everything for multiple nodes at once just as a backup plan. That would be extremely inefficient, and they probably don't have the staffing for it at this point.

There is no "man, should have designed for 20A instead of Intel 18A" moment, since they are designing for both (or more) nodes and cancelling one (or more).