Do we know for certain that it is the same generation of e-core?
No, that's from me. I have extremely high confidence in what I said, but if you want to assume otherwise, doesn't really matter. Happy to wait for my claims to be proven.
But then, why in the world would Intel duplicate them on CPU die, just to get a better Cinebench score?
It seems like Intel is trying to use Atom cores for two very different things at once. On the SoC die, they're for low power compute, while on the compute die, they're for MT performance. Ideally they'd have a core optimized for each, but clearly that's out of scope for now.
This limitation was the necessity that drove the partitioning decisions. Whether it also turns out to be a virtue remains to be seen.
Well, if it's truly necessary to get a product out the door, I think any debate over incremental cost is moot.
Remains to be seen. It may need additional process steps to fill the gap or to add support to the die, and a bigger core chiplet may need additional power delivery to the rest of the die.
Let's compare this with client Zen 3, for example: there is one type of substrate, it does not have to change to accommodate a second chiplet, and the second chiplet is an identical replica of the first.
Intel has to redesign the chiplet to get more cores and possibly redesign the interposer as well, so the reuse is quite limited.
There's no reason to believe there's any issue with a superset die. By way of analogy, the same socket can support all sorts of dies today. At most, they'll need some dummy dies for assembly.
And I find the comparison to AMD odd. They effectively have 3 desktop packages - 1 or 2 compute dies connected to a superset IO die, or the mobile die for mainstream. In theory, Intel could have one base die to support a range of different compute dies without issue. I don't think they'll do that (because interposers are easy), but it should be possible. Anyway, not seeing a substantial difference in overhead there.
Packaging losses would have to stay well below the gains from individually binning good dies. Because if you just break even, you could have made a monolithic chip instead. Which Intel could not.
So arguing from the other end, that since Intel chose this approach it must be good, ignores the fact that Intel had no other choice.
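To make the break-even point concrete, here's a minimal sketch of the binning-vs-packaging-losses arithmetic, assuming a classic Poisson die-yield model and entirely made-up numbers (the defect density and die areas are illustrative, not Intel's actual figures):

```python
import math

def poisson_yield(area_cm2: float, d0: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * d0)

d0 = 0.2           # assumed defect density, defects per cm^2 (illustrative)
mono_area = 3.0    # hypothetical monolithic die area, cm^2
half_area = 1.5    # each of two chiplets covering the same total area

y_mono = poisson_yield(mono_area, d0)   # fraction of good monolithic dies
y_chip = poisson_yield(half_area, d0)   # fraction of good chiplets

# Relative silicon cost per good product (wafer area burned per good unit):
cost_mono = mono_area / y_mono

def chiplet_cost(y_pkg: float) -> float:
    # Two pre-tested, known-good chiplets; a packaging failure scraps both.
    return (2 * half_area / y_chip) / y_pkg

# Packaging yield at which the chiplet route merely breaks even:
y_pkg_breakeven = (2 * half_area / y_chip) / cost_mono

print(f"monolithic yield:      {y_mono:.2f}")           # ~0.55
print(f"per-chiplet yield:     {y_chip:.2f}")           # ~0.74
print(f"break-even pkg yield:  {y_pkg_breakeven:.2f}")  # ~0.74
print(f"cost ratio at 99% pkg: {chiplet_cost(0.99) / cost_mono:.2f}")
```

Under these toy numbers, packaging yield only has to clear ~74% to beat the monolithic option, and at 99% the chiplet route spends about three-quarters the silicon per good part. The point stands either way: if packaging yield sinks toward the break-even level, the binning advantage evaporates.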
Packaging losses are traditionally very low; you want something like 99%+ yield. I don't have numbers for Foveros, but again, we have no reason (leak, rumor, etc.) to believe it's an issue. So while we also don't yet have proof that it isn't, this logic seems like a dead end for now. Russell's teapot and all that.