Which don't do them much good until they're ready to go, and which require serious design considerations that look like a serious gamble. Take, for example, your idea that bridges replace IFOP. That's an all-in design choice: you now need all IO dies to be fabricated at TSMC, because you've built a chiplet that won't integrate with anything that can't be connected by that bridge, unless you build all of your chiplets so that they can work in either configuration.
Even if you were making that decision years ago, you'd choose GlobalFoundries because it allows for more production. AMD would know how many wafers they can buy, that they're going to be making new console SoCs, that they're going to be expanding their GPU offerings, and that they're likely to have a strong advantage in HEDT and servers thanks to their chiplet-based approach, with room to grow in those markets.
If you move your IO dies to TSMC, you need roughly two-thirds more TSMC wafers to cover the ~40% of total CPU die area that the IO dies account for. Even without a crystal ball to predict a pandemic and supply chain issues, you'd have to be foolish to make that move. It just doesn't make sense logistically.
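As a sanity check on that wafer arithmetic, here's a quick back-of-envelope sketch. The ~40% IO-die area share is the estimate from this thread, not an official AMD number, and everything else (yields, die counts per wafer) is held constant for simplicity:

```python
# Back-of-envelope: how much would TSMC wafer demand grow if the IO dies
# (assumed ~40% of total CPU die area, per the estimate above) moved from
# GF to TSMC, with all else held constant?
io_area_fraction = 0.40                  # assumed IO-die share of total die area
ccd_area_fraction = 1.0 - io_area_fraction

tsmc_area_now = ccd_area_fraction        # today TSMC fabs only the CPU chiplets
tsmc_area_after = 1.0                    # after the move, TSMC fabs everything

multiplier = tsmc_area_after / tsmc_area_now
print(f"TSMC wafer demand grows by {multiplier:.2f}x")  # ~1.67x per CPU sold
```

So even on this simplified model, every CPU sold would consume about two-thirds more scarce TSMC wafer area, with GF capacity left idle.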
I'm not sure that's happening. I keep pointing out why it doesn't make sense, and the only response is "But what about this other really cool but unlikely possibility?" I honestly don't think the basic points that strongly suggest continued use of GlobalFoundries have even been addressed. AMD recently extended its commitments to GF and has agreed to buy $2.1 billion in wafers over the next four years. But rather than address that, it's just "But what about cool hypothetical future technology? Wouldn't that be sweet?"
I'm not sure that's necessarily the case. It seems most likely that RDNA 3 uses two bridged chiplets, if it uses any at all. It may not even do that for all models, and those chiplets would be perfectly fine if they weren't connected, because they're really just GPUs placed on the same package rather than modules that need a separate die in order to function.
Keep in mind that Zen 2 apparently had the technology in place to utilize stacked cache, but it took until the tail end of Zen 3 to actually implement it. We're talking about bleeding-edge technologies that haven't been used at scale before, and it takes a lot of time to get the kinks worked out. The path to the cool future technology is a slow one of incremental evolution, building on previous successes, not a giant leap made with reckless abandon and all caution thrown to the wind.
What, specifically, do you think doesn’t make sense? I posted here quite a while ago that I expect Bergamo will make use of stacked die. It makes a lot of sense for a super high core count product to use stacking to lower interconnect power consumption significantly. It is also suspiciously limited to 8 cpu die instead of 12 like Genoa. I assume Bergamo was developed in parallel with Genoa, with Genoa being very similar to Milan; the low risk path.
There was an anandtech article about TSMC 2.5D and 3D stacking from 2020 posted here a long time ago. Some of the stacking tech has been available since 2018, so I don’t know how bleeding edge it actually is at this point. TSMC was already demonstrating 12-high SoIC stacks in 2020; AMD has, so far, used 1-high stacks. Is that too risky 2 years later?

AMD seems likely to use an MCD (bridge chiplet with cache) for GPUs, so it doesn’t seem that unlikely that it would be used in other products. Also, if AMD does not make use of TSMC stacking soon, they will be at risk of falling behind Intel. Use of such stacking tech could allow Intel to surpass AMD’s serdes-based MCM designs due to the significantly lower power consumption of the interconnect. AMD’s serdes-based packages are actually just MCMs (chip-based, not chiplet-based), which have been around for a long time; see the IBM POWER5 MCM from 2004, which had 8 chips: 4 CPU dies and 4 cache dies. Intel is going to jump directly to stacking.
The old article about TSMC stacking tech indicates that they were working on making larger and larger sizes available for the stacking tech using reconstituted wafers. Some of them may go up to 3x reticle size now; it may have been 1.5x in 2020. I suspect Bergamo could be done in a single reticle size: a smaller IO die plus 8 CPU chiplets similar in size to current ones. That would be possibly less than 600 mm2 for the CPU chiplets, with around 230 mm2 left for the IO die. IFOP links are 2x the width of IFIS, so eliminating 8 of them (or 12, if comparing to Genoa) and making the IO die on an advanced TSMC process would save a lot of area. So, while this might be risky, if it can be done in a single reticle size it may be less risky than you might think.
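To check that this layout plausibly fits in one reticle, here is a rough area budget. The reticle limit (~830 mm2) and per-chiplet size (~75 mm2, similar to current CCDs) are my illustrative assumptions, not AMD specs:

```python
# Rough single-reticle area budget for a hypothetical stacked Bergamo package.
# Reticle limit and per-chiplet area are illustrative assumptions, not AMD specs.
reticle_limit_mm2 = 830    # roughly the ~26 mm x ~32 mm lithography field
ccd_count = 8              # Bergamo is expected to use 8 CPU dies
ccd_area_mm2 = 75          # assumed per-chiplet size, close to current CCDs

ccd_total_mm2 = ccd_count * ccd_area_mm2               # 600 mm2 of CPU chiplets
io_die_budget_mm2 = reticle_limit_mm2 - ccd_total_mm2  # 230 mm2 left for the IO die
print(f"CCDs: {ccd_total_mm2} mm2, IO die budget: {io_die_budget_mm2} mm2")
```

With the IFOP PHYs removed and the IO die on a denser TSMC process, ~230 mm2 for the IO die doesn’t look obviously impossible, which is the point: the whole assembly could fit under one reticle.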
They would have to make all of the component chips at TSMC, but I don’t know if that is an issue. Bergamo will exist alongside Genoa (less risk; you could still get a 96-core Genoa) and probably Milan and Milan-X for a while. It will be a very high-end cloud product, likely with very high margins. We are talking about a 128-core processor, possibly with 512 MB or even 1 GB of cache. How much will that cost? They likely already have all of the design work for the required units, since those are also required for other products (APUs, GPUs, etc.). AMD has excellent design and implementation reuse.
It would probably still make sense for the Genoa IO die to be made at GF. AMD will also be making Milan and Genoa parts for a while; I know some companies spec a system and then buy the same thing for a few years, so they will be buying Milan for a couple of years yet. The server market moves slowly. I assume that they would make all of their chipsets at GF, since chipsets are likely to still be essentially an IO die, just without the memory controllers. It would also be plausible to continue to make desktop IO dies at GF. I have wondered whether it would be plausible for them to make some low-end Zen 4 APUs at Samsung or even GF; we still don’t have lower-end Zen 3 based parts, and using MCMs for 8 cores or fewer is a bit wasteful.