
Question Speculation: RDNA3 + CDNA2 Architectures Thread

Page 2

Gideon

Golden Member
Nov 27, 2007
1,214
2,270
136
Really interesting stuff!

To me the coolest part was the possibility of having the interconnect bridges (118 in the figure), which contain the L3, sit on top of the GPU chiplets (106-X in the figure):


I always wondered how they would solve the heat issues with active interposers (ones containing caches) underneath multi-chiplet GPUs.

Putting the interconnects on top seems like a really nice way of doing that. The chips would obviously be designed so that the toasty parts of the GPU chiplets (the CUs) and of the interconnects (the L3) are not under each other.

In Figure 3 they have a more "classical" solution. I'm not really sure how big the difference between the two designs would actually be. That one would have TSVs (Through-Silicon Vias) on the GPU chiplets instead of the bridge. To me it would seem that you can fit more L3 in the Fig. 4 design (as the L3 can potentially take up more area than just the gap between the chiplets), but I would be really interested if anyone with actual knowledge about these matters could comment.
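
To make that area argument concrete, here's a back-of-the-envelope sketch in Python. Every number in it (chiplet area, gap width, SRAM density, overlap fraction) is a made-up placeholder of mine, not anything from the patent; it only shows why a bridge stacked on top could host more L3 than one confined to the gap:

```python
# Back-of-the-envelope comparison of available L3 area in the two layouts.
# All dimensions are hypothetical placeholders, not figures from the patent.

GPU_CHIPLET_AREA_MM2 = 80.0   # assumed area of one GPU chiplet
GAP_WIDTH_MM = 1.0            # assumed gap between adjacent chiplets
CHIPLET_EDGE_MM = 9.0         # assumed edge length along the shared side
L3_DENSITY_MB_PER_MM2 = 0.5   # assumed SRAM density on the bridge die's node

def l3_capacity_in_gap(num_chiplets: int) -> float:
    """Fig. 3 style: bridge L3 only fits in the gaps between chiplets."""
    gaps = num_chiplets - 1
    area = gaps * GAP_WIDTH_MM * CHIPLET_EDGE_MM
    return area * L3_DENSITY_MB_PER_MM2

def l3_capacity_on_top(num_chiplets: int, overlap_fraction: float = 0.5) -> float:
    """Fig. 4 style: bridge sits on top, so L3 can also overlap the chiplets."""
    gaps = num_chiplets - 1
    gap_area = gaps * GAP_WIDTH_MM * CHIPLET_EDGE_MM
    overlap_area = num_chiplets * GPU_CHIPLET_AREA_MM2 * overlap_fraction
    return (gap_area + overlap_area) * L3_DENSITY_MB_PER_MM2

for n in (2, 4):
    print(f"{n} chiplets: in-gap ~ {l3_capacity_in_gap(n):.0f} MB, "
          f"on-top ~ {l3_capacity_on_top(n):.0f} MB")
```

With these invented numbers the on-top layout ends up with an order of magnitude more L3 area, which is really just restating that the gap is tiny compared to the chiplets themselves.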

Anyway, really exciting, and I do hope we see this in RDNA3. How capable are TSMC's current processes of handling such packaging at scale, anyway?
 

moinmoin

Platinum Member
Jun 1, 2017
2,369
2,941
106
Seems very similar to this patent from earlier this year: https://www.freepatentsonline.com/y2020/0409859.html
Which one is the real one? 😏
For AMD the big advantage of using an old-school package substrate for MCM so far has been twofold: packaging is a solved issue and very cheap, and the length of the connections is not limited, allowing chiplets to be spread apart and still be connected individually. Bandwidth requirements made that approach infeasible for chiplet-based GPUs though.

Passive interposers, as discussed in the older patent above, seem more akin to Intel's EMIB, which as I see it brings two negatives: chiplets need to lie next to each other (so longer connections either need bigger interposers or need to be routed through several chiplets), and packaging is more costly with a lower yield.

The new patent is more exciting because it could decisively change that balance: Infinity Cache could be moved to the interposer altogether, making the GPU chiplets smaller and directly scaling the IC with the number of chiplets used. Packaging may still be expensive by itself, but the dies may be much cheaper to produce as a result (the interposer dies could even be made on a cheaper, older node, as is done with the IOD).
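
As a toy illustration of that scaling argument (again in Python, and again every capacity and cost figure below is my own invented placeholder, not anything from the patent or from AMD): if each bridge/interposer die carried a fixed slice of IC on an older node, the total cache and the split between cheap and expensive silicon would scale directly with the chiplet count.

```python
# Rough sketch: total IC and silicon cost vs. chiplet count, assuming each
# bridge die carries a fixed slice of Infinity Cache on an older, cheaper node.
# All numbers are invented for illustration only.

IC_PER_BRIDGE_MB = 64             # assumed IC carried by each bridge/interposer die
BRIDGE_COST_OLD_NODE = 15.0       # assumed cost of a bridge die on an older node
GPU_CHIPLET_COST_NEW_NODE = 60.0  # assumed cost of a GPU chiplet on the leading node

def config_estimate(num_chiplets: int) -> dict:
    """Total IC and rough silicon cost, using one bridge per adjacent chiplet pair."""
    bridges = num_chiplets - 1
    return {
        "chiplets": num_chiplets,
        "total_ic_mb": bridges * IC_PER_BRIDGE_MB,
        "silicon_cost": num_chiplets * GPU_CHIPLET_COST_NEW_NODE
                        + bridges * BRIDGE_COST_OLD_NODE,
    }

for n in (2, 3, 4):
    print(config_estimate(n))
```

The point of the sketch is just that only the GPU chiplets need the leading node, while the IC grows "for free" with every extra bridge die, much like the IOD keeps I/O off the expensive node on current CPUs.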
 
