This is on an absolutely different level from eDRAM or HBM, since it is good old L3 cache, just made huge by AMD.
eDRAM and HBM are "L4" solutions, so they either need tags (which take space that could otherwise hold L3 cache) or they serve as a so-called "system cache" on the memory side of things, acting as a huge buffer. (There is also the possibility of outright replacing some DRAM with, say, HBM, so the first 16GB of the address space are served by HBM, but that is a different solution.)
The above sounds complex, and it is not without drawbacks, both obvious and hidden. For example, tag checking is not free: on every L3 miss you spend time and energy finding out whether your cache line is in the L4, so your average memory latency grows. Memory-side system caches add complexity and also use energy while not being all that effective.
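The tag-check cost can be sketched with the classic average-memory-access-time (AMAT) formula. All cycle counts and hit rates below are invented for illustration; the point is that a tagged L4 only pays off when its hit rate covers the tag-check penalty:

```python
# Toy AMAT model; every number here is an assumption, not a measured value.
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

DRAM_LATENCY = 300   # cycles, assumed
L3_HIT_TIME = 40     # cycles, assumed
L3_MISS_RATE = 0.1
L4_TAG_CHECK = 30    # cycles paid on every L3 miss, assumed

# Without an L4: an L3 miss goes straight to DRAM.
no_l4 = amat(L3_HIT_TIME, L3_MISS_RATE, DRAM_LATENCY)  # 40 + 0.1*300 = 70

# With a tagged L4: every L3 miss first pays the tag check;
# only L4 misses go on to DRAM.
def with_l4(l4_hit_rate):
    penalty = L4_TAG_CHECK + (1 - l4_hit_rate) * DRAM_LATENCY
    return amat(L3_HIT_TIME, L3_MISS_RATE, penalty)

good_l4 = with_l4(0.50)  # 40 + 0.1*(30 + 150) = 58  -> helps
bad_l4 = with_l4(0.05)   # 40 + 0.1*(30 + 285) = 71.5 -> worse than no L4
```

In this model the L4 breaks even only when `hit_rate * DRAM_LATENCY` exceeds the tag-check cost; below that, average latency really has grown, as described above.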
AMD's solution, by contrast, is good old L3, just made huge. They have 8x4MB slices now, and most likely 16 more will be added, increasing cumulative bandwidth and shortening the average pending-request queues that currently build up due to address-bit collisions. AMD is citing 2TB/s, which is pretty much the bandwidth of their L2 cache at 5900X clocks. It is an incredible achievement to have an L3 with bandwidth near that of the L2, and it really opens things up for FP performance, and MT performance in general, thanks to prefetching.
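The address-bit-collision point can be shown with a toy model. The low-bit slice indexing below is an assumption for illustration, not AMD's actual slice hash, but it demonstrates why more slices shorten the worst per-slice queue:

```python
# Toy L3 slice selection; the indexing scheme is assumed, not AMD's real hash.
from collections import Counter

def slice_of(addr, num_slices, line_bytes=64):
    # Drop the offset-within-line bits, then take the low bits as slice index.
    return (addr // line_bytes) % num_slices

# A request stream striding by 512 bytes -- a pattern that aliases badly
# onto low-bit slice indexing.
addrs = [i * 512 for i in range(64)]

for num_slices in (8, 16):
    load = Counter(slice_of(a, num_slices) for a in addrs)
    worst = max(load.values())
    print(f"{num_slices} slices: worst per-slice queue depth {worst}")
# With 8 slices every request collides onto one slice (depth 64);
# doubling to 16 slices halves the worst queue (depth 32).
```

Real designs hash more address bits to avoid exactly this pathological striding, but the scaling intuition holds: more slices means requests spread across more independent queues, raising cumulative bandwidth.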
AMD has somehow gone from an also-ran in cache design, with caches that were either questionable or behind Intel's, to beating the hell out of everyone in both latency and capacity.