base die process update (presumed)
standard HBM (S-HBM) vs. custom HBM (C-HBM); H = SK Hynix, S = Samsung, M = Micron

S-HBM4 (also 4E):
  H: 12nm / S: SF4X / M: DRAM process (TSMC?)
C-HBM4 (only AMD?):
  S: SF4X
C-HBM4E:
  H: N3P / S: SF2 / M: N3P
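The shorthand roster above can be restated as a small lookup table. To be clear, these are the thread's presumed (unconfirmed) node assignments, just captured in a machine-readable form:

```python
# Rumored HBM base-die logic processes, per the thread (presumed, unconfirmed).
# Keys: HBM generation; values: vendor -> rumored process node.
base_die_nodes = {
    "S-HBM4":  {"SK Hynix": "12nm", "Samsung": "SF4X",
                "Micron": "DRAM process (TSMC?)"},   # applies to 4E as well
    "C-HBM4":  {"Samsung": "SF4X"},                  # only AMD as a customer?
    "C-HBM4E": {"SK Hynix": "N3P", "Samsung": "SF2", "Micron": "N3P"},
}

# Example lookup: Samsung's rumored node for custom HBM4E base dies.
print(base_die_nodes["C-HBM4E"]["Samsung"])  # -> SF2
```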
> That's why I know JEDEC exists.
AMD are not exactly going to be sharing proprietary knowledge with Samsung (or SK Hynix) about custom base dies they haven't yet sent to fab, so it stands to reason that the off-the-shelf base-die engineer isn't going to know what AMD's capabilities are yet.
Joe Macri (on the JEDEC Board of Directors) is the AMD vice president known for creating HBM.
But the problem here is that the memory guys claim JEDEC only sets the physical specifications, and that S-HBM is no different from custom.
I've gathered some of the opinions exchanged on Twitter below, which should make this easier to follow.
There are differences of opinion.
> first c-hbm is nvidia c-hbm4e
Once again, this is wrong.
> Once again, this is wrong.
Thanks
MI400s have custom HBM with LPDDR shoreline stashed in each base die.
> AMD is a small, hole-in-the-wall company with insufficient capabilities.
this is very very very funny (AMD R&D opex was 600m higher YoY last Q).
> this is very very very funny (AMD R&D opex was 600m higher YoY last Q).
The conversation on X above is a stereotype; they talk as if the development cost is 600m.
> this is very very very funny (AMD R&D opex was 600m higher YoY last Q).
C-HBM: is it worth it?
> They believe that Nvidia is buying it for $700, and Lisa is buying it for cheap ($450) with a lower clock speed.
meh.
> C-HBM: is it worth it?
Yeah, it's more shoreline for various nefarious purposes.
> meh.
custom AMD Instinct GPU based on the MI450 architecture
> custom AMD Instinct GPU based on the MI450 architecture
Means MI455X with a different flexIO split.
What does this mean? Tell me, sir.
> Do we know how much LPDDR5X MI450 will have?
capacity? 32x24 or 576
> capacity? 32x24 or 576
Yes, sorry for the word salad.
> Hmm. That's only ~30% bump vs HBM4 capacity. Would've expected higher LPDDR5X capacity.
Nah, sorry, I think 768GB. Below 1TB (48, 64).
> Nah, sorry, I think 768GB. Below 1TB (48, 64).
That sounds more reasonable. So ~1/2 of Vera CPU LPDDR5X.
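The capacity back-and-forth above works out as simple multiplication if you read "32x24" as 32 LPDDR5X packages at a given density. A quick sketch, where the 32-package count and the 432GB HBM4 baseline are my assumptions for illustration (the thread itself only gives the 576 and 768GB totals):

```python
# Sketch of the capacity math in the exchange above.
# Assumptions (mine, not stated outright in the thread): 32 LPDDR5X packages
# per GPU, and 432 GB of HBM4 as the baseline being compared against.
packages = 32
hbm4_gb = 432

for gb_per_pkg in (18, 24):        # two plausible per-package densities
    total = packages * gb_per_pkg
    bump = total / hbm4_gb - 1     # increase over the HBM4 baseline
    print(f"{packages} x {gb_per_pkg} GB = {total} GB ({bump:+.0%} vs HBM4)")
```

Under these assumptions, 32 x 18GB gives the 576GB figure (a +33% bump, which lines up with the "~30%" remark), while 32 x 24GB gives 768GB.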
>> this is very very very funny (AMD R&D opex was 600m higher YoY last Q).
> The conversation on X above is a stereotype; they talk as if the development cost is 600m.
Based on adroc's reply, I'd assume this is the total R&D expenditure change from 2024 Q4 -> 2025 Q4 for AMD across all silicon µarchs in various compute types (CPU, GPU, FPGA, DPU, etc.), hardware IO (consumer and enterprise/server, including motherboard chipsets), and software R&D (ROCm/HIP, graphics drivers, CPU compilers, GPUOpen academic work/collabs, etc.), not solely one specific memory interface type like HBM.
If it costs that much, I'd rather use S-HBM.