adroc_thurston
> If the table is accurate, a 380W 384-bit card doesn't seem like it could be targeting it.

Because it isn't.
> The halo may not be a "gaming" GPU. I don't think AMD can sell a $2000 gaming GPU. Its primary purpose could be something else. Something like the Radeon VII.

The AT0 XL entry in the table with 36GB GDDR7 clearly says "Desktop Gaming".
> Is that really to be a 6090 competitor? Presumably, Jensen is staying near reticle limit, near 600W, and still 512-bit GDDR7, but now 3nm and with 24Gbit memory chips.

If both are near reticle limit and using the same fab process and RAM tech, then it really becomes a challenge between architectures and drivers.
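As a sanity check on those numbers (treating 24 Gbit, i.e. 3 GB, GDDR7 chips with one chip per 32-bit channel as an assumption, not a confirmed spec), the capacities line up with the table:

```python
# Napkin math: GDDR7 capacity per bus width, assuming 24 Gbit (3 GB) chips
# and one chip per 32-bit channel (no clamshell). Assumed, not confirmed.
CHIP_GB = 3  # 24 Gbit GDDR7 chip

for bus_bits in (384, 512):
    chips = bus_bits // 32
    print(f"{bus_bits}-bit -> {chips} chips x {CHIP_GB} GB = {chips * CHIP_GB} GB")

# 384-bit -> 12 chips x 3 GB = 36 GB  (matches the AT0 XL's 36 GB)
# 512-bit -> 16 chips x 3 GB = 48 GB  (the presumed 6090-class config)
```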
> PS6 will definitely have it

PS6 would be RDNA 5+.
> PS6 would be RDNA 5+.

No, it's RDNA 5-.
> RDNA 5 is more aligned with Xbox

It's aligned with whatever AMD considers fancy.
> letsgoooo

Are those top 4 SKUs, apart from the poverty version, some sort of dual-chip design (even if this table is legit and not done by certain people making MLID look like an AI)? All of them have >2 VCNs (is that the video encoding engine, or what?), and they look like they're cut down for the sake of efficiency?
> all of them have >2 VCNs (is that the video encoding engine or what?)

VideoCoreNext. If this part ships, maybe it is because some customer wanted streaming GPUs, for gaming or visualization, and we get the scraps. Xbox/Azure streaming crap?
> Because it isn't.

The slide refers to that chip as AT0-XL. If the full design features 192 CU, AMD could go for an AT0-XT with more CUs, a full 512-bit bus and 48 GByte. That could be a 6090 contender. But it doesn't make sense to release such a chip if you are not at least head-to-head with a 6090. If AMD sees a chance, they might push the full chip to gaming as well.
> If the full design features 192 CU

That's GB202 as well. Everyone, including AMD, assumes Nvidia will go bigger and/or badder. So where are they aiming if they target last gen's biggest and baddest?
> So could the Medusa Halo share the AT3 chip?

AT3 being?
> AT3 being?

I believe AT3 is the 9060 XT replacement of the RDNA 5 gen.
And how does its memory work then?
> I believe AT3 is the 9060 XT replacement of the RDNA 5 gen.

I had hoped 12 GB / 44 CU / 128-bit was as small as they'd go this time. But I guess it couldn't use the chips shown here, because they all list only GDDR7 and Medusa Halo would be using LPDDR6 or 5X.
> That's GB202 as well. Everyone, including AMD, assumes Nvidia will go bigger and/or badder. So where are they aiming if they target last gen's biggest and baddest?

Such a 192 CU chip will definitely be faster than a 5090, given that a 5090 is only ~1.8x as fast as a 9070 XT with 64 CU. Would it be 1.3x / 1.5x / 1.8x faster? We will see.
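Rough napkin math behind that range. The assumptions here (9070 XT-level throughput per CU and the scaling efficiencies) are purely illustrative, not leaked figures:

```python
# Hypothetical scaling of a 192 CU part vs. a 64 CU 9070 XT, assuming per-CU
# performance stays roughly flat and CU scaling is sublinear. The scaling
# efficiencies below are assumptions for illustration only.
BASE_CUS, BIG_CUS = 64, 192
RTX_5090_VS_9070XT = 1.8  # ~1.8x, per the post above

for eff in (0.6, 0.75, 0.9):  # fraction of linear CU scaling actually realised
    vs_9070xt = 1 + (BIG_CUS / BASE_CUS - 1) * eff
    print(f"{eff:.0%} scaling -> {vs_9070xt:.1f}x a 9070 XT, "
          f"{vs_9070xt / RTX_5090_VS_9070XT:.2f}x a 5090")

# 60% scaling -> 2.2x a 9070 XT, 1.22x a 5090
# 75% scaling -> 2.5x a 9070 XT, 1.39x a 5090
# 90% scaling -> 2.8x a 9070 XT, 1.56x a 5090
```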
> VideoCoreNext.

Normally, yes, but is it in this case? Why waste silicon on a feature that's only for some sort of enterprise/niche audience? That's why I hypothesized that it's a combo of two cut-down dies (each of which has dual VCNs) with something like 90-ish CUs and a 256-bit bus, linked via something like the interconnect used in the MI3xx or other MCM devices.
> So could the Medusa Halo share the AT3 chip?

Interesting idea. From GPU performance aspects etc. it would match well. But the big advantage of Strix Halo is the big unified LPDDR5 memory pool. There might be ways to keep that, but all the solutions I can think of seem kinda awkward.
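To put rough numbers on that awkwardness (pin rates here are assumptions for illustration, not leaked specs): a wide unified LPDDR pool buys capacity, not bandwidth, compared with even a narrow GDDR7 card.

```python
# Back-of-the-envelope memory bandwidth: bus width (bits) x per-pin data rate.
# The data rates below are illustrative assumptions.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(256, 8.0))   # 256 GB/s  - Strix-Halo-like 256-bit LPDDR5X-8000
print(bandwidth_gbs(128, 28.0))  # 448 GB/s  - small 128-bit GDDR7 card at 28 Gbps
print(bandwidth_gbs(384, 28.0))  # 1344 GB/s - a 384-bit GDDR7 config like AT0 XL
```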
> Normally, yes, but is it in this case? Why waste silicon on a feature that's only for some sort of enterprise/niche audience? That's why I hypothesized that it's a combo of two cut-down dies (each of which has dual VCNs) with something like 90-ish CUs and a 256-bit bus, linked via something like the interconnect used in the MI3xx or other MCM devices.

Interesting idea as well. It could explain the missing AT1 in the slide. But I somehow doubt that the die is split in two. AT1 could simply have been cancelled.
> Why waste silicon on a feature that's only for some sort of enterprise/niche audience?

It can't be that niche, as GB202 has 4 NVENC/NVDEC. MI300 apparently has 4 VCNs too.
> Is that really to be a 6090 competitor?

a) We don't know the performance per CU compared to RDNA4.
I bet we get nothing over 96 CU.
> Streaming.

Yes, OK, but why then does the poor man's version have only two, while all the ML/hyperscaler parts have 3 or 4? It'd be another vector of attack for the NV Foundry and the other NV tech PR if the usual super-important things like path tracing, frame gen etc. aren't enough to keep the claimed feature advantage over ATi.
> 380W part

That's crazy good if it comes close to or matches a 6090.