Discussion RDNA 5 / UDNA (CDNA Next) speculation


marees

Golden Member
I was wondering if these would end up standalone or shared die.

Sharing memory IO would suggest a shared die would be more optimal.
If I grokked MLID correctly, the CPU is in the I/O die

Then you have 2 options:
  1. Add another CPU die (replacement for Strix Point)
  2. Add another GPU die (Medusa Premium & also Halo, I think)
 

Tuna-Fish

Golden Member
I wonder what the smallest available LPDDR6 chip will be. IIRC the smallest LPDDR5 ones are 8Gb, but I dunno if those are even still in production; the most common ones I see around are 12Gb.

AMD might be forced to put more memory in AT3 cards than on AT2 ones?
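
Back-of-the-envelope, the floor for a card is (bus width / channel width) * the smallest die density. A quick Python sketch; the channel widths are the JEDEC ones as I understand them (16-bit for LPDDR5, 24-bit for LPDDR6), the minimum densities are guesses, and I assume one die per channel:

Code:
def min_capacity_gbyte(bus_bits: int, chan_bits: int, min_gbit_per_chan: int) -> float:
    """Smallest capacity a bus can carry if every channel gets the smallest die."""
    channels = bus_bits // chan_bits
    return channels * min_gbit_per_chan / 8  # 8 Gbit = 1 GByte

# LPDDR5: 16-bit channels, 8 Gbit dies (if those are even still made)
print(min_capacity_gbyte(128, 16, 8))   # 8.0 GByte floor on a 128-bit bus

# LPDDR6: 24-bit channels; if 12 Gbit per channel ends up the smallest
print(min_capacity_gbyte(384, 24, 12))  # 24.0 GByte floor on a 384-bit bus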
 

marees

Golden Member
Tuna-Fish said:
I wonder what the smallest available LPDDR6 chip will be. IIRC the smallest LPDDR5 ones are 8Gb, but I dunno if those are even still in production; the most common ones I see around are 12Gb.

AMD might be forced to put more memory in AT3 cards than on AT2 ones?
Need a thread for how much vram is too much vram 😉
 

Joe NYC

Diamond Member
marees said:
If I grokked MLID correctly, the CPU is in the I/O die

Then you have 2 options:
  1. Add another CPU die (replacement for Strix Point)
  2. Add another GPU die (Medusa Premium & also Halo, I think)

One thing that contradicts this is the Xbox configuration shown in previous videos, where there is a base monolithic CPU/SoC die and a separate GPU die.

And this video suggests this approach may be shared with laptops.
 

marees

Golden Member
Joe NYC said:
One thing that contradicts this is the Xbox configuration shown in previous videos, where there is a base monolithic CPU/SoC die and a separate GPU die.

And this video suggests this approach may be shared with laptops.
No, the Xbox is GDDR7 — desktop mode.
Medusa Halo & Premium are LPDDR6 — laptop mode.

So the architecture changes. But still, take all this with mountains of salt, as MLID is the source.
 

Joe NYC

Diamond Member
marees said:
No, the Xbox is GDDR7 — desktop mode.
Medusa Halo & Premium are LPDDR6 — laptop mode.

So the architecture changes. But still, take all this with mountains of salt, as MLID is the source.

It is not spelled out what is on each die, but the way I understand it, what he calls the IOD in Medusa Point is a die that also has the base set of cores: ~4 full + 4-8 dense + 2 LP cores.

And this would be the base, low-cost monolithic laptop die.

Then, on one end, you can add the 12-core Zen 6 CPU chiplet.
And in Medusa Mini, on the other end, you can add the small GPU chiplet.

On Medusa Full, instead, you add the big GPU chiplet.
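
To keep the combinations straight, here they are as a little data sketch (all names and core counts are just my reading of MLID, nothing confirmed):

Code:
# Speculative Medusa line-up per MLID: one base die, optional chiplets on either end.
BASE = "IOD w/ ~4 full + 4-8 dense + 2 LP cores"

CONFIGS = {
    "Medusa Point (base)": [BASE],                               # low-cost monolithic laptop die
    "Medusa + CPU die":    [BASE, "12-core Zen 6 CPU chiplet"],  # the Strix Point replacement slot
    "Medusa Mini":         [BASE, "small GPU chiplet"],
    "Medusa Full":         [BASE, "big GPU chiplet"],
}

for name, dies in CONFIGS.items():
    print(name, "=", " + ".join(dies))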
 

marees

Golden Member
marees said:
Any guesses on (2027) launch prices ??

  1. AT0 (10090xt > 5090) — 384 bit bus so $1500+ ?
  2. AT1 (10080xt) — scrapped
  3. AT2 (10070xt = 5080 > xbox next) — 72 CU 192bit gddr7 so $600+
  4. AT3 (10060xt < 5070) — 48 CU 384bit lpddr6 so $400+
  5. 9060xt 16gb (=ps5 pro) ~ $300
  6. AT4 (10050xt > 3060 12gb in raster) — 24 CU 128bit lpddr6 so $250 ?
My revised estimations / guesstimates (no LLMs used)

(Assuming this LPDDR VRAM thingy is true & also assuming it works out) imagine this line-up in 2027 (capacity math sketched after the list):

  • AT0
    • 10090xt+ — Multiple models starting at $1500+, with huge VRAM like Radeon VII or Titan
  • AT1
    • 10080xt — scrapped (Lisa Su took her toys & went home)
  • AT2 (gddr7)
    • 10070 xtx 24gb = $700 (~5080)
    • 10070 xt 18gb = $600 (~5070 ti)
    • 10070 gre 15gb = $500-$550 (~5070 super)
  • AT3 (lpddr6)
    • 10060 xt 24gb = $450-$500 (~5070)
    • 10060 16gb = $400 (~5060ti 16gb)
  • AT4 (lpddr6/lpddr5x)
    • 10050xt 32gb = $350 (~9060xt 16gb)
    • 10050xt 24gb = $300 (~9060)
    • 10040xt 16gb = $250 (~3060 12gb in raster)
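
The capacity math behind those numbers, as a quick sketch (x32 GDDR7 devices; the x48 LPDDR6 package width and the package sizes are assumptions on my part):

Code:
def vram_gbyte(bus_bits: int, device_bits: int, gbyte_per_device: int) -> int:
    # Capacity = number of devices on the bus * capacity per device.
    return (bus_bits // device_bits) * gbyte_per_device

print(vram_gbyte(192, 32, 3))  # AT2 w/ 3 GB (24 Gbit) GDDR7 -> 18 (the xt)
print(vram_gbyte(192, 32, 4))  # AT2 w/ 4 GB (32 Gbit) GDDR7 -> 24 (the xtx)
print(vram_gbyte(384, 48, 2))  # AT3 w/ 2 GB LPDDR6 packages -> 16
print(vram_gbyte(384, 48, 3))  # AT3 w/ 3 GB LPDDR6 packages -> 24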
 

basix

Member
Why add so much memory to AT2, AT3 and AT4? I would assume 18 GB / 16 GB / 12 GB for these.

I'd like to get more, but it is unlikely that we'll see that.
 

marees

Golden Member
basix said:
Why add so much memory to AT2, AT3 and AT4? I would assume 18 GB / 16 GB / 12 GB for these.

I'd like to get more, but it is unlikely that we'll see that.
AT3 & AT4 are joke guesses, because MLID said LPDDR.

AT2 I have to give a serious rethink.
I am now thinking the xtx will use 4 GB GDDR7 chips while the xt & gre will use 3 GB GDDR7.
 

Saylick

Diamond Member
marees said:
My revised estimations / guesstimates (no LLMs used)

(Assuming this LPDDR VRAM thingy is true & also assuming it works out) imagine this line-up in 2027:

  • AT0
    • 10090xt+ — Multiple models starting at $1500+, with huge VRAM like Radeon VII or Titan
  • AT1
    • 10080xt — scrapped (Lisa Su took her toys & went home)
  • AT2 (gddr7)
    • 10070 xtx 24gb = $700 (~5080)
    • 10070 xt 18gb = $600 (~5070 ti)
    • 10070 gre 15gb = $500-$550 (~5070 super)
  • AT3 (lpddr6)
    • 10060 xt 24gb = $450-$500 (~5070)
    • 10060 16gb = $400 (~5060ti 16gb)
  • AT4 (lpddr6/lpddr5x)
    • 10050xt 32gb = $350 (~9060xt 16gb)
    • 10050xt 24gb = $300 (~9060)
    • 10040xt 16gb = $250 (~3060 12gb in raster)
I'd be surprised if the 10070 XT or whatever they call it only ends up being a 5070 Ti at $600. That's basically the same as a 9070 XT in perf/$ but with 50% more VRAM.
 

basix

Member
marees said:
AT3 & AT4 are joke guesses, because MLID said LPDDR.

AT2 I have to give a serious rethink.
I am now thinking the xtx will use 4 GB GDDR7 chips while the xt & gre will use 3 GB GDDR7.
Even if they are using LPDDR6, why add more memory than is useful? These things have to be cheap, and 16 GByte for a mainstream GPU and 12 GByte for the low-end part seem reasonable.
Yes, you can add more memory. But why should AMD do that when it's of no big benefit to the average gamer?
The same logic applies to AT2 with GDDR7. 18 GByte is most reasonable on a 192-bit interface with 24 Gbit chips. And 18 GByte is also perfectly suited for a 1440p card, and fine enough for 4K with upscaling. Most gamers won't benefit from 24 GByte but would have to pay more.

For workstation and professional parts it will be another story. There you could attach 128 GByte to a 2-ch LPDDR6 bus if AMD wanted to.

never say the AI bubble didn't do anything for you

Maybe this ML/AI stuff is even the lone reason for exchanging a GDDR7 memory interface for an LPDDR6 one on the lower-end parts. You can build dGPUs alongside APUs with the same chips and the same humongous amounts of memory for the professional market. Good for ML/AI workloads, and probably also nice for other workstation applications where the FLOPS of a midrange GPU are enough but more memory is welcome (EDA etc.).
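
Rough numbers for that, with the same caveats (clamshell = two devices sharing one channel at half width, which pro boards use as far as I know; the 32 GByte LPDDR6 package is hypothetical):

Code:
def vram(bus_bits: int, device_bits: int, gbyte: int, clamshell: bool = False) -> int:
    # Devices on the bus * capacity each; clamshell doubles the device count.
    return (bus_bits // device_bits) * gbyte * (2 if clamshell else 1)

print(vram(192, 32, 3))                  # 18 GByte: gamer AT2, 24 Gbit GDDR7
print(vram(192, 32, 3, clamshell=True))  # 36 GByte: pro AT2, same board clamshelled
print(vram(192, 48, 32))                 # 128 GByte: "2-ch" LPDDR6 w/ 32 GByte packages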
 

marees

Golden Member
basix said:
Even if they are using LPDDR6, why add more memory than is useful? These things have to be cheap, and 16 GByte for a mainstream GPU and 12 GByte for the low-end part seem reasonable.
Yes, you can add more memory. But why should AMD do that when it's of no big benefit to the average gamer?
The same logic applies to AT2 with GDDR7. 18 GByte is most reasonable on a 192-bit interface with 24 Gbit chips. And 18 GByte is also perfectly suited for a 1440p card, and fine enough for 4K with upscaling. Most gamers won't benefit from 24 GByte but would have to pay more.
Not sure what will happen with AT2

But for AT3 & AT4, the digital phone camera & megapixels scenario applies, imo.
Basically marketing.

Anecdotally, there was a 2 GB VRAM Nvidia card much slower than a 1 GB VRAM card, but my colleague bought the slower 2 GB one & very proudly proclaimed that he bought the 2 GB. That is the market I have in mind for AT3 & AT4, definitely not forum users.
 

basix

Member
I know what you are thinking of. But does that work today as well as it did in the past? Anybody can pull up ChatGPT to ask for the better GPU (and might get the correct answer - or not).
For example, the megapixel race has pretty much ended. Many new phones and cameras get released with fewer pixels than their predecessors. People have either gained more knowledge (more pixels != more quality), simply don't care because it's good enough, or are not interested in technical details.

Higher VRAM amounts on lower-end parts make the more expensive ones less attractive as well.
 

marees

Golden Member
basix said:
I know what you are thinking of. But does that work today as well as it did in the past? Anybody can pull up ChatGPT to ask for the better GPU (and might get the correct answer - or not).
For example, the megapixel race has pretty much ended. Many new phones and cameras get released with fewer pixels than their predecessors. People have either gained more knowledge (more pixels != more quality), simply don't care because it's good enough, or are not interested in technical details.

Higher VRAM amounts on lower-end parts make the more expensive ones less attractive as well.
You are being logical.
But imo AMD needs a marketing trick to beat the 6050 9gb? This could be it.

But now that Jensen knows of this, he will be scheming up a riposte.

If you had these 3 options (for an entry-level GPU), which one are you buying 🤔

  • 10050xt 32gb = $350 (~9060xt 16gb)
  • 10050xt 24gb = $300 (~9060)
  • 10040xt 16gb = $250 (~3060 12gb in raster)
 

Tuna-Fish

Golden Member
basix said:
Why add so much memory to AT2, AT3 and AT4? I would assume 18 GB / 16 GB / 12 GB for these.

I'd like to get more, but it is unlikely that we'll see that.

AT3 and AT4 use LPDDR interfaces. AT3 supposedly has 384-bit LPDDR6. This puts a lower limit on the amount of RAM, as I doubt there will be very small LPDDR6 chips at all.
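
Comparing the floors directly (one die per channel; the minimum densities are guesses):

Code:
at2_floor = (192 // 32) * 2    # 12 GByte: 192-bit GDDR7 with the smallest (16 Gbit) chips
at3_floor = (384 // 24) * 1.5  # 24 GByte: 384-bit LPDDR6 if 12 Gbit/channel is the minimum
print(at3_floor > at2_floor)   # True: AT3's floor sits above AT2's likely 18 GByte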
 

basix

Member
Good point. 16...24 Gbit chips are probably the lower boundary; there will for sure be no 8 Gbit chips.

That makes 4 modules, and therefore 8...12 GByte, on a "dual-channel" 192-bit LPDDR6 interface, and 8 modules, resulting in 16...24 GByte, at quad-channel. Still, everything is possible ;)
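
Spelled out (assuming x48 dual-channel LPDDR6 packages, so 4 modules on 192-bit and 8 on 384-bit):

Code:
# 16 Gbit = 2 GByte, 24 Gbit = 3 GByte per module.
for modules, bus_bits in ((4, 192), (8, 384)):
    lo, hi = modules * 16 // 8, modules * 24 // 8
    print(f"{bus_bits}-bit: {modules} modules -> {lo}...{hi} GByte")
# 192-bit: 4 modules -> 8...12 GByte
# 384-bit: 8 modules -> 16...24 GByte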