Discussion RDNA 5 / UDNA (CDNA Next) speculation


Joe NYC

Diamond Member
Jun 26, 2021
MLID is claiming that the standard PS6 will use GDDR7, while the handheld and the "S" version based on the handheld chip will use LPDDR5X.

The memory sizes will make it challenging. The full console will surely need the same amount of memory or more, so the memory costs will start to add up...
 

MrMPFR

Member
Aug 9, 2025
Bethesda might use it for NPC interactions. It will be janky as anything, but that is about the only use I see for it.

The potential for open-ended sandbox and mass-scale game worlds to use it is definitely there, such as the virtual Game Master in the upcoming indie game The Wayward Realms.
There are many other uses for AI in video games besides boring NPC interactions.

Impact on AAA will be low, medium for AA, and for indie the impact could be huge.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
The potential for open-ended sandbox and mass-scale game worlds to use it is definitely there, such as the virtual Game Master in the upcoming indie game The Wayward Realms.
There are many other uses for AI in video games besides boring NPC interactions.

Impact on AAA will be low, medium for AA, and for indie the impact could be huge.

- I'd love to see an "AI Director" that's almost like a DM, dynamically tailoring the game experience to keep the player engaged.

Would make huge "static" open worlds feel so much fresher than they have for a very long time.
 

MrMPFR

Member
Aug 9, 2025
  1. 2x Intersection Testing,
  2. unified LDS/L0 Cache,
  3. Dedicated Stack Management and Traversal HW,
  4. Coherency Sorting HW, and
  5. 3-coordinate decompression Geometry HW.
How much of this is confirmed by Kepler?

Regarding #3: RDNA3 already added stack management. Does this mean a dedicated RT cache like Intel's, or something else? (A sketch of today's software-managed traversal is below.)

Is #5 DGF? Likely, yes.
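
For context on #3, a rough conceptual sketch (my own toy pseudocode, not AMD's design or anything Kepler has described) of what software-managed BVH traversal looks like today: the RT accelerator only does the box/triangle tests, while the loop and the stack bookkeeping run on the SIMDs. Dedicated traversal and stack-management hardware would pull that whole loop into the RT unit.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    lo: tuple
    hi: tuple
    children: list = field(default_factory=list)
    prim_t: float | None = None   # leaf: pretend its primitive is hit at this distance

def hits_aabb(origin, inv_dir, lo, hi):
    # Slab test: does the ray intersect the box in front of the origin?
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, inv_dir, lo, hi):
        t0, t1 = (l - o) * d, (h - o) * d
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def trace(origin, direction, root):
    inv_dir = tuple(1.0 / d if d else float("inf") for d in direction)
    stack, closest = [root], None        # software-managed traversal stack
    while stack:                         # traversal loop runs in the shader today
        node = stack.pop()
        if not hits_aabb(origin, inv_dir, node.lo, node.hi):  # HW box test on RDNA 2+
            continue
        if node.prim_t is not None:      # leaf "primitive test" (also HW-accelerated)
            closest = node.prim_t if closest is None else min(closest, node.prim_t)
        else:
            stack.extend(node.children)  # push/pop bookkeeping dedicated HW would absorb
    return closest

# Toy BVH: a root box containing two leaves along +x.
leaf1 = Node((4, -1, -1), (5, 1, 1), prim_t=4.5)
leaf2 = Node((9, -1, -1), (10, 1, 1), prim_t=9.5)
root = Node((4, -1, -1), (10, 1, 1), children=[leaf1, leaf2])
print(trace((0, 0, 0), (1, 0, 0), root))  # -> 4.5
```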
 
Last edited:

ToTTenTranz

Senior member
Feb 4, 2021
There's Medusa Halo with SoC (CPU + I/O) and AT3 GMD (GPU + Memory), Medusa Premium with smaller SoC (CPU + I/O) and AT4 GMD (GPU + Memory) and Medusa Point with SoC (CPU + GPU + Memory + I/O) plus optional CCD.

If each one needs to have their own SoC, it doesn't look like there's much modularity here. AT4 will hardly ever be successful as a dGPU with 135GB/s bandwidth on LP5X.
Even the APU version seems a bit worthless like this, as it's bound to be choked on bandwidth.

Either these things get paired with LPDDR6 or they're a repetition of their predecessors' flaws.
 
  • Like
Reactions: Tlh97 and MrMPFR

marees

Golden Member
Apr 28, 2024
If each one needs to have their own SoC, it doesn't look like there's much modularity here. AT4 will hardly ever be successful as a dGPU with 135GB/s bandwidth on LP5X.
Even the APU version seems a bit worthless like this, as it's bound to be choked on bandwidth.

Either these things get paired with LPDDR6 or they're a repetition of their predecessors' flaws.
The 9060 XT 16GB goes for a street price of $400.

The only buyable card below that is the 7400xt at 55W.

The AT4 can very well fill the $250 to $350 niche with adequate VRAM (finally replacing the 3060 12GB).

AT3 is an issue. But here the backup plan would be to continue selling RDNA 4 cards at a discounted price.
 

dangerman1337

Senior member
Sep 16, 2010
The 9060 XT 16GB goes for a street price of $400.

The only buyable card below that is the 7400xt at 55W.

The AT4 can very well fill the $250 to $350 niche with adequate VRAM (finally replacing the 3060 12GB).

AT3 is an issue. But here the backup plan would be to continue selling RDNA 4 cards at a discounted price.
Really depends on how the AT3 & AT4 dGPUs are priced, because AT4 sounds way below that 9060 XT tier segment; I wouldn't even be surprised if AMD prices it aggressively at $200 or below with 8GB of RAM. AT3 I think could hit $300 or so WW.
 
  • Like
Reactions: Tlh97 and marees

ToTTenTranz

Senior member
Feb 4, 2021
The what.
No one's stuffing language models into games.
Agentic AI using smallish 1.5-2B language and text-to-speech models is 100% going into videogames next gen for NPC dialogues, world state integration, environment adaptability, etc. We're probably talking about 8GB of VRAM dedicated to these at the very least.
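
Some rough weights-only sizing for context (the quantization choices below are my assumptions, not anything from a leak or spec):

```python
# What different model sizes cost in memory, weights only (assumed quantization).
def weights_gb(params_billion, bytes_per_weight):
    return params_billion * bytes_per_weight

for params_b, quant, bpw in [(2, "INT4", 0.5), (2, "INT8", 1.0),
                             (8, "INT4", 0.5), (8, "INT8", 1.0)]:
    print(f"{params_b}B @ {quant}: ~{weights_gb(params_b, bpw):.1f} GB of weights"
          " (+ KV cache, TTS model, runtime buffers)")
```

So a 2B model itself is only ~1-2 GB; an 8GB carve-out is what you'd want once you add context, a TTS model, and headroom for bigger or multiple agents.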

And they're going into the 2027 consoles for a handful of years before coming to PC. Thank Nvidia for being super successful at shoving 8GB GPUs down PC gamers' throats on a $300-400 budget for DIY GPUs and <$2000 laptops, for an entire friggin' decade.
We'll be seeing lots of PC Master Race bros with desktop and laptop 60-series having a meltdown because they're either not able to play the games or are getting toned-down experiences (or worse: they must pay a fee to get those features from a cloud service).



The 9060 XT 16GB goes for a street price of $400.

The only buyable card below that is the 7400xt at 55W.

The AT4 can very well fill the $250 to $350 niche with adequate VRAM (finally replacing the 3060 12GB).

AT3 is an issue. But here the backup plan would be to continue selling RDNA 4 cards at a discounted price.

I just can't see how an AT4 with 12 WGPs / 24 CUs and 135 GB/s of bandwidth can have the necessary performance to run newer games adequately, even at just 1080p.
PS6 Canis is going with a 192-bit bus for 200 GB/s on a handheld with 16 CUs at 1.3 GHz (i.e. probably less than half the compute throughput of AT4).
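
For reference, the arithmetic behind those figures (the AT4 bus width and clock here are my assumptions, not confirmed specs):

```python
# Peak memory bandwidth: bus width (bits) / 8 * per-pin data rate (GT/s).
def bandwidth_gbs(bus_bits, gtps):
    return bus_bits / 8 * gtps

print(bandwidth_gbs(128, 8.533))  # ~136.5 GB/s -> the ~135 GB/s AT4 figure, assuming 128-bit LPDDR5X-8533
print(bandwidth_gbs(192, 8.533))  # ~204.8 GB/s -> the ~200 GB/s Canis figure on a 192-bit bus

# FP32 throughput: CUs * 64 lanes * 2 ops (FMA) * clock, ignoring dual-issue for both.
def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1e3

print(tflops(16, 1.3))  # ~2.7 TFLOPS for Canis (16 CUs @ 1.3 GHz)
print(tflops(24, 2.8))  # ~8.6 TFLOPS for AT4 at an assumed ~2.8 GHz desktop clock
```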


Medusa dGPUs and iGPUs with LPDDR5X don't sound like good choices at all, save maybe for running low to mid-sized LLMs on a low budget.
 
  • Like
Reactions: Tlh97

marees

Golden Member
Apr 28, 2024
I just can't see how an AT4 with 12 WGPs / 24 CUs and 135 GB/s of bandwidth can have the necessary performance to run newer games adequately, even at just 1080p.
1080p max with FSR 4 (or 5, 6, etc.) should be doable for AT4, right?
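
For scale, the internal resolutions the standard FSR quality presets imply at a 1080p output (the per-axis ratios below are the publicly documented FSR presets; whether AT4 sustains them is of course the open question):

```python
# Per-axis scale factors of the standard FSR upscaling presets.
presets = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0, "Ultra Performance": 3.0}

out_w, out_h = 1920, 1080
for name, scale in presets.items():
    print(f"{name:>17}: renders at {round(out_w / scale)}x{round(out_h / scale)}")
```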
 

jpiniero

Lifer
Oct 1, 2010
Agentic AI using smallish 1.5-2B language and text-to-speech models is 100% going into videogames next gen for NPC dialogues, world state integration, environment adaptability, etc. We're probably talking about 8GB of VRAM dedicated to these at the very least.

That would be cool... but yeah not happening.

The Sony handheld is going to flop if the specs are even remotely close to accurate, since it wouldn't be fast enough to run PS5 games without developer intervention. Devs wouldn't bother. But since we are talking about MLID, they likely aren't accurate.
 
Last edited:

dangerman1337

Senior member
Sep 16, 2010
I just can't see how an AT4 with 12 WGPs / 24 CUs and 135 GB/s of bandwidth can have the necessary performance to run newer games adequately, even at just 1080p.
PS6 Canis is going with a 192-bit bus for 200 GB/s on a handheld with 16 CUs at 1.3 GHz (i.e. probably less than half the compute throughput of AT4).
Well, there's 12.7 Gbps LPDDR5X in the works:

Very conceivable that a very cheap (no more than $200) 8GB AT4 dGPU targets 1080p online F2P games (think CS2, DOTA, Fortnite, etc.). And then a 256-bit 16GB AT3 dGPU would be enough for 1080p to "entry level" 1440p; if it could be sold at $300, it would bring a new level of performance to the masses vs. now.
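
What that faster LPDDR5X would buy, bandwidth-wise (the bus widths below are assumptions based on the configs discussed in this thread, not confirmed):

```python
# Peak bandwidth: bus width (bits) / 8 * per-pin rate (Gbps).
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(128, 12.7))  # ~203 GB/s for an assumed 128-bit AT4
print(bandwidth_gbs(256, 12.7))  # ~406 GB/s for an assumed 256-bit AT3
```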
 
  • Like
Reactions: Tlh97 and marees

ToTTenTranz

Senior member
Feb 4, 2021
A cool idea if you want your FPS to plummet every time you speak to an NPC

Not if you only need like 8 tokens/s (plentiful for conversations, overkill for dynamic environmental changes and world updates) and properly allocate the resources for it.

The latest developments in hardware-aware optimizations like Jet-Nemotron get the old Orin (50 TOPS INT8) to output 55 tokens/s on a 2B LLM.
Developers could eventually allocate the equivalent of 10-20 TOPS of INT4 on a GPU or NPU for this, and that would probably suffice. It just needs enough RAM to keep the model loaded and a proper context window.
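
Back-of-the-envelope cost of that, with my own assumptions on quantization and model size (not from any spec or leak):

```python
# Cost of running a small on-device LLM at conversational speed.
params = 2e9            # assumed 2B-parameter model
bytes_per_weight = 0.5  # assumed INT4 quantization
tokens_per_s = 8        # the conversational target mentioned above

weights_gb = params * bytes_per_weight / 1e9
ops_per_token = 2 * params                        # ~2 ops per parameter per token
tops_needed = ops_per_token * tokens_per_s / 1e12
traffic_gbs = weights_gb * tokens_per_s           # dense decode reads the weights each token

print(f"weights resident:   ~{weights_gb:.1f} GB (plus KV cache)")
print(f"compute @ 8 tok/s:  ~{tops_needed:.3f} TOPS")
print(f"weight traffic:     ~{traffic_gbs:.0f} GB/s")
```

So at 8 tokens/s the decode itself is nearly free in compute; a 10-20 TOPS carve-out is mostly headroom for prompt processing, longer contexts, and the TTS model.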


Videogames, mainly all the high profile sandbox RPGs and action games, will be changing drastically with agentic AI.
Just not for the folks who bought a 3060Ti/3070, or a 4060 8GB, or a 5060 8GB, or a 5070 8GB laptop. I.e. all those top sellers in Steam's Hardware Survey.
 
Last edited:

dangerman1337

Senior member
Sep 16, 2010
Not if you only need like 8 tokens/s (plentiful for conversations, overkill for dynamic environmental changes and world updates) and properly allocate the resources for it.

The latest developments in hardware-aware optimizations like Jet-Nemotron get the old Orin (50 TOPS INT8) to output 55 tokens/s on a 2B LLM.
Developers could eventually allocate the equivalent of 10-20 TOPS of INT4 on a GPU or NPU for this, and that would probably suffice. It just needs enough RAM to keep the model loaded and a proper context window.


Videogames, mainly all the high profile sandbox RPGs and action games, will be changing drastically with agentic AI.
Just not for the folks who bought a 3060Ti/3070, or a 4060 8GB, or a 5060 8GB, or a 5070 8GB laptop. I.e. all those top sellers in Steam's Hardware Survey.
Probably why AT3 has a 256-bit bus, so they can do 16GB vs. 128-bit with 12GB, is my guess. Those AT4 peeps who get a dGPU variant of it are the "I only play CS/LoL/Sims/DOTA/Fortnite at 1080p" crowd, who exist all around the world: SEA, Latin America, South Asia, Eastern Europe, etc.
 

ToTTenTranz

Senior member
Feb 4, 2021
Probably why AT3 has a 256-bit bus, so they can do 16GB vs. 128-bit with 12GB, is my guess. Those AT4 peeps who get a dGPU variant of it are the "I only play CS/LoL/Sims/DOTA/Fortnite at 1080p" crowd, who exist all around the world: SEA, Latin America, South Asia, Eastern Europe, etc.

Yes, the coming explosion in demand for VRAM/RAM to load NN models is one of the main reasons AMD is going with smartphone memory for their lower-end GPUs. And it's driven by developers (e.g. their close relationship with Sony).