Discussion RDNA 5 / UDNA (CDNA Next) speculation


Win2012R2

Golden Member
Dec 5, 2024
1,227
1,268
96
They could easily have done Zen 4c but, Apple-like, they are holding CPU and RAM capacity back for the PS6.
No, it would not have been easy, for three reasons:
1) backwards compatibility with such different IPC is much harder - a deal breaker
2) they can't make the PS5 Pro much better at the CPU level
3) they can keep this advancement for later - the PS6

Plus it would be more expensive for sure.

It was a no-brainer for them to do it that way.
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,346
136
www.teamjuchems.com
No, it would not have been easy, for three reasons:
1) backwards compatibility with such different IPC is much harder - a deal breaker
2) they can't make the PS5 Pro much better at the CPU level
3) they can keep this advancement for later - the PS6

Plus it would be more expensive for sure.

It was a no-brainer for them to do it that way.
I mostly agree with you.

The fact that they clocked up the CPU cores tells me they are far more elastic on CPU performance than in the past, and they could have handled a new CPU with an SDK update, if that. I think they could have resolved point one in software pretty transparently.

The other points are choices, not challenges. I fully believe they kept GPU improvements the priority on purpose.

Cost-wise, I think Zen 4c vs. semi-custom Zen 3 would be extremely marginal against the price premium they are charging.
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,346
136
www.teamjuchems.com
I was pretty disappointed they only got +45% on N5/4
I think it’s a physics thing we’ve been defying in the desktop space. That we are at the point of 1 kW PSUs and anti-sag brackets is a sad state of affairs, and they don’t have as much headroom to accommodate that in the console space.

Looking forward to a 500W PS6? 😂

(Also I agree. 100% would have been amazing!)
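Rough illustration of why the power wall bites (standard C·V²·f dynamic-power scaling; the wattages are made up, not actual console figures):

Code:
# Dynamic power scales roughly with C * V^2 * f, and holding a higher clock
# usually needs a voltage bump, so power grows much faster than performance.
# All numbers are illustrative placeholders, not PS5 Pro figures.
base_power_w = 180.0   # assumed GPU share of a console power budget
clock_gain   = 1.20    # +20% frequency
voltage_gain = 1.10    # assumed +10% voltage to hold that clock
new_power_w = base_power_w * clock_gain * voltage_gain ** 2
print(f"{new_power_w:.0f} W for a 20% clock bump")  # ~261 W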
 
  • Like
Reactions: Tlh97 and Win2012R2

Win2012R2

Golden Member
Dec 5, 2024
1,227
1,268
96
RTRT's been with us in mainstream GPUs since 2018 and nothing changed.
It's changing, but slowly, thanks to cheapMD's cheapskating. Indiana is probably the first game that actually requires RT, and they did it well; more of this will follow. We just need 10x RT perf over the next 3 gens.
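Back-of-the-envelope on that (just compounding the target, nothing more): 10x over three generations works out to roughly 2.15x RT throughput per generation.

Code:
# Per-generation RT uplift needed to compound to a 10x total over 3 gens.
target_uplift = 10.0
generations = 3
per_gen = target_uplift ** (1 / generations)
print(f"~{per_gen:.2f}x RT perf per generation")  # ~2.15x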
 

SolidQ

Golden Member
Jul 13, 2023
1,511
2,482
106
never gonna work
Never say never - we don't know what devs will do in the future. Even a 4070 can run C2077 with DLSS Balanced/Performance mode + FG at 1440p, and the PS6 will be more powerful. It just needs 32 GB of RAM.
The only mystery is whether AMD can handle PT with a mix of UDNA 1/2 for the PS6.
 
  • Like
Reactions: Tlh97 and Win2012R2

Win2012R2

Golden Member
Dec 5, 2024
1,227
1,268
96
Low GPU clocks, that's what surprised me most.

I guess the crap memory would have held them back anyway and, as you say, the same crap L2 cache.
 

blckgrffn

Diamond Member
May 1, 2003
9,686
4,346
136
www.teamjuchems.com
Low GPU clocks, that's what surprised me most.
Physics, man. If they had been serious, the clocks wouldn’t have mattered as much: more of the hardware, like Kepler said, plus a sizable Infinity Cache to balloon out the effective memory bandwidth and keep power usage in check.

Very intentionally a half step forward. Infinity Cache in the PS6 too.
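Quick sketch of what I mean by ballooning out the bandwidth - requests that hit the on-die cache never touch GDDR, so effective bandwidth rises roughly with 1/(1 - hit rate). Hit rates and GB/s below are illustrative, not RDNA figures.

Code:
# Effective-bandwidth amplification from a large last-level cache:
# only cache misses consume GDDR bandwidth. Illustrative numbers only.
raw_bw_gbs = 448.0   # hypothetical GDDR bandwidth
for hit_rate in (0.0, 0.4, 0.6):
    effective = raw_bw_gbs / (1.0 - hit_rate)
    print(f"hit rate {hit_rate:.0%}: ~{effective:.0f} GB/s effective")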
 

marees

Golden Member
Apr 28, 2024
1,841
2,461
96
Can we hope for a Radeon VII successor based on UDNA with large VRAM (primarily for AI, but also for gaming)?
 

soresu

Diamond Member
Dec 19, 2014
4,144
3,609
136
With HBM? No.
For the HPC/AI/datacenter market segments it's basically guaranteed.

CDNA is the real reason AMD don't use HBM on consumer Radeon cards, not the added cost of it over GDDRx.

The industry's production capacity still isn't up to supplying enough HBM stacks at the moment to get anywhere near all the GPU SKUs out there, so AMD (and nVidia) prioritise the SKUs that bring them the greatest profit return.

That will almost certainly be a high-end SKU of UDNA, but whether it will be headless like CDNA or have video outputs is anyone's guess.
 
  • Like
Reactions: Tlh97 and marees

gdansk

Diamond Member
Feb 8, 2011
4,619
7,798
136
Maybe HBM for pro cards, but 3 GB memory chips on a 512-bit bus for semi-professional cards?
I doubt it. AMD should focus big cards on big customers like Meta etc. Small fry will never have the scale/money to justify tuning their software for AMD's hardware. And Meta etc. want much bigger than something like Vega 20.
What consumers get should be parts that resemble Navi more, but with some more area wasted on better ML. If someone is going to make a part like Vega 20, it'll be Intel.
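For reference, the quoted config works out like this, assuming ordinary 32-bit GDDR devices (one per channel):

Code:
# Capacity implied by 3 GB devices on a 512-bit bus. Simple arithmetic,
# assuming standard 32-bit-wide GDDR devices.
bus_width_bits = 512
device_width_bits = 32
device_capacity_gb = 3
devices = bus_width_bits // device_width_bits   # 16 devices
capacity_gb = devices * device_capacity_gb      # 48 GB
print(f"{devices} devices -> {capacity_gb} GB")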
 
  • Like
Reactions: marees

adroc_thurston

Diamond Member
Jul 2, 2023
7,383
10,130
106
It's changing
Nope.
Indiana probably the first game that actually requires RT
Metro Exodus EE is 4 years old lol.
Even 4070 can run C2077 with DLSS Bal/perf mode + FG at 1440p
It looks like ugly upsampled vomit (which it is).
Very laggy vomit too once you turn on the fakeframes.
What got me thinking is this 👇
Geohot should be sent to gitmo.
Can we hope for Radeon VII successor based on UDNA with large VRAM (Primarily for AI but also for gaming ??)
Vega20 was a meme to restart the GPGPU roadmap, not an actual product lmao.
 

RnR_au

Platinum Member
Jun 6, 2021
2,715
6,178
136
Can we hope for Radeon VII successor based on UDNA with large VRAM (Primarily for AI but also for gaming ??)
I used to think there would be a market for 'consumer inference', given the activity in r/localllama and the quality of the downloadable AI models. Heck, I even built my own GPU-mining-style setup for local inferencing purposes. But in a very short space of time the price of a million generated tokens from various cloud providers has crashed, and now I'm skeptical that such a niche market will get anything other than the usual prosumer workstation cards at the usual high price.

For more on this see https://simonwillison.net/2024/Dec/31/llms-in-2024/ - the 3rd entry in the list, but the whole collection is great if you are wondering how AI is going.
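The economics can be put as a simple break-even; every number below is a placeholder, not a real price, but it shows why falling per-token prices hurt the case for local hardware.

Code:
# Break-even between buying local inference hardware and paying a cloud
# provider per token. All figures are hypothetical placeholders.
hardware_cost_usd = 2000.0        # assumed prosumer GPU price
cloud_price_per_mtok_usd = 0.50   # assumed cloud price per million tokens
breakeven_mtok = hardware_cost_usd / cloud_price_per_mtok_usd
print(f"Break-even at ~{breakeven_mtok:,.0f} million tokens")  # 4,000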
 

Win2012R2

Golden Member
Dec 5, 2024
1,227
1,268
96
CDNA is the real reason AMD don't use HBM on consumer Radeon cards, not the added cost of it over GDDRx.
No point using 10x-speed memory if you don't have enough cores to use it. Plus it physically takes up the equivalent of 4-5 chips of normal RAM, yields suffer, and memory makers want 60% gross margins too. And you only get the very high speed from a number of stacks, so doing just 24 GB won't get you top speed, etc. etc.
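Rough numbers on the stack point (per-stack figures are ballpark assumptions, not a specific HBM spec):

Code:
# Bandwidth scales with the number of HBM stacks, capacity with stack height.
# Per-stack figures below are ballpark assumptions for illustration only.
per_stack_bw_gbs = 800.0   # roughly HBM3-class, assumed
per_stack_cap_gb = 24      # assumed stack capacity
for stacks in (1, 2, 4):
    print(f"{stacks} stack(s): ~{stacks * per_stack_bw_gbs:.0f} GB/s, "
          f"{stacks * per_stack_cap_gb} GB")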
 
  • Like
Reactions: scineram

Win2012R2

Golden Member
Dec 5, 2024
1,227
1,268
96
Cost per bit on HBM just really hasn't come down.
On top of this, it became a third pillar for manufacturers to shift production to - before, they could only decide between making DRAM or NAND, but now they can also shift to HBM, thus avoiding having to cut DRAM production to keep prices up. Very nice for them at the moment...
 

adroc_thurston

Diamond Member
Jul 2, 2023
7,383
10,130
106
On top of this it became 3rd pillar for manufacturers to shift production - before they could decide to make DRAM or NAND, but now they can shift to HBM, thus avoiding having to make cuts to DRAM production to keep prices up, very nice for them at the moment...
HBM is DRAM. Just stacked.
 
  • Like
Reactions: Tlh97 and soresu