Discussion RDNA 5 / UDNA (CDNA Next) speculation


Thunder 57

Diamond Member
Aug 19, 2007
4,236
7,018
136
I gave you the size of the market, ~$6.5 billion of client dGPU cards (excluding APU).

It seems like you are in denial that NVidia's $4.5 trillion market cap started from "gaming".



How do you know this?

Because equivalent NVidia die is for:
- client gaming
- cloud gaming
- "professional" workstation graphics
- visualization, rendering
- video editing, post production
- crypto
- data analysis
- AI programming development platform
- client AI
- machine learning

NVidia's $4.5 trillion market cap started from these same DIY cards, and investment in software tools multiplied the uses of these and similar dies.



It's almost like you are trolling...

Yeah, if that quote isn't trolling, I don't know what is.
 

Win2012R2

Golden Member
Dec 5, 2024
1,266
1,311
96
no lol, it was 1:4 ratio usually.
No it wasn't.

"The PlayStation 5 is currently ahead by 25.42 million units. The PlayStation 5 has sold 52.65 million units in 38 months, while the Xbox Series X|S sold 27.23 million units. The PlayStation 5 has a 65.9 percent marketshare (+4.8% year-over-year), compared to 34.1 percent for the Xbox Series X|S (-4.8% year-over-year)."

This was the situation at the end of 2023, the year after which Xbox hardware's relative decline started: https://www.vgchartz.com/article/459675/ps5-vs-xbox-series-xs-sales-comparison-december-2023/

It's likely worse than 1:4 now; with PC gaming going up in price next year, this will only accelerate.
 
  • Like
Reactions: Tlh97

Win2012R2

Golden Member
Dec 5, 2024
1,266
1,311
96
Anyone who poo-poos NVidia "gaming" revenue, which includes desktop dGPU, mobile dGPU, workstation dGPU and industrial uses of GPUs derived from the same die - anyone who thinks this market is not worth pursuing is not living in reality.
Agreed, all I was saying is that the 5090 market is very different in volume terms from the console market; completely different people buy those.
No one at AMD is interested in gaming.
Liar.
Especially not Lisa, she outright hates gambling with R&D cash.
Obviously she is the problem, but that does not mean "no one at AMD is interested in gaming"; that's an utter lie. Instead of wasting so much of the company's stock on Xilinx, they should have put it into GPU R&D; maybe the AI stuff would have been more successful.
 
  • Like
Reactions: Tlh97

adroc_thurston

Diamond Member
Jul 2, 2023
8,000
10,754
106
Liar.
It's the truth.
but that does not mean "no one at AMD is interested in gaming"; that's an utter lie
It's the truth.
Instead of wasting so much of the company's stock on Xilinx, they should have put it into GPU R&D; maybe the AI stuff would have been more successful.
Instead of acquiring a very successful business with top-shelf IP, they should've gambled R&D buxx on graphics again?
Dawg this ain't 2007 anymore, AMD is long over client gfx.
 
  • Like
Reactions: Tangopiper

Talin3

Junior Member
Dec 28, 2025
3
0
6
At this point it really feels like AMD is optimizing for IP reuse and APUs rather than chasing a halo dGPU. A 2-year cadence and three monolithic dies make sense if RDNA5 is meant to scale down into consoles and client parts first, not to win spec-sheet battles. Whether that’s smart or not depends on how much margin they still expect from consumer GPUs versus Instinct and EPYC. The lack of a clear ultra-enthusiast part is worrying for mindshare, but it’s consistent with where their revenue focus seems to be heading.
 

Thunder 57

Diamond Member
Aug 19, 2007
4,236
7,018
136
Agreed, all I was saying is that the 5090 market is very different in volume terms from the console market; completely different people buy those.

Liar.

Obviously she is the problem, but that does not mean "no one at AMD is interested in gaming"; that's an utter lie. Instead of wasting so much of the company's stock on Xilinx, they should have put it into GPU R&D; maybe the AI stuff would have been more successful.

Nah, I think acquiring Xilinx was the right call. But to say AMD doesn't care about graphics is bunk. They need to for consoles/iGPUs at the very least.
 

marees

Platinum Member
Apr 28, 2024
2,114
2,730
96
No it wasn't.

"The PlayStation 5 is currently ahead by 25.42 million units. The PlayStation 5 has sold 52.65 million units in 38 months, while the Xbox Series X|S sold 27.23 million units. The PlayStation 5 has a 65.9 percent marketshare (+4.8% year-over-year), compared to 34.1 percent for the Xbox Series X|S (-4.8% year-over-year)."

This was the situation at the end of 2023, the year after which Xbox hardware's relative decline started: https://www.vgchartz.com/article/459675/ps5-vs-xbox-series-xs-sales-comparison-december-2023/

It's likely worse than 1:4 now; with PC gaming going up in price next year, this will only accelerate.
latest figures below

folks over at VGChartz have shared their latest console sales estimates today (that go up to November), and they suggest that Xbox has sold over two million Series X|S consoles this year.

Compared to 2024, this is said to be a 45.1% decrease from the previous figure of 4.79 million, but that's absolutely no surprise when you consider both systems have received quite significant price increases over the past 12 months.

VGChartz now claims that Xbox has (cumulatively?) sold 34.1 million units worldwide of the Series X and S, compared to 12.4 million for the Nintendo Switch 2, 86.1 million for the PS5, and 152.7 million for the Nintendo Switch 1.

These estimates also suggest that Series X|S console sales have been slowly declining in recent years on a similar trajectory to the original Nintendo Switch. The Switch's decline has been sharper for obvious reasons, but over four million people are still said to have bought a Switch 1 this year.

 

dangerman1337

Senior member
Sep 16, 2010
428
65
91
A big hole in the "AMD won't do much in dGPUs, especially the high end" argument: why are they doing the 9950X3D2 (to be revealed next week), or X3D in general, in the CPU space? I mean, X3D has uses beyond gaming, yet they primarily market it as such, even though dual V-Cache on both dies doesn't bring much benefit.
 

marees

Platinum Member
Apr 28, 2024
2,114
2,730
96
A big hole in the "AMD won't do much in dGPUs, especially the high end" argument: why are they doing the 9950X3D2 (to be revealed next week), or X3D in general, in the CPU space? I mean, X3D has uses beyond gaming, yet they primarily market it as such, even though dual V-Cache on both dies doesn't bring much benefit.
Are the chip sizes & fab costs comparable?
 

adroc_thurston

Diamond Member
Jul 2, 2023
8,000
10,754
106
A big hole in the "AMD won't do much in dGPUs, especially the high end" argument: why are they doing the 9950X3D2 (to be revealed next week), or X3D in general, in the CPU space? I mean, X3D has uses beyond gaming, yet they primarily market it as such, even though dual V-Cache on both dies doesn't bring much benefit.
All those parts are server rejects.
For Z6 they're also premium mobile rejects.

Client graphics, OTOH, requires bespoke engineering.
 
  • Like
Reactions: Tlh97 and marees

MrMPFR

Member
Aug 9, 2025
158
325
96
@vinifera spotted these interesting RDNA5-related RT patents and tried to post them, but was blocked by the dumb spam filter, so I'll share them instead.
 

NTMBK

Lifer
Nov 14, 2011
10,502
5,968
136
This one seems kind of silly. You generally don't want the same collision geometry and render geometry; render geometry is too complex for efficient physics calculation. Hunting down overly complex physics meshes is a pretty common optimization step when you're trying to speed up CPU performance.
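
To illustrate the split, a minimal C++ sketch; every name here is made up, and the AABB check is just a cheap stand-in for a real physics narrow phase:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> verts;
    std::vector<unsigned> idx;  // 3 indices per triangle
};

struct GameObject {
    Mesh renderMesh;     // full detail (100k+ tris), only the GPU ever sees it
    Mesh collisionMesh;  // decimated proxy (dozens of tris) for the CPU physics step
};

// Axis-aligned bounding box, used as a trivial stand-in for a narrow-phase test.
struct Aabb { Vec3 lo{1e30f, 1e30f, 1e30f}, hi{-1e30f, -1e30f, -1e30f}; };

Aabb bounds(const Mesh& m) {
    Aabb b;
    for (const Vec3& v : m.verts) {
        b.lo = {std::min(b.lo.x, v.x), std::min(b.lo.y, v.y), std::min(b.lo.z, v.z)};
        b.hi = {std::max(b.hi.x, v.x), std::max(b.hi.y, v.y), std::max(b.hi.z, v.z)};
    }
    return b;
}

// The physics query only ever touches the cheap proxy, never the render mesh.
bool mayCollide(const GameObject& a, const GameObject& b) {
    Aabb A = bounds(a.collisionMesh), B = bounds(b.collisionMesh);
    return A.lo.x <= B.hi.x && B.lo.x <= A.hi.x &&
           A.lo.y <= B.hi.y && B.lo.y <= A.hi.y &&
           A.lo.z <= B.hi.z && B.lo.z <= A.hi.z;
}
```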
 
  • Like
Reactions: Elfear

MrMPFR

Member
Aug 9, 2025
158
325
96
Commentary on #1,967. Apologies in advance for the wall of text.

#1. Superior BVH maintenance cost: no rebuilds with animating/moving geometry. Much higher flexibility for RT due to lower BVH overhead, which makes new RT techniques practical in real time. Does all this by extending DGF functionality. SW patent, but perhaps hardcoded into RDNA5's DGF decompressor.
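
A quick baseline sketch (mine, not the patent's mechanism) of why avoiding rebuilds matters: a refit keeps the tree topology and just recomputes boxes bottom-up in O(n), whereas a full rebuild redoes all the splitting work every frame:

```cpp
#include <vector>

struct Aabb { float lo[3], hi[3]; };

struct BvhNode {
    Aabb box;
    int  left  = -1;  // -1 => leaf
    int  right = -1;
    int  prim  = -1;  // leaf only: index of the primitive it holds
};

Aabb merge(const Aabb& a, const Aabb& b) {
    Aabb r;
    for (int i = 0; i < 3; ++i) {
        r.lo[i] = a.lo[i] < b.lo[i] ? a.lo[i] : b.lo[i];
        r.hi[i] = a.hi[i] > b.hi[i] ? a.hi[i] : b.hi[i];
    }
    return r;
}

// O(n) refit after animation: recompute boxes bottom-up from the moved
// primitives while keeping the tree topology (and the build cost) untouched.
Aabb refit(std::vector<BvhNode>& nodes, const std::vector<Aabb>& primBoxes, int n) {
    BvhNode& node = nodes[n];
    if (node.left < 0)
        node.box = primBoxes[node.prim];  // leaf: take the animated primitive's box
    else
        node.box = merge(refit(nodes, primBoxes, node.left),
                         refit(nodes, primBoxes, node.right));
    return node.box;
}
```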


Related to DGF/DMM: at 20:25, one of the AMD RT Fellow architects is asked whether DGF and DMM can be combined, and he basically says yes, but it will require architecture work to do so; the goal is to have a DGF base layer with DMM encoded on top of it.
#2. DGF with subdivisions (implicit and explicit, essentially DMM-like), as @vinifera spotted above. The authors match the HPG presenters 1:1, and it appeared one month after the patent filing. I noticed the patent said "extremely efficient geometry compression"; it's very rare to see such strong wording in patents.
We can derive the impact on BVH build times (~8-20X vs. RTX MG) and storage overhead (easily >30X), as this is effectively RTX MG's CBLAS (DGF = CBLAS as per the patents) with a DMM multiplier on top.
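
To picture the layering (these structs are pure guesses on my part, not AMD's actual bitstream): a DGF-style block compresses the base triangles, and a DMM-style layer adds a subdivision level plus one quantized offset per micro-vertex on top:

```cpp
#include <cstdint>
#include <vector>

struct BaseTriangle { uint32_t v0, v1, v2; };  // indices into a block-local vertex table

// DGF-like base layer: quantized verts + triangles for one compressed block.
struct DgfBlock {
    std::vector<uint16_t>     quantizedVerts;  // x,y,z triplets relative to a block anchor
    std::vector<BaseTriangle> tris;
};

// DMM-like layer: each base triangle gets a subdivision level and one scalar
// displacement per micro-vertex along an interpolated direction.
struct DisplacementLayer {
    uint8_t               subdivLevel;   // 2^level segments per edge
    std::vector<uint16_t> microOffsets;  // microVertCount(subdivLevel) quantized scalars
};

// Micro-vertex count for subdivision level L of a triangle:
// with n = 2^L + 1 verts per edge, the total is n * (n + 1) / 2.
constexpr uint32_t microVertCount(uint8_t level) {
    uint32_t n = (1u << level) + 1u;
    return n * (n + 1u) / 2u;
}
```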

#3. An additional improvement to standard DGF compression for implied mesh topologies that permits omitting the index buffer entirely and storing only the vertices, from which the topology can be inferred.
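
Sketch of what "implied topology" buys you, using the simplest case, a plain triangle strip (my illustration, not the patent's actual encoding):

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Triangle i of a strip always uses vertices {i, i+1, i+2}; odd triangles just
// flip winding. Connectivity is a pure function of position in the stream, so
// the index buffer never has to be stored at all.
std::array<unsigned, 3> impliedStripIndices(std::size_t tri) {
    unsigned i = static_cast<unsigned>(tri);
    return (tri & 1) ? std::array<unsigned, 3>{i + 1, i, i + 2}
                     : std::array<unsigned, 3>{i, i + 1, i + 2};
}

// A bare vertex stream already implies how many triangles it contains.
std::size_t impliedTriangleCount(const std::vector<Vec3>& verts) {
    return verts.size() < 3 ? 0 : verts.size() - 2;
}
```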

Wrap-up #1-3: It seems like AMD will finally address the BVH issue entirely, allowing ray tracing against the full-detail geometry without any compromises (this doesn’t apply to shading, though). DGF right now isn't the full picture; expect AMD to update it in the future alongside a BVH SDK superior to RTX MG.


#4. This is a very interesting idea, and it could extend to many other things besides collisions and ray tracing; really, everything that requires spatial awareness can benefit from it. No more dumb AI vision and hearing, etc. Potential for a larger impact on gameplay realism and immersion than PT, and it should be doable on PS6 and RDNA5 given the large expected ray traversal gains.
Hopefully AMD, MS and Sony can agree on some new universal DXR-like standard that doesn't care what the input and output are (assume it's a ray). I know it's not the same framework across all of them, but assuming basically identical frameworks, similar to how it is now, this standardization would help with game adoption. It would also make things a lot easier for devs: no need for many different systems for different things. Just trace everything through a universal shared BVH = massive engine simplification.
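
Something like this hypothetical query API (no such standard exists yet; every name below is made up): AI vision becomes just one more ray query against the same shared BVH the renderer traces:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
struct Hit  { float t; int primId; };

struct SceneBvh {
    // Stub standing in for real BVH traversal (always "no occluder") so the
    // sketch stays self-contained; a real backend would walk the shared BVH.
    std::optional<Hit> trace(Vec3 /*origin*/, Vec3 /*dir*/, float /*tMax*/) const {
        return std::nullopt;
    }
};

// Gameplay consumer: AI line-of-sight needs no separate visibility data
// structure; it shares the renderer's acceleration structure.
bool aiCanSee(const SceneBvh& bvh, Vec3 eye, Vec3 target) {
    Vec3 d{target.x - eye.x, target.y - eye.y, target.z - eye.z};
    float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (dist <= 0.0f) return true;
    Vec3 dir{d.x / dist, d.y / dist, d.z / dist};
    return !bvh.trace(eye, dir, dist - 1e-3f).has_value();  // any hit blocks sight
}
```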

This one seems kind of silly. You generally don't want the same collision geometry and render geometry; render geometry is too complex for efficient physics calculation. Hunting down overly complex physics meshes is a pretty common optimization step when you're trying to speed up CPU performance.
This only traces “rays” for collision detection and other queries; it doesn’t offload actual physics calculations or change them. Doom TDA already uses this. Not sure if it’s the same geometry, but I would suspect so, as they said 1-pixel accuracy.

#5. Precomputed (at BVH build) and dynamic (along the ray path) tagging of nodes (discard values) in a BVH, which removes nodes from consideration during traversal and intersection. This leads to far fewer ray-box intersection tests, stack pushes, and subtree traversals. It seems like the benefits scale proportionately with BVH width, potentially allowing for some absurdly wide and shallow BVH trees.
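
My reading of the traversal side in sketch form (speculative, the patent may well differ): each wide node carries a per-child discard mask, and traversal never box-tests or pushes a pruned child:

```cpp
#include <cstdint>
#include <vector>

constexpr int kWidth = 8;  // BVH8 here; the patent could allow far wider nodes

struct WideNode {
    float    childBox[kWidth][6];  // per-child AABB: lo.xyz then hi.xyz
    int32_t  child[kWidth];        // <0 => empty slot, else node/primitive index
    uint32_t leafMask;             // bit i set => child i is a primitive
    uint32_t discardMask;          // bit i set => child i is pruned outright
};

// Standard slab test against one child's box.
bool rayHitsBox(const float b[6], const float org[3], const float invDir[3], float tMax) {
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float tn = (b[a]     - org[a]) * invDir[a];
        float tf = (b[a + 3] - org[a]) * invDir[a];
        if (tn > tf) { float tmp = tn; tn = tf; tf = tmp; }
        if (tn > t0) t0 = tn;
        if (tf < t1) t1 = tf;
    }
    return t0 <= t1;
}

void traverse(const std::vector<WideNode>& nodes,
              const float org[3], const float invDir[3], float tMax) {
    std::vector<int32_t> stack{0};  // start at the root node
    while (!stack.empty()) {
        const WideNode& n = nodes[stack.back()];
        stack.pop_back();
        for (int i = 0; i < kWidth; ++i) {
            if (n.child[i] < 0) continue;
            if (n.discardMask & (1u << i)) continue;  // pruned: no box test, no push
            if (!rayHitsBox(n.childBox[i], org, invDir, tMax)) continue;
            if (n.leafMask & (1u << i)) {
                // intersectPrimitive(n.child[i], ...) would go here; elided
                continue;
            }
            stack.push_back(n.child[i]);
        }
    }
}
```
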
Can be implemented in SW, but needs HW to fully realize the benefits, and some modifications are needed there. For example, this patent implemented in HW would likely result in a higher-quality BVH. In addition, one parallel fixed-point intersection tester per ray isn’t practical: for a BVH32 where more than half of the nodes are removed, that’s a huge waste of resources, and the range of intersected nodes could vary a ton. So AMD has to find a way to fill the slots up with calculations, likely by executing multiple rays in parallel. Now, this is likely only practical if the BVH data is shared between multiple rays. Implementing this requires sorting rays into buckets/payloads by projected ray path destination (spatial coherency sorting) and finally executing the sorted ray buckets in parallel on each RT core.
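
And a trivial way to picture the coherency sorting step (again my own sketch; real HW sorting would be far fancier): quantize each ray's direction into a coarse key and bucket rays by key, so each bucket tends to traverse the same BVH subtrees:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Ray { float org[3], dir[3]; };

// Quantize the ray direction into a coarse key; rays with equal keys head
// toward similar BVH subtrees and therefore land in the same bucket.
uint32_t coherencyKey(const Ray& r, int bits = 4) {
    uint32_t key = 0;
    for (int a = 0; a < 3; ++a) {
        float d = std::fmax(-1.0f, std::fmin(1.0f, r.dir[a]));
        uint32_t q = static_cast<uint32_t>((d * 0.5f + 0.5f) * ((1u << bits) - 1));
        key = (key << bits) | q;
    }
    return key;
}

std::unordered_map<uint32_t, std::vector<Ray>> bucketRays(const std::vector<Ray>& rays) {
    std::unordered_map<uint32_t, std::vector<Ray>> buckets;
    for (const Ray& r : rays) buckets[coherencyKey(r)].push_back(r);
    return buckets;  // each bucket then runs as one coherent batch per RT core
}
```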

Wrap-up #5: Discard values further extend the massive RT traversal uplifts expected in RDNA5 (IF included) and should finally make ludicrously wide BVHs viable. For reference, fixed-point testing alone only allowed very wide BVHs. I really did not expect to see another idea of similar, if not even bigger, magnitude than low-precision intersection testing. Realistically, with only a modest area overhead (at iso-node count) due to the new traversal logic (Radiance cores), the performance/area gains are conceivably so massive that they can get away with not spending any more area on intersection logic, but we'll see. I also wonder how this is going to run (shaders or HW accel) and whether it would benefit from a GPU BVH builder (no idea).
For RT shading overhead there's SER, OMM, and soon probably SER on steroids, which is delayed anyhit shader evaluation combined with HW-accelerated payload sorting and execution via work graphs or a later evolution (see previous posts from October). Then for RT traversal overhead there's low-precision prefiltering, ray coherency sorting (again, see the October posts) and now this insane BVH pruning for ray traversal. What other innovations lie around the corner? I honestly can't wait to see how NVIDIA counters all this and what other innovations AMD has in the pipeline.
There are some early indications. Let's remember that as lighting becomes more complex, shading overhead begins to balloon out of control and dominate RT traversal, making traversal far less important than shading performance. So even if NVIDIA loses in RT traversal to RDNA5, they could decide to use some further refinement of GATE and much stronger tensor cores (CPX = 6090?) to brute-force the MLP overhead. Assuming MLP-based neural rendering will replace ALL PT shading, heck even most traversal, bar a limited MLP training input, NVIDIA will probably just move the goalposts even further. I'm certain they'll find a way to spam as many offline-renderer-quality effects into the pipeline as possible, forcing the 6090 to its knees (4K DLSS P 40-50FPS xD). While this application of MLPs is probably excessive, I really hope RDNA5's ML HW is much, much stronger than RDNA4's (on-paper specs and HW optimizations); enough to drive many lighting MLPs, an improved FSR suite and many other things.

Do we have leaks for RDNA5's ML FP8 rate per CU vs RDNA4 per WGP?
 

eek2121

Diamond Member
Aug 2, 2005
3,465
5,122
136
The day Radeon beats my 4090 (by xx%, not x%) in RT and raster performance, I will upgrade. Will AMD break my wallet in the next couple of years? We will see. 🤣

Do we have leaks for RDNA5's ML FP8 rate per CU vs RDNA4 per WGP?
I’ve seen no leaks personally, though I haven’t been looking super hard.