Even if Nvidia pushes neural rendering into games, I expect RDNA5 to be no slouch on that front either. AMD will almost certainly add FP4 support and might also double matrix core width. That's not as extreme as the 8x width on Rubin CPX (my expectation: gaming cards will likely get a cut-down 4x), but it's already very decent for many neural rendering use cases. Nvidia's card might be faster, but we're probably talking about a few percent in most cases (and far less than 2x).
154 CUs * 2.8 GHz * 8,192 FP4 ops/clock * 2 (sparsity) / 1000 ≈ 7,065 TFLOPS, i.e. roughly 7 PFLOPS of sparse FP4, matching Kepler's figure. That's based on a quadrupling per CU vs RDNA 4 and a doubling vs Blackwell: FP8 -> FP4 = 2x, raw width increase = 2x.
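A quick back-of-the-envelope check of that math; every input here is a rumored/assumed RDNA5 value, not a confirmed spec:

```python
# Back-of-the-envelope sparse FP4 throughput for a hypothetical RDNA5 part.
# All inputs are rumored/assumed values, not confirmed specs.
cus = 154                          # assumed compute unit count
clock_ghz = 2.8                    # assumed boost clock in GHz
fp4_ops_per_clock_per_cu = 8192    # assumed dense FP4 matrix ops per clock per CU
sparsity_factor = 2                # 2:1 structured sparsity doubles peak throughput

gflops = cus * clock_ghz * fp4_ops_per_clock_per_cu * sparsity_factor  # GHz * ops/clock -> GFLOPS
tflops = gflops / 1_000
pflops = tflops / 1_000
print(f"{tflops:,.0f} TFLOPS ~= {pflops:.2f} PFLOPS sparse FP4")
# -> 7,065 TFLOPS ~= 7.06 PFLOPS
```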
IIRC all gaming implementations right now use INT8 and/or FP8, so effectively up to a 4x increase vs RDNA 4 and Blackwell/Ada Lovelace. NVFP4 is fine and AMD will match it for sure. DLSS5 and FSR5 will probably use NVFP4 and an "AMD" FP4 equivalent to deliver reduced ms cost.
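For intuition on what FP4 inference means for these networks, here's a minimal sketch of block-scaled E2M1 quantization in the spirit of NVFP4/MXFP4. The block size and scale handling are simplified assumptions, not the exact NVFP4 spec:

```python
import numpy as np

# E2M1 (FP4) can only represent these magnitudes; the sign bit adds the negatives.
E2M1_MAGNITUDES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_blockwise(weights: np.ndarray, block_size: int = 16) -> np.ndarray:
    """Fake-quantize a 1-D weight vector to block-scaled FP4 (E2M1).

    Each block gets one float scale so its max magnitude maps to 6.0, then every
    element snaps to the nearest representable E2M1 value. Simplified: real
    NVFP4/MXFP4 also constrain the format of the per-block scale itself.
    """
    out = np.empty_like(weights, dtype=np.float32)
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size].astype(np.float32)
        scale = np.abs(block).max() / 6.0 or 1.0   # avoid div-by-zero on all-zero blocks
        scaled = block / scale
        idx = np.abs(np.abs(scaled)[:, None] - E2M1_MAGNITUDES[None, :]).argmin(axis=1)
        out[start:start + block_size] = np.sign(scaled) * E2M1_MAGNITUDES[idx] * scale
    return out

w = np.random.randn(64).astype(np.float32)
w_fp4 = quantize_fp4_blockwise(w)
print("mean abs quantization error:", np.abs(w - w_fp4).mean())
```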
Let's wait for Rubin CPX's spec sheet at GTC 2026. Haven't heard anyone confirm this is the 6090 die.
Why AMD will likely extend matrix core performance:
- Neural rendering is kinda new, but there have been papers out since at least 2021 (the original Neural Radiance Caching paper), and AMD will bring their own "neural rendering" stuff with FSR Redstone
- Neural rendering techniques can cut down cost. E.g. neural texture compression needs less VRAM and disk space (see the toy sketch after this list). If we extend "neural rendering" to SR, FG and RR it gets even more obvious: you can use a smaller chip to reach similar visual and performance results
- AMD, Microsoft and Sony should look far into the future towards PS7 and Xbox-Next-Next. The more "neural rendering" is supported with strong matrix core acceleration, the easier PS6-to-PS7 crossgen will be
- This trend is already kinda obvious today: usage of neural rendering techniques will get more and more prevalent in the future (at least for some parts of the rendering pipeline)
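To make the neural texture compression point concrete, here's a toy sketch of the decode side: a tiny MLP turning a low-resolution latent grid into texels, which is exactly the kind of small matmul workload that FP4/FP8 matrix cores accelerate. The network sizes and layout are illustrative assumptions, not any shipping NTC format:

```python
import numpy as np

# Toy neural texture decoder: a compact latent grid plus a tiny MLP stand in
# for a full-resolution texture. Sizes are illustrative, not a real NTC format.
rng = np.random.default_rng(0)

LATENT_RES, LATENT_DIM, HIDDEN = 64, 8, 32        # 64x64x8 latents instead of e.g. 2048x2048 RGBA8 texels
latents = rng.standard_normal((LATENT_RES, LATENT_RES, LATENT_DIM)).astype(np.float32)
w1 = rng.standard_normal((LATENT_DIM + 2, HIDDEN)).astype(np.float32) * 0.1   # +2 inputs for (u, v)
w2 = rng.standard_normal((HIDDEN, 3)).astype(np.float32) * 0.1                # RGB out

def decode_texel(u: float, v: float) -> np.ndarray:
    """Decode one texel: fetch the nearest latent, run it through the MLP."""
    x = int(u * (LATENT_RES - 1)); y = int(v * (LATENT_RES - 1))
    feat = np.concatenate([latents[y, x], np.array([u, v], dtype=np.float32)])
    hidden = np.maximum(feat @ w1, 0.0)            # ReLU; this matmul is what matrix cores chew through
    return 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid to [0, 1] RGB

print(decode_texel(0.25, 0.75))

# Rough storage comparison (fp16 latents + weights vs an uncompressed 2048x2048 RGBA8 texture):
compressed_bytes = (latents.size + w1.size + w2.size) * 2
print(f"{compressed_bytes / 1024:.0f} KiB vs {2048 * 2048 * 4 / 1024:.0f} KiB uncompressed")
```

In a real implementation the latents and weights are trained per texture (or per material set); the point here is just that the per-texel cost is a couple of tiny matmuls.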
Neural asset compression isn't neural rendering, but yeah, it can accomplish similar things for MB overhead at iso image/asset quality.
SR, FG and RR are already neural rendering, and they're already accomplishing that right now on the NVIDIA side.
Consoles are not planned like that, and it's impossible to say what will change in the next 10 years leading up to the PS7 launch.
As for neural rendering right now, it's really just ReSTIR plus neurally augmented path tracing. Devs can and will make a scalable lighting solution that works on PS5, next-gen handhelds, and in many cases probably even the Switch 2. For all UE5 games, the baseline will probably be MegaLights plus an AMD-derived proper BVH SDK (similar to RTX Mega Geometry) for PS5 and XSX, with derivatives of this pairing arriving in other engines during the later part of PS5/PS6 crossgen. This solution will be well ahead of current probe-based RTGI and feel like another gen-on-gen uplift in RT. For even wider support, many could stick with a full worldspace (PT) + probe-based (DDGI) or mixed solution, essentially keeping the old version from the PS5 gen alongside the new MegaLights derivative for the PS5/PS6 gen. Neural rendering isn't a cutoff for PS5/PS6 crossgen.
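A minimal sketch of what keeping both lighting paths alive in one title could look like; the tier names and platform mapping are purely hypothetical, not from any engine:

```python
from enum import Enum, auto

class GIBackend(Enum):
    PROBE_DDGI = auto()       # legacy probe-based RTGI path kept from the PS5-gen build
    MEGALIGHTS_RT = auto()    # MegaLights-style many-light RT path for stronger hardware

# Hypothetical mapping of platforms to lighting backends; purely illustrative.
PLATFORM_GI = {
    "switch2":     GIBackend.PROBE_DDGI,
    "handheld_pc": GIBackend.PROBE_DDGI,
    "ps5":         GIBackend.MEGALIGHTS_RT,   # baseline per the argument above
    "ps6":         GIBackend.MEGALIGHTS_RT,
}

def pick_gi_backend(platform: str) -> GIBackend:
    """Fall back to the probe path on unknown/weak platforms so crossgen builds still light correctly."""
    return PLATFORM_GI.get(platform, GIBackend.PROBE_DDGI)

print(pick_gi_backend("ps5"), pick_gi_backend("steamdeck"))
```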
AI LLMs can be offloaded to the cloud too, so that's another reason for an even longer next-gen crossgen.
I strongly suspect the real cutoff for PS5/PS6 crossgen is API support (GPU work graphs) and derived tech (procedural geometry, self-budgeting rendering systems...), plus fundamental implementations of ML essential to core gameplay: stuff like ML destruction, physics, combat mechanics, etc. While some games could implement early versions of this in the late 2020s, most games will probably wait, pushing true next-gen PS6 games to 7-10 years from now. As for what lies beyond PS6/PS7 crossgen, it's impossible to know other than that the PS6 will be useless and you'll need a PS7.
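For the "self-budgeting rendering systems" bit, here's a toy sketch of the general idea: a feedback loop that trades quality against a frame-time budget. This is a generic illustration, not GPU work graphs or any specific engine API:

```python
# Toy "self-budgeting" controller: nudge a quality scale each frame so the
# measured GPU time converges on a fixed budget. Generic illustration only;
# real systems (work-graph-driven or otherwise) are far more involved.
FRAME_BUDGET_MS = 8.0          # e.g. targeting ~120 fps for the GPU portion
quality_scale = 1.0            # 1.0 = full resolution / full effect quality

def simulated_gpu_time(scale: float) -> float:
    """Stand-in for a real GPU timer query: cost grows with quality."""
    return 5.0 + 6.0 * scale ** 2

for frame in range(8):
    gpu_ms = simulated_gpu_time(quality_scale)
    # Proportional controller: over the budget -> lower quality, under it -> raise quality.
    error = (FRAME_BUDGET_MS - gpu_ms) / FRAME_BUDGET_MS
    quality_scale = min(1.0, max(0.25, quality_scale * (1.0 + 0.5 * error)))
    print(f"frame {frame}: {gpu_ms:.2f} ms, quality scale -> {quality_scale:.2f}")
```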