I'd be overjoyed to be proven 100% wrong and ignorant on this, but how fast can such a transition of the whole pipeline to AI actually happen? Are any studios working on this for upcoming titles? Doesn't development of bigger games take like 7 years now? The end of next gen will probably be ~10 years from now; if games by then need no more than a couple GB of VRAM for actual game assets, with AI doing its magic on top, that would be a miracle, one I'd love to see. But I guess I'm just old enough to assume things always take more time than most people think.
No earlier than post-crossgen I think, 2031-2032+ likely. TW4 might be the first game to selectively use some of it (thinking NRC, NTC and some other MLPs), given NVIDIA's mention of the latest RTX technologies in its CES 2025 blog, but all of it probably no earlier than 6-7 years from now.
So plenty of time for the tech to mature and HW to become more powerful. Hopefully the PS6 GPU can run it all.
That ~10 years is probably a stretch, and these technologies can be bolted on later, as we've seen with stuff like the ReSTIR PT games.
Adoption will likely be gradual. Thinking NTC (with the inference-on-feedback fallback for DX12U-compliant HW) and DGF+DMM first, then neural materials and other neural code compression, and work graphs (PCG and scratchpad savings) last, since those require a complete engine and game-design revamp.
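A minimal sketch of how that staged fallback could look at runtime. The mode names follow NVIDIA's public NTC terminology; the capability flags and the exact decision logic below are my own placeholders, not a real API:

```python
# Hypothetical capability-based mode selection for NTC -- illustration only.

def pick_ntc_mode(has_cooperative_vectors: bool, has_sampler_feedback: bool) -> str:
    if has_cooperative_vectors:
        return "inference_on_sample"      # decode texels directly in the shader
    if has_sampler_feedback:              # DX12 Ultimate class hardware
        return "inference_on_feedback"    # decode only the tiles actually sampled
    return "transcode_to_bcn_on_load"     # decode once to BCn at load time

print(pick_ntc_mode(has_cooperative_vectors=False, has_sampler_feedback=True))
# -> inference_on_feedback
```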
NTC has already received a lot of research interest from IHVs and game companies, so it'll happen sooner or later, maybe as soon as next gen.
Geometry will probably be handled by DGF combined with next-gen DMM.
6-7x compression is easily doable here.
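Rough back-of-envelope for where a 6-7x figure can come from. All numbers here are illustrative assumptions (plain fp32/32-bit-index mesh vs fixed 128-byte DGF-style blocks), not measurements from the DGF paper:

```python
import math

def plain_mesh_bytes(tris: int) -> float:
    """Uncompressed indexed mesh: fp32 positions + 32-bit indices."""
    verts = tris / 2                    # ~2 triangles per vertex on a closed mesh
    return tris * 3 * 4 + verts * 3 * 4

def dgf_style_bytes(tris: int, tris_per_block: int = 48) -> float:
    """Fixed 128-byte blocks; triangles per block is an assumed average."""
    return math.ceil(tris / tris_per_block) * 128

tris = 1_000_000
plain, dgf = plain_mesh_bytes(tris), dgf_style_bytes(tris)
print(f"plain: {plain / 1e6:.1f} MB, DGF-ish: {dgf / 1e6:.1f} MB, "
      f"ratio: {plain / dgf:.1f}x")
# -> plain: 18.0 MB, DGF-ish: 2.7 MB, ratio: 6.7x (with these assumptions)
```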
For some types of games, PCG might make pre-generated asset storage entirely redundant (see the AMD HPG 2025 tree-generation paper).
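Toy illustration of the storage-side point: ship a seed, regenerate the asset at load/run time. make_tree below is hypothetical (the actual HPG work generates the geometry on the GPU via work graphs); it just shows why a forest can cost a handful of integers on disk:

```python
import random

def make_tree(seed: int, branches: int = 500) -> list[tuple[float, float, float]]:
    """Deterministically regenerate branch endpoints from a seed."""
    rng = random.Random(seed)
    return [(rng.uniform(-5, 5), rng.uniform(0, 10), rng.uniform(-5, 5))
            for _ in range(branches)]

forest_seeds = [1234, 5678, 9012]            # this is the entire on-disk "asset"
forest = [make_tree(s) for s in forest_seeds]
assert make_tree(1234) == make_tree(1234)    # same seed -> same tree, every run
```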
BVH-side savings can be massive with DMMs on top of DGF. How much exactly, I don't know, but large gains vs RTX MG for sure.
Work graphs have already shown massive potential in scratchpad savings: the compute rasterization example from GDC 2024 went from 3400 MB to 55 MB, a 98.4% reduction, or 62 times smaller.
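For reference, the arithmetic behind those two numbers:

```python
before_mb, after_mb = 3400, 55                            # GDC 2024 compute rasterization example
print(f"reduction: {1 - after_mb / before_mb:.1%}")       # -> 98.4%
print(f"factor:    {before_mb / after_mb:.0f}x smaller")  # -> 62x smaller
```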
Neural shader code compression shows great potential as well, but it's very early days. The Zorah demo claims 46 MB -> 16 MB despite going from simple, standard game materials to offline-renderer-quality materials.
~10x overall savings at iso-asset complexity is not unreasonable. That's subject to change and might increase over time given how new the tech is. Devs can then decide whether they want more asset variety or to spend the freed VRAM on AI or something else.
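Purely illustrative sketch of how per-category savings could stack into an overall figure. The VRAM split and the per-category ratios below are guesses on my part, not numbers from any of the sources above; the point is that the overall factor behaves like a weighted harmonic mean, so it's dominated by whatever compresses least:

```python
# (share of asset VRAM, assumed compression factor) -- guesses, not measurements
budget = {
    "textures (NTC)":         (0.50, 8.0),
    "geometry (DGF+DMM)":     (0.25, 6.5),
    "scratch (work graphs)":  (0.15, 60.0),
    "materials/other":        (0.10, 2.9),
}
residual = sum(share / factor for share, factor in budget.values())
print(f"overall: ~{1 / residual:.1f}x")
# -> ~7.2x with these particular guesses; push the two big slices harder and ~10x is within reach
```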