adroc_thurston:
> And what's the volume and margin expected for cloud gaming?
Pretty dang good.
> Probably laughable compared to Nvidia's AI volumes.
The bubble ain't forever.

> Yeah man throw a MI450X at it.
MI450 has like 6-7x the BOM of a CPX. Even with ridiculous Nvidia margins, that'll be too big of a TCO advantage to compete against. AMD will essentially give up the large-context inference market long term if they don't provide a similar solution.. they will..

> Very odd Nvidia announced it now - they are clearly worried about AMD's threat or perhaps trying to counter custom ASICs.
Maybe Rubin CPX (GR202 or whatever it's called) is simply ready earlier than some people expected.

> And what's the volume and margin expected for cloud gaming? Probably laughable compared to Nvidia's AI volumes.
As long as it's better than the volume and margins in the discrete desktop market, who are we to argue? Money is money.

> Whilst NV certainly should win with a 800mm^2ish die, (...) It won't touch GR202, that's for sure.
That's more of an open question than it's ever been before, in my opinion.

> MI450 has like 6-7x the BOM of a CPX.
And?

> Even with ridiculous Nvidia margins, that'll be too big of a TCO advantage to compete against.
That's not how TCO works. Upfront hardware costs are a tiny fraction of it.

> And?
Just saying that I think AMD will very much bring something similar to CPX to the market in the next few years.

> That's not how TCO works. Upfront hardware costs are a tiny fraction of it.
It's not a tiny fraction lmfao. It's 80% of the TCO. Capital cost is roughly 4x the operating cost per GPU cluster over its lifetime (SemiAnalysis).

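The 80% figure is just the capex share implied by that ratio: if capital cost is ~4x lifetime operating cost, then capex / (capex + opex) = 4/5. A quick sketch of the arithmetic, with a hypothetical per-GPU price (only the 4:1 ratio comes from the cited SemiAnalysis figure):

```python
# Capex share of TCO implied by a 4:1 capital-to-operating cost ratio.
capex = 40_000        # upfront cost per GPU in USD (hypothetical number)
opex = capex / 4      # lifetime power/cooling/hosting, per the ~4:1 ratio

tco = capex + opex
print(f"capex share of TCO: {capex / tco:.0%}")   # -> 80%
```
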
> Just saying that I think AMD will very much bring something similar to CPX to the market in the next few years.
There won't be a market for that in the next few years.

> quoting dolan
3/10 ragebait.

> Is RDNA5 much stronger in RT than RDNA4?
Yeah, a byproduct of how they've built their shader cores.

> Yeah, a byproduct of how they've built their shader cores.
Wonder if it's closer, RDNA5 vs Rubin, than RDNA4 vs Blackwell.

> Wonder if it's closer, RDNA5 vs Rubin, than RDNA4 vs Blackwell.
It does not matter, since we're not gonna be doing RTRT anyway.

> Maybe it is just me, but for me CPX isn't some sort of innovative new market grab so much as an admission that the VRAM bandwidth chase has become too expensive even for nVidia and its godzilla $10,000+ offerings.
It's just an attempt to cut costs on the premiere inference platform.

> It does not matter, since we're not gonna be doing RTRT anyway.
They are switching to real-time path tracing?

Cuz AT0 is only ~7 PF or so; AMD is not doing the same matrix cores on gaming dGPUs and DC GPUs like NVIDIA is doing with Rubin.

> See, the problem is that we're not really limited by box/tri testing for RTRT. Blackwell was barely an improvement RTRT-wise, and that's because making RTRT faster is hard without going for some mildly unorthodox ways to build a GPU shader core.
What changes would an unorthodox GPU shader core introduce?

> They are switching to real-time path tracing?
Nope. PT was the 40 series. The 50 series unveiled NRC and Neural Materials neural rendering, still not production-ready. The 60 series will be neural rendering galore. Watch NVIDIA's HPG 2025 keynote. Pretty obvious they want PT + advanced lighting effects approximated using tensor cores.

> That and using the xtor budget for bigger systolic arrays. Ain't rocket science.
...And freezing or getting rid of high-precision ML formats to save on die area. Even with LLM training, NVFP4 seems fine.

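Since the posts lean on NVFP4, a toy sketch of what a block-scaled FP4 format does may help: each 16-element block stores 4-bit E2M1 values plus one shared scale. Illustrative only; the real NVFP4 format stores block scales in FP8 E4M3 and adds a tensor-level scale, both skipped here.

```python
import numpy as np

# The magnitudes an FP4 E2M1 element can represent (sign handled separately).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fp4_quantize(x, block=16):
    """Toy NVFP4-style round trip: one scale per block, round to the E2M1 grid.
    The real format keeps the scale in FP8 E4M3; we keep it in float here."""
    x = x.reshape(-1, block)
    scale = np.abs(x).max(axis=1, keepdims=True) / FP4_GRID[-1]  # block max -> 6.0
    scale[scale == 0] = 1.0                    # all-zero blocks: any scale works
    scaled = x / scale
    nearest = np.abs(np.abs(scaled)[..., None] - FP4_GRID).argmin(axis=-1)
    return np.sign(scaled) * FP4_GRID[nearest] * scale   # dequantized values

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
print("mean abs round-trip error:", np.abs(w - fp4_quantize(w)).mean())
```
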
> Nvidia advertises 4 PFLOPs for the PRO 6000 vs 30 PFLOPs for the CPX (both FP4 w/ sparsity).
What kind of magic sauce is Nvidia putting into these chips to 6-8x FP4 perf while barely increasing area and power?

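For what it's worth, the "6-8x" matches the two advertised figures quoted above:

```python
# Ratio of the advertised FP4-with-sparsity numbers from the post above.
pro_6000_pflops = 4.0    # RTX PRO 6000, per the post
cpx_pflops = 30.0        # Rubin CPX, per the post
print(f"CPX / PRO 6000 = {cpx_pflops / pro_6000_pflops:.1f}x")  # -> 7.5x
```
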
> What changes would an unorthodox GPU shader core introduce?
Nothing that matters to you.

> Pretty obvious they want PT + advanced lighting effects approximated using tensor cores.
Duh, that's the only way.

> What changes would an unorthodox GPU shader core introduce?
RT benefits from stuff that CPUs do, like out-of-order execution and branch prediction.

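The connection is ray traversal: walking a BVH is a data-dependent loop full of unpredictable branches, which is exactly what out-of-order cores and branch predictors are built for. A toy illustration (pure Python, hypothetical node layout; real GPU traversal runs in fixed-function RT cores over packed node formats):

```python
# Toy BVH traversal showing the branchy, data-dependent control flow of RT.
def aabb_hit(bmin, bmax, origin, direction):
    """Slab test: does the ray starting at `origin` hit the box [bmin, bmax]?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, bmin, bmax):
        if d == 0.0:
            if not lo <= o <= hi:
                return False           # ray parallel to slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        tmin = max(tmin, min(t0, t1))
        tmax = min(tmax, max(t0, t1))
    return tmin <= tmax

def traverse(nodes, origin, direction):
    """Collect primitives in every leaf the ray touches; the path is per-ray data."""
    stack, hit_prims = [0], []
    while stack:                       # loop trip count differs ray to ray
        node = nodes[stack.pop()]
        if not aabb_hit(node["min"], node["max"], origin, direction):
            continue                   # divergent branch: neighbouring rays disagree
        if "prims" in node:            # leaf node
            hit_prims += node["prims"]
        else:                          # internal node: descend into both children
            stack += (node["left"], node["right"])
    return hit_prims

# Tiny 3-node tree: a root whose two leaf children split the box along x.
nodes = [
    {"min": (0, 0, 0), "max": (2, 1, 1), "left": 1, "right": 2},
    {"min": (0, 0, 0), "max": (1, 1, 1), "prims": ["triA"]},
    {"min": (1, 0, 0), "max": (2, 1, 1), "prims": ["triB"]},
]
print(traverse(nodes, (0.5, 0.5, -1.0), (0.0, 0.0, 1.0)))   # -> ['triA']
```
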
> RT benefits from stuff that CPUs do, like out-of-order execution and branch prediction.
Yes, we'z gonna loop all the way into weird-ISA Larrabee sooner or later.

> Cuz AT0 is only ~7 PF or so; AMD is not doing the same matrix cores on gaming dGPUs and DC GPUs like NVIDIA is doing with Rubin.
As long as you have enough compute for RAG, your compute numbers don't really matter for inference, so 7 PF would be plenty; what matters is memory bandwidth and capacity...

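That claim is easy to sanity-check with a decode roofline: generating one token streams every active weight once, so single-stream tokens/s is capped by bandwidth over model size, while the compute ceiling sits far higher. A sketch with made-up numbers (nothing here is a spec from the thread except the ~7 PF figure):

```python
# Back-of-envelope decode roofline; hardware numbers are hypothetical
# placeholders except the ~7 PFLOP/s compute figure quoted above.
mem_bw = 8e12                # bytes/s of memory bandwidth (assumed ~8 TB/s class)
active_params = 100e9        # 100B active parameters (assumed)
bytes_per_param = 1          # FP8 weights (assumed)
compute = 7e15               # ~7 PFLOP/s, the AT0 figure from the thread

tok_s_bandwidth = mem_bw / (active_params * bytes_per_param)  # weight-streaming cap
tok_s_compute = compute / (2 * active_params)                 # ~2 FLOPs/param/token
print(f"bandwidth cap: {tok_s_bandwidth:.0f} tok/s, compute cap: {tok_s_compute:,.0f} tok/s")
```

With numbers like these, the bandwidth ceiling (~80 tok/s per stream) is hit long before the compute ceiling (~35,000 tok/s), which is the post's point; heavy batching or long-context prefill is where the compute side starts to matter.
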
> That very well could bite AMD in the behind something fierce if they don't get their PTX equivalent, amdgcnspirv, up and running in a non-experimental/beta state beforehand, for a number of reasons...
Pray that Meta wants that and it'll be done in a matter of months.

> On X, Musk has "endorsed" AMD for small to mid-sized AI models. That is not a nothingburger.
> (Link: "Elon Musk 'Endorses' AMD's AI Hardware for Small to Medium AI Models, Implying That There's Potential to Ease Reliance on NVIDIA", wccftech.com)
Lol, at this point I'm pretty sure Lisa Su would rather he just act as if AMD doesn't exist.

> Lol, at this point I'm pretty sure Lisa Su would rather he just act as if AMD doesn't exist.
He's less of a sperg now, I think.

His endorsement certainly isn't going to do them any favors now.