> They've been diddling xformers in a purely incremental way for how long?
You seem to just see what you want. A narrow mind always has a narrow view.
Lmao.
Yes cloud gaming. That's what AT0 is for.
> Even if it goes beyond that, still nothing new (M3 in 2023).
Apple has to source baby amounts of bandwidth outta their SRAM slab.

> Yes they do. Read the patents, go beyond SOTA.
Patentware has no relation to any actual products.

> Very novel, and we might finally see AMD pioneer tech for once.
Pioneering is worthless in a vacuum.

> You seem to just see what you want. A narrow mind always has a narrow view.
Reality is often disappointing.
Not directly but you can read between the lines
[Attachment 129351]
[Attachment 129352]
A "supportsWGP" flag where you would expect an "isGFX1250Plus" or "has gfx1250 instructions" condition, i.e. a dedicated capability flag rather than a direct check of the GPU generation (gfx* instruction flags carry over from one generation to the next, i.e. gfx1250 still has the gfx9/10/11/12 instructions).
Apple has to source baby amounts of bandwidth outta their SRAM slab.
Patentware has no relation to any actual products.
Pioneering is worthless in a vacuum.
AMD just builds good products.
Reality is often disappointing.
You'll learn to accept the limits of it soon enough.
> Yeah, but it's still better than the alternative (fixed caches and VRFs).
Nope, you're making a pile of tradeoffs.

> Kepler_L2 listed many of those patents in a thread a month ago when making claims about GFX13. Already confirmed most things.
Patentware that adheres to your imaginary checkbox list has nothing to do with the product™ at large.

> Good products (at launch) that aren't looking ahead will age poorly over time. Look at Kepler vs GCN1. Same thing with RDNA1 vs Turing.
GPUs don't live long enough to 'age'.

> Yes cloud gaming. That's what AT0 is for.
Man I love ROI, also AT0 really will be a massive service density leap for Xbox Cloud.

> Good products (at launch) that aren't looking ahead will age poorly over time. Look at Kepler vs GCN1. Same thing with RDNA1 vs Turing.
What adroc said.

> Every part has reuse outside of just being a dGPU.
Hence why AT. thatsthejoke.png

> Gotta credit Huynh here with such resourcefulness.
A lot more people than him, really.

> A lot more people than him, really.
Of course many people have been working hard towards this, and a few above had to be convinced.

> Client graphics upgrade cycles are really short (and used to be much shorter).
IMHO, if not for Nvidia's pivot to ray tracing they would be significantly longer still by now.

> IMHO, if not for Nvidia's pivot to ray tracing they would be significantly longer still by now.
Nope, RTRT hasn't been the upgrade cycle driver so far, like at all.

> Nope, RTRT hasn't been the upgrade cycle driver so far, like at all.
Not quite what I meant.
It's all general performance creep plus VRAM limitations etc.
> I meant that the raster complexity increase in each new generation of games was, if not plateauing, then certainly decreasing significantly, to the point that playable 4K was quite achievable for high-end GPUs of the pre-Turing generation.
Oh no, openworld bloat plus Nanite and friends promised the infinite future of GPU torture anyhow.

> Without RT/PT to add a ginormous new compute burden to the mix, the gaming GPU market was destined to get pretty stale as "good enough".
Again, not the case.

> Oh no, openworld bloat
Are you talking about UE5 and Unigine supporting large (i.e. FP64) world coordinates for insanely big (potentially larger than Earth) world maps?
Nope, you're making a pile of tradeoffs.
Patentware that adheres to your imaginary checkbox list has nothing to do with the product™ at large.
All you need to know is that gfx13 cachemem is different.
GPUs don't live long enough to 'age'.
Client graphics upgrade cycles are really short (and used to be much shorter).
Embedded lands have hardware living for decades.
What adroc said.
Timing is more important than being first in general.
ATI's first DX9 implementation (9700) was superior to Nvidia's (5800).
Did it help AMD?
Not nearly as much as it should have, because by the time it became relevant, Nvidia's 6k series came out.
ATI's first SM3.0 implementation (X1800 and X1900) later turned out to be superior to GF6k and 7k implementations in proper shader-heavy DX9.0c games.
Did it help them?
No, because by the time it became relevant, GF8k was out and demolished everything that came before in DX9.
AMD was far ahead on (async) compute until Turing.
Did it help them much in desktop?
All in all, no.
And outside some NV-sponsored implementations to sell the feature, RT didn't become truly relevant until what, 2024?
If N31 and N32 had gotten closer to their frequency, perf/W and maybe IPC targets (assuming they missed that a bit, too), they would've been good enough for their planned lifecycle.
Never mind that the memory capacities of the GeForce 3070 and 3080 were definitely not very forward-looking; they still flew off the shelves (in part because of mining, but still).
Supporting checkbox features early is mostly an additional marketing benefit when your architecture is otherwise better, too, because then it might help give potential customers the final push.
But in terms of actual value?
The only GPU feature in recent memory that in my opinion really panned out well in that regard even for older architectures was DLSS.
Everything else only became relevant enough (or performant enough) 2-3 gens down the line.
> Price is very important.
US $1500-2000 has been proven to be an acceptable price to at least a few million buyers ready to spend on a top-end consumer-level GPU.

> If the next AMD top-end GPU isn't faster than the 5090 then it won't sell. I hope they don't use that stupid 16-pin connector.
The Fire Connector?
The Fire Connector?
> I hope they don't use that stupid 16-pin connector.
They probably will, because the AI Pro 9700 reference board already uses it, but as long as they demand an ATX 3.1 PSU they're probably fine.
> Might be different in implementation, but the overall idea and benefits would still be the same.
Nope.
> Incredibly boring "AMD catching up to NVIDIA Volta ~10 years later" situation.
Fermi had a unified L1$/shmem slab 15 years ago.
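Side note for anyone who hasn't poked at that knob: the Fermi-era shared slab is still exposed through the CUDA runtime as a cache-split hint. A minimal host-side C++ sketch (assumes the CUDA toolkit is installed and you link against cudart; on newer architectures the call is only a preference the driver may ignore):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Since Fermi, L1 cache and shared memory come out of the same physical
    // SRAM per SM; the runtime lets you hint how that slab should be split.
    // This sets the preference device-wide; cudaFuncSetCacheConfig does the
    // same per kernel.
    cudaError_t err = cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);
    if (err != cudaSuccess) {
        std::printf("cache config hint failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // Other options: cudaFuncCachePreferL1, cudaFuncCachePreferEqual,
    // cudaFuncCachePreferNone (let the driver decide).
    std::printf("Requested a shared-memory-heavy L1/shared split.\n");
    return 0;
}
```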
> Man I love ROI, also AT0 really will be a massive service density leap for Xbox Cloud.
They could probably even aim at 64x XBSX streams with such a system:
Like you can have 8 AT0 cards in a 2S Venice box serving like 32 XSX streams or 16 Nextbox streams.
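Taking those figures at face value, the per-card density is simple division. A back-of-the-envelope sketch using only the numbers from the post above (nothing here is an official figure):

```cpp
#include <cstdio>

int main() {
    // Figures from the post: 8 AT0 cards in a 2S Venice box,
    // ~32 XSX-class or ~16 Nextbox-class streams per box.
    const int cardsPerBox       = 8;
    const int xsxStreamsPerBox  = 32;
    const int nextStreamsPerBox = 16;

    std::printf("XSX-class streams per AT0 card: %d\n",
                xsxStreamsPerBox / cardsPerBox);   // 4
    std::printf("Nextbox-class streams per AT0 card: %d\n",
                nextStreamsPerBox / cardsPerBox);  // 2
    // The "64x XBSX streams" figure quoted earlier would need either two such
    // boxes or roughly double the per-card density; the posts don't say which.
    return 0;
}
```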
Nope, RTRT hasn't been the upgrade cycle driver so far, like at all.
> RTRT remained and remains a gimmick.
The push towards RTRT, or especially HWRT, will change with the next console cycle. Not because RTRT drives graphics forward (which it definitely will), but because it will change the production pipelines of games.

> The push towards RTRT, or especially HWRT, will change with the next console cycle. Not because RTRT drives graphics forward (which it definitely will), but because it will change the production pipelines of games.
Wrong, because we're gonna be doing even less RTRT than we do now, replacing it with ML approximations.
No more baking, or at least vastly reduced baking, and shortened development cycles. RTRT is mainly a game changer for developers, not for gamers. But we as gamers might get better or, to be more precise, more consistent quality, because HWRT gets rid of the biggest illumination flaws of raster approximations.
> HWRT together with a very scalable RTGI solution like MegaLights, with virtually no limits on light source count, will be the future, starting with the PS6 and Xbox-Next release. Main reason: game development.
words words words, completely disconnected from reality.
In that regard, RTXDI from Nvidia based on ReSTIR is conceptually the very same thing as MegaLights. Just geared towards the upper end of the quality spectrum (and HW requirements).
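For reference, the reason both MegaLights and ReSTIR-style RTXDI can claim "virtually no limit on light source count" is that they lean on weighted reservoir sampling: each pixel streams over candidate lights and keeps one winner in O(1) memory. A minimal CPU-side sketch of that core idea, with made-up light and weighting code (this is not the actual RTXDI or MegaLights implementation):

```cpp
#include <random>
#include <vector>

// A point light with an (unnormalized) importance for the shading point,
// e.g. intensity over squared distance. Purely illustrative.
struct Light { float x, y, z, intensity; };

struct Reservoir {
    int   selected = -1;  // index of the light currently held
    float wSum     = 0.f; // running sum of candidate weights

    // Weighted reservoir sampling: candidate i replaces the current pick
    // with probability w_i / (w_1 + ... + w_i), so the final pick ends up
    // distributed proportionally to the weights without storing candidates.
    void update(int lightIdx, float weight, std::mt19937 &rng) {
        wSum += weight;
        std::uniform_real_distribution<float> u(0.f, 1.f);
        if (wSum > 0.f && u(rng) < weight / wSum)
            selected = lightIdx;
    }
};

// Pick one light for a shading point out of an arbitrarily large set in
// O(n) time and O(1) memory per pixel -- the property that lets
// ReSTIR/MegaLights-style many-light sampling scale with light count.
int pickLight(const std::vector<Light> &lights,
              float px, float py, float pz, std::mt19937 &rng) {
    Reservoir r;
    for (int i = 0; i < static_cast<int>(lights.size()); ++i) {
        float dx = lights[i].x - px, dy = lights[i].y - py, dz = lights[i].z - pz;
        float d2 = dx * dx + dy * dy + dz * dz + 1e-4f;
        r.update(i, lights[i].intensity / d2, rng); // crude importance weight
    }
    return r.selected; // -1 if the scene has no lights
}
```

The real techniques then add temporal and spatial reuse of these reservoirs on top, which is where most of the quality comes from.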
Nope.
Fermi had a unified L1$/shmem slab 15 years ago.
Are you really that new?