
Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member

With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
Usually AMD takes around three quarters to get support into LLVM and amdgpu. Lately, since RDNA2, the window in which they push support for new devices has been much shorter, to prevent leaks.
But looking at the flurry of code in LLVM, it is a lot of commits. Maybe because the US Govt is starting to prepare the SW environment for El Capitan (maybe to avoid a slow bring-up situation like Frontier, for example).

See here for the GFX940 specific commits
Or Phoronix

There is a lot more if you know whom to follow in LLVM review chains (before getting merged to github), but I am not going to link AMD employees.
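Not from the thread, just a quick sanity check anyone can run: the sketch below asks a local clang whether its AMDGPU backend already lists gfx940. It assumes a reasonably recent clang (roughly 16+) on PATH and leans only on the documented --print-supported-cpus flag; searching both output streams is defensive since versions differ in where they print the list.

```python
# Hedged sketch, not anything AMD ships: check whether the local clang build
# already knows the gfx940 target. Only --print-supported-cpus and the
# amdgcn-amd-amdhsa triple are relied on; clang being on PATH is an assumption.
import subprocess

def clang_knows_gpu(gpu: str = "gfx940") -> bool:
    proc = subprocess.run(
        ["clang", "--target=amdgcn-amd-amdhsa", "--print-supported-cpus"],
        capture_output=True, text=True,
    )
    # Different clang versions emit the CPU list on stdout or stderr, so search both.
    return gpu in (proc.stdout + proc.stderr)

if __name__ == "__main__":
    print("gfx940 known to local clang:", clang_knows_gpu())
```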

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of not having a host CPU capable of PCIe 5 available in the very near future, so it might have gotten pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it :grimacing:

This is nuts, MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 
Is there even any FSR FG latency testing? Because when I searched for it, all I found was either nothing, or data from the first revision, which was broken in some respects (like not working with VRR).
 
We're going to infer inference. Why actually calculate 4 whole bits when we can calculate 1 and predict the other 3? And since we're predicting the least significant bits, it has little impact, really.
In 10 years Jensen will just sell people an Nvidia branded blindfold for $2000 and he’ll tell us to just close our eyes and imagine high fps, fully path traced games, all powered by Nvidia Neural Rendering and Nvidia Reflex 4, which cuts out the whole rendering pipeline since light doesn’t even need to travel into your eyes. Super low latency!
 
It's the future of gaming.
 
Huge oof if that die size is real. I guess it’s time for Radeon to have Arc margins lol
It's better than Arc margins even with a "huge oof". ~380mm² at $400 and $500.
But yes, that's $350 and $500 less than Nvidia. Even if GDDR7 were twice as expensive, it couldn't dent that margin much.
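Very rough back-of-envelope on that last point, nothing official: only the $400/$500 price points and the $350/$500 gaps to Nvidia come from the posts above, while the GDDR7 numbers are made-up placeholders just to show how little even a doubled memory cost moves relative to that gap.

```python
# Back-of-envelope only. The $400/$500 prices and $350/$500 gaps vs Nvidia are from
# the thread; the GDDR7 cost per GB and 16 GB capacity are hypothetical placeholders.
gddr7_cost_per_gb = 8.0   # USD -- hypothetical
vram_gb = 16              # hypothetical capacity

base_mem = gddr7_cost_per_gb * vram_gb      # baseline memory BOM
extra_if_doubled = base_mem                 # what "twice as expensive" GDDR7 adds

for price, gap_vs_nvidia in ((400, 350), (500, 500)):
    share = extra_if_doubled / gap_vs_nvidia
    print(f"${price} card: doubled GDDR7 adds ~${extra_if_doubled:.0f} to the BOM, "
          f"vs a ~${gap_vs_nvidia} price gap to Nvidia ({share:.0%} of the gap)")
```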
 
What are your thoughts on this?
COD favors AMD heavily, so AMD beating a 4090 isn't as magical as it sounds. It will be very different in other games, especially NV-favoring ones with heavy RT.

If the die size is indeed around 350mm² or bigger (vs the leaked ~270mm²), and the huge ass coolers + 3 pin options pointing to high power use = high clocks, that performance doesn't seem that unrealistic. I'm thinking 3.2GHz clocks or more.
 
Agreed. I mean the only "inflation" going on with the IGN article is that I think they mislabeled the 9070XT as a normal 9070.
 
It's a bad idea to extrapolate anything from a COD screenshot, as it doesn't really list all the settings like many other benchmarks do. Don't extrapolate from alpha drivers, or from a single game in general.

Really, I know there's nothing to go on, but it's best to ignore the IGN article entirely.
 
Why wouldn't AMD take that article down if it wasn't accurate? It's been online for almost a day now. Do they like dealing with PR nightmares at launch?
 
Not their job to control the media after the fact. But they let them run one (1) single benchmark. AMD set themselves up for it again.
 
That last bit about AMD “wanting” this out seems delusional to me. Have these guys EVER worked at a company before? AMD is not some singular puppet-master entity, lol. It was just some dude from IGN sneaking a bench in at a booth, and he didn’t even seem to know there was a 9070 and another XT model.
 