
Discussion RDNA4 + CDNA3 Architectures Thread

Page 228 - AnandTech Forums

DisEnchantment

Golden Member
With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
Usually AMD takes around three quarters to get support into LLVM and amdgpu. Since RDNA2, the window in which they push support for new devices has been much reduced, to prevent leaks.
But looking at the flurry of code in LLVM, it is a lot of commits. Maybe the US government is starting to prepare the software environment for El Capitan (perhaps to avoid a slow bring-up like Frontier's).

See here for the GFX940-specific commits,
or Phoronix.

There is a lot more if you know whom to follow in the LLVM review chains (before the patches get merged to GitHub), but I am not going to link AMD employees.

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of no host CPU capable of PCIe 5.0 being available in the near term, so it might have been pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it :grimacing:

This is nuts; the MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 
Here is an in-game benchmark, 7900 XTX OC vs 4090 OC:

[screenshot of in-game benchmark]
 
The AI experience is so bad on AMD that I gave up about 8 months ago; even Bing is better... At the beginning of the AI craze, 2+ years ago, you literally had to know Python to use it, with far, far worse results and performance vs the RTX 3000 and 4000 series. There are so many models for Nvidia it's not even fair.
This is very out of date. There are plenty of AI backends that can run all the popular models on AMD, Intel, and Nvidia cards - even simultaneously.

AMD is not suffering from a lack of models. They are suffering from the teething problems of getting ROCm installed.
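A quick way to see whether those ROCm teething problems are behind you is to check what PyTorch build you actually ended up with. This is a minimal sketch, assuming PyTorch is the framework in question; it relies on the fact that ROCm builds of PyTorch populate `torch.version.hip`, while CUDA or CPU-only builds leave it `None`, and it degrades gracefully if torch is not installed at all.

```python
# Hedged sketch: report whether a ROCm-enabled PyTorch build is present.
# Falls back cleanly when torch is not installed, so it is safe to run anywhere.
import importlib.util


def rocm_torch_status():
    """Return a short string describing the installed PyTorch build, if any."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    # ROCm builds set torch.version.hip; CUDA/CPU builds leave it as None.
    hip_version = getattr(torch.version, "hip", None)
    if hip_version:
        return f"ROCm build {hip_version}"
    return "non-ROCm build"


print(rocm_torch_status())
```

If this reports a ROCm build but GPU inference still fails, the problem is usually at the driver/kernel level rather than in the Python stack.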
 
> I don't think he could or was smart enough for that.
The RTG marketing person who no doubt had a hand in this 'leak' could, though, and I wouldn't put it past them.

Good to know that Q1 2025 means January and not April though, suggests that they're at least somewhat confident of how this will perform.
 
> The RTG marketing person who no doubt had a hand in this 'leak' could, though, and I wouldn't put it past them.
evulz AMD ninjas again (you're really overestimating how much they care and how competent they are).
> Good to know that Q1 2025 means January and not April though, suggests that they're at least somewhat confident of how this will perform.
the h/w thingy has been ready for eons.
 