
Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member
With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
AMD usually takes around three quarters to get support into LLVM and amdgpu. Since RDNA2, the window in which they push support for new devices has been much reduced, to prevent leaks.
But looking at the flurry of code in LLVM, it is a lot of commits. Maybe the US Govt is starting to prepare the SW environment for El Capitan early (perhaps to avoid a slow bring-up situation like Frontier's, for example).

See here for the GFX940 specific commits
Or Phoronix

There is a lot more if you know whom to follow in the LLVM review chains (before things get merged to GitHub), but I am not going to link AMD employees.

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of no host CPU capable of PCIe 5 in the very near future, so it might have been pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it :grimacing:

This is nuts, MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 
AMD is quite competitive without X3D with a smaller core.
You don't count LLC when measuring core size. Not even L2.
They have the biggest stick. The biggest stick doesn't mean the biggest die size; consumers don't care about die size. They care about what's fast and what the price is. And if it's the fastest, the price is sort of yours to set.
 
Best Selling GPU from Newegg
People love ASRock?
 
'competitive' is for suckers.
You gotta win.
Here's the problem, nvidia will then just release some new proprietary feature, like proprietary neural shaders or something and it will run like crap on AMD, and this new feature would suddenly become the most important thing ever and the narrative would be "sure 10090 XT beat 6090 in outdated games but in the new hotness 6090 is 20% faster" and we get back to square one.
 
Here's the problem, nvidia will then just release some new proprietary feature, like proprietary neural shaders or something and it will run like crap on AMD, and this new feature would suddenly become the most important thing ever and the narrative would be "sure 10090 XT beat 6090 in outdated games but in the new hotness 6090 is 20% faster" and we get back to square one.
It's exactly this.
If Nvidia doesn't win at the current batch of benchmarks, they'll just push all reviewers to focus exhaustively on this new, suddenly-super-important feature that their newest architecture excels at.

They have enough leverage over reviewers and developers to not let a GeForce FX situation happen ever again.
 
That the 3060 is selling for $300, five years after release, is mind-boggling.
Using old and slow GDDR6 and a chip made on Samsung 8nm, Nvidia's (and OEMs') margins on this must be sky high.



It's probably just the cheapest 9060 XT 16GB around.

And chumps say that "AMD needs to compete on price". Well, you can get a 6600 for cheaper than a 3060. Hell, the 6600 and the 3050 used to be priced the same. Guess which sold by the boatload?
 
Here's the problem, nvidia will then just release some new proprietary feature, like proprietary neural shaders or something and it will run like crap on AMD, and this new feature would suddenly become the most important thing ever and the narrative would be "sure 10090 XT beat 6090 in outdated games but in the new hotness 6090 is 20% faster" and we get back to square one.
None of that matters when AMD can ship 2x the shader core count of NV.
If Nvidia doesn't win at the current batch of benchmarks, they'll just push all reviewers to focus exhaustively on this new, suddenly-super-important feature that their newest architecture excels on.
This only works if AMD plays fair.
They don't have to!
 
Another one of Jack Huynh's spins (officially) buried

AMD is pleased to see continued strong sales of its GPUs in many areas, the company further explained; demand is still higher than production, a surprising statement. After all, AMD had already announced shortly after the launch of the Radeon 9000 series that it would quickly increase production.

 
From New Driver
  • New Game Support for AMD FidelityFX™ Super Resolution 4 (FSR 4)
    • FSR 4 can be enabled for most games that support FSR 3.1 with DirectX® 12.
Does this mean that any game that supports FSR 3.1 will automatically support FSR 4 without any game-side update?
 