Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment (Golden Member)
With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
Usually AMD takes around three quarters to get support into LLVM and amdgpu. Since RDNA2, though, the window in which they push support for new devices has been much shorter, to prevent leaks.
But looking at the flurry of code in LLVM, that is a lot of commits. Maybe the US Govt is starting to prepare the SW environment for El Capitan (perhaps to avoid a slow bring-up like Frontier had).

See here for the GFX940 specific commits
Or Phoronix

There is a lot more if you know whom to follow in the LLVM review chains (before things get merged to GitHub), but I am not going to link AMD employees.
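For anyone who wants to poke at this themselves, here is a minimal compile smoke test (my own sketch, not anything from the patches: it assumes a ROCm-style hipcc, and the saxpy kernel is just a stand-in) that only builds once the gfx940 target has landed in the toolchain:

```cpp
// saxpy.hip - trivial compile check for the new target.
// Build: hipcc --offload-arch=gfx940 -c saxpy.hip -o saxpy.o
// A toolchain without the GFX940 patches rejects the arch flag outright.
#include <hip/hip_runtime.h>

__global__ void saxpy(float a, const float* x, float* y, int n) {
    // One element per thread; the bounds check covers the ragged tail.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}
```

If the arch flag is rejected, the GFX940 support simply has not reached your LLVM build yet.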

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of no host CPU capable of PCIe 5.0 arriving in the very near future, so it might have gotten pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again I believe MI300 could launch before it :grimacing:

This is nuts; the MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 

adroc_thurston (Diamond Member)
> The giant bump in L2 offsets the area loss of MALL pretty heavily, so you must be talking about the shader core.
They made L2 denser doe.
Still better than now, or worse? It depends.
> If 8SE/96WGP equivalent is that far off the top then the rest of the stack is doomed no matter what.
Like, you're really expecting Rubin to be another Ampere>Ada leap? With a much less significant node shrink?
Oh noes, just on top.
> "Giving up" would have meant taking all the ROP blocks out of AT0, making it unusable for gaming.
It's for cloud gaming. You can't chop off ROPs there.
> They did not, and AT0 is coming to the DIY market, so they're not giving up.
A heavily castrated chop that won't be competitive.
 

branch_suggestion (Senior member)
> They made L2 denser doe.
That was a given; just being on N3P helps densify the macro, but it obviously needed to get closer to MALL density for PPA.
Oh, and the area calc is based on AT2 supposedly being 264mm^2; most areas would scale ~2.4x, the rest anywhere from under 2x up to 2.66x.
So roughly 600mm^2.
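Spelling the arithmetic out (the 264mm^2 base is the rumoured figure from above, nothing official):

$$264 \times 2.0 \approx 528, \qquad 264 \times 2.4 \approx 634, \qquad 264 \times 2.66 \approx 702 \quad (\mathrm{mm^2})$$

with the denser cache macros pulling the blended estimate down toward ~600mm^2.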
> Oh noes, just on top.
Then your perf projection for AT0 is lower than mine.
 

branch_suggestion (Senior member)
> Pretty sure less.
Will probably end up at 550ish through torturing RTL engineers.
> It's less about perf and more about comp projections. AMD doesn't have a single sliver of a chance on top without resorting to SoIC. Which they won't.
I do agree, NV can beat AT0 with their known capabilities; it's just that they might need to get closer to the ret limit than they would like.
A 2x ret tiled thing, on the other hand: no chance.
But better to wait for a special node to try that on, like A16.
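For context, the "ret limit" here is the single-exposure reticle field, about 26mm x 33mm:

$$26\,\mathrm{mm} \times 33\,\mathrm{mm} = 858\,\mathrm{mm^2}, \qquad 2 \times 858 = 1716\,\mathrm{mm^2}$$

which is why a ~550-600mm^2 die still leaves headroom, and why a "2x ret tiled thing" (as I read it) means stitching together two near-reticle-sized dies.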
 

adroc_thurston (Diamond Member)
> Will probably end up at 550ish through torturing RTL engineers.
We love that, don't we folks.
> NV can beat AT0 with their known capabilities; it's just that they might need to get closer to the ret limit than they would like.
That's the point: AMD's not gonna win.
> But better to wait for a special node to try that on, like A16.
A16 is a dead node with 0 (zero) customers.
Not like it matters anyway. AMD's not competing with Nvidia in discrete graphics anymore.
 

branch_suggestion (Senior member)
> Not like it matters anyway. AMD's not competing with Nvidia in discrete graphics anymore.
You could say that ever since N36 got shelved.
RDNA3 really needed to not be made of suck.
If there's a question to be asked, it's this: is it inherently bad that they're pivoting away from discrete graphics competition to GPGPU and APUs?
It's not like they're taking resources away from graphics; the IP is progressing nicely, and they're getting the bulk of the coolest parts over the CPU division.
 

adroc_thurston (Diamond Member)
> You could say that ever since N36 got shelved.
No, when N4c got axed.
> ...is it inherently bad that they're pivoting away from discrete graphics competition to GPGPU and APUs?
It's just boring as hell.
> It's not like they're taking resources away from graphics; the IP is progressing nicely, and they're getting the bulk of the coolest parts over the CPU division.
It's good for what I actually shop for (laptops) and bad for the fun factor, since you have good IP wasted on configs that don't take any heads.
Like, in GPGPU they're on a jihad to build the meanest, baddest GPU socket, rack, and interconnect. Client dGFX? You get scraps off someone's table.
 

adroc_thurston (Diamond Member)
> Dunno how the market could ever escape the Jensen cult outside of at least 3-4 sustained ass-beatings by some chiplet battle axe thing
Well, duh.
Either AMD embraces the Ork warboss lifestyle and builds the biggest choppa' again and again, or it is stuck forever.

It worked every bit as well as one could imagine in DIY DT CPUs, so they just have to commit.