Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member
Mar 3, 2017
With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
AMD usually takes around three quarters to get support into LLVM and amdgpu. Since RDNA2, though, the window in which they push support for new devices has been much shorter, to prevent leaks.
Even so, the flurry of code in LLVM amounts to a lot of commits. Maybe the US Govt is starting to prepare the SW environment for El Capitan (perhaps to avoid a slow bring-up situation like Frontier's).

See here for the GFX940 specific commits
Or Phoronix

There is a lot more if you know whom to follow in LLVM review chains (before getting merged to github), but I am not going to link AMD employees.

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Hopper, though, had the problem of no host CPU capable of PCIe 5 in the very near future, so it might have been pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it :grimacing:

This is nuts, the MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 

SolidQ

Golden Member
Jul 13, 2023
They can also theoretically strap GDDR7 to da thing to get 5080 comp.
At least I hope so; maybe they can move fast and release in summer :p

wild prediction MILD style and say it will be on par with a 4080S.
This is his speculation; he's just extrapolating from AMD's slide numbers, and projecting the 5070/5070 Ti from the 5080/5090 reviews.
 

adroc_thurston

Diamond Member
Jul 2, 2023
at least hope, maybe they can do fast and release in summer :p
Anything is on the cards now, since NV has kindly proven they're as prone to massive underdelivery as anyone else.
It was a good run tho, 10 whole years of real stompy things.
 

gaav87

Senior member
Apr 27, 2024
They can't, because the name is 9070 XT, so it's a 70-class card by name. And since Nvidia has the market share, people (well, some people) will only buy it if it's cheaper than Nvidia's 70-class card. So even if this thing performs better than the 5070 Ti, it doesn't matter; it will have to be at least $100 cheaper. So $650 it is.
Do we really think AMD is not making a huge profit selling this card at $650, with a ~350 mm² die and GDDR6 memory?

Also, the 5070 Ti will be faster than the 4070 Ti Super: it has about 5% more CUDA cores, so it should be around 10% faster, a similar delta to the one between the 5080 and the 4080 Super. To me it looks like the 5070 Ti will land around 5% slower than the 4080 non-Super. So if the 9070 XT is around 7900 XTX performance on average, it should definitely be at least a little faster than the 5070 Ti.
It won't be faster than the 4070 Ti Super; Nvidia has already released the whitepapers. The 5070 Ti's CUDA cores are so gimped that its FP16 throughput and pixel and texture fillrates are all LOWER than the 4070 Ti Super's. The 5080 at least has ~15% higher FP16 throughput and pixel/texture fillrates. Add to that worse cache latency and lower bandwidth, plus GDDR7's worse latency (despite the higher bandwidth), and you have a disaster in high-fps 1080p games.


Yeah it's 7900XTX perf plus or minus 5%. With better RT© and FSR4®. At less watts and idk the price.
They can also theoretically strap GDDR7 to da thing to get 5080 comp.
I do not agree; from early testing, GDDR7 has worse latency than GDDR6, only higher bandwidth. If the 9070 XT really is bandwidth-starved it would help in 4K / RT, but the latency would make it worse at lower resolutions, so I now think AMD did well going with GDDR6. They should have slapped on Samsung 24 Gbps instead.
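For what it's worth, the bandwidth side of this is simple arithmetic: peak bandwidth is bus width times per-pin data rate. A quick sketch, treating the bus widths and data rates below (the commonly reported configurations) as assumptions:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps.
# Configurations below are the commonly reported ones, treated as assumptions.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

configs = {
    "256-bit GDDR6 @ 20 Gbps (9070 XT as shipped)": (256, 20.0),
    "256-bit GDDR6 @ 24 Gbps (hypothetical faster bin)": (256, 24.0),
    "256-bit GDDR7 @ 30 Gbps (5080-class)": (256, 30.0),
}
for name, (bus, rate) in configs.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# -> 640, 768, and 960 GB/s respectively
```

So a 24 Gbps GDDR6 swap alone would be a 20% bandwidth bump without touching memory latency.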
 

adroc_thurston

Diamond Member
Jul 2, 2023
I do not agree; from early testing, GDDR7 has worse latency than GDDR6, only higher bandwidth
AMD has 4 whole levels of cache to wrap dat thing around.
NV caching is simpler (only two levels), but there lies the failure point too: an L2 latency regression starts hurting real good since there's nothing above or below it to hide it.
but the latency would make it worse at lower resolutions, so I now think AMD did well going with GDDR6
Whatever.
They should have slapped on Samsung 24 Gbps instead.
Does not exist.
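The depth-of-hierarchy point can be sketched with the textbook average memory access time (AMAT) recursion: each level contributes its hit time plus the miss-rate-weighted cost of the level below. All latencies and miss rates here are invented purely for illustration, not measured GPU figures:

```python
# AMAT = hit_time + miss_rate * AMAT_of_next_level, applied from the last
# cache level up to L1. Latencies (ns) and miss rates are made up for
# illustration only.
def amat(levels, dram_ns):
    # levels: list of (hit_time_ns, miss_rate), ordered L1 -> last cache
    t = dram_ns
    for hit_ns, miss in reversed(levels):
        t = hit_ns + miss * t
    return t

# A deep 4-level hierarchy vs a flat 2-level one, same hypothetical DRAM latency.
deep = [(1, 0.10), (4, 0.30), (12, 0.40), (30, 0.50)]
flat = [(1, 0.10), (12, 0.40)]
print(f"4-level: {amat(deep, 200):.1f} ns")
print(f"2-level: {amat(flat, 200):.1f} ns")
```

With these made-up numbers the four-level hierarchy hides far more of the DRAM latency than the two-level one, which is the gist of the argument: in a flat hierarchy, an L2 or DRAM latency regression shows up almost directly in the average.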
 

gaav87

Senior member
Apr 27, 2024
AMD has 4 whole levels of cache to wrap dat thing around.
NV caching is simpler (only two levels), but there lies the failure point too: an L2 latency regression starts hurting real good since there's nothing above or below it to hide it.

Whatever.

Does not exist.
But it could have existed if AMD had wanted to buy it.

 

Jan Olšan

Senior member
Jan 12, 2017
They can also theoretically strap GDDR7 to da thing to get 5080 comp.

I would reserve "theoretically they can" for cases where we know the memory controller is there. There isn't any clue that it is, though, which makes such speculation just "what if" dreaming. Or is there?
 

del42sa

Member
May 28, 2013
I would reserve "theoretically they can" for cases where we know the memory controller is there. There isn't any clue that it is, though, which makes such speculation just "what if" dreaming. Or is there?
maybe this can explain why it's so bloated :p
 

Kepler_L2

Golden Member
Sep 6, 2020
I would reserve "theoretically they can" for cases where we know the memory controller is there. There isn't any clue that it is, though, which makes such speculation just "what if" dreaming. Or is there?
I know that N4C was going to use GDDR7 and that N48/N44 were only validated for GDDR6. The question is whether the memory controller is physically the same as N4C's, and how long GDDR7 validation would take.
 

Jan Olšan

Senior member
Jan 12, 2017
I know that N4C was going to use GDDR7 and that N48/N44 were only validated for GDDR6. The question is whether the memory controller is physically the same as N4C's, and how long GDDR7 validation would take.
Interesting, thanks.

Do you know if the controller for Navi 4C was supposed to support both GDDR7 and GDDR6? I wonder when the last time was that an AMD GPU had such a capability (I only found this, but was that card even real? It could be a typo-like error somewhere). Nvidia does have it in cards that use its custom memory (GDDR6X, GDDR5X), maybe as risk mitigation. But if Navi 4C couldn't do GDDR6, then Navi 44 clearly has different memory controllers, likely shared with Navi 48, and the likelihood of GDDR7 compatibility goes down a lot, IMHO.
 

Meteor Late

Senior member
Dec 15, 2023
So if it's 345-350 mm², and we know the RTX 5080 is 378 mm², then the 9070 XT is maybe 8-10% slower than the 5080 (seeing as the 5080 is around 10% faster than the 4080 Super, and the 9070 XT will have around that level of performance). That means PPA is similar between AMD and Nvidia on the same node, WITH worse memory.
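That perf-per-area claim is just ratios, so it's easy to sanity-check. A sketch using the die sizes from the post and treating the "~9% slower than a 5080" figure as the post's speculation, not a measurement:

```python
# Perf/area proxy: relative performance divided by die area (mm^2).
# Die sizes as cited in the thread; the 9070 XT performance number
# (~9% below a 5080) is speculative, taken from the post above.
die_mm2  = {"RTX 5080": 378.0, "9070 XT": 348.0}
rel_perf = {"RTX 5080": 1.00,  "9070 XT": 0.91}

ppa = {gpu: rel_perf[gpu] / die_mm2[gpu] for gpu in die_mm2}
ratio = ppa["9070 XT"] / ppa["RTX 5080"]
print(f"9070 XT perf/mm^2 is {ratio:.1%} of the RTX 5080's")
# -> roughly 99%, i.e. near-identical PPA under these assumptions
```

Under those assumptions the two land within ~1% of each other in perf/mm², which is what "similar PPA on the same node" amounts to.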