Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member
Mar 3, 2017
With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
AMD usually takes around three quarters to get support into LLVM and amdgpu. Since RDNA2, the window in which they push support for new devices has been much shorter, to prevent leaks.
But judging by the flurry of code in LLVM, there are a lot of commits. Maybe the US Govt is starting to prepare the software environment for El Capitan early (perhaps to avoid a slow bring-up like Frontier's).

See here for the GFX940-specific commits
Or Phoronix

There is a lot more if you know whom to follow in LLVM review chains (before things get merged to GitHub), but I am not going to link AMD employees.

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although Hopper had the problem of no host CPU capable of PCIe 5.0 in the very near future, so it may have been pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again I believe MI300 could launch before it :grimacing:

This is nuts, the MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 

Mahboi

Senior member
Apr 4, 2024
Simple.

In-game AI chatbots. Nvidia will require game devs to implement them, so if your GPU doesn't have enough TOPS, your in-game NPCs will have room-temperature IQs and you can't enjoy the game. Ask game devs why they can't just have conventionally scripted NPC interactions and they'll keep hush or just say "because reasons", meanwhile collecting that Nvidia moolah on the side.

You can send the check to my mailbox, Nvidia. You're welcome.
But seriously nah, there's an already established plan: slightly tune up your images with ML, scream AIAIAAIAIAIAIAIAIAIAIAIAIAIAIAI for 2 years, and claim that it is Absolutely Peasant™ to play without the Superior DLSS 4 and its AI Master Race™ experience.

Just look at the stupid AI denoiser: they gave it a fancy name, when it's just a denoiser, and acted like it was a whole new world of gaming. Same scam, same pigeons.
 

Mahboi

Senior member
Apr 4, 2024
ROCm/HIP is mostly tied to their CDNA accelerators.

RDNA compatibility is mostly an afterthought at the moment, sadly.
To be entirely fair?
As long as they keep working on making it better, cool.
Just... don't go back to waiting a year for proper consumer flagship support.
 

Rekluse

Member
Sep 16, 2022
What percentage of mobile dGPUs are <= 4070, in your opinion?

My guess is around 75% of units, if not more.

So there is more than enough market potential for Strix Halo to compete for. No point even thinking about a higher tier of the dGPU market before enough of this (<= 4070) segment is won.
The big question is, does Strix Halo give AMD as much margin as an Intel/Nvidia 4070 combo at a competitive price? It'd be a bit disheartening if it doesn't.
 

inquiss

Member
Oct 13, 2010
The big question is, does Strix Halo give AMD as much margin as an Intel/Nvidia 4070 combo at a competitive price? It'd be a bit disheartening if it doesn't.
It's a margin product, but for the OEMs too. It enables new use cases and power levels for the performance, saves on design complexity, and lowers cooling needs. Once you build a chassis for it, you're good for future versions with the new form factor too, should Halo be a success.
 

Rekluse

Member
Sep 16, 2022
It's a margin product, but for the OEMs too. It enables new use cases and power levels for the performance, saves on design complexity, and lowers cooling needs. Once you build a chassis for it, you're good for future versions with the new form factor too, should Halo be a success.
Outside of the handheld PC gaming wave pioneered by Steam, I feel OEMs have been reticent when it comes to AMD and "new form factors". Intel seems to be the one pushing dual-screen laptops, NUCs, etc., rather than the OEMs themselves.
 

marees

Member
Apr 28, 2024
Outside of the handheld PC gaming wave pioneered by Steam, I feel OEMs have been reticent when it comes to AMD and "new form factors". Intel seems to be the one pushing dual-screen laptops, NUCs, etc., rather than the OEMs themselves.
Maybe a Steam tablet / all-in-one / 2-in-1 with a dock??

Ideally it should have been Surface, but for some reason the Surface team doesn't seem to be a fan of AMD.
 

tsamolotoff

Junior Member
May 19, 2019
From what I can gather DLSS/framegen and denoising are always going to be a problem while RT remains a focus.
So maybe RTRT is sort of premature and needs a few orders of magnitude more memory bandwidth to actually work properly (the current 'raytracing' is very sparse as it is)? I'm (and probably the majority of people are) completely fine with baked lights, shadows, etc., which the RTX(tm) titles still use anyway. And since you no longer need an art director and lighting specialists thanks to the magic of RTX(tm), it also looks awful with or without pseudo-RT.

The PROBLEM is that Nvidia has no work to do in this regard; their rep is solid and their products are seen as the "default GPU brand you just buy".
Yet they consistently work it at the most expert level. Just imagine: the Russian NV tech PR honcho (someone called Oleg Shkoda) was astroturfing for his beloved corp day and night on b3d without ever indicating that he was an NV employee. They also had very specific guidelines for the indentured press and tech bloggers on how to review their products, which most didn't mind (a shared former-CIS thing, I guess).
 

soresu

Platinum Member
Dec 19, 2014
So maybe RTRT is sort of premature and needs a few orders of magnitude more mem bw to actually work properly
Memory bandwidth is just part of the problem, the entire paradigm is fundamentally more compute intensive than raster graphics.

That's the thing: with raster it takes more code, but you can get very good results on lower-end compute hardware.

With RT you can use less code, but you need higher-end compute hardware to get a good result, and if you want it in real time then denoising is basically unavoidable for shadowed areas and indirect light (global illumination/GI).
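That denoising point can be seen with a toy sketch (purely illustrative; real-time denoisers are edge-aware and often ML-based, not a box blur): a low-sample ray-traced lighting estimate is correct on average but noisy per pixel, and even the crudest spatial filter trades that noise for blur.

```python
import random

def box_denoise(img, radius=1):
    """Average each pixel with its neighbours: the simplest possible
    spatial denoiser, standing in for the real edge-aware/ML ones."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n  # mean of the valid neighbourhood
    return out

# A flat surface whose true brightness is 0.5, "rendered" with only a few
# rays per pixel: each pixel is individually noisy, but unbiased on average.
random.seed(0)
truth = 0.5
noisy = [[truth + random.uniform(-0.3, 0.3) for _ in range(16)] for _ in range(16)]
clean = box_denoise(noisy, radius=2)
```

Averaging a 5x5 neighbourhood cuts the per-pixel variance by up to ~25x without tracing any extra rays, which is exactly why every real-time RT pipeline leans on a denoiser instead of brute-forcing more samples; the cost is that edges and fine detail get smeared, which is what the fancier filters exist to avoid.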
 

ToTTenTranz

Member
Feb 4, 2021
So maybe RTRT is sort of premature and needs a few orders of magnitude more memory bandwidth to actually work properly (the current 'raytracing' is very sparse as it is)? I'm (and probably the majority of people are) completely fine with baked lights, shadows, etc., which the RTX(tm) titles still use anyway. And since you no longer need an art director and lighting specialists thanks to the magic of RTX(tm), it also looks awful with or without pseudo-RT.
From what I've read, RTRT is premature because it needs a lot more of everything, and on top of that it needs more versatility and proper tools/features for compartmentalization and LODing. Current solutions brute-force too much, which brings $2500 GPUs to their knees.

But for Nvidia that's okay, because that $2500 GPU is brought to its knees a bit less than the competition and Nvidia's own previous $2500 GPU. So not only do they get to cast the competition in a worse light, they also get planned obsolescence on their older graphics cards.
 

Saylick

Diamond Member
Sep 10, 2012
The thing with the hybrid renderers we have now is that they need both more code and more hardware.
What a pickle.
Convenient for Nvidia, who always want to sell you a bigger chip while marketing their software solutions as the best.

Oh wait, but they were the ones who pushed for hybrid rendering in the first place. *pondering*
 

beginner99

Diamond Member
Jun 2, 2009
Ideally it should have been Surface, but for some reason the Surface team doesn't seem to be a fan of AMD.
Maybe they get money from Intel. MS Surface looks "cool", but the hardware is always outdated and the price is at MacBook levels. At that point I can just buy a Mac and get double the efficiency versus a three-generation-old Intel CPU.