
Discussion RDNA4 + CDNA3 Architectures Thread


DisEnchantment

Golden Member
With the GFX940 patches in full swing since the first week of March, it looks like MI300 is not far off!
Usually AMD takes around three quarters to get support into LLVM and amdgpu. Lately, since RDNA2, the window in which they push support for new devices has been much reduced to prevent leaks.
But looking at the flurry of code in LLVM, it is a lot of commits. Maybe the US Govt is starting to prepare the SW environment for El Capitan (perhaps to avoid a slow bring-up situation like Frontier's, for example).

See here for the GFX940 specific commits
Or Phoronix

There is a lot more if you know whom to follow in LLVM review chains (before getting merged to github), but I am not going to link AMD employees.

I am starting to think MI300 will launch around the same time as Hopper, probably only a couple of months later!
Although I believe Hopper had the problem of not having a host CPU capable of PCIe 5 in the very near future, so it might have been pushed back a bit until SPR and Genoa arrive later in 2022.
If PVC slips again, I believe MI300 could launch before it :grimacing:

This is nuts, MI100/200/300 cadence is impressive.


Previous thread on CDNA2 and RDNA3 here

 
Why bother if it can't run on consoles at decent speed?
That's what people don't understand. NV doesn't have consoles, so they can do whatever they want.
A lot of people think that if the PS5 had an NV GPU it would have much better RT, but they forget die area isn't rubber.
Still, AMD should already be preparing good quality for the PS6, at least the way XeSS has both versions.
 
AMD could carpet out some 200-WGP tiled monstrosity
and people would still think, before release, that AMD won't compete with Blackwell even with RDNA5,
just like the old line that AMD can't beat the RTX 3070/2080 Ti.

offtopic
Is it true that DLSS on the Switch 2 is limited?


Found this
[attached image]


My assumption is that it's because RDNA3/3.5 and RDNA4 have different WGPs. Right?
 
Gamers are so influenced by halo part marketing that I'm not sure it'll be a success in any case.

Secondly, Nvidia now actually has a feature other than driver FUD that poor victimized gamers throw a hissy fit over if it isn't included (DLSS). At least they are sane enough not to cry when a game doesn't include RT.
 
Gamers are so influenced by halo part marketing that I'm not sure it'll be a success in any case.

Secondly, Nvidia now actually has a feature other than driver FUD that poor victimized gamers throw a hissy fit over if it isn't included (DLSS). At least they are sane enough not to cry when a game doesn't include RT.
Yeah, but you can't win against hundreds of WGPs.
Simple and brutal!
 
I'd bet all my money that it's an FSR base that they added their own specialisations to.
There is literally no reason to redo an entire in-house upscaler. It's a massive workload, and FSR works fine; the image quality needs tweaks, but the core system is solid.

What I'm curious about is the degree of porosity between PSSR and FSR. Will Sony take FSR, improve it with their AI or whatever tweaks they want, and then AMD will pinch bits and bobs that could work for a future iteration? Or will it be a hard branch with a completely different direction for FSR?
 
That's funny. I don't think PT is happening even on PS6.
[attached image]

Depends on the settings? Based on ComputerBase's results you might be able to do something like CP at 1080p30 with PSSR on the PS5 Pro?

It's not going to be like Ada of course.

That's not AMD.
And it targets a microscopic subset of the total console install base.

Seems the PS4 Pro sold about 25% as many units as the PS4 did in the period it was available. About 14 million or so.
 
Depends on the settings? Based on ComputerBase's results you might be able to do something like CP at 1080p30 with PSSR on the PS5 Pro?

It's not going to be like Ada of course.



Seems the PS4 Pro sold about 25% as many units as the PS4 did in the period it was available. About 14 million or so.
The PS4 Pro was a response to 4K TVs and had a GPU more than twice as powerful as the original's.

The PS5 Pro is a much smaller jump once you take into account that half of its compute throughput (33 TFLOPS) comes from VOPD dual-issue, which does very little in games.
It's pretty safe to say that it won't sell nearly as well.
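The 33 TFLOPS figure above counts VOPD dual-issue at full value. A quick sketch of that arithmetic, assuming the commonly reported PS5 Pro shader configuration (60 CUs at roughly 2.17 GHz) rather than confirmed specs:

```python
# Rough sketch of how counting VOPD dual-issue doubles the headline TFLOPS.
# The CU count and clock below are assumptions (rumored PS5 Pro figures),
# not confirmed numbers.
def peak_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2, dual_issue=1):
    # flops_per_lane=2 counts a fused multiply-add as two FLOPs
    return cus * lanes_per_cu * flops_per_lane * dual_issue * clock_ghz / 1000

single = peak_tflops(60, 2.17)                # classic counting: ~16.7 TFLOPS
dual = peak_tflops(60, 2.17, dual_issue=2)    # VOPD counted: ~33.3 TFLOPS
```

Counting dual-issue doubles the marketing number, while the classically counted figure is roughly half, which is why the jump over the base PS5 looks much smaller in actual games.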
 
I'm talking about the N44/N48 parts, not the APU.

1080p/1440p; there's no need for 4K.
Because it's coming to the mainstream level. Look how AW2's RT kills RDNA2/3 cards, and a lot of people buy NV because of DLSS/RT.
Personally I don't care about RT and won't until 2035.

Look how the 4060 kills the 7600. That's why the 4060 sells much more than the 7600.
[attached chart]
They have to use DLSS + RT to get that percentage.

So you are comparing, or using a comparison of, a GPU-heavy game, with $300 cards, with upscaling AND ray tracing on, to make your comparison? C'mon man, I have seen you post better quality than this.

Then you say AW2 and post a video of Cyberpunk 2077?
 
Then you say AW2 and post a video of Cyberpunk 2077?
That's one of the examples. If you want AW2 on high-end cards, there it is.
And it's not even a 4080.
[attached image]

So you are comparing, or using a comparison of, a GPU-heavy game, with $300 cards, with upscaling AND ray tracing on, to make your comparison?
That's how people choose; marketing works on them. That's why the RTX 3050 outsells the RX 6600, despite the 6600 killing the 3050 in performance.
 
That's one of the examples. If you want AW2 on high-end cards, there it is.
And it's not even a 4080.
[attached image]


That's how people choose; marketing works on them. That's why the RTX 3050 outsells the RX 6600, despite the 6600 killing the 3050 in performance.
All scenes are not equal; you could find one where the 4070 Ti's performance tanks more than here while the 7900 XTX falls behind less. Tests should really be run over a longer stretch of the game and the average reported. At ComputerBase they found the 4070 Ti 26% ahead on average, which is far from a hand-picked scene that could particularly favour one card and put the other at a big disadvantage.

 
That's because the 32-CU GPU goes from N6 to N4P while the 64-CU one starts from N5, not counting that there are 4 more CUs; the smaller GPU necessarily has much more headroom to increase frequency.

Edit: 2515 MHz would mean it would barely perform better than the 7600, and likely worse than the 7600 XT; that just doesn't make sense.

The 7600 XT clocks up to 2760 MHz, and from N6 to N4P there's about 22% higher frequency at the same power; that gets us close to 3.4 GHz, and that's assuming the V/F curve is unchanged.
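The back-of-the-envelope behind that ~3.4 GHz figure, treating both inputs (the 7600 XT's ~2760 MHz boost clock on N6 and the quoted ~22% N6-to-N4P frequency uplift at the same power) as the post's own assumptions rather than measured data:

```python
# Scale the 7600 XT's N6 boost clock by the quoted N6 -> N4P uplift.
# Both numbers are assumptions from the discussion, not measurements.
n6_boost_mhz = 2760       # 7600 XT boost clock on N6
n4p_uplift = 1.22         # ~22% frequency gain at iso-power
projected_mhz = n6_boost_mhz * n4p_uplift
print(round(projected_mhz))  # 3367, i.e. close to 3.4 GHz
```

Even this simple scaling lands well above the 2770 MHz figure from the tweet, which supports reading that number as something other than clock speed.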
N48 >3GHz and not 2770
I am certain you already saw it, but All The Watts!! pretty much confirmed that what you thought was frequency in that tweet is something else.
I think it's IC bandwidth, but I could be wrong.
 