Question Speculation: RDNA3 + CDNA2 Architectures Thread

Page 120

uzzi38

Platinum Member
Oct 16, 2019
2,566
5,575
146

Geddagod

Golden Member
Dec 28, 2021
1,147
1,003
106
I agree it's very promising what they did, and they will probably benefit from this more, and sooner, than some people think.
IMO disaggregating the compute die and cache die is a good first step, but the big gain from MCM in GPUs is going to come from having multiple compute dies able to work together as one GPU. Removing the Infinity Cache from the compute die allows the compute die to be larger, yes, but having multiple compute tiles work together allows for much greater scaling than even that.
Idk if RDNA 4 will end up having this, as it seems like an even greater challenge than just disaggregating the MCD and GCD, but I would be surprised if RDNA 5 / Blackwell+ doesn't end up going in this direction.
 

gdansk

Golden Member
Feb 8, 2011
1,991
2,359
136
- Shoulda reworked the stack naming and had N31 launch as the 7800XTX instead. Would nip a lot of perception issues in the bud right there. As it stands, AMD's 9 series part is competing with NV's 8 series part, which again just reinforces the "value brand" perception of AMD.

They were smart to launch N10 as the 5700XT when it rightly competed with NV's 7 series Turing cards.
AMD did not do that, because then people would expect it to be $700 or $800, following the 6800XT's example. Not that I'm against that at all, but I'm sure Lisa is disappointed enough about not being able to increase the MSRP. People would be disappointed regardless of what they call it at $1000.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
- Shoulda reworked the stack naming and had N31 launch as the 7800XTX instead. Would nip a lot of perception issues in the bud right there. As it stands, AMD's 9 series part is competing with NV's 8 series part, which again just reinforces the "value brand" perception of AMD.

They were smart to launch N10 as the 5700XT when it rightly competed with NV's 7 series Turing cards.
AMD did not do that because then people would expect it to be $700 or $800 following the 6800XT's example. Not that I'm against that at all but I'm sure Lisa is disappointed enough about not being able to increase the MSRP. People would be disappointed regardless of what they call it at $1000.
How about seventy eight and a half hundred?
perfect..
that's wonderful...

"un-launch" the 7900XTX
7850XTX and 7850XT at $949 and $849 respectively
 

majord

Senior member
Jul 26, 2015
433
523
136
When AMD prices too close to Nvidia based on their raster performance, everyone screams they need to lower prices because RT and feature set can't compete... 'they'll never gain market share', 'disrupt the market', etc. etc.

AMD comes in undercutting Nvidia significantly:

"Something's wrong"
"it must be even slower than the 4080 in rasterization"
"should be renamed and dropped to $949"

You guys are funny.

As for the chip/architecture itself, the only thing that's "wrong" is the RT performance. Yet everyone's fixated on the clock speeds not being through the roof, not beating the s**t out of the 4090 (even at a mere 355W), and therefore it must have been botched. It's a Fermi, it's an R520...

Hello? Since when is a 50% increase in perf/watt and a 60% increase in performance vs a predecessor "botched"?

It's still a huge uplift over RDNA2 at the end of the day. It's also the first-gen chiplet architecture, which no doubt has presented a host of challenges, and wouldn't come without some compromise.

Comparing to Nvidia's gen-on-gen: they've gone from an inferior Samsung 8nm process to a superior custom '4nm' process, so you can't even draw any parallels there either. It was always going to be a challenge to maintain the status quo with Nvidia this gen because of this fact.

Bit of a reality check, people. Raster perf and perf/watt are looking fine. Not amazing, not matching the random rumors started by morons, sure, but all things in the real world considered... fine.

RT... Yeah, it'd be interesting to discuss the "whys" around this, because regardless of what personal importance you put on it, it's becoming more and more heavily weighted in reviews. I can't tell if AMD consciously gave it a low priority with the nature of the changes made, or if its performance is unusually low for the resources on tap. I'm struggling with this a bit, as I don't fully understand the bottlenecks.
 

moinmoin

Diamond Member
Jun 1, 2017
4,934
7,620
136
AFAIK this issue is present on the entire first "batch" of RDNA3 products.
So we may end up seeing an RDNA3+ gen. At 25 months since RDNA2, RDNA3 took way too long (it was 16 months between 1 and 2), so maybe having a minor gen in between, akin to Zen+, and ensuring RDNA4 is right at the first go isn't the worst idea.
Phoenix does not have the SGPR bug but it has the export conflict bug.
Seems all current RDNA3 chips have this export conflict bug.
Bugs with the scalar general purpose registers can be serious. What's the export conflict bug about?
 

DisEnchantment

Golden Member
Mar 3, 2017
1,590
5,722
136
Bugs with the scalar general purpose registers can be serious.
Not on GPUs it isn't :)
The compiler can always inject additional instructions/wait states/nop etc. to work around such HW bugs.

on CPUs this basically means new stepping.
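As an illustration, a compiler-side workaround for a hardware hazard can be as simple as a post-pass that inserts a NOP/wait state between flagged instruction pairs. This is a toy sketch in Python; the instruction names and the "buggy" pair are invented for illustration, not the actual RDNA3 errata:

```python
# Toy sketch of a compiler hazard workaround. The instruction names and
# the hazardous pair below are invented; real GPU backends do this in
# dedicated hazard-recognizer passes.

# Assumed erratum: these two instructions back-to-back misbehave.
HAZARD_PAIRS = {("s_mov", "s_add")}

def insert_workaround_nops(instrs, nop="s_nop"):
    """Return a new instruction list with a NOP inserted between any
    back-to-back pair listed in HAZARD_PAIRS."""
    out = []
    for ins in instrs:
        if out and (out[-1], ins) in HAZARD_PAIRS:
            out.append(nop)  # one wait state so the earlier write lands
        out.append(ins)
    return out

print(insert_workaround_nops(["s_mov", "s_add", "v_mul"]))
# ['s_mov', 's_nop', 's_add', 'v_mul']
```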
What's the export conflict bug about?
Export conflict seems to be a problem related to reordering the output of shader execution (called export) before passing it on to the next stage of the graphics pipeline. Most of the pipeline is sequential, but some shader operations are parallelized for throughput; the hardware then has issues reordering their output back into sequence, leading to the entire sequence becoming invalid.
Sounds like an OREO problem.
Seems like a serious bug but then I am no graphics expert.
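The reordering described above can be sketched as a small reorder buffer: waves may finish execution in any order, but their exports are held back until every earlier wave has exported. A toy sketch in Python (all names illustrative, not actual RDNA3 behavior):

```python
# Toy sketch of in-order export draining: waves may finish execution in
# any order, but their exports must reach the next pipeline stage in
# launch order, so early finishers wait in a reorder buffer.

def drain_in_order(completion_order):
    """completion_order: wave indices (0..n-1) in the order they finish.
    Returns the order in which exports are released: always 0..n-1."""
    buffered = set()
    next_expected = 0
    released = []
    for wave in completion_order:
        buffered.add(wave)
        # release every consecutively-ready wave, in launch order
        while next_expected in buffered:
            buffered.remove(next_expected)
            released.append(next_expected)
            next_expected += 1
    return released

print(drain_in_order([2, 0, 1, 3]))  # [0, 1, 2, 3]
```

A hardware bug in this reordering step would let exports escape out of launch order, which matches the description of the whole sequence becoming invalid.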
 

Shmee

Memory & Storage, Graphics Cards Mod Elite Member
Super Moderator
Sep 13, 2008
7,382
2,419
146
It will according to what Jarred Walton says AMD told the media at the event. And AMD did say themselves it is designed to scale as high as 3GHz.

It would be a fairly savvy play by AMD, if some of the cards do hit near 3GHz. Price and power draw were the Belle of the Ball. Despite not taking the performance crown in raster, the reception by the media has been largely positive. Partner cards with massive coolers, 3 power connectors, drawing over 400w, and hitting close to 3GHz, with another $100-$200 price boost, would have dampened the enthusiasm. If they can pull off 10% performance over stock, for what a reference 4080 costs? It is going to look good in reviews.
I am a bit confused by your post. You are saying that faster partner cards are coming, but they didn't want to show them off because it would dampen enthusiasm? Why not show both and let buyers decide which to buy? (I expect both to be good alternatives compared to Ada.)

But then you say, pulling 10% performance over stock would look good in reviews. This I would agree with.

Anyway, we should cross our fingers for good OC results. I am looking forward to more info.
 

arcsign

Junior Member
Jul 26, 2009
8
26
91
Huh, I guess that was what they were hinting at over on guru3… somebody mentioned it upthread a bit. Wonder how much faster it could have been running as intended.
 

rommelrommel

Diamond Member
Dec 7, 2002
4,370
3,077
146
So we may end up seeing an RDNA3+ gen. At 25 months since RDNA2, RDNA3 took way too long (it was 16 months between 1 and 2), so maybe having a minor gen in between, akin to Zen+, and ensuring RDNA4 is right at the first go isn't the worst idea.

I think the chances of a respin are inversely proportional to how close RDNA 4 is. I believe these generational teams work mostly independently, so if RDNA 4 is going well and can maybe even be pushed a bit why respin? Maybe it’s a dream but if it’s 18 months off…
 


Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
I think the chances of a respin are inversely proportional to how close RDNA 4 is. I believe these generational teams work mostly independently, so if RDNA 4 is going well and can maybe even be pushed a bit why respin? Maybe it’s a dream but if it’s 18 months off…
Ehh, no one knows what AMD's plan is.

All we know is that on the mobile roadmap there's an RDNA3+, which comes with Strix Point. That just might entail including the 50% larger registers that come with desktop RDNA3, but it's not certain whether the desktop roadmap will get an RDNA3+ which fixes the aforementioned issues too. We'll just have to wait and see.
 

biostud

Lifer
Feb 27, 2003
18,195
4,676
136
When AMD prices too close to Nvidia based on their raster performance, everyone screams they need to lower prices because RT and feature set can't compete... 'they'll never gain market share', 'disrupt the market', etc. etc.

AMD comes in undercutting Nvidia significantly:

"Something's wrong"
"it must be even slower than the 4080 in rasterization"
"should be renamed and dropped to $949"

You guys are funny.

As for the chip/architecture itself, the only thing that's "wrong" is the RT performance. Yet everyone's fixated on the clock speeds not being through the roof, not beating the s**t out of the 4090 (even at a mere 355W), and therefore it must have been botched. It's a Fermi, it's an R520...

Hello? Since when is a 50% increase in perf/watt and a 60% increase in performance vs a predecessor "botched"?

It's still a huge uplift over RDNA2 at the end of the day. It's also the first-gen chiplet architecture, which no doubt has presented a host of challenges, and wouldn't come without some compromise.

Comparing to Nvidia's gen-on-gen: they've gone from an inferior Samsung 8nm process to a superior custom '4nm' process, so you can't even draw any parallels there either. It was always going to be a challenge to maintain the status quo with Nvidia this gen because of this fact.

Bit of a reality check, people. Raster perf and perf/watt are looking fine. Not amazing, not matching the random rumors started by morons, sure, but all things in the real world considered... fine.

RT... Yeah, it'd be interesting to discuss the "whys" around this, because regardless of what personal importance you put on it, it's becoming more and more heavily weighted in reviews. I can't tell if AMD consciously gave it a low priority with the nature of the changes made, or if its performance is unusually low for the resources on tap. I'm struggling with this a bit, as I don't fully understand the bottlenecks.
It is simply that you would expect x900 vs x090, x800 vs x080 etc. So when that does not match, it looks like something is wrong.
 

Joe NYC

Golden Member
Jun 26, 2021
1,899
2,197
106
Seems like we are in a very similar position. What components are you planning for your build?

I have not really started seriously planning. Just in general:

B650E motherboard
hopefully a PCIe Gen5 M.2 drive
8 core Zen 4 V-Cache
Navi 32 based card
Probably a pedestrian speed DDR5, 32 GB
I will need a new PSU, and may just as well get a new case, and leave the old PC intact as a back up.