Question Speculation: RDNA3 + CDNA2 Architectures Thread


uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146

Kepler_L2

Senior member
Sep 6, 2020
537
2,198
136
Smaller dies tend to clock higher in general. N22 clocks 12.5% higher than N21 if you look at the 6750XT vs the 6950XT, and 23.5% higher if you compare the 6750XT vs the 6800. If you want to look at the original releases, the 6700XT has a 14.6% clock speed advantage over the 6900XT.

Then there is the fact that N31 seems to have been designed around a higher clock target that it cannot hit at sane power levels in games. If N32 goes some way towards fixing whatever causes that issue, it will have a further improved V/F curve.

In addition, the 7900XTX has an 11% higher boost clock than the 6900XT (using these as neither is a refresh part).

Also, checking the specs at AnandTech, that W7800 config only has 128 ROPs, so the delta between that N31 config and the supposed N32 config is even smaller than I first thought.

So really, given that smaller dies tend to clock better anyway, the supposed N31 clock vs power flaw, and the N21 to N31 clock speed progression, a 15% clock speed advantage for N32 vs N31 does not seem at all far-fetched and is well within the realm of possibility. Just for reference, an 11% increase in clock speed over the 6700XT would be 2.87GHz, which is about the same as a 15% hike from 2.5GHz, so there is that as well.

Given that, it does not seem at all far-fetched to think N32 can clock a bit higher than N31, and it is possible it could clock quite a lot higher. Obviously actual execution can turn out entirely different, but ya know, speculation thread, so that should always be a given.
W7800 has 160 ROPs.
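As an aside, the clock-speed deltas quoted above do line up with AMD's reference boost clocks. A minimal sanity-check sketch, with the spec boost clocks recalled from memory (treat them as assumptions):

```python
# Reference boost clocks in MHz, recalled from AMD spec sheets -- assumptions, not quoted data.
boost = {"6750XT": 2600, "6950XT": 2310, "6800": 2105,
         "6700XT": 2581, "6900XT": 2250, "7900XTX": 2500}

def delta_pct(a, b):
    """Percentage clock advantage of card a over card b."""
    return round((boost[a] / boost[b] - 1) * 100, 1)

print(delta_pct("6750XT", "6950XT"))   # ~12.6 -> the quoted ~12.5%
print(delta_pct("6750XT", "6800"))     # ~23.5
print(delta_pct("6700XT", "6900XT"))   # ~14.7 -> the quoted 14.6%
print(delta_pct("7900XTX", "6900XT"))  # ~11.1 -> the quoted 11%
```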
 
  • Like
Reactions: Tlh97 and Kaluan

ryanjagtap

Member
Sep 25, 2021
132
153
96
There was no cut-down N22 in that test, so we compared it to 6800S and both of them consumed ~80W.

I seriously don't know where you got this nonsense about a 25% improvement from architecture alone; there is clearly no such thing.

1080p | Cyberpunk 2077 | Doom Eternal | F1 22 | Far Cry 6 | Ghostwire Tokyo | Guardians | Spider-Man | Average
RX 6800S | 113% | 118% | 103% | 105% | 101% | 103% | 106% | 107%
RX 7600S | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100%

RX 6800S has 14% more CUs (shaders, TMUs), but RX 7600S clocks higher.
What I found:
F1 22: 2350/2407MHz avg/max for the 7600S at 90W. That should mean ~2300-2325MHz at the lowest.
F1 21: ~2400MHz is the max frequency at 105W, falling by 350MHz to 2050MHz at 80W; at 90W it looks to be ~2150-2175MHz for the 6800S. ComputerBase
So ~2300-2325MHz vs ~2150-2175MHz at a comparable 90W, that's a 7% difference.
My conclusion so far is that if you compared the same configuration of N33 vs N23 at comparable clocks, performance would be just a bit better.

N33 looks like a FLOP. Why they even designed it is beyond me; if they had at least used a 5nm process, we would see some improvement in performance and better perf/W at a lower TGP.
Yes, you are right. The 25% improvement estimate was too high. I read a few articles and see that the difference between the 6700S and 6800S is at most 10-15%. So the new N33-based 7600S is not at all impressive.
Notebookcheck Comparison
Ultrabookreview Comparison
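For reference, here is the quoted comparison re-run as a quick sketch; every input below is a figure taken from the quote, nothing measured independently:

```python
# Per-game results of the RX 6800S relative to the RX 7600S at ~80W (from the quoted table).
scores_6800s = [113, 118, 103, 105, 101, 103, 106]
print(round(sum(scores_6800s) / len(scores_6800s)))        # 107 -> the ~7% average lead

# Clock comparison at a comparable ~90W, using the midpoints of the quoted ranges.
clk_7600s = (2300 + 2325) / 2     # estimated 7600S clock in F1 22
clk_6800s = (2150 + 2175) / 2     # estimated 6800S clock in F1 21 (ComputerBase data)
print(round((clk_7600s / clk_6800s - 1) * 100, 1))         # ~6.9 -> the quoted ~7% clock gap
```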
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
W7800 has 160 ROPs.

Anand has the spec wrong then, which wouldn't be the first time. 128 doesn't make sense with 5 SEs anyway, unless you have 3 lots of 32 and 2 lots of 16, which is a really wonky config.

W7800 has to be N31-based.

I didn't say it wasn't.

I am saying a 7800XT based on that same config and a 7800XT based on full N32 with 15% higher boost clocks (~2.9GHz) would have very similar performance.
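For what it's worth, the raw CU x clock throughput of those two hypothetical 7800XT configs does land very close together; a rough sketch, with the clocks being the assumed gaming clocks from this discussion rather than announced specs:

```python
# Relative shader throughput (CU count x clock in GHz), ignoring ROP/front-end differences.
cut_n31 = 70 * 2.5      # 70 CU N31-based config at ~2.5 GHz -> 175.0
full_n32 = 60 * 2.875   # full 60 CU N32 at ~15% higher clocks -> 172.5
print(round(full_n32 / cut_n31, 3))   # ~0.986, i.e. within ~1.5% of each other
```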
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
There was no cut-down N22 in that test, so we compared it to 6800S and both of them consumed ~80W.

I seriously don't know where you got this nonsense about a 25% improvement from architecture alone; there is clearly no such thing.

1080p | Cyberpunk 2077 | Doom Eternal | F1 22 | Far Cry 6 | Ghostwire Tokyo | Guardians | Spider-Man | Average
RX 6800S | 113% | 118% | 103% | 105% | 101% | 103% | 106% | 107%
RX 7600S | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100%

RX 6800S has 14% more CUs (shaders, TMUs), but RX 7600S clocks higher.
What I found:
F1 22: 2350/2407MHz avg/max for the 7600S at 90W. That should mean ~2300-2325MHz at the lowest.
F1 21: ~2400MHz is the max frequency at 105W, falling by 350MHz to 2050MHz at 80W; at 90W it looks to be ~2150-2175MHz for the 6800S. ComputerBase
So ~2300-2325MHz vs ~2150-2175MHz at a comparable 90W, that's a 7% difference.
My conclusion so far is that if you compared the same configuration of N33 vs N23 at comparable clocks, performance would be just a bit better.

N33 looks like a FLOP. Why they even designed it is beyond me; if they had at least used a 5nm process, we would see some improvement in performance and better perf/W at a lower TGP.

Not a flop at all. Lower power consumption (good for laptops), drop-in replaceable (allows AIBs to re-use current designs) and slightly better performance, in a package that costs AMD less to make, so they can increase margin and lower the price for AIBs at the same time.

Overall that sounds pretty good. Not as good as what was rumoured when people thought the dual-issue design was properly 2x the shaders, but good nonetheless.

As for the desktop version, give it fast GDDR6 and it should hit close to 6700XT-tier performance, provided it clocks high enough.

Give AIBs the option to pair it with 16GB of RAM as well and it could be a very, very solid 1080p card and an okay 1440p card.
 
  • Like
Reactions: Tlh97

PJVol

Senior member
May 25, 2020
707
632
136
Smaller dies tend to clock higher in general. N22 clocks 12.5% higher than N21 if you look at the 6750XT vs the 6950XT, and 23.5% higher if you compare the 6750XT vs the 6800. If you want to look at the original releases, the 6700XT has a 14.6% clock speed advantage over the 6900XT.

Then there is the fact that N31 seems to have been designed around a higher clock target that it cannot hit at sane power levels in games. If N32 goes some way towards fixing whatever causes that issue, it will have a further improved V/F curve.

In addition, the 7900XTX has an 11% higher boost clock than the 6900XT (using these as neither is a refresh part).

Also, checking the specs at AnandTech, that W7800 config only has 128 ROPs, so the delta between that N31 config and the supposed N32 config is even smaller than I first thought.

So really, given that smaller dies tend to clock better anyway, the supposed N31 clock vs power flaw, and the N21 to N31 clock speed progression, a 15% clock speed advantage for N32 vs N31 does not seem at all far-fetched and is well within the realm of possibility. Just for reference, an 11% increase in clock speed over the 6700XT would be 2.87GHz, which is about the same as a 15% hike from 2.5GHz, so there is that as well.

Given that, it does not seem at all far-fetched to think N32 can clock a bit higher than N31, and it is possible it could clock quite a lot higher. Obviously actual execution can turn out entirely different, but ya know, speculation thread, so that should always be a given.
Wow, you're really not taking the easy route with your speculation :)

What if, for the sake of clarity, we:
  • agree that for now, the very existence of the RDNA3 bugfix is nothing but wishful thinking.
  • simplify the math and put aside RDNA2 for a moment
  • take the 71.4% of the 7900XT (60 CU / 84 CU) as the N32 performance baseline
Then, if we agree to use the correlated N22 => N21 scaling, taking the actual clocks under gaming workloads (not the clocks from specs), the average N22 to N21 clock delta is around +8%, based on data from several "6000 refresh" reviews.

So we've got 71.4% + 8% = 77% of 7900XT performance, which depending on the review puts it somewhere between the 6800XT and 6900XT, closer to the former actually. And that's it )
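Spelling that arithmetic out (the 60/84 CU ratio and the +8% gaming-clock delta are the post's own assumptions):

```python
n32_baseline = 60 / 84     # full N32 CU count as a share of the 7900XT's 84 CU -> ~0.714
clock_delta = 1.08         # average N22 vs N21 gaming-clock delta from "6000 refresh" reviews
print(round(n32_baseline * clock_delta, 3))   # ~0.771 -> ~77% of 7900XT performance
```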
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
60 CUs vs 70 CUs is not that much different, and if N32 clocks better, performance could be quite close. We are talking a 15% delta, so around 2.9GHz for the new design vs 2.5GHz for the N31 design would give you equal compute, texture fillrate and RT performance.

4 MCDs means both would have the same cache and memory bandwidth to go with that compute.

3 SEs vs 5 SEs means the N31 version would have more ROPs, and even with the clockspeed delta it would be ahead in this department, but that would only show in cases where you were pixel-fillrate limited.

So yea. Both would be pretty close in performance with just a 15% clockspeed advantage for N32 vs that spec of N31.

For raster perhaps, but at higher consumption; for RT performance you cannot replace 10 RT cores with only 15% higher clocks.

Also, for this to work you will need a full N32 die that can hit very high clocks at high consumption, instead of a cut-down N31 die with clocks that all dies can reach.

We will find out in a few months.
 
  • Like
Reactions: Rigg

Kaluan

Senior member
Jan 4, 2022
504
1,074
106
From the recent, potentially true MLID AMD APU info dump (Hawk Point potentially being a Phoenix+, i.e. Phoenix with RDNA3.5/RDNA3+ in the same 6 WGP config, in early 2024, along with virtually every other APU that year, including maybe the mainstream Zen5 IOD iGPU as well)... I see a tendency of AMD trying to get away from RDNA3 as fast as possible. Which feeds into the narrative that they indeed somehow borked the initial RDNA3 design.

Will we see discrete RDNA3+ GPUs in 2024 as well?
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Overall that sounds pretty good. Not as good as what was rumoured when people thought the dual-issue design was properly 2x the shaders, but good nonetheless.

As for the desktop version, give it fast GDDR6 and it should hit close to 6700XT-tier performance, provided it clocks high enough.
The 6800S (32 CU N23) at 80W tested ~7% faster than the 7600S (28 CU N33) at the same power envelope, but somehow give them both desktop-class memory and power, and Navi 33 would perform like a 6700XT while the former performs... like a 6600/50XT?
 

Kepler_L2

Senior member
Sep 6, 2020
537
2,198
136
From the recent, potentially true MLID AMD APU info dump (Hawk Point potentially being a Phoenix+, i.e. Phoenix with RDNA3.5/RDNA3+ in the same 6 WGP config, in early 2024, along with virtually every other APU that year, including maybe the mainstream Zen5 IOD iGPU as well)... I see a tendency of AMD trying to get away from RDNA3 as fast as possible. Which feeds into the narrative that they indeed somehow borked the initial RDNA3 design.

Will we see discrete RDNA3+ GPUs in 2024 as well?
Nope, gfx11.5 is just for APUs.
 

coercitiv

Diamond Member
Jan 24, 2014
6,677
14,272
136
The 6800S (32 CU N23) at 80W tested ~7% faster than the 7600S (28 CU N33) at the same power envelope, but somehow give them both desktop-class memory and power, and Navi 33 would perform like a 6700XT while the former performs... like a 6600/50XT?
I guess the rationale was that a 14% CU advantage resulted in a 7% performance advantage for N23 at ISO power. If the desktop N33 brings CU count to parity and scales clocks better by something like ~10%, then it could lead the 6600XT by 15%+ at 1080p. This happens to be the rough performance delta between the 6600XT and 6700XT at 1080p.
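One way to put numbers on that rationale; the ~5% per-clock edge at CU parity and the ~10% clock gain are assumptions read into the post, not measurements:

```python
# Observed in the laptop test: a 14% CU advantage bought the 6800S only ~7% at ISO power.
cu_ratio = 32 / 28                      # ~1.14
observed_lead = 1.07
cu_scaling_eff = (observed_lead - 1) / (cu_ratio - 1)
print(round(cu_scaling_eff, 2))         # ~0.49, roughly half the CU delta shows up as performance

# Hypothetical desktop N33 (32 CU, parity with the 6600XT): assume a small per-clock
# edge (~5%, assumption) and ~10% higher clocks than the 6600XT.
per_clock_edge = 1.05
clock_gain = 1.10
print(round(per_clock_edge * clock_gain - 1, 3))   # ~0.155 -> the "15%+" lead at 1080p
```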

That being said, I would not consider this 6700XT tier; resolution scaling will be problematic. The performance gap between the 6600XT and 6700XT grows to 23% at 1440p.
 
  • Like
Reactions: Tlh97

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
Wow, you're really not taking the easy route with your speculation :)

What if, for the sake of clarity, we:
  • agree that for now, the very existence of the RDNA3 bugfix is nothing but wishful thinking.
  • simplify the math and put aside RDNA2 for a moment
  • take the 71.4% of the 7900XT (60 CU / 84 CU) as the N32 performance baseline
Then, if we agree to use the correlated N22 => N21 scaling, taking the actual clocks under gaming workloads (not the clocks from specs), the average N22 to N21 clock delta is around +8%, based on data from several "6000 refresh" reviews.

So we've got 71.4% + 8% = 77% of 7900XT performance, which depending on the review puts it somewhere between the 6800XT and 6900XT, closer to the former actually. And that's it )

Not sure this is easier given the number of baked-in assumptions.

It seems the 7900XT and XTX clock about the same in games on average, and scaling appears to match relative compute performance fairly closely.

As such, 71.4% with a 15% clock bump would be around 82% of the 7900XT, and 70/84 is 83%.

So a full N32 design that clocks around 2.9GHz in games would roughly match a 70 CU N31 design that clocks around 2.5GHz in games, and both would be in 6950XT ballpark territory.

The 6800S (32 CU N23) at 80W tested ~7% faster than the 7600S (28 CU N33) at the same power envelope, but somehow give them both desktop-class memory and power, and Navi 33 would perform like a 6700XT while the former performs... like a 6600/50XT?

An OC'd 6650XT is about 10% ahead of a stock 6650XT. Allow that to represent the likely higher core clock and faster VRAM of a 7600XT, then factor in the approx 9% IPC increase, and you get a 7600XT that is about 20% faster than a stock 6650XT; at 1440p that would be around 6700XT performance on average.
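A quick check of both estimates above; the 15% clock bump, the 10% OC proxy and the ~9% IPC figure are all assumptions carried over from the posts:

```python
# Full N32 vs a hypothetical 70 CU N31 cut, both as a share of the 7900XT (84 CU).
n32_share = (60 / 84) * 1.15    # 60 CU with a 15% clock bump -> ~0.821
n31_70cu_share = 70 / 84        # ~0.833
print(round(n32_share, 3), round(n31_70cu_share, 3))

# 7600XT estimate: ~10% from higher clocks / faster VRAM (OC'd 6650XT proxy)
# multiplied by the ~9% per-clock gain taken from the 6900XT -> 7900XT comparison.
print(round(1.10 * 1.09, 2))    # ~1.2 -> roughly 20% over a stock 6650XT
```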
 
  • Like
Reactions: Tlh97

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
Not a flop at all. Lower power consumption (good for laptops), drop-in replaceable (allows AIBs to re-use current designs) and slightly better performance, in a package that costs AMD less to make, so they can increase margin and lower the price for AIBs at the same time.

Overall that sounds pretty good. Not as good as what was rumoured when people thought the dual-issue design was properly 2x the shaders, but good nonetheless.

As for the desktop version, give it fast GDDR6 and it should hit close to 6700XT-tier performance, provided it clocks high enough.

Give AIBs the option to pair it with 16GB of RAM as well and it could be a very, very solid 1080p card and an okay 1440p card.
A bit higher performance, a bit better power consumption, the same VRAM, and ~$10-15 saved on producing each chip compared to N23. This sounds pretty good to you? Maybe if there were no competition.

What advantage does N33 have over the laptop RTX 4060? Nothing except lower production cost. AMD will need to lower the price as much as possible for AIBs to sell anything.
 
Last edited:

Kaluan

Senior member
Jan 4, 2022
504
1,074
106
Nope, gfx11.5 is just for APUs.

Well, that's unfortunate. I guess the alleged super-APU "Sarlak" will be RDNA3+'s fastest incarnation then.
Will defo look into it if I have the cash on hand; it will probably make for a helluva mini/micro desktop PC/gaming console!
A bit higher performance, a bit better power consumption, the same VRAM, and ~$10-15 saved on producing each chip compared to N23. This sounds pretty good to you? Maybe if there were no competition.

What advantage does N33 have over the laptop RTX 4060? Nothing except lower production cost. AMD will need to lower the price as much as possible for AIBs to sell anything.

There, that's the pretty good part. How exactly is that a bad thing and not a win for us?
If anything, have people not learned anything from the success of Polaris?
 
  • Like
Reactions: Tlh97

PJVol

Senior member
May 25, 2020
707
632
136
Not sure this is easier given the number of baked-in assumptions.
It's funny that the only assumption in my post was in fact one made by you (and which I agreed with): that the N32 clock margin resulting from the overall Cac reduction is, at the very least, similar to N22's.
The rest is math.
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
An OC'd 6650XT is about 10% ahead of a stock 6650XT. Allow that to represent the likely higher core clock and faster VRAM of a 7600XT, then factor in the approx 9% IPC increase, and you get a 7600XT that is about 20% faster than a stock 6650XT; at 1440p that would be around 6700XT performance on average.

- The 7600S that was tested was clocking at ~2.4GHz. We don't know what the 6800S clocked at in the comparison, but from other tests on that laptop it tends to settle at around ~2.1GHz in an 80W envelope, so the 7600S most likely enjoyed a sizable clock speed advantage... and still lost out.
- The 9% IPC increase was based on comparing a 6900XT vs 7900XT at 4K, but there are other factors at play there, which I have already explained before:

To be frank, I don't think we can conclude that dual-issue brings a performance improvement by itself. Yes, the ComputerBase test comparing the 7900XT vs 6900XT shows some improvement (9% average at 4K), but those are not equal GPUs even if you normalize CU/clocks.
- The 7900XT has 800GB/s of VRAM bandwidth vs 512GB/s for the 6900XT; this is counteracted somewhat by the larger Infinity Cache the latter has, but at 4K the 7900XT should still have significantly more usable bandwidth.
- The 7900XT has 192 ROPs vs 128 ROPs for the 6900XT, so it has a 50% higher pixel rate even at the same clocks.

How much of the observed improvement is due to 2x FP32 as opposed to the 7900XT just being better endowed in bandwidth and GPU front end?

A 7600XT wouldn't have 50% more bandwidth or 50% more ROPs than a 6650XT...
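The spec deltas behind that point, using the reference figures quoted above:

```python
# 7900XT vs 6900XT reference specs quoted above.
bw_7900xt, bw_6900xt = 800, 512     # GB/s of raw GDDR6 bandwidth
rops_7900xt, rops_6900xt = 192, 128
print(round(bw_7900xt / bw_6900xt - 1, 2))      # 0.56 -> ~56% more raw VRAM bandwidth
print(round(rops_7900xt / rops_6900xt - 1, 2))  # 0.5  -> 50% more ROPs / pixel rate at ISO clock
```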
 
  • Like
Reactions: TESKATLIPOKA

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
There, that's the pretty good part. How exactly is that a bad thing and not a win for us?
If anything, have people not learned anything from the success of Polaris?
Let's say a 7600M XT laptop will be sold for $50 less than a comparable one with an RTX 4060. Is it worth it?
At $1149 vs $1199 it's only 4.2% cheaper.
At $1449 vs $1499 it's only 3.34% cheaper.
Would you buy it over Nvidia? I personally wouldn't.
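The discount math, spelled out with the hypothetical prices from above:

```python
for amd_price, nv_price in [(1149, 1199), (1449, 1499)]:
    saving = (nv_price - amd_price) / nv_price * 100
    print(round(saving, 2))   # 4.17% and 3.34% cheaper -- barely noticeable on the full laptop price
```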

At least the 7600M(S) against the laptop RTX 4050 looks pretty decent, but the question is if it is worth it compared to stronger GPUs like the RTX 4060 or RX 7600M XT.

Success of Polaris? Are you sure? AMD had to lower prices so much that it barely made anything per card. Since when is that considered a success? Not to mention that in mobile they had almost zero presence because they were power-hungry compared to the competition.

edit:
Not sure if the 7600XT will do that much better on desktop.
Here, a difference of $50 would be more significant, true.
On the other hand, the 4060 (full AD107) will be more efficient, and I think it will have better OC potential.
They should finally release the cards.
 
Last edited:
  • Like
Reactions: Tlh97 and Kaluan

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
- The 7600S that was tested was clocking at ~2.4GHz. We don't know what the 6800S clocked at in the comparison, but from other tests on that laptop it tends to settle at around ~2.1GHz in an 80W envelope, so the 7600S most likely enjoyed a sizable clock speed advantage... and still lost out.
- The 9% IPC increase was based on comparing a 6900XT vs 7900XT at 4K, but there are other factors at play there, which I have already explained before:



A 7600XT wouldn't have 50% more bandwidth or 50% more ROPs than a 6650XT...

Cross-comparing laptop results is always a problem. The TUF they had was the FA617NS, judging by the fact it had the 7735HS CPU. That also means it was a Zen 3+ CPU, and given there are two configs with just single-channel RAM and they do not specify what RAM config their model had, we don't know if it was running dual channel or single channel.

Given the number of potential variables between this system and the G14 that had the 6800S in it, drawing any conclusions from such data comes with rather wide error bars, and those only grow when trying to extrapolate results from the laptop arena to the desktop arena. We also frequently see that two different laptops with the same GPU, CPU and memory config can offer very different gaming performance just based on the cooling solution, so even when the components are the same it is still not apples to apples unless the chassis is a match as well.

It might be flawed data, but it is the best independent test we have. If we went with AMD's claimed IPC uplift number, the target would be closer to the 6750XT / 6800, which from what we know so far seems a bit far-fetched. It is always going to be a ballpark figure anyway due to the nature of guessing based on specs. If it ends up closer to an OC'd 6650XT I would not be surprised. If it ends up just about matching the 6700XT at 1080p and maybe 1440p I also would not be surprised. Slower than the former or quicker than the latter would be surprising though.
 
  • Like
Reactions: Tlh97 and Kaluan

Mopetar

Diamond Member
Jan 31, 2011
8,113
6,768
136
AMD might not be in a rush to release Navi 33 parts to the desktop market, particularly if there's still a lot of Navi 23 stock left in the channel.

Navi 33 should be less expensive for them considering that the die area is smaller and that TSMC was alleged to be offering a discount for moving wafers from 7nm to 6nm.

It's hard to say what AMD will use for a launch price, as the 6600/XT and other Navi 23 parts were launched during the mining boom and their prices were widely regarded as too high. At $250 they would probably do quite well, just because there's nothing new at that price point.
 
  • Like
Reactions: Tlh97 and Joe NYC

jpiniero

Lifer
Oct 1, 2010
15,223
5,768
136
It's hard to say what AMD will use for a launch price, as the 6600/XT and other Navi 23 parts were launched during the mining boom and their prices were widely regarded as too high. At $250 they would probably do quite well, just because there's nothing new at that price point.

But that was before the price hikes. Maybe TSMC will cut N7/N6 prices back if demand continues to wane, but so far they haven't admitted to doing so. So unless that changes, AMD is likely paying more for N33 than it did for N23 once the hikes are accounted for.

I don't know what the magic number is, but $200/$250 definitely ain't it.
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
AMD might not be in a rush to release Navi 33 parts to the desktop market, particularly if there's still a lot of Navi 23 stock left in the channel.

Navi 33 should be less expensive for them considering that the die area is smaller and that TSMC was alleged to be offering a discount for moving wafers from 7nm to 6nm.

It's hard to say what AMD will use for a launch price, as the 6600/XT and other Navi 23 parts were launched during the mining boom and their prices were widely regarded as too high. At $250 they would probably do quite well, just because there's nothing new at that price point.

I think AMD might try to stick to the current 6600 and 6650XT pricing, given N33 can drop into the same PCBs as those parts. Just do a box update and you have a 7600 at the exact price point of the 6600 but offering better performance, and the same for the 7600XT replacing the 6650XT with more performance. AMD could also offer a 16GB 7600XT variant if they so wished and charge a $50 or so premium for it. That would be the one to go for if you wanted to hold onto it for a few years, whereas the 8GB version would be great if you are using it as a stopgap card until the next-gen parts are released.

But that was before the price hikes. Maybe TSMC will cut N7/N6 prices back if demand continues to wane, but so far they haven't admitted to doing so. So unless that changes, AMD is likely paying more for N33 than it did for N23 once the hikes are accounted for.

I don't know what the magic number is, but $200/$250 definitely ain't it.

The price hikes will impact both N23 and N33 going forward, and if the price of a 6600 is $200 because that is what that level of performance from AMD can command, then selling a 7600 at that price point using the cheaper-to-make N33 nets AMD more margin, so that is a win for AMD.

8GB cards have a price ceiling now, so unless AMD decides to make all N33 designs 16GB, sub-$300 is where it needs to land.
 
  • Like
Reactions: Tlh97