Question Speculation: RDNA3 + CDNA2 Architectures Thread



TESKATLIPOKA

Platinum Member
May 1, 2020
As I posted earlier in the thread, I think this:

RX 7500(N33) 6GB
RX 7600(N33) 8GB
RX 7700(N32) 12GB
RX 7800(N32) 16GB
RX 7900(N31) 20GB
RX 7950(N31) 24GB

Maybe a 3D cache version of both top N31 and N32.
RX 7500 (N33) shouldn't have only 6GB VRAM, because that would mean a 96-bit bus instead of the 128-bit bus the full N33 should have.
6GB VRAM is too little, and you would sacrifice 25% of total bandwidth by having only a 96-bit bus; N33 doesn't have a lot of BW to begin with.
In my opinion, it will have 8GB VRAM and a 128-bit bus, and they will use slower memory instead.
The rest of the stack looks good.
 

biostud

Lifer
Feb 27, 2003
RX 7500 (N33) shouldn't have only 6GB VRAM, because that would mean a 96-bit bus instead of the 128-bit bus the full N33 should have.
6GB VRAM is too little, and you would sacrifice 25% of total bandwidth by having only a 96-bit bus; N33 doesn't have a lot of BW to begin with.
In my opinion, it will have 8GB VRAM and a 128-bit bus, and they will use slower memory instead.
The rest of the stack looks good.
With cache shouldn't it be enough?
 

Timorous

Golden Member
Oct 27, 2008
RX 7500 (N33) shouldn't have only 6GB VRAM, because that would mean a 96-bit bus instead of the 128-bit bus the full N33 should have.
6GB VRAM is too little, and you would sacrifice 25% of total bandwidth by having only a 96-bit bus; N33 doesn't have a lot of BW to begin with.
In my opinion, it will have 8GB VRAM and a 128-bit bus, and they will use slower memory instead.
The rest of the stack looks good.

OTOH, a 96-bit variant makes use of any dies that have a defective 32-bit memory PHY. For the 7500 class I think 6GB and 96-bit is enough, since it will still have 32MB of cache, so for entry-level 1080p gaming it should be fine. Pushed to 1440p, it won't hold up as well as the full 7600-tier parts will, due to the bandwidth loss.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
N23 didn't have a 96-bit version. Why do you think N33 will have it?

N33 will be significantly faster than N23, yet both of them have only 32MB IC.

Cut-down N23 (RX 6600) has 128-bit GDDR6 with 14Gbps memory -> 224GB/s.
Full N23 (RX 6600 XT-6650 XT) has 128-bit GDDR6 with 16-17.5Gbps memory -> 256-280GB/s.

N33 should have 128-bit GDDR6 with 20-24Gbps memory -> 320-384GB/s.
A cut-down N33: 96-bit GDDR6 with 20-24Gbps memory -> 240-288GB/s.

A cut-down N33 would have bandwidth comparable to full N23, while being comparable to or faster than the 6700 XT, which has 96MB IC and 384GB/s.
Even the full N33 doesn't have a lot of BW, even with 24Gbps memory.
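(For anyone wanting to check the arithmetic: GDDR6 bandwidth in GB/s is just bus width in bits / 8, times the per-pin data rate in Gbps. A quick sketch — the N33 figures in the comments are this post's guesses, not confirmed specs:)

```python
# Quick check of the bandwidth figures above:
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def mem_bandwidth_gbs(bus_bits, gbps):
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * gbps

print(mem_bandwidth_gbs(128, 14))    # RX 6600          -> 224.0
print(mem_bandwidth_gbs(128, 17.5))  # RX 6650 XT       -> 280.0
print(mem_bandwidth_gbs(128, 24))    # full N33 (guess) -> 384.0
print(mem_bandwidth_gbs(96, 24))     # cut N33 (guess)  -> 288.0
```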

P.S. AMD did have a cut-down chip with a 96-bit bus, but that was the Radeon RX 5300 (Navi 14).
 

Timorous

Golden Member
Oct 27, 2008
N23 didn't have a 96-bit version. Why do you think N33 will have it?

1) N23 was on N7, so it was competing with Zen 3 for wafers; Zen 3 had far more margin, and EPYC was in very high demand.
2) Cutting N23 down for the x500 tier would have meant an even lower-margin card than the 6600, which would have diverted wafers away from Zen 3 chiplets.
3) N24 was a terrible x500-tier part, which in normal times might have been an x300-tier one.
4) A 96-bit N33 makes sense: it lets AMD use defective dies, and since the die is N6 and ~200mm² it is really cheap. 6GB means the BOM cost is lower than the 7600 XT's, so it can work well as a 7500 XT. It also saves the upfront cost of designing an N34 part.

N33 will be significantly faster than N23, yet both of them have only 32MB IC.

With 8GB of RAM, N33 will still be a 1080p / entry-level 1440p part. It will be a very good part, and it might make 1080p RT actually usable on a low-end GPU, which is a good thing.
Cut N33 with 6GB will make an excellent entry-level 1080p part.

Cut-down N23 (RX 6600) has 128-bit GDDR6 with 14Gbps memory -> 224GB/s.
Full N23 (RX 6600 XT-6650 XT) has 128-bit GDDR6 with 16-17.5Gbps memory -> 256-280GB/s.

N33 should have 128-bit GDDR6 with 20-24Gbps memory -> 320-384GB/s.
A cut-down N33: 96-bit GDDR6 with 20-24Gbps memory -> 240-288GB/s.

A cut-down N33 would have bandwidth comparable to full N23, while being comparable to or faster than the 6700 XT, which has 96MB IC and 384-432GB/s.
Even the full N33 doesn't have a lot of BW, even with 24Gbps memory.

The 6700 XT is designed as a 1440p card that can do entry-level 4K, but RT performance is awful. N33 is designed as a 1080p card that (benchmarks pending) might actually allow RT to be used at 1080p. It will also be good for pure raster at 1440p. Cut N33 could easily be the entry-level 1080p pure-raster card, and with a cheaper BOM than the 6600 / uncut N33 thanks to having less VRAM, selling it for $250-300 would be another case of AMD increasing margin while offering good perf/$.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
1) N23 was on N7, so it was competing with Zen 3 for wafers; Zen 3 had far more margin, and EPYC was in very high demand.
2) Cutting N23 down for the x500 tier would have meant an even lower-margin card than the 6600, which would have diverted wafers away from Zen 3 chiplets.
3) N24 was a terrible x500-tier part, which in normal times might have been an x300-tier one.
4) A 96-bit N33 makes sense: it lets AMD use defective dies, and since the die is N6 and ~200mm² it is really cheap. 6GB means the BOM cost is lower than the 7600 XT's, so it can work well as a 7500 XT. It also saves the upfront cost of designing an N34 part.

With 8GB of RAM, N33 will still be a 1080p / entry-level 1440p part. It will be a very good part, and it might make 1080p RT actually usable on a low-end GPU, which is a good thing.
Cut N33 with 6GB will make an excellent entry-level 1080p part.
N23 was built on N7, which was (and is) supposedly more expensive than N6; it was bigger than N33; and AMD had a limited amount of 7nm wafers for everything. Yet they never made a cut-down 96-bit model with 6GB VRAM from the defective chips.
But you expect they will do it for N33.

The 6700 XT is designed as a 1440p card that can do entry-level 4K, but RT performance is awful. N33 is designed as a 1080p card that (benchmarks pending) might actually allow RT to be used at 1080p. It will also be good for pure raster at 1440p. Cut N33 could easily be the entry-level 1080p pure-raster card, and with a cheaper BOM than the 6600 / uncut N33 thanks to having less VRAM, selling it for $250-300 would be another case of AMD increasing margin while offering good perf/$.
The increase in BW is nowhere near the expected increase in performance; that was my point from the beginning. Of course I am not sure what effect the low BW will have at 1080p, but I still don't believe in a 96-bit part.
A cut-down N23 was $329, yet you think a much faster card will cost only $250-300 because it has a cheaper BOM? Do you realize how much better its perf/$ would be compared to N23? That price is not realistic.
In my opinion, $349-399 is more realistic regardless of whether it has 6 or 8GB VRAM.
 

maddie

Diamond Member
Jul 18, 2010
N23 was built on N7, which was (and is) supposedly more expensive than N6; it was bigger than N33; and AMD had a limited amount of 7nm wafers for everything. Yet they never made a cut-down 96-bit model with 6GB VRAM from the defective chips.
But you expect they will do it for N33.


The increase in BW is nowhere near the expected increase in performance; that was my point from the beginning. Of course I am not sure what effect the low BW will have at 1080p, but I still don't believe in a 96-bit part.
A cut-down N23 was $329, yet you think a much faster card will cost only $250-300 because it has a cheaper BOM? Do you realize how much better its perf/$ would be compared to N23? That price is not realistic.
In my opinion, $349-399 is more realistic regardless of whether it has 6 or 8GB VRAM.
We should also remind ourselves that AMD, at least since 2008, has cut shaders, not memory. The odd exception proves the rule.
 

eek2121

Platinum Member
Aug 2, 2005
I agree, it would be a mistake not to double the unit count per WEU.

Ray tracing is moving away from being a gimmick and working itself into the core features of game engines, like replacing rasterized lighting entirely (in the Metro series) or speeding up more advanced lighting techniques (UE5 Lumen). Every high-profile title seems to have more and more of it.

And yes, in games you can buy today, ray tracing is largely still a gimmick, but let's not forget how long the lifecycle of a GPU is. Let's say someone wants to buy "the best AMD has to offer" in September 2024. Even then, the only option would still be an N31 variant (probably a mid-cycle refresh), and that person almost certainly would want at least 2-3 years out of that purchase. Just adding 20% more Ray Accelerators (even if slightly tuned) would not be enough for that timeline. AMD is already sort of the laggard of the group (even Intel has 2 ray-traversal pipelines per Xe-core, which can each do more).

What I would want AMD to do is:
  1. Double the Ray Accelerator count per WEU
  2. Improve the units themselves. AMD's RT units currently do 4 Ray/Box and 1 Ray/Triangle intersection per clock, while Intel does 12 and 1 (with 2x the "RT units" per "core"). As there is more and more geometry in games, I'd at least like to see the Ray/Triangle rate double to 2 per clock in most instances (the Ray/Box rate IMO is less of an issue, but could also be increased slightly)
  3. Add some fixed-function hardware to speed up BVH traversal, like the competitors have (freeing up some CU resources)
I really hope we at least get point 1 or 2. Both would already be very good, with 3 as a bonus (but quite unlikely considering the die sizes).
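To put point 2 in perspective, here is a tiny sketch of peak per-clock intersection rates using only the figures quoted above (per CU and per Xe-core, not whole-GPU numbers):

```python
# Peak intersection tests per clock, from the rates quoted above:
# RDNA2: 1 Ray Accelerator per CU, doing 4 ray/box + 1 ray/tri per clock.
# Intel: 2 RT units per Xe-core, each doing 12 ray/box + 1 ray/tri per clock.
def peak_tests_per_clock(units, box_rate, tri_rate):
    return {"box": units * box_rate, "tri": units * tri_rate}

rdna2_cu = peak_tests_per_clock(1, 4, 1)
xe_core = peak_tests_per_clock(2, 12, 1)
print(rdna2_cu)  # {'box': 4, 'tri': 1}
print(xe_core)   # {'box': 24, 'tri': 2}
```

So even before BVH-traversal hardware enters the picture, an Xe-core has a 6x box-test and 2x triangle-test advantage per clock over a CU, which is why doubling the triangle rate is the ask here.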

Agreed. RT is the future of PC gaming. Many gamers don't quite understand that RT is not a gimmick: it allows for photorealistic lighting, and it was never implemented in hardware earlier only because implementation was both difficult and expensive.

I hope AMD is competitive with RT this gen. I may pick up the top end card if they are.
 

Saylick

Diamond Member
Sep 10, 2012
Between Kepler suggesting that the >50% figure has heavy emphasis on the ">" part and the 375W board power, a 2x performance uplift is very believable.
Seems like my prior guess is close enough. :D

Seems like AIB models will be able to push >2x raster, but likely with deteriorating perf/W.

Also, on the eve of what might be a leak with clockspeeds and performance targets, does anyone want to make their final guesses? This pump that dumps fuel into the hype train's engine ain't going to prime itself! :p

Here, I'll start. My guess for full N31 specs:
- Up to 60% perf/W uplift over N21
- 1.8x - 2x raster uplift
- 2.2x to 2.5x RT uplift
- 3.1 GHz game clock
- 375W TDP
- 24 GB GDDR6 RAM
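For what it's worth, the perf/W, raster, and TDP bullets are mutually consistent if you take N21 as the 300W reference 6900 XT (my assumption):

```python
# 2x raster at 375W vs a 300W N21 implies:
perf_uplift = 2.0
power_ratio = 375 / 300        # 1.25x the board power
perf_per_watt = perf_uplift / power_ratio
print(perf_per_watt)           # 1.6, i.e. a 60% perf/W uplift
```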
 

Timorous

Golden Member
Oct 27, 2008
N23 was built on N7, which was (and is) supposedly more expensive than N6; it was bigger than N33; and AMD had a limited amount of 7nm wafers for everything. Yet they never made a cut-down 96-bit model with 6GB VRAM from the defective chips.
But you expect they will do it for N33.

Yea, for the reasons you just stated. N33 is cheaper than N23, and it is not on a node competing with higher-margin parts, so AMD have the capacity and the margin to offer a much better x500-tier part without designing and manufacturing another die.


The increase in BW is nowhere near the expected increase in performance; that was my point from the beginning. Of course I am not sure what effect the low BW will have at 1080p, but I still don't believe in a 96-bit part.
A cut-down N23 was $329, yet you think a much faster card will cost only $250-300 because it has a cheaper BOM? Do you realize how much better its perf/$ would be compared to N23? That price is not realistic.
In my opinion, $349-399 is more realistic regardless of whether it has 6 or 8GB VRAM.

Full N33 for around $400, cut N33 with 96-bit and 6GB RAM for $250-300. Margin would improve over RDNA2 parts, which is a positive for AMD, and the great perf/$ will do them good in the mindshare arena, especially given the NV prices, the coming financial issues, and 2nd-hand parts.

We should also remind ourselves that AMD, at least since 2008, has cut shaders, not memory. The odd exception proves the rule.

The 5600 XT, using N10 with a 192-bit bus and 6GB of RAM, does exist, you know!
 

maddie

Diamond Member
Jul 18, 2010
Yea, for the reasons you just stated. N33 is cheaper than N23, and it is not on a node competing with higher-margin parts, so AMD have the capacity and the margin to offer a much better x500-tier part without designing and manufacturing another die.




Full N33 for around $400, cut N33 with 96-bit and 6GB RAM for $250-300. Margin would improve over RDNA2 parts, which is a positive for AMD, and the great perf/$ will do them good in the mindshare arena, especially given the NV prices, the coming financial issues, and 2nd-hand parts.



The 5600 XT, using N10 with a 192-bit bus and 6GB of RAM, does exist, you know!
I guess this was invisible, here it is again. "The odd exception proves the rule."
 

TESKATLIPOKA

Platinum Member
May 1, 2020
Yea, for the reasons you just stated. N33 is cheaper than N23, and it is not on a node competing with higher-margin parts, so AMD have the capacity and the margin to offer a much better x500-tier part without designing and manufacturing another die.
Because of what I previously stated, I would have expected to see an SKU based on N23 with a 96-bit bus. There was none, and it's pretty hard to imagine AMD would throw away those chips when they didn't have enough wafers to begin with. If they had so many of them, then the RX 6600 should have had only a 96-bit bus, but instead it had 128-bit.

Full N33 for around $400, cut N33 with 96-bit and 6GB RAM for $250-300. Margin would improve over RDNA2 parts, which is a positive for AMD, and the great perf/$ will do them good in the mindshare arena, especially given the NV prices, the coming financial issues, and 2nd-hand parts.
The new Zen 4 CPUs are not cheap, even if you ignore the cost of boards and DDR5, so why do you think AMD will suddenly give us really cheap GPUs? Nvidia increased prices, so AMD has room to do the same, though not by as much, and still be viewed as the less greedy company.

The RX 6900 XT currently costs $669, so why do you think a new full N33 will cost only $400 when it supposedly has comparable (or close enough) performance at 1080p, probably better RT performance, and much lower power consumption? Of course it has less VRAM and weaker raster perf at 1440p and 4K, but even so, the RX 6900 XT doesn't look so tempting at that price.
 

Timorous

Golden Member
Oct 27, 2008
Because of what I previously stated, I would have expected to see an SKU based on N23 with a 96-bit bus. There was none, and it's pretty hard to imagine AMD would throw away those chips when they didn't have enough wafers to begin with. If they had so many of them, then the RX 6600 should have had only a 96-bit bus, but instead it had 128-bit.

A 96-bit N23 would mean additional N7 wafers used on more N23 dies, which means fewer Zen 3 dies being produced, so it would be a net loss for AMD. Also, AMD had N24 available as a stopgap bottom-tier part; it was utterly unsuitable for the 6500 tier, but given the market AMD could get away with it. In a more normal market it is entirely possible that the 6500 XT would have been a 96-bit N23 part.

A 96-bit N33 on N6 is not competing with N7 wafers for Milan / Milan-X, and it is not competing with N5 wafers for Genoa / Genoa-X / Bergamo, so it is a totally different equation. It would also make for a proper x500-tier card, and there is no noise at all about a potential N34. I don't see AMD offering two 8GB parts in the x600 and x500 tiers when they can shave off a bit of BOM cost by making the entry-level 1080p card 6GB. It is enough, it reduces the memory bill by 25%, and it allows for a lower-power config with a cheaper cooler.

The new Zen 4 CPUs are not cheap, even if you ignore the cost of boards and DDR5, so why do you think AMD will suddenly give us really cheap GPUs? Nvidia increased prices, so AMD has room to do the same, though not by as much, and still be viewed as the less greedy company.

N31 and N32 won't be 'cheap'. They will be better value than NV, but the NV value proposition is terrible (4090 aside, and that is US-only given the current exchange rates).

N33 is built to be cheap; ~200mm² of N6 makes it far cheaper than N23 already, and they can sell the full version for more money and a cut version for the same money. Current retail for a 6600 is around $260. A 6GB card that performs much better for a similar price will be higher margin for AMD. Why would they make it an 8GB part instead?

I guess this was invisible, here it is again. "The odd exception proves the rule."

Given that in the last 2 gens we have had the 5300, 5600 XT, and 6700, all with a cut bus, I am not so sure you can call it an exception. Prior to that, sure, but they have cut buses quite a few times of late.
 

maddie

Diamond Member
Jul 18, 2010
Given that in the last 2 gens we have had the 5300, 5600 XT, and 6700, all with a cut bus, I am not so sure you can call it an exception. Prior to that, sure, but they have cut buses quite a few times of late.
I thought we were mentioning mainstream parts. Guess I was wrong.

Was the RX 5300 ever sold at retail?
The RX 6700 came at the end of the RX 6000 cycle, and only in rare models.

The RX 5600 XT was one of the rare cases where a memory-cut die was a standard part.
 

Timorous

Golden Member
Oct 27, 2008
I thought we were mentioning mainstream parts. Guess I was wrong.

Was the RX 5300 ever sold at retail?
The RX 6700 came at the end of the RX 6000 cycle, and only in rare models.

The RX 5600 XT was one of the rare cases where a memory-cut die was a standard part.

The 6700 has been available for a while at retail, and you have your typical models like a Sapphire Pulse, an XFX Speedster, or a PowerColor Fighter.
 

Timorous

Golden Member
Oct 27, 2008
Agreed. RT is the future of PC gaming. Many gamers don't quite understand that RT is not a gimmick: it allows for photorealistic lighting, and it was never implemented in hardware earlier only because implementation was both difficult and expensive.

I hope AMD is competitive with RT this gen. I may pick up the top end card if they are.

Agreed, but we are still waiting for a GPU to do for RT what the 9700 Pro did for AF and AA. Not sure that is going to be RDNA3 or Ada, though, but I could see RDNA4 / Hopper (or whatever the arch after Ada is) being that moment.
 

Kaluan

Senior member
Jan 4, 2022
I wonder if this David Wang interview bit from a while back has anything to do with it:

"And to bring more photorealistic effects into the domain of real-time gaming, we are developing hybrid approaches that take the performance of rasterization combined with the visual fidelity of ray tracing, to deliver the best real-time immersive experiences without compromising performance."

...with this recent GPUOpen reveal:

Kepler seems to think it's relevant too 😅
 

TESKATLIPOKA

Platinum Member
May 1, 2020
N33 is built to be cheap; ~200mm² of N6 makes it far cheaper than N23 already, and they can sell the full version for more money and a cut version for the same money. Current retail for a 6600 is around $260. A 6GB card that performs much better for a similar price will be higher margin for AMD. Why would they make it an 8GB part instead?
Why would they sell the cut-down N33 for the same price as the RX 6600 now, when they can sell it for more?

Why would they make a 6GB part instead of 8GB?
With N23 we already saw that the number of faulty chips was negligible; if that weren't true, they would have released such a version at least for OEMs, but they didn't.
You reduce cost by using only 3 memory chips instead of 4, but you also need chips with higher speed to compensate for the missing bandwidth, and those cost more.
You can have the same bandwidth with 6GB VRAM at 24Gbps or 8GB VRAM at 18Gbps. So is there a noticeable difference in BOM between them? I don't think so, and buyers would certainly prefer more VRAM.
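The equal-bandwidth claim checks out, assuming 16Gb (2GB) GDDR6 packages on a 32-bit channel each:

```python
# Two hypothetical configs with identical bandwidth:
# bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def mem_bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

six_gb = mem_bandwidth_gbs(96, 24)     # 3 chips x 2GB at 24Gbps
eight_gb = mem_bandwidth_gbs(128, 18)  # 4 chips x 2GB at 18Gbps
print(six_gb, eight_gb)                # 288.0 288.0
```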
 