Speculation: RDNA3 + CDNA2 Architectures Thread

Page 185

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146

Aapje

Golden Member
Mar 21, 2022
1,513
2,065
106
16 GB is not going to happen as no 32 Gbit memory chips exist.

I expect AMD to initially have higher prices for the 7600 (XT), so they can gradually lower them and still get rid of the remaining 6000-series cards.

8GB cards have a price ceiling now, so unless AMD decides to make all N33 designs 16GB, sub-$300 is where it needs to land.
AMD has a habit of pricing their cards too high at first, so once you determine what the price should be, bump it up to where AMD will actually launch it, ensuring poor reviews and low sales.
 

Kaluan

Senior member
Jan 4, 2022
504
1,074
106
Presumably it would greatly impact whether AMD bothers with the N33 desktop release.

The 6600 is under $200 now.

Speak of the Devil...


What I can confirm, however, is that individual (!) AMD board partners will show a Radeon RX 7600 as a finished product at Computex, while other partners, who use both AMD and NVIDIA chips, are taking a wait-and-see approach first, to put it politely. The statement that they don't want to produce anything just to keep production lines busy where they see no chance of profit does sting a bit. There is currently no price basis for a Radeon RX 7700 XT, because it could even slip into the red as the target group shifts and the price would no longer fit.


Of course, this is a lot of speculation, but the current price distortions due to the sharp rise in inflation are real and obviously haven't reached the consciousness of many consumers yet. Most of them only see butter, bread, vegetables and fruit, where increases of 20 to 30 percent are already common enough. And that is not solely the fault of the evil retail trade (though it probably plays a part). So in the foreseeable future, graphics card classes will probably be oriented more towards income classes and no longer only towards actual performance.
 

Aapje

Golden Member
Mar 21, 2022
1,513
2,065
106
You don't need 32 Gbit chips to have 16 GB with 128-bit. Clamshell could be used to double the VRAM. The question is if it's worth it for N33, and whether some AIB or AMD is willing to do it.
It's indeed not needed, but the consequence would be that the 16 GB cards would have half the bandwidth per GB on the card, compared to a 256 bit card with 16 GB. It's very doubtful that they would do that because of all the headaches it would create for them (both in the marketing and because it could severely cannibalize the 7800 and 7700 sales).

I could only see them do this if they somehow cancel 7800 and 7700.
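For a rough sense of what that bandwidth-per-GB gap looks like, here is a back-of-the-envelope sketch; the 18 Gbps GDDR6 data rate is an assumption for illustration, not a confirmed spec for any of these SKUs:

```python
# Back-of-the-envelope bandwidth-per-GB comparison (illustrative numbers only).
# Assumes 18 Gbps GDDR6 across the board; real memory speeds may differ.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps / 8

configs = {
    "128-bit, 8 GB":            (128, 8),
    "128-bit, 16 GB clamshell": (128, 16),  # clamshell doubles capacity, not bandwidth
    "256-bit, 16 GB":           (256, 16),
}

for name, (bus, vram_gb) in configs.items():
    bw = bandwidth_gb_s(bus, 18)
    print(f"{name}: {bw:.0f} GB/s total, {bw / vram_gb:.0f} GB/s per GB of VRAM")
```

The clamshell card ends up at 18 GB/s per GB of VRAM versus 36 GB/s per GB for both the 8 GB card and a 256-bit 16 GB card, which is exactly the marketing headache described above.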
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
It's indeed not needed, but the consequence would be that the 16 GB cards would have half the bandwidth per GB on the card, compared to a 256 bit card with 16 GB. It's very doubtful that they would do that because of all the headaches it would create for them (both in the marketing and because it could severely cannibalize the 7800 and 7700 sales).

Navi 33 is a 128-bit bus GPU, so 256-bit cards are not possible; that point is irrelevant.

It's simply a case of 8GB vs 16GB on a 128-bit bus.

The downsides that make it unlikely: you need a new, more expensive board design for the 16 GB cards to double up chips on each channel, plus the added expense of the extra VRAM, so probably at least $100 added to the retail price of the card. Considering this is a fairly low-end product aimed mostly at 1080p, that is probably too much.
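As a purely illustrative build-up of that "$100 added" guess (every dollar figure below is an assumption, not a known supply-chain price):

```python
# Illustrative cost build-up for a 16 GB clamshell variant.
# All dollar figures are placeholder assumptions for the arithmetic.
extra_dram = 8 * 4.00    # 8 additional 16Gbit GDDR6 chips @ an assumed $4 each
board_redesign = 10.00   # assumed per-unit share of the more complex PCB design
extra_bom = extra_dram + board_redesign

retail_multiplier = 2.2  # assumed BOM-to-retail markup through the channel
print(f"Extra BOM:    ${extra_bom:.2f}")
print(f"Extra retail: ~${extra_bom * retail_multiplier:.0f}")
# ~$92 at retail under these assumptions, in the ballpark of the $100 estimate.
```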

Though it would be nice if AMD just gave their AIBs permission to do that if they feel like it.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
It's indeed not needed, but the consequence would be that the 16 GB cards would have half the bandwidth per GB on the card, compared to a 256 bit card with 16 GB. It's very doubtful that they would do that because of all the headaches it would create for them (both in the marketing and because it could severely cannibalize the 7800 and 7700 sales).

I could only see them do this if they somehow cancel 7800 and 7700.
As @guidryp already mentioned, N33 has a 128-bit bus; you can't make a 256-bit one out of it. Because it has only a 128-bit bus, you can have 8GB or 16GB of VRAM in clamshell; the bandwidth won't change, only the VRAM can double.
Even if N33 had a 256-bit bus, it wouldn't cannibalize the 7700-7800, because N32 has 60 CU versus 32 CU in N33, and even the cut-down version should have at least 48 CU enabled, most likely more.
 
  • Like
Reactions: Tlh97

Aapje

Golden Member
Mar 21, 2022
1,513
2,065
106
@guidryp @TESKATLIPOKA

I didn't compare it to a 256-bit card to argue that AMD could change the IO of N33, but to argue that it would mess up their marketing. They somehow have to convince normies that a 16 GB card with less bandwidth is considerably better than an 8 GB one, so they are willing to pay more, but not as good as a proper 16 GB card with a 256-bit bus. And that it's also worse than a 12 GB 7700 XT, assuming they want to sell that one as well.

Even if N33 had a 256-bit bus, it wouldn't cannibalize the 7700-7800, because N32 has 60 CU versus 32 CU in N33, and even the cut-down version should have at least 48 CU enabled, most likely more.
Except that a very large percentage, if not a majority, buys based on 'bigger number is better', and the CU count is not on the box. If you look at the front of AMD Radeon boxes, the three information boxes that appear on every box, and thus are almost certainly mandatory, are:
- Targeted resolution
- VRAM quantity
- PCIe standard

So the VRAM quantity is actually one of the specs they market most prominently, and one they've conditioned casual buyers into comparing between cards. One of the two other boxes, the PCIe version, will be equal between all cards. If AMD chooses to market both the 7600 XT 16GB and the 7700 XT 12GB as 1440p cards, the resolution box will also be the same. So the 7700 XT 12GB looks worse than the 7600 XT 16GB when comparing the only specs visible on the front of every box. And if people are picking a card for 1440p, then even if the 7800 XT 16GB box shows it is good for 4K, buyers may think: "I'm gaming at 1440p, so the 7600 XT provides identical performance to the 7800 XT, since the only difference is 4K support, which I don't need."

Of course that isn't true, but a noob picking out a card in the store often isn't going to realize that.
 

linkgoron

Platinum Member
Mar 9, 2005
2,409
979
136
Speak of the Devil...

Smaller RDNA3 parts only arriving in June is just sad. AMD still leaning on RDNA2 kind of reminds me of AMD reusing Polaris for a few generations while not having anything very competitive at the high end, because Vega was just barely competitive with the 1080. Sure, N31 is much more competitive with the 4080 than Vega 64 was with the 1080, but it's still bad. I hope AMD can fix the supposed bugs in N33 and N32, or in an N31v2 (if it's not just that the architecture is fundamentally flawed), but I'm doubtful. Hopefully RDNA4 will be better, but that's surely far off in the future.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,166
7,666
136
Wondering if it would have made more sense for AMD to just port their RDNA 2 designs to TSMC N6 and rebrand them.

There is no way AMD started designing N32 expecting to sell it for $800 or whatever N22 was going for during the crypto boom; it was built to be sold with TSMC's 5nm wafer costs and market segmentation in mind. So the more I think about it, the less I think N32 is waiting in the wings for pricing and margin reasons.
 
Last edited:

gdansk

Platinum Member
Feb 8, 2011
2,962
4,493
136
Navi32 would have had a similar target MSRP to Navi22's: $600 maximum.

If Navi32 doesn't ever ship, it'll be because they forecast low sales. With low enough sales, the relatively cheaper manufacturing cost would never pay off the GCD masks, etc. If that is the case, then the low sales also deter a shrink of Navi21.
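As a hedged illustration of that amortization argument (both figures below are made-up placeholders, not actual AMD numbers):

```python
# Illustrative mask-amortization break-even. Both inputs are placeholder
# assumptions, not known AMD costs.
mask_set_cost = 15_000_000  # assumed one-time cost of an N5-class mask set, USD
savings_per_unit = 25       # assumed per-unit manufacturing saving of N32 vs N21, USD

break_even_units = mask_set_cost / savings_per_unit
print(f"Break-even volume: {break_even_units:,.0f} units")
# ~600,000 units: below that sales forecast, the cheaper-to-make chip never
# pays off its own masks, which is the logic described above.
```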
 

Aapje

Golden Member
Mar 21, 2022
1,513
2,065
106
With low enough sales, the relatively cheaper manufacturing cost would never pay off the GCD masks, etc. If that is the case, then the low sales also deter a shrink of Navi21.
The real cost is in the initial design and the validation of the initial mask set, which should already have been done. Replicating masks is not that expensive, although a respin, if one is needed, is more costly. Still, it seems unlikely that they would scrap an entire design unless it would actually sell worse than simply continuing to sell Navi 2x.

But I agree that a node shrink of Navi 2x doesn't make much sense.
 

gdansk

Platinum Member
Feb 8, 2011
2,962
4,493
136
The real cost is in the initial design and the validation of the initial mask set, which should already have been done. Replicating masks is not that expensive, although a respin, if one is needed, is more costly. Still, it seems unlikely that they would scrap an entire design unless it would actually sell worse than simply continuing to sell Navi 2x.
Let me put it the other way: if Navi 32 does appear, it will be because it is substantially cheaper to make than Navi 21. If it doesn't see mass production, then it is reasonable to conclude that it is not cheaper to make.

Or perhaps the tape-out was delayed and the part is now incorporating RDNA3 bug fixes. But I find that very unlikely. Why invest so much - in this economy - to 'fix' the design in a respin when more Navi 21 would sell nearly as well?
 
Last edited:

Kepler_L2

Senior member
Sep 6, 2020
537
2,198
136
Let me put it the other way: if Navi 32 does appear, it will be because it is substantially cheaper to make than Navi 21. If it doesn't see mass production, then it is reasonable to conclude that it is not cheaper to make.

Or perhaps the tape-out was delayed and the part is now incorporating RDNA3 bug fixes. But I find that very unlikely. Why invest so much - in this economy - to 'fix' the design in a respin when more Navi 21 would sell nearly as well?
Why would anyone believe N21 is cheaper to produce? We know the die sizes already.
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
Let me put it the other way: if Navi 32 does appear, it will be because it is substantially cheaper to make than Navi 21. If it doesn't see mass production, then it is reasonable to conclude that it is not cheaper to make.

Or perhaps the tape-out was delayed and the part is now incorporating RDNA3 bug fixes. But I find that very unlikely. Why invest so much - in this economy - to 'fix' the design in a respin when more Navi 21 would sell nearly as well?

N32 will appear because it will be used in higher-end laptops. N33 only covers the 7600M XT and 7700S. The 7700M, 7800M and 7800S are going to need bigger dies, which is where N32 comes in.
 

leoneazzurro

Golden Member
Jul 26, 2016
1,052
1,716
136
The price gap between N7/N6 and N5/N4 would have to get rather wide for that to happen. That being said, N32 would be a lot more expensive than N22.
Total area for N32 is WAY lower than N21's (approx. 350mm^2 vs 521mm^2), yields for smaller chips are higher, and wafer area is better utilized. Yes, it is more expensive than N22, but N22 is clearly significantly less powerful. And prices went up for everyone. The only problem in the comparison with Nvidia is that they either used too little N5 silicon OR overestimated the target clocks. Something is definitely off, because a 384-bit bus plus Infinity Cache is definitely overkill for the 7900 XTX; it was probably meant to feed something more than what we got.
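A quick sketch of why the smaller total area helps, using the classic dies-per-wafer approximation and a simple Poisson yield model. The defect density is an assumed value, and treating N32's ~350mm^2 as one monolithic N5 die is a deliberate simplification (the real GCD+MCD split would do even better, since the MCDs sit on cheaper N6):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic dies-per-wafer approximation with an edge-loss correction term."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    """Simple Poisson yield model; 0.1 defects/cm^2 is an assumed defect density."""
    return math.exp(-die_area_mm2 / 100 * defects_per_cm2)

for name, area in [("N21 (~521 mm^2)", 521), ("N32 total (~350 mm^2)", 350)]:
    candidates = dies_per_wafer(area)
    good = candidates * poisson_yield(area)
    print(f"{name}: {candidates} candidates/wafer, ~{good:.0f} good dies/wafer")
```

Under these assumptions the smaller design nets roughly 80-85% more good dies per wafer, before even counting the cost advantage of moving the cache and memory controllers to N6.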
 
  • Like
Reactions: Tlh97 and Joe NYC

Aapje

Golden Member
Mar 21, 2022
1,513
2,065
106
Clearly N31 was intended to be a 4090+ competitor.
It's quite peculiar that the dual-issue shaders seem to do almost nothing, as the performance difference between the 6950 XT and the 7900 XT can be explained by the higher clock speed, the wider bus, and the extra shaders. AMD's choice to market the card as if the extra execution units don't exist also suggests that they do almost nothing.
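A rough sanity check of that claim using spec-sheet numbers (the boost clocks are approximate, and real game clocks vary by workload and review):

```python
# Does single-issue scaling plus bandwidth roughly explain the 7900 XT's lead
# over the 6950 XT? Clocks are approximate boost clocks, so treat the output
# as a ballpark figure only.
cards = {
    # name: (shader ALUs counted single-issue, boost clock in MHz, bandwidth GB/s)
    "6950 XT": (5120, 2310, 576),
    "7900 XT": (5376, 2400, 800),
}

(alu0, clk0, bw0), (alu1, clk1, bw1) = cards["6950 XT"], cards["7900 XT"]
print(f"Single-issue compute gain: +{(alu1 * clk1 / (alu0 * clk0) - 1) * 100:.0f}%")
print(f"Bandwidth gain:            +{(bw1 / bw0 - 1) * 100:.0f}%")
# ~+9% compute and ~+39% bandwidth; with reviews putting the 7900 XT roughly
# 15-20% ahead, little of the result has to be credited to dual-issue.
```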

'Feeding the beast' is always one of the bigger challenges with processors. It's hard to make optimal use of the existing hardware when the workload is often not a good fit for it, so all kinds of tricks are needed to reshape the workload so it can run efficiently. Perhaps AMD ran into big issues when trying to build a new scheduler to assign work to their shaders, and stuck with the old one, which treats the shaders as single-issue?

Delaying N32 could also be because they want to release it with a new scheduler.
 

eek2121

Diamond Member
Aug 2, 2005
3,100
4,398
136
The real cost is in the initial design and the validation of the initial mask set, which should already have been done. Replicating masks is not that expensive, although a respin, if one is needed, is more costly. Still, it seems unlikely that they would scrap an entire design unless it would actually sell worse than simply continuing to sell Navi 2x.

But I agree that a node shrink of Navi 2x doesn't make much sense.

If AMD had done a shrink of Navi21 from N7 to N5 I think it would have worked out fine. The chip would have been significantly smaller and faster.

IMO AMD was a bit too ambitious with their chiplet design.

Both AMD and NVIDIA need to find a way to make smaller, faster, less expensive chips/GPUs, or at the very least, find a way to make the technology automatically scale.

I can’t imagine what a flagship GPU will cost 5 years from now.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
IMO AMD was a bit too ambitious with their chiplet design.

IMO, AMD really did a smart, optimal first step for GPU chiplets. Too ambitious would have been dividing up compute.

The problem with multi-chip compute GPUs was always going to be the need for massive bandwidth and super-low latency, comparable to being on the same chip. Otherwise you have to resort to horrible SLI/CrossFire-style software solutions.

But the memory controller/cache chiplets sidestep that issue, since all the compute is still together. Plus, both the memory controllers and the cache take a lot of space and don't really respond much to process shrinks anymore, so they are ideal to put on a cheaper, less advanced process node.

But it's mostly a cost savings move, so they don't get more performance out of it. And NVidia made a bigger leap this generation because they got a much bigger boost moving away from an inferior Samsung process node, while AMD got a smaller upgrade from an already good TSMC process.
 

Aapje

Golden Member
Mar 21, 2022
1,513
2,065
106
But it's mostly a cost savings move
If they don't pass a large part of that on to consumers, their cards become fairly unattractive.

I'm wondering what the 7800 XT and 7700 XT will end up costing after a few months (rather than at launch, since AMD will likely ask too much, as usual).