Question Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,607
5,822
146
All die sizes are within 5mm^2. The poster here has been right on some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have backed up. Even still, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,845
106
It appears that Navi 23 was designed as a 1:1 technological replacement for... Navi 14.
Shouldn't that actually be N24?
1. Similar performance
2. Lower power consumption
3. Same or smaller die size
4. Newer technologies
5. Price should be similar
6. 4GB VRAM

In my opinion N23 replaces N10.
1. Similar performance
2. Lower power consumption
3. Smaller die size
4. Newer technologies
5. Price should be similar or lower
6. 8GB VRAM
 

Geranium

Member
Apr 22, 2020
83
101
61
Igor spills some beans on 6600XT and 6600.

Overall it's quite a nice chip, and this is expected due to unifying stuff with laptops, but it's still unfortunate that it only has PCIe 4.0 x8. This will seriously diminish its sales for upgrades or budget builds

... well the latter at least won't be a problem with the current GPU price climate as there are no budget builds, so there's that.
PCIe 4.0 has been with us since mid-2019, and budget B550 boards have been available for nearly a year, priced similarly to B450. If people are still buying B450/X470 nowadays while saying "who needs PCIe 4.0?", then they need to rethink their stance on PCIe 4.0.

And x8 PCIe was only a problem for the RX 5500 XT 4GB when very high or ultra textures were used; the 8GB card had no problem. The new RX 6600 XT doesn't have a 4GB version.
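For a rough sense of scale (my own back-of-the-envelope, not from Igor's piece): per-direction link bandwidth with 128b/130b encoding and no protocol overhead works out to roughly the following.

#include <stdio.h>

/* Approximate per-direction PCIe bandwidth in GB/s:
 * lanes * transfer rate (GT/s) * 128/130 encoding efficiency / 8 bits per byte. */
static double pcie_gb_per_s(int lanes, double gt_per_s)
{
    return lanes * gt_per_s * (128.0 / 130.0) / 8.0;
}

int main(void)
{
    printf("PCIe 3.0 x8:  ~%.1f GB/s\n", pcie_gb_per_s(8, 8.0));    /* ~7.9  */
    printf("PCIe 4.0 x8:  ~%.1f GB/s\n", pcie_gb_per_s(8, 16.0));   /* ~15.8 */
    printf("PCIe 4.0 x16: ~%.1f GB/s\n", pcie_gb_per_s(16, 16.0));  /* ~31.5 */
    return 0;
}

So a PCIe 4.0 x8 link still gives roughly PCIe 3.0 x16 bandwidth, which is why the narrower link mostly only hurts when a 4GB card spills textures into system memory.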
 

Glo.

Diamond Member
Apr 25, 2015
5,700
4,545
136
Shouldn't that actually be N24?
1. Similar performance
2. Lower power consumption
3. Same or smaller die size
4. Newer technologies
5. Price should be similar
6. 4GB VRAM

In my opinion N23 replaces N10.
1. Similar performance
2. Lower power consumption
3. Smaller die size
4. Newer technologies
5. Price should be similar or lower
6. 8GB VRAM
No, N24 is not replacing N14.

N23 has the same x8 PCIe and 128-bit bus as N14. It appears that this GPU was specifically designed to be a drop-in replacement for laptop OEMs that used N14, because the package is, I think, the same size.

P.S. I don't believe we will see N24 on desktops.

So either there is no 6500 series, or they are based on Navi 23 die.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,845
106
No, N24 is not replacing N14.

N23 has the same x8 PCIe and 128-bit bus as N14. It appears that this GPU was specifically designed to be a drop-in replacement for laptop OEMs that used N14, because the package is, I think, the same size.

P.S. I don't believe we will see N24 on desktops.

So either there is no 6500 series, or they are based on Navi 23 die.
Let's be honest, RDNA1 was barely used in laptops; there were only a very small number of models with it.
N23 will be released for desktop, maybe even sooner than the laptop version.
The only thing you need to do to make a laptop GPU is lower the frequency and voltage of a desktop GPU to fit within a limited TDP, so I don't understand this talk about how this or that RDNA2 GPU was specially designed for laptop use.

I don't see a reason why N24 should be exclusive to laptops. You could make some nice small-form-factor video cards with it, and N14-level performance is not that bad for Full HD.
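To illustrate the point above about fitting a TDP (a very rough model of my own, ignoring static/leakage power): dynamic power scales roughly with frequency times voltage squared, so modest clock and voltage cuts buy a lot of headroom.

#include <stdio.h>

/* Crude dynamic-power scaling: P ~ f * V^2 (leakage and fixed power ignored). */
static double scaled_power(double base_watts, double freq_ratio, double volt_ratio)
{
    return base_watts * freq_ratio * volt_ratio * volt_ratio;
}

int main(void)
{
    /* Hypothetical 180 W desktop part run ~15% slower at ~10% lower voltage. */
    printf("~%.0f W\n", scaled_power(180.0, 0.85, 0.90));  /* ~124 W */
    return 0;
}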
 
  • Like
Reactions: Tlh97

gdansk

Platinum Member
Feb 8, 2011
2,078
2,559
136
Let's be honest, RDNA1 was barely used in laptops, there was a very small number of models with them.
You're right about a small number of customers, but they did have a whale among them. Apple shipped Navi 12/14 in what is probably the most popular laptop model with a discrete GPU (the MacBook Pro 16).
 

Gideon

Golden Member
Nov 27, 2007
1,619
3,643
136

zinfamous

No Lifer
Jul 12, 2006
110,553
29,156
146
-Impressive that it was an actual boot and benchmark run, not just one of those silly "Boot to desktop" OCs.

Mildly frustrating that we didn't get any performance numbers there... would have been nice to see the FPS curve.

Edit: Also somewhat anxiety-inducing to see someone using LN2 without any apparent PPE and just freehanding it.

You're actually not supposed to wear a lot of PPE with LN2. A face shield and LN2-rated gloves, at the most. The problem is that if you dump a lot on yourself while wearing a lab coat or improper gloves, it will soak into your clothes, which are colder than your skin, and cause burns more readily than if it just touches bare skin--it evaporates much more quickly in contact with skin. In fact, poured at a steady, slow rate, you can trickle it directly onto your hand and it will never actually touch you.

It's weird. You're technically better off naked when handling LN2 than clothed, at least for most use cases, like in the video with a hand-held dewar's worth at a time.

That's exactly how we handle it in my field. Hell, it's also very common to grind tissue with it--filling a mortar with LN2, submerged in a larger styrofoam cooler bath of LN2, and grinding the tissue away with a pestle, always adding more as needed. Usually bare-handed, with minimal gloves only because the pestle gets really cold to handle pretty quickly, lol.
 

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
The RX 6600 XT should be in RX 5700 XT/RTX 3060 performance/price territory.

And we should expect an RX 6700 based on Navi 22 that will fill the gap between the 6700 XT and the 6600 XT.

The RX 6500 XT should be in RX 5500 XT/GTX 1660 performance/price territory.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,845
106
The RX 6600 XT should be in RX 5700 XT/RTX 3060 performance/price territory.

And we should expect an RX 6700 based on Navi 22 that will fill the gap between the 6700 XT and the 6600 XT.

The RX 6500 XT should be in RX 5500 XT/GTX 1660 performance/price territory.
And is that RX 6500 XT based on N23 or N24?
N24 should be capable of performing similarly to the RX 5500 XT 4GB with high enough clocks, assuming 64-bit GDDR6 is not a bottleneck. A cut-down N23 should be a lot faster.
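Rough math on that bus (my numbers, assuming the same 14 Gbps GDDR6 the RX 5500 XT uses, or 16 Gbps at the high end):

#include <stdio.h>

/* Peak GDDR6 bandwidth in GB/s: bus width in bits / 8 * per-pin data rate in Gbps. */
static double gddr6_gb_per_s(int bus_bits, double gbps_per_pin)
{
    return bus_bits / 8.0 * gbps_per_pin;
}

int main(void)
{
    printf("64-bit  @ 14 Gbps: %.0f GB/s\n", gddr6_gb_per_s(64, 14.0));   /* 112 */
    printf("64-bit  @ 16 Gbps: %.0f GB/s\n", gddr6_gb_per_s(64, 16.0));   /* 128 */
    printf("128-bit @ 14 Gbps: %.0f GB/s\n", gddr6_gb_per_s(128, 14.0));  /* 224, RX 5500 XT */
    return 0;
}

So a 64-bit bus is only about half the 5500 XT's raw bandwidth, which is why the "not a bottleneck" assumption hinges on faster memory, higher clocks, or cache making up the difference.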
 

Bigos

Member
Jun 2, 2019
127
282
136
AMD Yellow Carp - Rembrandt?


It seems it has no L3 cache, but instead has 2MB of L2 cache (also 6 CUs? or 6 per SA?):

+ {
+     /* L2 Data Cache per GPU (Total Tex Cache) */
+     .cache_size = 2048,
+     .cache_level = 2,
+     .flags = (CRAT_CACHE_FLAGS_ENABLED |
+               CRAT_CACHE_FLAGS_DATA_CACHE |
+               CRAT_CACHE_FLAGS_SIMD_CACHE),
+     .num_cu_shared = 6,
+ },
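+ /* For reference, my reading of the kfd_crat.c conventions (take with a grain
+  * of salt): cache_size is in KiB, so 2048 = 2 MiB of L2; cache_level = 2
+  * marks it as an L2 entry; num_cu_shared = 6 means this cache entry is
+  * shared by a group of 6 CUs, which by itself doesn't tell us the total
+  * CU count of the chip. */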


Support for a disabled SA (shader array) - does that mean there are at least two?

 

Kepler_L2

Senior member
Sep 6, 2020
329
1,160
106
AMD Yellow Carp - Rembrandt?


It seems it has no L3 cache, but instead has 2MB of L2 cache (also 6 CUs? or 6 per SA?):




Support for a disabled SA (shader array) - does that mean there are at least two?

Yeah it's Rembrandt, 12 CU with 2 SA and no L3/IC.
 
  • Wow
Reactions: Gideon

GodisanAtheist

Diamond Member
Nov 16, 2006
6,776
7,103
136
Yeah it's Rembrandt, 12 CU with 2 SA and no L3/IC.

- It's kind of a bummer about no L3/IC. Seems like a dollop of IC is exactly what something like an APU needs to skirt the ultra-low bandwidth limitation of having to use system memory.

Maybe there is a weird inverse situation where once bandwidth drops low enough you actually start needing a higher IC ratio, because the memory hits would be so slow that a small amount of IC wouldn't amount to any real-world performance gain. IC is just there to "buffer/mask" trips to memory after all, not eliminate them.
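A toy model of that trade-off (my own sketch, with made-up numbers): if the cache absorbs a fraction of the traffic, DRAM only has to serve the misses, so effective bandwidth is roughly memory bandwidth divided by the miss rate. A small cache with a low hit rate barely moves the needle.

#include <stdio.h>

/* Toy bandwidth-amplification model for an Infinity-Cache-style LLC:
 * DRAM only serves misses, so effective BW ~ mem_bw / (1 - hit_rate),
 * assuming the cache itself isn't the limit. */
static double effective_bw(double mem_bw, double hit_rate)
{
    return mem_bw / (1.0 - hit_rate);
}

int main(void)
{
    const double apu_mem_bw = 50.0;  /* made-up shared-DDR bandwidth, GB/s */
    printf("20%% hit rate: ~%.0f GB/s effective\n", effective_bw(apu_mem_bw, 0.20));  /* ~63  */
    printf("60%% hit rate: ~%.0f GB/s effective\n", effective_bw(apu_mem_bw, 0.60));  /* ~125 */
    return 0;
}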
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
- It's kind of a bummer about no L3/IC. Seems like a dollop of IC is exactly what something like an APU needs to skirt the ultra-low bandwidth limitation of having to use system memory.

Maybe there is a weird inverse situation where once bandwidth drops low enough you actually start needing a higher IC ratio, because the memory hits would be so slow that a small amount of IC wouldn't amount to any real-world performance gain. IC is just there to "buffer/mask" trips to memory after all, not eliminate them.
I thought the IC prevents multiple trips to memory by different CUs for the same data, so it's a lot more than a buffer.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,776
7,103
136
I thought the IC prevents multiple trips to memory by different CUs for the same data, so it's a lot more than a buffer.

-You're right, I was vastly understating IC, but I'm thinking that saving trips to main memory might be worthless if the trips to main memory take so long that they stall whatever work the IC allows the CUs to do in the meantime.

Just seems odd to leave out IC where it may potentially have the largest impact on performance.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
-You're right, I was vastly understating IC, but I'm thinking that saving trips to main memory might be worthless if the trips to main memory take so long that they stall whatever work the IC allows the CUs to do in the meantime.

Just seems odd to leave out IC where it may potentially have the largest impact on performance.
Probably just cost factors, if it was left out, as the IC is basically 'fetch once, use many'.
 

Bigos

Member
Jun 2, 2019
127
282
136
Pretty nil. The CCX L3 cache is mostly accessible from the CPU cores themselves. Also, the cache line size differs (64 bytes vs 128 bytes) between the CPU and GPU cache hierarchies.

I believe the GPU has coherent access to the CPU caches, but it is not the fastest path there is. For the GPU the fastest is CPU-incoherent memory, i.e. straight from the GPU caches to memory.

Unless they finally change things with Rembrandt, but I doubt it.
 
  • Like
Reactions: scineram

dr1337

Senior member
May 25, 2020
330
551
106
What is the chance that the iGPU just doesn't have a dedicated L3$/IC but goes through the CPU/package's L3$, allowing that to work as IC, assuming sufficient free space is left?
I feel like this is 100% what they're doing, especially since the L2 cache is much bigger. I'd think they're using that extra cache to store tags to accelerate loading from the CPU L3. Or, on the other hand, maybe the CPU cache isn't shared and the bigger L2 is there just to help performance.

I am also overly confident that we will 100% see the V-Cache chiplet on Rembrandt and that the GPU will definitely be able to use it.
 

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
I feel like this is 100% what they're doing, especially since the L2 cache is much bigger. I'd think they're using that extra cache to store tags to accelerate loading from the CPU L3. Or, on the other hand, maybe the CPU cache isn't shared and the bigger L2 is there just to help performance.

I am also overly confident that we will 100% see the V-Cache chiplet on Rembrandt and that the GPU will definitely be able to use it.
Looking at the layout of these APUs, I don't see a separate V-Cache chiplet as feasible. At this point in time, it appears that insulating the active logic portions of an SoC is too difficult at economic price points.

So the question becomes: where do you place the cache chiplet?