Question Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,629
5,938
146
All die sizes are within 5mm^2. The poster here has been right on some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi 21, which other people have backed up. Even still, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

Veradun

Senior member
Jul 29, 2016
564
780
136
That's the issue: they don't have a die between 40 and 80 CUs. I agree a 48CU cut seems like a lot; it's too close to a 40CU full die and far from the 60CU one, but 56 is too close to 60. More likely the 40CU die has higher clocks, so it acts more like 50CUs, making the difference to 60CU a little smaller. But AMD for sure has a "hole" in their stack around the 50CU mark that is hard to fill economically.
Why not 52?
 

Timorous

Golden Member
Oct 27, 2008
1,611
2,764
136
12GB vRAM for N22 according to _rogame, so 192b bus.

I wonder if MALL will be 96MB?

Maybe. The bus is larger relative to N21, but it is useful to have 12GB in that performance segment (2080 Ti +/-). Not sure how they expect to get 40 CUs to that performance level, although if you can increase clocks 30% at the same power draw, a 2.5GHz 40CU card is possible at 225W. I don't see that providing the 50% performance gain that is needed to match the 2080 Ti / 3070.

52 CUs @ 2.2GHz would probably be 2080 Ti level, but nothing points to a 52CU N22 die, so that's also doubtful.

I think the only way for AMD to hit 3070/2080 Ti performance is to further cut N21, because I don't think N22 will make it. The other option is that AMD leaves that open and offers a card with 90% of 3070 performance and 12GB of RAM (so it wins in Doom and Wolfenstein with max textures turned on but trails in most everything else) for $400, but that leaves quite a large pricing hole between the 6700 XT and the 6800.
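As a quick back-of-the-envelope sketch of that claim: naive linear scaling in clocks (already optimistic, since real games scale worse than clocks) says a 30% clock bump buys at most ~30% more performance, well short of the ~50% needed. The baseline clock below is a placeholder assumption, not a leaked figure:

```python
# Back-of-the-envelope for the claim above. Naive linear scaling in clocks
# is an optimistic upper bound; real performance scales worse.
base_clock = 1.925   # GHz, assumed 40 CU baseline game clock (placeholder)
oc_clock = 2.5       # GHz, the speculated clock at the same 225W

clock_gain = oc_clock / base_clock - 1
needed_gain = 0.50   # gap to 2080 Ti / 3070 performance, per the post

print(f"clock gain: {clock_gain:.0%}")                # ~30%
print(f"shortfall:  {needed_gain - clock_gain:.0%}")  # ~20 points short
```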
 

richierich1212

Platinum Member
Jul 5, 2002
2,741
360
126
A side note.

Based on what I am hearing, AMD's marketing team underplayed the efficiency of RDNA2 GPUs, again.

Those 250W and 300W TBP figures are supposedly from when the GPUs are pushed absolutely to the wall.

In normal circumstances we should see the 6900 XT averaging around 275W.

The 6800 XT will be more like 250-260W.

Why would they sandbag? Do they want reviewers to reveal lower power consumption numbers? Leave more power on the table for OC versions?
 

Tup3x

Senior member
Dec 31, 2016
963
948
136

With hardware acceleration disabled and simply relying on the software DXR fallback layer, the AMD Radeon RX 6000 graphics card scored 34 FPS. With hardware acceleration enabled and making full use of the newly added Ray Accelerator cores, the AMD Radeon RX 6000 series graphics card saw a 13.8x speedup. The same RDNA 2 graphics card scored 471 FPS in the application, which is a huge uplift in performance.

Compared to NVIDIA's first-generation ray-tracing implementation, the GeForce RTX 2080 scores around 308 FPS. This means the AMD RDNA 2 based Radeon RX 6000 series graphics card holds a roughly 50% lead over NVIDIA's first-generation RTX graphics card. The GeForce RTX 2080 Ti scores around 390 FPS, which puts the AMD Radeon RX 6000 graphics card about 20% ahead.

A user on AMD's subreddit posted the score of his stock ASUS GeForce RTX 3080 TUF Gaming graphics card, which delivered 630 FPS in the same benchmark. This puts NVIDIA's 2nd-generation RT cores around 33% faster than AMD's 1st-gen Ray Accelerators.

We still don't know which graphics card was used for this comparison, as the RDNA 2 lineup includes the 60 RA RX 6800, the 72 RA RX 6800 XT and the 80 RA RX 6900 XT.

There's a lot that needs to be discussed about AMD's ray tracing implementation, and we hope AMD provides more details prior to the embargo lift, which should be sometime in the coming month.
That doesn't run too badly on GTX 1070...
[attached image: rt.png — benchmark result on a GTX 1070]
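For what it's worth, the quoted claims all reduce to simple ratios of the scores given above; a minimal check (the article's 13.8x/50%/20%/33% figures are just rounded versions of these):

```python
# Ratios implied by the scores quoted above.
scores = {
    "RX 6000 (software DXR)": 34,
    "RX 6000 (hardware RT)": 471,
    "RTX 2080": 308,
    "RTX 2080 Ti": 390,
    "RTX 3080 (user run)": 630,
}

rx = scores["RX 6000 (hardware RT)"]
print(f"HW vs SW fallback: {rx / scores['RX 6000 (software DXR)']:.1f}x")  # ~13.9x
print(f"vs RTX 2080:    +{rx / scores['RTX 2080'] - 1:.0%}")               # ~+53%
print(f"vs RTX 2080 Ti: +{rx / scores['RTX 2080 Ti'] - 1:.0%}")            # ~+21%
print(f"RTX 3080 vs RX: +{scores['RTX 3080 (user run)'] / rx - 1:.0%}")    # ~+34%
```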
 
  • Like
Reactions: psolord

Tuna-Fish

Golden Member
Mar 4, 2011
1,349
1,534
136
That's an interesting question. A 40CU-class card probably doesn't need more than 8GB even going forward, but for 8GB it would be running either a 128b or 256b bus. Even with Infinity Cache, 128b would be woefully inadequate, so if they went that route you'd think they'd need 256b/8GB. A 6/12GB card with a 192b bus would be a possibility, and would give some marketing points in the larger config. In either case, I'd expect to see 64MB of IC on the 40CU die.

128b won't be woefully inadequate for 40CU unless 256b is woefully inadequate for 80CU.

Regardless, based on the driver leaks N22 has a 192-bit bus.

The big question for me is: why is N23 so close to N22? It's just 8 CUs less, albeit with a 128-bit bus.
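For reference, the bandwidth-per-CU arithmetic behind that point, assuming the same 16 Gbps GDDR6 on every die (an assumption) and ignoring Infinity Cache hit rates:

```python
# Raw GDDR6 bandwidth per CU for each rumored config.
# 16 Gbps memory is an assumption; Infinity Cache effects are ignored.
GBPS = 16

def bandwidth(bus_bits):
    """GB/s of raw memory bandwidth for a given bus width."""
    return bus_bits * GBPS / 8

for name, bus, cus in [("N21", 256, 80), ("N22", 192, 40), ("N23", 128, 32)]:
    bw = bandwidth(bus)
    print(f"{name}: {bus}b -> {bw:.0f} GB/s, {bw / cus:.1f} GB/s per CU")

# A hypothetical 128b/40CU config would land at 6.4 GB/s per CU --
# exactly N21's 256b/80CU ratio, which is the point made above.
```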
 

Ajay

Lifer
Jan 8, 2001
15,451
7,861
136
Maybe. The bus is larger relative to N21, but it is useful to have 12GB in that performance segment (2080 Ti +/-). Not sure how they expect to get 40 CUs to that performance level, although if you can increase clocks 30% at the same power draw, a 2.5GHz 40CU card is possible at 225W. I don't see that providing the 50% performance gain that is needed to match the 2080 Ti / 3070.

52 CUs @ 2.2GHz would probably be 2080 Ti level, but nothing points to a 52CU N22 die, so that's also doubtful.

I think the only way for AMD to hit 3070/2080 Ti performance is to further cut N21, because I don't think N22 will make it. The other option is that AMD leaves that open and offers a card with 90% of 3070 performance and 12GB of RAM (so it wins in Doom and Wolfenstein with max textures turned on but trails in most everything else) for $400, but that leaves quite a large pricing hole between the 6700 XT and the 6800.

AMD already has an RTX 3070 match (or better) with the 6800. I think N22 with the top clocks of the 6800 XT would be a fine GFX card.

That doesn't run too badly on GTX 1070...
[attached image: rt.png]

Hideously low poly count. Doesn't represent any sort of in-game workload.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
Why would they sandbag? Do they want reviewers to reveal lower power consumption numbers? Leave more power on the table for OC versions?
Everything AMD does is a fiasco.

Like "look, it exceeds PCIe specs" (as other cards did before, something everyone kept quiet about because who cares).

So if there is a super corner case in which it draws 300W, they need to use that as the spec to reduce the attack surface for the Nvidia fanboy base.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
Do we know for sure that N22 is 40 CU? Cutting N21 any more than it already is seems unlikely, and the number of such chips is probably so low that it would be an OEM card anyway.

40 CU N22 leaves a big hole, as others have stated. Either they're really confident in hitting high clocks to make up for the lower CU count, or there's a plan to eventually release a ~60 CU die mid product cycle. A 48 CU N22 fills the gap a lot better and would likely hit 2080 Ti performance with a game clock slightly above the 6800 XT's. If it had a 2050 MHz game clock and 48 CUs, it would be roughly 90% of a 6800, assuming no memory bandwidth issues. A 40 CU part would need a game clock of around 2450 MHz to hit that same 90% performance level.
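Those 90% figures follow from naive CU x clock scaling against the RX 6800's 60 CUs at its 1815 MHz game clock; a minimal sketch, assuming perfectly linear scaling and no bandwidth limits:

```python
# Naive CU x clock scaling behind the 90%-of-6800 figures above.
# Assumes perfectly linear scaling and no memory bandwidth limits.
ref = 60 * 1815   # RX 6800: 60 CUs at its 1815 MHz game clock

for cus, mhz in [(48, 2050), (40, 2450)]:
    print(f"{cus} CU @ {mhz} MHz -> {cus * mhz / ref:.0%} of RX 6800")
# 48 CU @ 2050 MHz -> 90%; 40 CU @ 2450 MHz -> 90%
```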

Why would they sandbag? Do they want reviewers to reveal lower power consumption numbers? Leave more power on the table for OC versions?

Better to underpromise and over-deliver than the other way around. Most people wait for reviews anyhow, and it isn't as though Nvidia can sell hundreds of thousands (or maybe even just thousands) of their cards before that point.
 
  • Like
Reactions: richierich1212

Ajay

Lifer
Jan 8, 2001
15,451
7,861
136
128b won't be woefully inadequate for 40CU unless 256b is woefully inadequate for 80CU.

Regardless, based on the driver leaks N22 has a 192-bit bus.

The big question for me is: why is N23 so close to N22? It's just 8 CUs less, albeit with a 128-bit bus.
Maybe N23 doesn't have MALL (sorry, Infinity Cache*).

*and beyond!
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
I don't think we see N22 (whatever it may be) until next year. AMD has too many other dies competing for wafers right now and precious little window left for a product we haven't heard anything about officially yet.

Saving it for early 2021 would let them build up inventory for what will no doubt be a higher-volume seller if it comes in at $300-$500. They'd also have something to unveil at CES.

I'd still like to see it come out as a 48 CU part, if only because it fits in the overall lineup better, but it may not matter if the clock speeds can be bumped up a lot higher, similar to what we see with the PS5.
 

Glo.

Diamond Member
Apr 25, 2015
5,707
4,552
136
Why not 52?
Because you want to disable at most one DCU (2 CUs) per SA, and from 3 Shader Arrays that makes 54 CUs the deepest cut possible.
I don't think we see N22 (whatever it may be) until next year. AMD has too many other dues competing for wafers right now and precious little window left for a product we haven't heard anything about officially yet.

Saving it for early 2021 would let them build up inventory for what will no doubt be a higher volume seller if it comes in at $300 - $500. They also have something to unveil at CES as well.

I'd still like to see it come out as a 48 CU part of only because it fits in the overall lineup better, but it may not matter if the clock speeds can be bumped up a lot higher similar to what we see with the PS5.
Navi 22 CANNOT come as a 48 CU die.

The specs for N22: 40 CUs / 192-bit bus.

There is one possibility for a 48 CU N22: if AMD finds a way to attach the 8 CUs missing from the Navi 21 die in the RX 6800 XT SKU onto the Navi 22 die!
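To illustrate the DCU constraint described above, a toy sketch: salvage cuts happen in whole DCUs (2 CUs each), at most one per Shader Array. The die configuration below (3 SAs of 20 CUs) is hypothetical, chosen only to show how 54, and not 52, falls out of 3 Shader Arrays:

```python
# Toy model of salvage configs: disable whole DCUs (2 CUs each),
# at most one per Shader Array. Die config is hypothetical.
def salvage_counts(num_sas, cus_per_sa, dcu_cus=2):
    total = num_sas * cus_per_sa
    # disable one DCU in k of the SAs, for k = 0..num_sas
    return [total - dcu_cus * k for k in range(num_sas + 1)]

print(salvage_counts(3, 20))
# [60, 58, 56, 54] -- 52 would need two DCUs off a single SA
```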
 
  • Like
Reactions: lightmanek

Tuna-Fish

Golden Member
Mar 4, 2011
1,349
1,534
136
Do we know for sure that N22 is 40 CU? Cutting N21 any more than it already is seems unlikely, and the number of such chips is probably so low that it would be an OEM card anyway.

The reason we had all the numbers about N21 is that they were decoded out of the macOS drivers. Those drivers make it really clear that N22 is 40CU. They might be wrong, ofc, but they were right about everything in N21...

Maybe N23 doesn't have MALL (sorry, Infinity Cache*).

That's a serious possibility I didn't even consider. Supposedly their RDNA2 APUs will have Infinity Cache, so I expected it would be used (in varying amounts) across the entire lineup. I think they could cut a lot of cost out of a 32 CU GPU without one, though...

*and beyond!

So there is a rumor going around that AMD had a different internal name (probably MALL) for the tech until RGT called it Infinity Cache, and their marketers went "huh, I like the sound of that". Dunno if true, but they did register the trademark only after RGT posted that video. I would love it if it were true, though.
 

Glo.

Diamond Member
Apr 25, 2015
5,707
4,552
136
What would be the expected performance of the 32CU part?
First of all, this is a mobile-first die.

So expect lower POWER DRAW targets compared to N22, and as a consequence, lower clock speeds.

RDNA2 has 10-15% higher IPC than RDNA1, so those 32 CUs should perform like around 36 CUs from RDNA1. So we are already looking at, at worst, RX 5600 XT/RX 5700 performance levels.

I would expect that this die will have a 2.3 GHz maximum boost, with game clocks slightly above 2.1 GHz.

Comparatively, the RX 5600 XT has 36 CUs, with a 1.715 GHz maximum boost and a 1.65 GHz game clock. So we are looking at 350-450 MHz higher game clocks.

We should expect around 25% higher performance than the RX 5600 XT, so around RTX 2070 Super/RTX 2080 (non-Super) level.
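Running the post's own numbers as a naive back-of-the-envelope (the +12.5% IPC midpoint and the ~2.1 GHz game clock are the post's assumptions; linear scaling is optimistic):

```python
# Naive scaling check of the estimate above, using the post's assumptions.
ipc_gain = 1.125             # midpoint of the quoted 10-15% IPC uplift
equiv_cus = 32 * ipc_gain    # ~36 RDNA1-equivalent CUs
clock_ratio = 2.1 / 1.65     # assumed game clock vs the RX 5600 XT's

naive_gain = (equiv_cus / 36) * clock_ratio - 1
print(f"~{naive_gain:.0%} over RX 5600 XT")  # ~27%, near the ~25% quoted
```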
 

zinfamous

No Lifer
Jul 12, 2006
110,587
29,213
146
Better to underpromise and over-deliver than the other way around. Most people wait for reviews anyhow, and it isn't as though Nvidia can sell hundreds of thousands (or maybe even just thousands) of their cards before that point.

Could be a classic case of AMD overvolting their cards on release/reference models to meet those "competitive specs", with very high power draw, only for consumers to realize that a slight undervolt will significantly cut the power draw and actually increase performance. ...could be, maybe?

Except in this case, they would be advertising that this undisclosed overvolt is already a "winner" on power and already a match/winner on performance. So that would be a pleasant surprise indeed, and it's only a matter of them doing what they always do anyway, right?

:D

3rd party reviews should be fun. Even if they basically replicate everything that AMD has shown so far, that's still really good.
 
  • Like
Reactions: lightmanek

Glo.

Diamond Member
Apr 25, 2015
5,707
4,552
136

40 CU GPU will be around 90-95% of RTX 3070.
Antey

Member
Jul 4, 2019
105
153
116
40 CU GPU will be around 90-95% of RTX 3070.

Such a small difference between a 40 CU and a 60 CU GPU? That's a 50% CU difference. I'm expecting more like a 35% difference between these two, from 80-85% to 110-115% of the 3070. If it ends up being 90-95%, I will be glad to be wrong, for sure.
 

Glo.

Diamond Member
Apr 25, 2015
5,707
4,552
136
Such a small difference between a 40 CU and a 60 CU GPU? That's a 50% CU difference. I'm expecting more like a 35% difference between these two, from 80-85% to 110-115% of the 3070. If it ends up being 90-95%, I will be glad to be wrong, for sure.
Have you considered that the 40CU die is designed for WAY higher core clocks?

The macOS leak touted a 2.2 GHz max boost clock for the Navi 21 die and a 2.5 GHz max boost clock for Navi 22.

So expect a smaller performance delta between those two dies.
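In naive CU x clock terms, using those leaked boost clocks and assuming linear scaling (which real workloads won't hit), the gap compresses like this:

```python
# Naive CU x clock comparison of the speculated dies. Boost clocks are
# from the macOS leak; linear scaling is an optimistic assumption.
n22 = 40 * 2.5      # Navi 22: 40 CUs @ 2.5 GHz
n21_cut = 60 * 2.2  # 60 CU Navi 21 cut @ 2.2 GHz

print(f"N22 at {n22 / n21_cut:.0%} of the 60 CU part")
# ~76% -- a ~24% delta, versus the ~33% the raw CU counts suggest
```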