Question Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
All die sizes are within 5mm^2. The poster here has been right about some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
I wonder if Apple is choosing to run Navi21 a bit slower than what we might see in the PC market...
If we go by the Navi 10 and Navi 21 entries in Apple's power tables, the 2050 MHz version of Navi 21 has a 200W TDP, and the 2200 MHz version has a 238W TDP.

As has been shown, the Radeon Pro 5700 has a 130W TDP for the GPU portion alone, per Apple's specs, at a 1400 MHz clock speed.

Wait a minute.

So it's the 2050 MHz version that has a 200W TDP. If it has a 256-bit bus, even the full board this way should be around 250W TDP with GDDR6 memory.

Holy ...
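For a rough sanity check of that board-power figure, here's a minimal Python sketch; everything besides the 200W GPU number from the power tables (memory chip count, per-module power, VRM/fan losses) is an assumed ballpark value, not a leak:

Code:
# Back-of-the-envelope board power for the rumoured 2050 MHz Navi 21 entry.
# Only the 200W GPU figure comes from the power tables above; the rest are
# assumed ballpark values for illustration.

gpu_power = 200          # W, rumoured GPU-only TDP at 2050 MHz
gddr6_modules = 8        # a 256-bit bus needs 8 x 32-bit GDDR6 chips
power_per_module = 2.5   # W, rough ballpark per 16 Gbps GDDR6 module
vrm_fan_misc = 25        # W, assumed VRM losses, fan and misc board power

board_power = gpu_power + gddr6_modules * power_per_module + vrm_fan_misc
print(f"estimated total board power: ~{board_power:.0f} W")  # ~245 W, i.e. roughly 250 W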
 

eek2121

Diamond Member
Aug 2, 2005
3,100
4,398
136
Power limit, not TDP.

Weeee!

If these rumored specs are accurate, good god I feel bad for those who blindly bought a 3080.

EDIT: Glo, I don't know why that is surprising; AMD did state a 50% perf/W uplift. They aren't the kind of company to pull numbers out of their ass these days.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,037
136
Glo. said:
If we go by the Navi 10 and Navi 21 entries in Apple's power tables, the 2050 MHz version of Navi 21 has a 200W TDP, and the 2200 MHz version has a 238W TDP.

As has been shown, the Radeon Pro 5700 has a 130W TDP for the GPU portion alone, per Apple's specs, at a 1400 MHz clock speed.

Wait a minute.

So it's the 2050 MHz version that has a 200W TDP. If it has a 256-bit bus, even the full board this way should be around 250W TDP with GDDR6 memory.

Holy ...
I don't think it will have more than 60-64 CUs.
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
OP here. These pptables are test configurations; they are not used by production hardware (you can likely boot with some parameters to use them, though). And yes, they are the same in macOS and in Linux.
Are there any clock speeds available for Navi 23 in the tables yet?
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,037
136
BTW guys, if this is true then it's very good for desktop, but think about the mobile versions! That's where it will really be impressive.
Just for comparison:
The RX 5600M in The Witcher 3 runs at 1352 MHz and its power consumption is reported as 90W. Link
The RX 5600 Pulse runs at 1712 MHz (average) and its power consumption in Metro Last Light is 160W. Link
So by decreasing the clock by 21% you lower power consumption by 44%.
If you apply the same to Navi 22 (215W TBP, ~205W average power consumption) and assume a 2350 MHz average clock, you get ~115W average power consumption at a 1975 MHz clock. Lower it by another 10% and you are at 90W or maybe less, with a 1775 MHz average clock. So you get the same power consumption as the RX 5600M but with much higher performance (31% just from the clock speed difference, then add 10% more CUs and some IPC increase).
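To make the scaling arithmetic explicit, here is a small Python sketch that derives the clock/power ratios from the RX 5600 figures above and applies them to the rumoured Navi 22 numbers (the Navi 22 TBP and clocks are speculation, not confirmed specs):

Code:
# Scaling ratios taken from the desktop RX 5600 Pulse vs mobile RX 5600M data above.
desktop_clock, desktop_power = 1712, 160   # MHz, W (Metro Last Light)
mobile_clock,  mobile_power  = 1352, 90    # MHz, W (The Witcher 3)

clock_ratio = mobile_clock / desktop_clock   # ~0.79 -> ~21% lower clock
power_ratio = mobile_power / desktop_power   # ~0.56 -> ~44% lower power

# Apply the same ratios to the rumoured Navi 22 desktop figures.
navi22_clock, navi22_power = 2350, 205       # MHz (avg), W (avg, from a 215W TBP)
print(f"mobile-style Navi 22: ~{navi22_power * power_ratio:.0f} W "
      f"at ~{navi22_clock * clock_ratio:.0f} MHz")
# -> roughly 115 W at ~1856 MHz; the post above quotes ~1975 MHz, i.e. it
#    assumes a slightly smaller clock cut than the 5600M example implies.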
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,037
136
That's another possibility: that it is a cut-down variant with 72 CUs, for example.
Bandwidth will be a problem even with 60-64 CUs. The 40 CU Navi 22 uses a 192-bit bus with 16 Gbps GDDR6, so if you increase the CU count by 50% or more and the bus width by only 33%, that won't be enough.
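To put rough numbers on that concern, here is a quick Python comparison of peak bandwidth per CU; the Navi 22 configuration is the rumour quoted above, while the Navi 21 bus width and CU counts are assumptions for illustration:

Code:
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    # Peak GDDR6 bandwidth in GB/s.
    return bus_width_bits * gbps_per_pin / 8

navi22_per_cu   = bandwidth_gbs(192, 16) / 40   # rumoured: 40 CUs, 192-bit, 16 Gbps
navi21_60_per_cu = bandwidth_gbs(256, 16) / 60  # assumed: 60 CUs on a 256-bit bus
navi21_80_per_cu = bandwidth_gbs(256, 16) / 80  # assumed: 80 CUs on a 256-bit bus

print(f"Navi 22:         {navi22_per_cu:.1f} GB/s per CU")     # ~9.6
print(f"Navi 21 (60 CU): {navi21_60_per_cu:.1f} GB/s per CU")  # ~8.5
print(f"Navi 21 (80 CU): {navi21_80_per_cu:.1f} GB/s per CU")  # ~6.4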
 
Last edited:

sandorski

No Lifer
Oct 10, 1999
70,240
5,810
126
I have the feeling that AMD will match even the 3090 in almost everything except ray tracing. The 3080 and 3090 are so close in performance that it seems highly unlikely Navi 2 won't match the 3090 if it already matches the 3080. That might require overclocking/custom cooling though, possibly not as a marketed product. The 3090 might have more professional-workload cred justifying its price and existence in this scenario.
 

ModEl4

Member
Oct 14, 2019
71
33
61
Considering the PS5 and Xbox have cut-down GPUs, the performance difference should be noticeable. The PS5's RDNA2-based GPU is a 36 CU unit. The lowest Big Navi is a 60 CU part for PC, correct? There was some discussion of the graphics not being too good where I got that video link from. We don't know how old that gameplay video is, and the game is not due for another month. It's also built atop a game that came out a year ago and wasn't designed with RT in mind. It's a good effort.

TechPowerUp had this to say about the Series X GPU. I'm afraid I don't know enough about AMD GPUs to understand how powerful a CU is, or whether a double CU is worth less than two good separate CUs.

I bring up the consoles because I was thoroughly confused by your post. That video is PS5 footage, not PC footage. It doesn't seem like there are plans to bring it to PC for now. Your 30% figure should be higher given the cut-down GPUs in the consoles. This new gen of consoles brings a lot of value to the console market, but I do not think they'll be on the same footing as PCs. Close but not the same.

I agree, close but not the same, but let's compare how close. Take a 3090 (2.5x a 5700 at 4K, per TechPowerUp). How much faster is the PS5 going to be than a 5700? Take a 5% IPC gain plus the clock difference; I would calculate with a hypothetical 2115 MHz average actual clock (anyone can redo the calculation with their own IPC and clock projections). That gives nearly +29% for the PS5 over a 5700, but I will go with +25%, assuming there is no compression-efficiency advantage in the RBEs, leading to a small deficit since the bandwidth is the same 448 GB/s (I won't go into details). With these assumptions, the 3090 is just 2x the PS5.

Why do I say just 2x? Well, in many games that is the frame-rate ratio between 4K and QHD resolution. If you check TechPowerUp's results for a Sapphire 5700 XT Nitro+ Special Edition (around -5% from this assumed PS5 level), you will see that the QHD average fps is around 1.8x the 4K figure. Of course this is an average and it depends on the game and the engine. But coding for a fixed hardware environment with the necessary optimizations, it is not illogical to assume that development teams will extract closer to 2x than 1.8x on average.

(Please don't compare against 3090 scaling; that would only mean you don't know how things work, lol. By the way, the 3090 is not just +10% over the 3080 because of some scaling wall Nvidia faced with the Ampere design; the 3080 is already system-limited, not just CPU-limited. If the rumors about Cypress Cove are true, within 4(?) months the 3080 will gain at least another +2% over the 2080 Ti going from Comet Lake-S to Rocket Lake-S (PCI Express x16 Gen4, single-thread performance, use of upcoming Gen4 SSDs, etc.). You don't have to analyze the system limitation theoretically: just check the 3090 in games that are light on CPU resources (a good indication would be 60 fps on consoles) and whose engines are designed to scale to high refresh rates (very hard), and there you will see the true difference between the 3090 and the 3080. Do you think the only reason Nvidia pursued the ray-tracing path is because they see they have an advantage over the competition?)

They should have run the simulations: just throwing more raster GPU performance into the mix no longer yields near-perfect scaling like before, because of the other system advancements that must happen concurrently. Of course I am talking about the long run, not the immediate future. (Although at the Turing launch Jensen was referring to something else: how rasterization visual quality per pixel will be much harder to scale in the future, and how all these cheating raster techniques that try to simulate realistic-looking scenes, as resolution and precision increase, end up making scenes look more fake, because the techniques break and the fake-looking result becomes much easier to perceive.) Anyway, Nvidia is further ahead than some people think. And although I think Nvidia's core team is brilliant, giving up the consoles to AMD was a major mistake by Jensen (of course he will say things like human resources and time are finite and talent is hard to hire, blah blah blah; it was a mistake). Where would AMD be without all these collaborations from the 360 era until now (on the technology level; don't underestimate Sony's and MS's contributions to the design choices and optimizations), and without the whole f...ing industry optimizing their engines for them? And still they are at 20% (please don't defend Jensen based on the 80% market share, or because he may have had concerns about monopoly regulations, etc.; he made a big mistake, a billions-of-dollars mistake). Of course I wouldn't change him for anything (lol, I'm mainly saying this because in the last presentation he wasn't himself, low energy and excitement; we want the old Jensen back 🤭). Anyway, back to the PS5: it is not irrational to accept that a PS5 will do at QHD what a 3090 can do at 4K (on console-optimized engines, eventually all of them...).
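The core of that comparison reduces to a few lines of Python; all of the inputs are the estimates from the post above (estimates and assumptions, not measurements):

Code:
# Baseline: RX 5700 performance at 4K = 1.0
rtx3090_4k = 2.5    # TechPowerUp: ~2.5x a 5700 at 4K
ps5_4k     = 1.25   # assumed above: ~+25% over a 5700 (IPC + clocks, same 448 GB/s)

print(f"3090 vs PS5 at 4K: {rtx3090_4k / ps5_4k:.1f}x")   # 2.0x

# Compare with how much frame rate a similar GPU gains dropping from 4K to QHD.
qhd_over_4k = 1.8   # TechPowerUp average for a Sapphire 5700 XT Nitro+
print(f"QHD vs 4K on the same GPU: ~{qhd_over_4k}x on PC, "
      f"arguably closer to 2x with console-level optimization")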
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
I mentioned that a certain source told me clocks in the lab were close to 3 GHz. Pro variants are usually slower (these are all Pro cards), so it would not surprise me if the final card hit 2.75-2.8 GHz.
> People start raving about ARM finally breaking 3GHz.

Lisa Su + David Wang: "Hold our beers".

Jokes aside, here's an interesting Tweet regarding this leak.