Question Speculation: RDNA2 + CDNA Architectures thread

Page 86

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
All die sizes are within 5mm^2. The poster here has been right about some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,037
136
You are forgetting heat density. On Renoir it's not really a problem with the 8CU Vega, but Radeon VII and N10 already had some heat density issues. So going dense doesn't help if you then can't cool it. HP libraries will help with clocks and heat density.

And then there's what was speculated about before: the bigger die size could also be due to big caches, which themselves might actually help with heat density.
I thought about the heat density, but I am not sure about it. Is heat dissipation worse for my 120CU chip than for the 80CU chip when it has a comparable die size and most likely lower power consumption?

HP libraries will increase clocks, but with that comes higher power consumption.

I can understand that big caches can help with heat concentration, but what about performance? Can a 128MB L2 cache compensate for a missing 128-256bit bus? You can't really use it as a framebuffer; it's not nearly big enough.
3840x2160 * (64bit (HDR color depth) + 32bit (z-buffer)) * 3 (triple buffering) / 8 = ~300MB (if you want MSAA, multiply it by 2x, 4x or 8x)
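The back-of-the-envelope math above can be checked with a short script. The 64bit HDR color + 32bit z-buffer per-pixel layout and triple buffering are the post's assumptions, not a statement about any real GPU's framebuffer format:

```python
# Framebuffer size estimate for 4K with HDR color + z-buffer, triple buffered.
WIDTH, HEIGHT = 3840, 2160
COLOR_BITS = 64        # HDR color depth assumed in the post (16 bits per RGBA channel)
DEPTH_BITS = 32        # z-buffer
BUFFERS = 3            # triple buffering

bits = WIDTH * HEIGHT * (COLOR_BITS + DEPTH_BITS) * BUFFERS
megabytes = bits / 8 / 1024 / 1024

print(f"~{megabytes:.0f} MB")  # roughly the ~300MB figure from the post
for msaa in (2, 4, 8):
    print(f"{msaa}x MSAA: ~{megabytes * msaa:.0f} MB")
```

Either way, the result is an order of magnitude bigger than 128MB, which is the poster's point.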
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,523
3,037
136
I don’t care about die sizes. Right now we have actual proof of specs from AMD regarding 3 of the 4 chips.

EDIT: AMD leaked the firmware of the actual cards via ROCm and a Linux PR and then later tried to retract it (via git, rofl). This happened just a couple of days ago.
But I do care about both die size and CU count.
And how do we know if what was written in the firmware is actually correct and it wasn't a deliberate move from AMD?
 

coercitiv

Diamond Member
Jan 24, 2014
6,677
14,275
136
And how do we know if what was written in the firmware is actually correct and it wasn't a deliberate move from AMD?
You know because they do this with a purpose, and that purpose is linked to engineering needs, not marketing. AMD has determined silence is their desired strategy this time around, which excludes purposely disseminating false information.

No poor Volta, no overclocking dream, no borderline fake efficiency demo, no 150W TDP promise, just awkward silence and emoticons.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,747
6,598
136
That would be next-level mind-games by AMD. Tell the engineers to intentionally put wrong stuff into the commits to confuse the competition and leakers. I like it actually. Would be funny if true.
Nobody will allow them to upstream it, and it also means they miss the kernel merge windows.
Linus Torvalds will not spend his weekend merging fake commits from any vendor.
The ramification would be that AMD is never allowed to upstream again and can only work on their stuff downstream, like NVIDIA.

For the end user it means that, on any distro they use, it won't work out of the box.
They would need to install some debian/rpm packages from somewhere to make things work.
And this defeats the whole purpose of open source. Might as well go the NVidia way and just deliver rpms/debs/binaries.

It also means other non-AMD contributors to AMD source code cannot work.
Valve, Intel, Google, Microsoft, Oracle, Red Hat, etc. contribute to AMD code; imagine if you pushed fake things.
 
Last edited:

A///

Diamond Member
Feb 24, 2017
4,351
3,158
136
Even then, the monitoring that people do of the open source world doesn't paint much of a picture. At least you can buy a foot warmer this Christmas if you fancy NVidia more.

You know because they do this with a purpose, and that purpose is linked to engineering needs, not marketing. AMD has determined silence is their desired strategy this time around, which excludes purposely disseminating false information.

No poor Volta, no overclocking dream, no borderline fake efficiency demo, no 150W TDP promise, just awkward silence and emoticons.
I sometimes feel this was heavily Raja-influenced. A few weeks ago he posted a photo of himself holding a bottle of pure capsicum gel. You know. Spicy. The moment certain people left AMD's marketing, the BS died down. Intel and NVidia hired those people.
 
  • Like
Reactions: Tlh97 and Mopetar

maddie

Diamond Member
Jul 18, 2010
4,881
4,951
136
You are forgetting heat density. On Renoir it's not really a problem with the 8CU Vega, but Radeon VII and N10 already had some heat density issues. So going dense doesn't help if you then can't cool it. HP libraries will help with clocks and heat density.

And then there's what was speculated about before: the bigger die size could also be due to big caches, which themselves might actually help with heat density.
I think heat density is a problem to solve at the CU level, not a total die area problem. Heat density doesn't increase with more CUs, since the circuit density is the same. More heat spread over a proportionally greater area is not a cooling problem. What causes hot spots is the arrangement of high load/power circuits close together, and that is independent of CU count.
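The point above can be illustrated with toy numbers. The per-CU power and area figures here are invented purely for illustration and don't correspond to any real RDNA part:

```python
# Toy illustration: heat density (W/mm^2) vs CU count, assuming power and
# area both scale linearly with the number of CUs. Figures are invented.
POWER_PER_CU_W = 2.5    # hypothetical watts per CU
AREA_PER_CU_MM2 = 2.0   # hypothetical mm^2 per CU

for cus in (40, 80, 120):
    power = cus * POWER_PER_CU_W
    area = cus * AREA_PER_CU_MM2
    print(f"{cus} CUs: {power:.0f} W over {area:.0f} mm^2 -> {power / area:.2f} W/mm^2")
# Heat density stays constant; only total heat (and total area) grows,
# which a proportionally larger cooler can absorb.
```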
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,305
1,218
136
Look at the power usage chart. Notice the 5700 and 5700 XT compared to the 3080. If AMD comes in anywhere close to those plus 50W, that will be a win against the 3080 on power consumption alone. Nvidia set the bar very low with the excessive power usage of the 3080.

 

kurosaki

Senior member
Feb 7, 2019
258
250
86
Valve, Intel, Google, Microsoft, Oracle, Redhat, etc contribute to AMD code, imagine if you push fake things.
So you're saying that AMD has managed to conspire with Valve, Intel, Google, Microsoft, Oracle, Redhat AND Linus Torvalds etc. to contribute phony AMD code? This might be huge! 240 CU, 150TF, 6900XT confirmed!

#The_Linux_Conspiracy_2020


;)
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
Ok, then 80CU is max.

Currently it looks that way, but then again the majority of gamers game on Windows anyways.

So you're saying that AMD has managed to conspire with Valve, Intel, Google, Microsoft, Oracle, Redhat AND Linus Torvalds etc. to contribute phony AMD code? This might be huge! 240 CU, 150TF, 6900XT confirmed!

#The_Linux_Conspiracy_2020


;)

Hypothetically speaking... What would the consequences be if AMD decided not to put out a larger CU count offering at this time? No phony code, just no code yet.

No hype train fuel. Just curious about it, as I'd imagine the majority of the intended end users would be on Windows anyway.
 

blckgrffn

Diamond Member
May 1, 2003
9,300
3,442
136
www.teamjuchems.com
Occam's razor ruining all the fun...

Per the usual.

Now don't be laying razors on these tracks! The train refuses to derail because of these silly things!

@Kenmitch a ways back in this thread, didn't we see a commit comment that essentially said they were going to stop including hardware specs in the code? If that's true I guess all we would see is a new card ID show up? And what's to stop that from dropping at release or just before launch?

We definitely need more fuel for the hype train, what with all these people talking sense in this thread.
 

ModEl4

Member
Oct 14, 2019
71
33
61
It is interesting though that the rumored Big Navi clocks are only 2.1GHz. If true, it is an indication that Big Navi is N7 and PS5 is N7+ EUV (or N7P, but I doubt it). And then we have N7E for Series X. I wonder how difficult or time consuming it would be to convert the RDNA2 design to all these processes.

With RDNA1 we had 1905MHz for the 5700XT, 1845MHz for the 5500XT (-3%), and then the comfort zone (a good compromise between performance/W) for the design was at 1750MHz (the highest 5600XT designs, mild OC 5700 designs). The PS5's 2230MHz figure should be the equivalent of the 5500XT's RDNA1 clock level (judging by the Mark Cerny presentation and the way he phrased his comments regarding the clocks), meaning that a classic RDNA2 AIB design on an N7+ EUV process could hit 2.3GHz (2.5GHz air cooled). Since N7+ EUV performance is +10% relative to N7, the 2.1GHz clocks would then be justified on N7.

Also, the +10% perf that the EUV design enjoys relative to plain N7 is from the new TSMC report; a year ago it was at the same performance, and 2.5 years ago the EUV performance projections in official TSMC reports were worse than N7. N7+ EUV was not as successful as other TSMC nodes, and it would be logical to assume that for TSMC to improve the performance of the node so much in the last year, it should have a design and a partner in volume production, such as Sony and the PS5. It fits somehow. Also, if Sony wanted the option to migrate to a smaller EUV process in a later phase of the console's life, it would be easier, I would assume.

Anyway, enough with the speculation. A classic 2.1GHz RDNA2 design with just 5% IPC improvements, 128 RBEs, 8 primitives and 80CUs with enough bandwidth to feed the RBEs (1TB/s) would be past the 3080 by more than 10% easily. I really don't know what to think about the rumored 256bit memory bus/128RBE/128MB cache design. lol, the ArtX dream lives on. We will see.
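The clock relationships in the post can be sanity-checked numerically. All the MHz figures are the post's own rumors and spec sheets, and the +10% EUV uplift is the post's claim, not a confirmed TSMC number:

```python
# Sanity-check the clock ratios quoted in the post.
rdna1_5700xt = 1905  # MHz, RDNA1 reference
rdna1_5500xt = 1845  # MHz
print(f"5500XT vs 5700XT: {rdna1_5500xt / rdna1_5700xt - 1:+.1%}")  # about -3%, as the post says

big_navi_n7 = 2100   # MHz, rumored Big Navi clock
euv_uplift = 1.10    # +10% perf claimed for N7+ EUV over N7
print(f"Implied N7+ EUV clock: ~{big_navi_n7 * euv_uplift:.0f} MHz")  # lands near the PS5's 2230-2300MHz range
```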
 
  • Like
Reactions: Tlh97

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
It is interesting though that the rumored Big Navi clocks are only 2.1GHz. If true, it is an indication that Big Navi is N7 and PS5 is N7+ EUV (or N7P, but I doubt). And then we have N7E for series X, ...

All three are most likely on the same process, and the clock speed differences reflect what they need to be to hit power targets, with Sony pushing the hardest to trade power for performance. IIRC, the PS5 uses up to 340W from a fairly small die.
 
  • Like
  • Haha
Reactions: Krteq and PJVol

NostaSeronx

Diamond Member
Sep 18, 2011
3,706
1,233
136
Arden, Ariel, Lucienne and Mero are all N7-enhanced.

Arden = Anaconda in Xbox Series X
Mero = Lockhart in Xbox Series S
Ariel = Oberon in Playstation 5
Lucienne = Refresh of Renoir ==> Acton which is the Surface Edition, like Winston is to Picasso and not to Raven Ridge.

Seems Models x8h-xFh are N7e for 40h-4Fh(PS5), 60h-6Fh(RNR), 80h-8Fh(XSX), 90h-9Fh(XSS).

However, mainline products of CDNA + RDNA2 might not be N7-enhanced.
 
Last edited:

NostaSeronx

Diamond Member
Sep 18, 2011
3,706
1,233
136
Isn't the PlayStation 5 set for 150 watts, just like the PS4 Pro?

Take the max CPU cluster power out of the PS4 Pro's 150W => 2x25W => 100W for the GPU. Which is basically really close to 1266 MHz @ 150W for Polaris10.
Take the max CPU cluster power out of the PS5's 150W => 1x25W => 125W for the GPU. RDNA2 isn't out yet, so you can't compare 125W to Navy Flounder or anything.

100W @ 911 MHz vs 125W @ 2.33 GHz
~1.5x
50W @ 911 MHz or 100W @ 1.3665 GHz
~1.5x
25W @ 911 MHz or 100W @ 2.05 GHz

100W to 125W
~1.122321429x (150W to 185W ratio-ed to 100 to 125W)
125W @ 2.3 GHz

( ͡° ͜ʖ ͡°) Stop spreading these lies. The PS5 is not power hungry at all.
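The budget-subtraction step in the post can be written out explicitly. This is a minimal sketch of the poster's own assumptions (150W system budget, 25W per CPU cluster, the PS5's CPU counted as a single cluster); `gpu_budget` is a hypothetical helper name, not anything from AMD or Sony:

```python
# GPU power budget estimate: total system power minus CPU cluster power.
# All figures are the post's assumptions, not measured values.
def gpu_budget(total_w, cpu_clusters, watts_per_cluster=25):
    """Subtract CPU cluster power from the total to estimate GPU headroom."""
    return total_w - cpu_clusters * watts_per_cluster

print(gpu_budget(150, cpu_clusters=2))  # PS4 Pro: two Jaguar clusters -> 100 W left for the GPU
print(gpu_budget(150, cpu_clusters=1))  # PS5: CPU counted as one 25W cluster -> 125 W left
```

Whether 150W is actually the PS5's system budget is exactly what the following posts dispute.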
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
No, we don't know anything about real power consumption. Goes for both PS5 and XBSX/XBSS. The PSU has nothing to do with it.

Why? Because there's a 310W rated PSU inside the PS4 Pro yet power consumption under full load is around like <=180W.

Even less: Digital Foundry measured 155W in Infamous First Light at 4K on the PS4 Pro, even though it had a 310W rated PSU.


The Xbox One X had a power draw of 175W with a much smaller 245W rated PSU.


I expect the Xbox Series X's total power draw to be 200-210W and the PS5's below 200W, given the lower CPU clocks and the use of SmartShift to intelligently move power between CPU and GPU.
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
Isn't the Playstation 5 set for 150 watts just like the PS4 Pro?

Take the max CPU cluster power out of the PS4 Pro => 2x25W => 100W for GPU. Which is basically really close to 1266 MHz @ 150W for Polaris10.
Take the max CPU cluster power out of the PS5 => 1x25W => 125W for GPU. RDNA2 isn't out yet so can't compare 125W to Navy Flounder or anything.

100W @ 911 MHz vs 125W @ 2.33 GHz
~1.5x
50W @ 911 MHz or 100W @ 1.3665 GHz
~1.5x
25W @ 911 MHz or 100W @ 2.05 GHz

100W to 125W
~1.122321429x (150W to 185W ratio-ed to 100 to 125W)
125W @ 2.3 GHz

( ͡° ͜ʖ ͡°) Stop spreading these lies. The PS5 is not power hungry at all.

I am more inclined to believe closer to 300W than 150W for the PS5, just based on how large (and frankly unwieldy) the chassis is this time around. It's either that or believing that Sony's engineers are straight up bad at their jobs, as the X1X is tiny in comparison and can still dissipate ~175-180W of heat without much trouble.
 
  • Like
Reactions: Elfear