Speculation: RDNA2 + CDNA Architectures thread

uzzi38

Platinum Member
Oct 16, 2019
2,690
6,348
146
All die sizes are within 5mm^2. The poster here has been right on some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

exquisitechar

Senior member
Apr 18, 2017
664
883
136
80 CUs is the max for Navi2X; there's no hidden bigger chip. As for why it has a relatively large die size considering that, we'll find out soon enough, although I already have an idea.
 

Gideon

Golden Member
Nov 27, 2007
1,697
3,891
136
80 CUs is the max for Navi2X; there's no hidden bigger chip. As for why it has a relatively large die size considering that, we'll find out soon enough, although I already have an idea.
Makes sense. I can see them fitting in more CUs as far as density and transistor budget are concerned, but they would really start to run into power walls even with the rumored 50% perf/watt gain (or the TBP would be something as crazy as overclocked RTX 3090 models).

RTG folks sure seem to enjoy this round of rumors so far.
Overall I really hope RDNA2 is as great as Scott seems to hint, as I'm upgrading my GPU and monitor around when Cyberpunk 2077 launches.

The monitor I'll pick up will almost certainly be the LG 27GN950 (144Hz 4K IPS, HDR-600) for about 850€ if the reviews are OK. I would really like a FALD backlight and true HDR-1000, but that seems to be totally out of my budget (and there are very few new monitors with that tech).

The corresponding graphics card will hopefully be something from the RDNA2 stack. I don't really want to pay more than 900€ and it would have to be at least competitive with the RTX 3080, otherwise I'm going with that (for what will hopefully be no more than 850€ around here). For the 3080 the biggest concerns are the 320W TDP (which would hopefully be OK with the Founders Edition) and only 10GB of memory (ideally I'd want 14-16 GB). I would really prefer an all-AMD setup though: something to go with a nice Zen3 / PCIe 4.0 SSD upgrade some time later.

Overall it's still unbelievable how much I'm ready to spend on computer hardware now, considering how rarely I game these days (in the past I was strictly a 200-300€ GPU sweet-spot guy for decades, with a total rig cost of around 1000€).
 

DisEnchantment

Golden Member
Mar 3, 2017
1,659
6,101
136
An exact doubling of Navi 10 is 502mm^2 at 41 Mxtors/mm^2. If you increase this to be closer to Renoir density, the die size required for the 20.6B xtors is 343mm^2. If there is a 450+ mm^2 Navi 2x die and transistor density is in the same ballpark as Renoir, then it will have considerably more than 80 CUs. Possibly up to 120 CUs.
Renoir uses N7 HD cells, whereas Navi uses N7P; they traded density for performance.
Navi will not reach Renoir's density. Also, NV's GA102 seems to have higher density than Navi.
On paper Navi should be able to clock well beyond Renoir, but it seems Renoir can clock beyond 2.2 GHz with ease.
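
As a quick sanity check of those figures, here is the arithmetic in a short Python sketch (the ~41 Mxtors/mm^2 Navi 10 density is from the quote above; the ~60 Mxtors/mm^2 "Renoir-class" density is my assumption, back-solved from the 343mm^2 figure):

```python
# Back-of-the-envelope die-size arithmetic for the figures quoted above.
# Density numbers are approximate; the "Renoir-class" ~60 Mxtors/mm^2 is an
# assumption implied by the 343 mm^2 figure in the quoted post.

navi10_transistors = 10.3e9            # Navi 10, ~10.3B transistors
navi10_density = 41e6                  # ~41 Mxtors/mm^2 (quoted above)
renoir_like_density = 60e6             # assumed Renoir-class density

doubled_transistors = 2 * navi10_transistors   # ~20.6B xtors

size_at_navi10_density = doubled_transistors / navi10_density
size_at_renoir_density = doubled_transistors / renoir_like_density

print(f"2x Navi 10 at Navi 10 density:    {size_at_navi10_density:.0f} mm^2")  # ~502 mm^2
print(f"2x Navi 10 at Renoir-ish density: {size_at_renoir_density:.0f} mm^2")  # ~343 mm^2
```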
 

beginner99

Diamond Member
Jun 2, 2009
5,219
1,591
136
If AMD offered a Big Navi that had 16 GB of VRAM and performance roughly equal to the RTX 3080 in traditional rasterization tasks, plus noticeable RT performance gains over Turing but not high enough to beat the 3080, all at the same price as the 3080 ($699) and with a reduced TDP (say 275W), would you guys buy that?

Tough call. I would prefer an AMD card anyway, but I like the option to dip my feet into deep learning if I go the Nvidia route. However, for that the 10GB of RAM is even more of an issue than for future-proofing gaming. Consoles will have 16 GB of RAM and their OS will most certainly use less than 6 GB, so 10GB can become a bottleneck very quickly.
But if I decide not to invest in a DL-capable card, I would probably drop down one tier anyway. On top of that I would wait for actual user reports. As a multi-year owner of a 290X, I can say the heat emitted during gaming can be annoying; it warms up my "office" room a lot. Not having that again would be nice.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,409
2,904
136
Renoir uses N7 HD cells, whereas Navi uses N7P; they traded density for performance.
Navi will not reach Renoir's density. Also, NV's GA102 seems to have higher density than Navi.
On paper Navi should be able to clock well beyond Renoir, but it seems Renoir can clock beyond 2.2 GHz with ease.
Why would they use N7P when N7 HD has better density, clocks high enough (as seen in Renoir) and, if I am not mistaken, also has better power consumption?
 

Zstream

Diamond Member
Oct 24, 2005
3,396
277
136
Why would they use N7P when N7 HD has better density, clocks high enough (as seen in Renoir) and, if I am not mistaken, also has better power consumption?
I've been around long enough, and if this is the case, they aren't going for cheap branding. They need the performance, and clearly will go after it. The chip will be larger than some might have guessed IMO, and will cool better with the lower density.
 

DiogoDX

Senior member
Oct 11, 2012
746
277
136
Tough call. I would prefer an AMD card anyway, but I like the option to dip my feet into deep learning if I go the Nvidia route. However, for that the 10GB of RAM is even more of an issue than for future-proofing gaming. Consoles will have 16 GB of RAM and their OS will most certainly use less than 6 GB, so 10GB can become a bottleneck very quickly.
But if I decide not to invest in a DL-capable card, I would probably drop down one tier anyway. On top of that I would wait for actual user reports. As a multi-year owner of a 290X, I can say the heat emitted during gaming can be annoying; it warms up my "office" room a lot. Not having that again would be nice.
The 290X was 290W and the 3080 is 320W. It will heat up the room even more.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,409
2,904
136
I've been around long enough, and if this is the case, they aren't going for cheap branding. They need the performance, and clearly will go after it. The chip will be larger than some might have guessed IMO, and will cool better with the lower density.
They could go with the HD libraries and, thanks to the higher density, put more CUs in the chip without it being any bigger than on N7P. Then, for the same performance, you can set the clocks lower, which lowers power consumption so it won't locally overheat. Let's say instead of 80 CUs you have 100 CUs, or 25% more raw performance; lower the clocks by ~15% and the performance stays the same, the die could be smaller than on N7P, and power consumption could be lower as well.
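
A very rough Python sketch of that trade-off (the 100 CU count and 15% clock reduction are the hypothetical numbers above; the idea that voltage scales roughly with frequency near the top of the V/f curve is my own simplification):

```python
# Very rough sketch of why "more CUs at lower clocks" can cut power, assuming
# dynamic power P ~ N_CU * f * V^2 and that voltage tracks frequency near the
# top of the V/f curve. All numbers here are hypothetical.

def relative_power(cus, clock_scale):
    voltage_scale = clock_scale          # crude assumption: V scales with f
    return cus * clock_scale * voltage_scale ** 2

baseline = relative_power(80, 1.00)      # 80 CUs at full clock
wide     = relative_power(100, 0.85)     # 100 CUs, clocks ~15% lower

print(f"Relative power, 100 CU @ 0.85x clock: {wide / baseline:.2f}")  # ~0.77
# i.e. ~25% more CUs at ~15% lower clocks lands around three-quarters of the
# power for roughly the same raw throughput -- the trade-off described above.
```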
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
I am less optimistic than you guys about Big Navi. I thought AMD had a shot if nVidia was going for a 40-50% performance increase, but they jumped by around 70-80%. I don't think AMD will be able to compete at the high end; I hope I'm wrong, though.
 

maddie

Diamond Member
Jul 18, 2010
4,786
4,771
136
I am less optimistic than you guys about Big Navi. I thought AMD had a shot if nVidia was going for a 40-50% performance increase, but they jumped by around 70-80%. I don't think AMD will be able to compete at the high end; I hope I'm wrong, though.
It's obvious that the 3090 is an outlier: more than 2x the price for a 20-25% performance increase. It will probably allow Nvidia and their fan minions to shout KING OF THE HILL. For the rest of us, the sane ones, the choice comes down to the $700-and-under cards. A truly great increase in perf/$.
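
Putting rough numbers on that (a quick sketch using the announced MSRPs, $699 for the 3080 and $1499 for the 3090; the ~20% performance gap is just the estimate above, not a benchmarked figure):

```python
# Rough perf-per-dollar comparison using announced MSRPs and the ~20-25%
# performance delta estimated above (actual review numbers may differ).

price_3080, price_3090 = 699.0, 1499.0
perf_3080 = 1.00
perf_3090 = 1.20            # assume the 3090 is ~20% faster than the 3080

print(f"Price ratio 3090/3080: {price_3090 / price_3080:.2f}x")        # ~2.14x
perf_per_dollar_ratio = (perf_3090 / price_3090) / (perf_3080 / price_3080)
print(f"Perf/$ of 3090 vs 3080: {perf_per_dollar_ratio:.2f}x")         # ~0.56x
```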
 

soresu

Platinum Member
Dec 19, 2014
2,883
2,092
136
I am less optimistic than you guys about Big Navi. I thought AMD had a shot if nVidia was going for a 40-50% performance increase, but they jumped by around 70-80%. I don't think AMD will be able to compete at the high end; I hope I'm wrong, though.
I'm less interested in the absolute high end and more about moderate performance for a reasonable price.

Unless AMD have been playing tricks all along and have a much bigger chip, they don't have a chance to compete in the high end, and having left their announcement this long, they have lost even the opportunity to make something of RDNA2 before Ampere's announcement.

All they have now is to bank on a moderate-to-high-performing chip that costs less to make than nVidia's equivalent does.

Given their claim of 'simplified logic' that doesn't seem like such a stretch, so long as they haven't wasted that boon on spacing the logic out for higher clocks.

At this point though their credibility is starting to go down the tubes.

Here is nVidia announcing their 2nd-gen range of RT-accelerating cards, and AMD have not announced so much as one SKU with RT acceleration capability. I'm a red boy till I die, but their perpetual hype silence is starting to wear a bit thin; I'd be happy with just an AV1 decode confirmation at this point.
 

DisEnchantment

Golden Member
Mar 3, 2017
1,659
6,101
136
They could go with the HD libraries and, thanks to the higher density, put more CUs in the chip without it being any bigger than on N7P. Then, for the same performance, you can set the clocks lower, which lowers power consumption so it won't locally overheat.
RDNA(2) favors higher clocks: the Unified Geometry Engine and the command processor can work faster, and the cache subsystem, input assembler and so on can be faster too.
With the GE clocked high, RDNA can discard a lot of triangles before needing to engage the CUs.
64 CUs @ 2.4 GHz with ~20 TF will beat 80 CUs @ 1.9 GHz with ~20 TF in actual gameplay. This is basically Sony's PS5 approach.

For compute only it will surely make sense to go with N7 HD, because you can go really wide, like CDNA with 128 CUs.
Still, RDNA has 2 compute pipes with 4 queues per pipe, and they can be scheduled asynchronously without waiting for the command processor.
So for graphics the bottleneck is in there somewhere, and high clocks could help alleviate it.
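
To illustrate with a small sketch: both configurations land at roughly the same shader TFLOPS, but the fixed-function front end only scales with clock (assuming RDNA-style 64 FP32 lanes per CU at 2 ops/clock, and an illustrative 4 primitives/clock for the geometry front end; these are not confirmed RDNA2 figures):

```python
# Same ~20 TF of shader throughput, but fixed-function front-end rates scale
# with clock only. Illustrative assumptions: 64 FP32 lanes per CU, 2 ops/clock
# (FMA), and 4 primitives/clock for the geometry front end.

def shader_tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000.0

def prim_rate_gtris(ghz, prims_per_clock=4):   # assumed front-end width
    return prims_per_clock * ghz               # billions of triangles/s

for cus, ghz in [(64, 2.4), (80, 1.9)]:
    print(f"{cus} CU @ {ghz} GHz: {shader_tflops(cus, ghz):.1f} TF shader, "
          f"{prim_rate_gtris(ghz):.1f} Gtris/s front end")
# 64 CU @ 2.4 GHz: 19.7 TF shader, 9.6 Gtris/s front end
# 80 CU @ 1.9 GHz: 19.5 TF shader, 7.6 Gtris/s front end
```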
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
AMD have not announced so much as one SKU with RT acceleration capability

I guess the consoles are nothing to brag about.

What's the purpose of announcing anything yet anyway? The loyalists will shop Nvidia regardless. Too many idiots just want AMD to compete for nothing more than a discounted Nvidia offering. You already see the "bad drivers", "past history", "no way in hell can they compete", etc., and yet there hasn't really been anything that would sway the logical person one way or another.

Maybe there's a storm brewing?

It's best to wait it out and see what happens. There's probably only a select few that are in the know when it comes to what Lisa has planned out.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
I'm less interested in the absolute high end and more about moderate performance for a reasonable price.

Unless AMD have been playing tricks all along and have a much bigger chip, they don't have a chance to compete in the high end, and having left their announcement this long, they have lost even the opportunity to make something of RDNA2 before Ampere's announcement.

All they have now is to bank on a moderate-to-high-performing chip that costs less to make than nVidia's equivalent does.

Given their claim of 'simplified logic' that doesn't seem like such a stretch, so long as they haven't wasted that boon on spacing the logic out for higher clocks.

At this point though their credibility is starting to go down the tubes.

Here is nVidia announcing their 2nd-gen range of RT-accelerating cards, and AMD have not announced so much as one SKU with RT acceleration capability. I'm a red boy till I die, but their perpetual hype silence is starting to wear a bit thin; I'd be happy with just an AV1 decode confirmation at this point.

AMD has not mentioned anything on purpose. It's to their advantage to wait until after nVidia has announced things. AMD is acting exactly how they should in this case.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,409
2,904
136
RDNA(2) favors higher clocks: the Unified Geometry Engine and the command processor can work faster, and the cache subsystem, input assembler and so on can be faster too.
With the GE clocked high, RDNA can discard a lot of triangles before needing to engage the CUs.
64 CUs @ 2.4 GHz with ~20 TF will beat 80 CUs @ 1.9 GHz with ~20 TF in actual gameplay. This is basically Sony's PS5 approach.
You are right about the performance, but 80 CUs @ 1.9 GHz with ~20 TF using HD libraries will be smaller, and it will also have lower power consumption and a better performance/W ratio than the other approach.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
AMD has not mentioned anything on purpose. It's to their advantage to wait until after nVidia has announced things. AMD is acting exactly how they should in this case.

Exactly.

Why has there been no console pricing revealed?
Why has there been no console pre-orders?
Why no leaks about Big Navi?
Why no leaks about Zen 3?

I'm going with it's all part of Lisa's plan.

Allow Microsoft and Sony to tease console features, but not pricing and availability.

Let Nvidia show its cards and pricing with availability.

Allow Sony and Microsoft to reveal pricing and start pre-orders before Nvidia's products are available.

Unleash some sponsored Big Navi rumors with teased performance just as Nvidia's products start to become available.

Announce Zen 3 with more Big Navi performance teasing.

Announce Big Navi pricing and availability w/bundle deals when purchasing a Zen 3 processor.

Seems like a good plan, at least since money's gonna be tight for a lot of people this holiday season.

Sounds like a sane approach to at least make a dent and stay in the internet chatter zone for a lot longer.

If mining turns out to be good on Ampere, I think I'd do some pumping and dumping of coins to entice the miners into buying Nvidia's cards... Gotta give those silly guys who want AMD to compete just for better deals on Nvidia products the finger somehow!
 

Konan

Senior member
Jul 28, 2017
360
291
106
All they have now is to bank on a moderate-to-high-performing chip that costs less to make than nVidia's equivalent does.

Apparently, TSMC's updated 7nm process is going to cost AMD 30% more than Samsung's 8nm costs Nvidia. I don't think economies of scale with TSMC will result in any super meaningful cost reductions for the consumer. It will cost them a ton to make something at, say, a 3090 level.
 

kurosaki

Senior member
Feb 7, 2019
258
250
86
2-4 RT cores per CU x ~2 GHz is far from Nvidia's claimed 50+ TF of RT performance (it works out to ~320 GF). That's off by an order of magnitude; how are RDNA2 and Big Navi going to have a chance here? I'm starting to get a bit worried, or hopefully I have missed something in my calculations.
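
For reference, here is my guess at how that ~320G figure falls out (a sketch only; the 80 CU count, ~2 GHz clock and 2-4 RT operations per CU per clock come from the post, and whether Nvidia's "50+ TF" marketing number even counts the same thing is an open question):

```python
# Reconstructing the back-of-the-envelope RT math above.
# Assumptions: 80 CUs, one RT unit per CU doing 2-4 ops (box tests) per clock, ~2 GHz.

cus = 80
clock_ghz = 2.0

for ops_per_clock in (2, 4):
    g_ops = cus * ops_per_clock * clock_ghz      # billions of RT ops per second
    print(f"{ops_per_clock} ops/clk: {g_ops:.0f} G ops/s")
# 2 ops/clk -> 320 G ops/s (the ~320G figure above); 4 ops/clk -> 640 G ops/s.
# Nvidia's "50+ TF" RT number is a marketing metric counted differently, so the
# two aren't directly comparable without knowing what each vendor counts per op.
```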
 

Konan

Senior member
Jul 28, 2017
360
291
106
I'm going to go with the latter.
Do you have a link to how much both Nvidia and AMD pay for wafers? Or is this more of a FUD type of thing?

Woah, calm down. There have been quite a few people in the Ampere thread speculating that Samsung was cheaper. The postings revolve around Nvidia wanting cheaper silicon prices, so it went to Samsung and things backfired. Then there is speculation from YouTubers like AdoredTV, from wccftech and other speculative Asian media articles, paywalled articles, and analyst comments out there over the past few months. Obviously cost-per-wafer info isn't public, but you knew that already. The assumption (which is why I said "apparently") is just that: SS is cheaper for NV and TSMC is more expensive. There is a lot of common sense in this assumption...
 

maddie

Diamond Member
Jul 18, 2010
4,786
4,771
136
Woah, calm down. There have been quite a few people in the Ampere thread speculating that Samsung was cheaper. The postings revolve around Nvidia wanting cheaper silicon prices, so it went to Samsung and things backfired. Then there is speculation from YouTubers like AdoredTV, from wccftech and other speculative Asian media articles, paywalled articles, and analyst comments out there over the past few months. Obviously cost-per-wafer info isn't public, but you knew that already. The assumption (which is why I said "apparently") is just that: SS is cheaper for NV and TSMC is more expensive. There is a lot of common sense in this assumption...
Nobody's doubting that SS is cheaper, but they are questioning your 30% figure and wondering why you would even make up a number if you don't know.
 

blckgrffn

Diamond Member
May 1, 2003
9,179
3,144
136
www.teamjuchems.com
2-4 RT cores per CU x ~2 GHz is far from Nvidia's claimed 50+ TF of RT performance (it works out to ~320 GF). That's off by an order of magnitude; how are RDNA2 and Big Navi going to have a chance here? I'm starting to get a bit worried, or hopefully I have missed something in my calculations.

The way I see it, in the vast majority of games we are only going to see RT that is portable to the consoles.

The Series X and PS5 will set the minimum performance threshold for turning the RT toggle on, at 4K and at least 30 fps.

As long as the midrange RDNA2 PC GPUs have the same or better ability to accelerate RT, I can't see owners of those GPUs really missing out on too much?

BUT :) Out of curiosity, what does your math work out to and how did you plug in the numbers?