Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
All die sizes are within 5mm^2. The poster here has been right on some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have backed up. Even still though, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

Mopetar

Diamond Member
Jan 31, 2011
8,113
6,768
136
16 and 12... 24GB out the window on N21

Edit:
N21 w/80CU on 256bus and rumor of large cache with 16GB and rumor of $549 / $599 pricing (think this tells the performance level if true)
N22 possibly w/60CU and 12GB

Pricing rumors are the ones I trust least of all because unlike almost every other decision about a product it's the easiest to change at the last possible moment.

Anyone on a design team knows how many shaders a die will have as soon as the design is sent off to the fab. They might not know exactly how a specific product will be cut down until later, and the final pricing need not be shared with engineers at all.

I'd like to think most engineers are smart enough to realize that any information they get about pricing could be made up in such a way as to try to find leaks as well.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
With Renoir and GA100 at > 60M xtors / mm^2, and also offering good clocks and good power profiles, I do not see why an 80CU RDNA 2 part has to be 500mm^2 anymore. If it is 500mm^2 and only competes with the 3080, that is a total failure when there were other options available to AMD to be far more wafer efficient with their product stack.

If you want to believe rumors, then it's a ~500mm2 part.

Also density is greatly affected by what you are designing.

The closest thing to a current AMD GPU design is the new XB Series X, at 42Mt/mm2, which also leads to a ~500mm2 part.

I suppose you can cherry pick the option that best fits your narrative, but no one knows how this will turn out.

I would say we don't have strong evidence either way, but if you want an answer right now, I would say the evidence, sketchy as it is, leans more towards ~500 mm2.
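For reference, here's the rough arithmetic behind that (a back-of-envelope sketch only: the transistor counts and die sizes are the publicly quoted ones, but the assumption that an 80CU part needs roughly double Navi 10's transistor budget is mine):

```python
# Back-of-envelope: what an 80CU RDNA2 die would measure at XSX-like density.
# Published figures: Navi 10 = 10.3B transistors / 251 mm^2,
# Xbox Series X SoC = 15.3B transistors / ~360 mm^2.
navi10_transistors = 10.3e9
xsx_transistors = 15.3e9
xsx_area_mm2 = 360.4

xsx_density = xsx_transistors / xsx_area_mm2      # transistors per mm^2
big_navi_transistors = 2 * navi10_transistors     # assumption: ~2x Navi 10 for 80CU + new features
big_navi_area = big_navi_transistors / xsx_density

print(f"XSX density: {xsx_density / 1e6:.1f} MTr/mm^2")       # ~42.5
print(f"80CU die at that density: {big_navi_area:.0f} mm^2")  # ~485 mm^2, i.e. ~500 mm^2 class
```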

Regardless of the exact size, it will still be a big, expensive chip; it has still had its price ceiling lowered by Ampere's release; and it's still going to be more profitable to use wafers for Zen 3. So the exact size doesn't change my argument.

I hold to my original reason: Big Navi needed Ray Tracing and other advanced features.
 
  • Like
Reactions: Konan

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
If you want to believe rumors, then it's a ~500mm2 part.

Also density is greatly affected by what you are designing.

The closest thing to a current AMD GPU design is the new XB Series X, at 42Mt/mm2, which also leads to a ~500mm2 part.

I suppose you can cherry pick the option that best fits your narrative, but no one knows how this will turn out.

I would say we don't have strong evidence either way, but if you want an answer right now, I would say the evidence, sketchy as it is, leans more towards ~500 mm2.

Regardless of the exact size, it will still be a big, expensive chip; it has still had its price ceiling lowered by Ampere's release; and it's still going to be more profitable to use wafers for Zen 3. So the exact size doesn't change my argument.

I hold to my original reason: Big Navi needed Ray Tracing and other advanced features.

It is not about cherry picking or rumours or leaks. It is about what is a smart business decision, and designing a 3080 competitor using a 350mm^2 die seems far more intelligent and sensible than designing a 3080 competitor around a 500mm^2 die.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
It is not about cherry picking or rumours or leaks. It is about what is a smart business decision, and designing a 3080 competitor using a 350mm^2 die seems far more intelligent and sensible than designing a 3080 competitor around a 500mm^2 die.

There is no indication they can actually do that, unless you cherry pick and avoid the most obvious and applicable comparison (XB Series X).
 

Konan

Senior member
Jul 28, 2017
360
291
106
Pricing rumors are the ones I trust least of all because unlike almost every other decision about a product it's the easiest to change at the last possible moment.

Anyone on a design team knows how many shaders a die will have as soon as the design is sent off to the fab. They might not know exactly how a specific product will be cut down until later, and the final pricing need not be shared with engineers at all.

I'd like to think most engineers are smart enough to realize that any information they get about pricing could be made up in such a way as to try to find leaks as well.

In this case the $$ quotes originated from Coreteks' Tier 1 partners, so a level up from engineering guesses :)
See the video I posted a couple of pages back; he details where he gets his info from.

 
  • Like
Reactions: Mopetar

Konan

Senior member
Jul 28, 2017
360
291
106
Coreteks being the guy who said there was a "Traversal Co-processor" on the back of Ampere cards.
He did clearly state that was a speculation piece on his part and not tied to any of his sources. It's a little bit like us debating the patent at the top of this page. In this particular case, the post above clearly says where he got his pricing sources.
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
There is no indication they can actually do that, unless you cherry pick and avoid the most obvious and applicable comparison (XB Series X).

GA100 and Renoir both show TSMC N7 can increase transistor density over what AMD have used so far.

Renoir's CPU clocks just as well as Matisse, and its GPU clocks higher than any other GPU AMD has made.

Also, we do not know the density of the PS5; for all we know it could be similar to Renoir.
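To put rough numbers on the density point (all figures below are the publicly quoted transistor counts and die areas; the maths is just division):

```python
# Quoted N7 transistor counts and die areas, and the resulting densities.
chips = {
    "Navi 10 (RX 5700 XT)": (10.3e9, 251.0),
    "Xbox Series X SoC":    (15.3e9, 360.4),
    "Renoir (Ryzen 4000)":  (9.8e9,  156.0),
    "GA100 (A100)":         (54.2e9, 826.0),
}

for name, (transistors, area_mm2) in chips.items():
    density_mt_mm2 = transistors / area_mm2 / 1e6
    print(f"{name:22s} {density_mt_mm2:5.1f} MTr/mm^2")

# Renoir and GA100 land above 60 MTr/mm^2, while the gaming-focused AMD dies
# shipped so far sit around 41-43 MTr/mm^2.
```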
 
  • Like
Reactions: Tlh97

DJinPrime

Member
Sep 9, 2020
87
89
51
The claim I was questioning was that, based on the CU count of the Xbox and on how well RDNA1 scaled from the 5500 to the 5700, PC big Navi will have no problem going beyond 60 CUs and performance will scale great at 80 CUs. Some of you guys are even expecting 90% - 100% scaling.

What I wanted to point out was that AMD did not go bigger than the 5700, and I can't believe it's a marketing decision, because that's just too stupid. Being successful is all about being agile. If it is true that AMD could have scaled up something bigger than the 5700 and decided not to because they're making great margins on the 5700, or because it's not in their release cycle, then no wonder they're behind NV. NV sure has no problem pumping out the Super/Ti cards anytime they deem it necessary. Do you want to keep doing the same thing or try to be more like the competition who have a big lead on you? If true, that turned out to be a really bad bet, because now 2080Ti-level performance only commands a $500 price. And moving the goalposts by saying, well, it's going to take a lot of resources to design a bigger Navi 10: if that's the case, then your architecture wasn't all that scalable. Just like my software example, I can't call my application scalable if I needed tons of redesign and rework to get things working at higher volume.

As an engineer myself who works in the financial industry, I just can't believe that AMD was holding back. I'm not saying big Navi can't perform great. But I don't think it's logical to assume things will scale well based on RDNA1, because otherwise they wouldn't have stopped at the 5700. If you think that's a good business decision, then we'll just have to agree to disagree. If you think Big Navi will be better than the 3080 just because you think so, well, I can't argue there. Hope you're right.
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
The claim I was questioning was that, based on the CU count of the Xbox and on how well RDNA1 scaled from the 5500 to the 5700, PC big Navi will have no problem going beyond 60 CUs and performance will scale great at 80 CUs. Some of you guys are even expecting 90% - 100% scaling.

What I wanted to point out was that AMD did not go bigger than the 5700, and I can't believe it's a marketing decision, because that's just too stupid. Being successful is all about being agile. If it is true that AMD could have scaled up something bigger than the 5700 and decided not to because they're making great margins on the 5700, or because it's not in their release cycle, then no wonder they're behind NV. NV sure has no problem pumping out the Super/Ti cards anytime they deem it necessary. Do you want to keep doing the same thing or try to be more like the competition who have a big lead on you? If true, that turned out to be a really bad bet, because now 2080Ti-level performance only commands a $500 price. And moving the goalposts by saying, well, it's going to take a lot of resources to design a bigger Navi 10: if that's the case, then your architecture wasn't all that scalable. Just like my software example, I can't call my application scalable if I needed tons of redesign and rework to get things working at higher volume.

As an engineer myself who works in the financial industry, I just can't believe that AMD was holding back. I'm not saying big Navi can't perform great. But I don't think it's logical to assume things will scale well based on RDNA1, because otherwise they wouldn't have stopped at the 5700. If you think that's a good business decision, then we'll just have to agree to disagree. If you think Big Navi will be better than the 3080 just because you think so, well, I can't argue there. Hope you're right.

AMD is only going to sell RDNA dGPUs to consumers. OTOH, Renoir and Zen 2 are sold to OEMs and are used in server deployments.

Proving they can supply the OEMs with enough chips to make producing AMD product lines worthwhile is far more profitable and important long term than competing with the 2080Ti.

Now that they are gaining trust with OEMs and have funding to buy more wafers off of TSMC they are in a position to address the halo tier of the GPU market without impacting the CPU side of the business as much.
 

eek2121

Diamond Member
Aug 2, 2005
3,100
4,398
136
Totally agree.

Dr. Su has Zen 3 coming out first to show how far Ryzen has come. This also buys time for Big Navi to get some polish. She is building out the stack. Let Nvidia have its day in the sun. I suspect Big Navi will beat the RTX 3070 but fall a bit short of the RTX 3080 (which, as I write this, is sold out EVERYWHERE).

I'm looking to stay with AMD to upgrade the Radeon VII in my 3900X rig.

I'll wait till RDNA2 drops and see how much difference there is in game play.
Not really. All leaks point to them beating the 3080 easily.
Don't know if this was mentioned, but I saw that the PS5's power supply is rated 350 W, 340 W for the Digital Edition. Series X is 310 W I think.

The PSU for the Xbox Series X has been confirmed to be the same as the Xbox One.

I guess you all are still missing my point. If Navi scales really well, then AMD had no reason to just release the 5700 and 5700 XT over a year ago. In development, the engineers should know how well their design scales. Since the 5700 was such a small chip, and if scaling was not an issue, then just making a bigger chip would not be that much effort. Why would you limit yourself to 2070-level performance? Do you not want to be known as a high-performance company and sell chips? Why wait 1.5 years to make money when you can make money now? Things change, like the competition releasing a $700 card that is faster than the current $1200 card. You never sandbag yourself.
I'm a software developer, so I can only explain from that point of view. I have an app that processes multiple requests, and things seem to work really well when the number of concurrent requests is under 10. Performance-wise, it seems to scale really well. But as soon as I am in a scenario where there are 11 concurrent requests, I start seeing issues: locking timeouts, data being overwritten by parallel processes, memory heap issues, performance issues. So my program is only scalable up to a certain point. It's a leap to assume big Navi scales the same way as the 5500 to the 5700.
If Big Navi does end up performing better than the 3080, it's because the engineers had to do a lot of awesome work to get around the problems of RDNA1, just like how I would have to redesign my program to be able to scale above 10.

AMD didn’t release a bigger GPU for 3 reasons:

1) RTG was broke.
2) The RX 5700 XT already had a high TDP and AMD is done playing that game.
3) Raja was given the boot and RTG was reorganized.

Navi2X is the first real attempt at a first class gaming product in years.
 

SpaceBeer

Senior member
Apr 2, 2016
307
100
116
From Slimbook

Due to the large worldwide success of the AMD 4000 series processors, the manufacturer is unable to keep up with the current demand. Learn more here.

If you buy this product, you will have to wait until the new batch of units gets shipped to us in the 4th quarter of this year. Chances are that it will be October or November at most, but we will update this information with more accurate dates once they assign us a batch of AMD processors.

So I would say there was/is not enough capacity to make Navi 2x. I think we'll see a paper launch on the 28th.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
What I wanted to point out was that AMD did not go bigger than the 5700, and I can't believe it's a marketing decision, because that's just too stupid.

Navi was by design targeting the mid-range performance level. Simple economics: greater sales are to be had in the lower to mid-range than in the high-end market. Putting Big Navi on the back burner doesn't mean they couldn't have made it earlier if it made sense at the time to do so. It was in AMD's best interest to use their resources and wafer supply for their other offerings and commitments at that time.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
So I would say there was/is not enough capacity to make Navi 2x. I think we'll see a paper launch on the 28th.

Nothing more than demand exceeding the sales predictions on certain SKUs. When this happens you can't just whip up more; I believe it takes a few months from start to finish.

AMD estimates sales and produces what they think they'll need. Best to wait and see what happens. I believe AMD said Big Navi comes before the consoles. I'd guess availability should be close to the announcement, as there really isn't much time between it and the PS5's release date. Availability??? Most likely in stock longer than the 3080s. The good old supply and demand arguments will be fought at that time.
 

blckgrffn

Diamond Member
May 1, 2003
9,298
3,440
136
www.teamjuchems.com
This supply/demand issue is what has me sketchy on “dumping” my 5700xt. I don’t want to feel like I am under pressure to find something at MSRP+ just because my current rig feels like it is running with the anchors down with a really old limp along GPU.

My PS5 acquisition dreams are rapidly pivoting to when they release some sort of bundle in 2021 (Ratchet & Clank?) rather than worrying about it now. My next-gen GPU shopping might be the same. What is the opportunity cost of keeping my last-launch card? 🤔

The good news is I didn’t pay $1200 for a 2080ti so I can maybe have the same percentage loss but the dollar amount will be so much less 😂
 

eek2121

Diamond Member
Aug 2, 2005
3,100
4,398
136
From Slimbook



So I would say there was/is not enough capacity to make Navi 2x. I think we'll see a paper launch on the 28th.

Renoir was unexpectedly well received. In addition, AMD was in the middle of a nearly year-long manufacturing run of console SoCs. On top of that, there was strong demand for both Ryzen and EPYC. The first (and likely the biggest) run of consoles is wrapping up right about now. That is why AMD timed the announcements the way that they did. You will note that despite incredible discipline by AMD, some of the lower-margin PC parts still sold out.

The Ryzen 5000 series and the Radeon 6000 series will likely have a bit of an issue upon initial launch (pent up demand, as always) but I strongly suspect that the availability will be good as we move forward. AMD hasn’t been in the habit of paper launches.
 

eek2121

Diamond Member
Aug 2, 2005
3,100
4,398
136
This supply/demand issue is what has me sketchy on “dumping” my 5700xt. I don’t want to feel like I am under pressure to find something at MSRP+ just because my current rig feels like it is running with the anchors down with a really old limp along GPU.

My PS5 acquisition dreams are rapidly pivoting to when they release some sort of bundle in 2021 (Ratchet & Clank?) rather than worrying about it now. My next-gen GPU shopping might be the same. What is the opportunity cost of keeping my last-launch card? 🤔

The good news is I didn’t pay $1200 for a 2080ti so I can maybe have the same percentage loss but the dollar amount will be so much less 😂

Do you really consider it a loss? I paid $800 for my 1080ti and I hope to retire it this year, but it won’t be a loss at all for me. I got my money’s worth out of it, and I can’t even sell it (it is going in another PC).
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
It is really easy actually. To compete with the 2080Ti, AMD would have needed to produce a 500mm^2 RDNA GPU. This would not have had Ray Tracing or DLSS but would have competed at native resolution. The max they could sell such a card for would be about $1000, and they would have to sell the chips to board partners for far less than that so the partners could make a profit after adding in the PCB and memory.
You don't need a 500mm2 RDNA GPU to compete with the 2080Ti.
A 60CU, 3840SP, 96 ROP, 384-bit bus RDNA GPU would be only ~361mm2 at the same density and shouldn't be too far behind the 2080Ti.
With 80CU instead of 60CU it's ~418mm2, and that config should be a bit faster than the 2080Ti even with 15% lower clocks (1.6GHz) than the RX 5700XT, unless CU scaling is horrible at higher counts.
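For anyone who wants to sanity-check that kind of estimate, here is one way to do it (a rough sketch only: the uncore / memory-PHY / per-CU area split is an assumption calibrated to land near Navi 10's 251mm^2, not an actual floorplan):

```python
# Toy RDNA die-area estimator at Navi 10-like density.
# The split below is an assumption, tuned so 40 CU + 256-bit ~= Navi 10 (251 mm^2).
UNCORE_MM2 = 60.0   # display, media, PCIe, command front end, etc.
PHY_MM2 = 10.0      # per 64-bit GDDR6 controller + PHY
CU_MM2 = 3.8        # per CU, including its share of L2/ROPs/fabric

def die_area(cus: int, bus_bits: int) -> float:
    """Estimated die area in mm^2 for a given CU count and memory bus width."""
    return UNCORE_MM2 + (bus_bits // 64) * PHY_MM2 + cus * CU_MM2

print(die_area(40, 256))  # ~252 mm^2, close to the real Navi 10
print(die_area(60, 384))  # ~348 mm^2, same ballpark as the ~361 mm^2 figure above
print(die_area(80, 384))  # ~424 mm^2, near the ~418 mm^2 figure above
```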
 
Last edited:
  • Like
Reactions: Tlh97

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
Quite simply, RDNA1 was limited by thermals. 1.8GHz at 220W means that to have a card that could beat the 2080 or 2080 Super they would have had to go over 300W, and it would anyway have traded blows with the 2080Ti, which offered better features at lower power consumption. So they would have sold none, and lost money developing a big chip that they could not even have used for the professional market, as they had already decided to use CDNA for that purpose. So having a "Big Navi" on RDNA1 would have been a big mistake. Now they are claiming +50% perf/W, the competition went for power-hungry cards, and they added ray tracing, so they can build a chip that in theory can compete.
They wouldn't need a 300W TBP to combat the 2080Ti; a 250W TBP is more than possible. They just need to set the clock speed lower and power consumption would go down quite a bit.
Example:
RX 5700XT -> 1887MHz and 219W on average
RX 5700 -> 1672MHz and 166W on average
11% lower clock speed and 4 fewer CUs gets you 24% lower power consumption, which is not bad considering the memory consumes the same.
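The underlying relation is that dynamic power scales roughly with C * V^2 * f, so a lower clock plus the small voltage drop it allows explains most of the gap (quick sketch: the ~5% voltage delta and the fixed memory/board share are assumptions, the clocks and board powers are the quoted averages):

```python
# Sanity check of the 5700 XT -> 5700 power gap using P ~ C * V^2 * f.
xt_board_power = 219.0      # W, RX 5700 XT average
fixed_power = 40.0          # W, assumed constant share (GDDR6, VRM, fan, etc.)
clock_ratio = 1672 / 1887   # ~0.89 from the quoted average clocks
cu_ratio = 36 / 40          # RX 5700 has 4 fewer CUs
volt_ratio = 0.95           # assumption: ~5% lower voltage at the lower clock

core = (xt_board_power - fixed_power) * cu_ratio * clock_ratio * volt_ratio ** 2
print(f"Estimated RX 5700 board power: {core + fixed_power:.0f} W")  # ~169 W vs ~166 W measured
```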
 
Last edited:
  • Like
Reactions: Tlh97

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
The answer to ALL of that above is very simple.

AMD is on a yearly cadence. Designing a big GPU with an architecture that is going to be retired just 12 months later is uneconomical from a design-cost perspective. If you are going to retire your architecture just 12 months later, it's better to milk the cheapest possible designs as much as possible.

Which means: smallest possible designs.

I hope this ends this ridiculous discussion.
RDNA 1 release -> July 7, 2019
RDNA 2 release -> October 28, 2020
This is actually 15 months, and we don't even know if RDNA2 will replace everything.
I rather think it's because they wanted more wafers for Zen.
 
  • Like
Reactions: Konan and Tlh97

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
He did clearly state that was a speculation piece on his part and not tied to any of his sources. It's a little bit like us debating the patent at the top of this page. In this particular case, the post above clearly says where he got his pricing sources.
He still thinks it's true, claiming that certain "partners" are testing it for a future product.
 
  • Haha
Reactions: Tlh97 and kurosaki

leoneazzurro

Golden Member
Jul 26, 2016
1,052
1,716
136
They wouldn't need a 300W TBP to combat the 2080Ti; a 250W TBP is more than possible. They just need to set the clock speed lower and power consumption would go down quite a bit.
Example:
RX 5700XT -> 1887MHz and 219W on average
RX 5700 -> 1672MHz and 166W on average
11% lower clock speed and 4 fewer CUs gets you 24% lower power consumption, which is not bad considering the memory consumes the same.

True to an extent, but you would still probably need more than 250W (which is the 2080Ti's TBP), and you are still missing the added features (Ray Tracing, DLSS). An enthusiast would have chosen the better-featured card anyway (not to mention that Nvidia could have pushed clocks and thermals a bit further).
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
TDP can be within 250W, but clock speed and voltage are the key.
If no game supports these features then it's pointless, and the 2080Ti was very expensive, so I don't see why AMD wouldn't have been able to compete even without these features. Now more games support DLSS and RT, but Ampere is way better at both of these features than Turing.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,305
1,218
136
They wouldn't need a 300W TBP to combat the 2080Ti; a 250W TBP is more than possible. They just need to set the clock speed lower and power consumption would go down quite a bit.
Example:
RX 5700XT -> 1887MHz and 219W on average
RX 5700 -> 1672MHz and 166W on average
11% lower clock speed and 4 fewer CUs gets you 24% lower power consumption, which is not bad considering the memory consumes the same.
When AMD released the 5700XT it was a test card for the 7nm die shrink. The end result was: wow, look what we did without even trying. Big Navi is supposed to be the real deal and a major attempt to dethrone Nvidia. We still have yet to see the 3090, and I am convinced they could push beyond 400W. So those performance numbers from Nvidia come with a massive power bill that has not been seen even from Vega 64. Even the pro reviews say wow, nice performance, but cringe at the power consumption levels of the 3080. Others have questioned why the performance does not scale better with the 8nm process and all the additional cores and better memory.

Meanwhile, AMD looks stupid for not having Zen 3 out yet, considering Intel processors crush Zen 2 processors in gaming by close to 20% in some cases. I know I sound negative, but all of this frustrates me. I can't believe 1080Ti owners think their cards are outdated. What is AMD to say about Big Navi? Why is it late? Why is Zen 3 not out yet? They say AMD wants to tie the Zen 3 and Big Navi releases in with the new consoles. PC gamers are not console gamers and do not care about new consoles.

It looks as if AMD will easily win the power efficiency award even if Big Navi doesn't outperform Nvidia.