Question Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
All die sizes are within 5mm^2. The poster here has been right on some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
Yes, precisely. It's not like I would be against it; if this were true I would most likely buy a new laptop with RDNA2 and Zen 3 next year. But to me it simply looks like hype, and after the Polaris and RDNA1 hype I'm very tired of it.
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
The problem is that AMD is claiming only a 50% increase, yet here some users have no problem saying it's double that while the clockspeed is supposedly over 2GHz, which is more than RDNA1. Now let's include the conclusion some users drew about the console GPUs: that a 36CU GPU at 2.23GHz (boost) consumes more than a 52CU GPU at 1.825GHz. A 100% increase in perf/W is simply not possible with clocks over 2GHz when a card with 44% more CUs consumes less power just because it has ~10-15% lower clockspeed.

When AMD compared GCN to RDNA performance per watt they were comparing two products, namely the Vega 64 and the 5700 XT. If AMD had cherry-picked the comparison they could have made a far more outlandish claim, like Nvidia did. As it happens, when you compare Vega to the 5000 series the only match-ups that do not provide a 50% or better performance-per-watt increase are Vega 56 against the 5700 XT and 5500 XT (which have the same performance per watt), where the gain was a meager 49%.

On the basis that the RDNA -> RDNA2 jump is also product to product (I think it says something to that effect in the footnotes of one of the slides from one of the presentations, which I am not going to spend an hour digging out) and that perf/W scales like the 5700 XT and 5500 XT, we can do some simple maths: 210W x 2 = 420W, and 420W x 0.66 ≈ 280W ballpark for double 5700 XT performance.
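
For what it's worth, here is that same back-of-the-envelope sum as a few lines of Python; the 210W figure and the flat +50% perf/W are just the assumptions from the paragraph above, not measured data.

Code:
# Rough power estimate for a hypothetical card with double the 5700 XT's
# performance, assuming the claimed +50% perf/W applies product to product.
rx_5700xt_power_w = 210      # assumed 5700 XT board power (from the post above)
perf_target = 2.0            # want 2x the 5700 XT
perf_per_watt_gain = 1.5     # AMD's claimed +50% perf/W for RDNA2

# Same perf at +50% perf/W needs 1/1.5 the power, so 2x perf needs 2/1.5 the power.
est_power_w = rx_5700xt_power_w * perf_target / perf_per_watt_gain
print(f"~{est_power_w:.0f} W")   # ~280 W, the same ballpark as above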

We can do the same with transistor counts, where 100% more transistors has historically meant, for AMD, between +70% and +139% additional performance. Or we can take the Series X performance in Gears 5 and see that a 24% TFLOP increase gave around a 29% performance increase, which, if that trend holds up to 80 CUs, also puts an 80CU RDNA2 part at around double 5700 XT performance.
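
And the Gears 5 extrapolation, purely as a sketch: the ~2 GHz clock for a hypothetical 80CU part is an assumption, and carrying a single data point's scaling factor all the way to 80 CUs is exactly the leap of faith described above.

Code:
# TFLOPs = CUs * 64 lanes * 2 ops/clock * clock (GHz) / 1000
def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000

navi10 = tflops(40, 1.905)   # 5700 XT at boost clock, ~9.75 TF
xsx    = tflops(52, 1.825)   # Series X, ~12.15 TF
big    = tflops(80, 2.0)     # assumed 80CU RDNA2 part at ~2 GHz, ~20.5 TF

# Gears 5 data point: ~24% more TFLOPs gave ~29% more performance
scaling = 1.29 / 1.24
print(f"~{(big / navi10) * scaling:.1f}x a 5700 XT")   # ~2.2x if the trend held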

I would not put any stock in PS5 SoC estimates since Sony have not detailed anything.

MS have come out and said what their target is, and the 255W of power going to the SoC + NVMe + GDDR6 is more like 205W after you take off efficiency and longevity headroom, which means the GPU must be pretty damn good at those clocks. It also suggests that 1.825GHz is on the low end of the voltage/power curve, so for a PC part there is headroom to increase clockspeeds; to what, I have no clue.

The real questions are whether 'Big Navi' is 80 CUs and whether it scales up to that without losing utilisation the way GCN did at 64 CUs. If it is, and it does scale, AMD have a 3080 competitor on their hands; if not, then maybe it will land between the 3070 and 3080.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,305
1,218
136
If this review of the 3080 is legit, AMD shouldn't have much of a problem equaling the 3080 with Big Navi. The power consumption of the 3080 is painful to look at. Remember, most 2080 Tis sold were RMA'd within a month or two because of overheating issues. The 3080 uses 60-70W more than the 2080 Ti.
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
The real questions are whether 'Big Navi' is 80 CUs and whether it scales up to that without losing utilisation the way GCN did at 64 CUs. If it is, and it does scale, AMD have a 3080 competitor on their hands; if not, then maybe it will land between the 3070 and 3080.
IF it scales, then the 3080 is the bottom tier of what you will achieve with 80 CUs and over 2 GHz clock speeds.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
If this review of the 3080 is legit, AMD shouldn't have much of a problem equaling the 3080 with Big Navi. The power consumption of the 3080 is painful to look at. Remember, most 2080 Tis sold were RMA'd within a month or two because of overheating issues. The 3080 uses 60-70W more than the 2080 Ti.

Most 2080 Tis were RMA'd for the "space invaders" artifacting, not overheating. But one of the reasons for the fancy (and rather expensive) new cooler is the very high power consumption.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
I think you diehards on the red side are setting yourselves up for disappointment again. A couple of things don't make sense to me about expecting Navi to scale linearly.
1) The 5700 was a relatively small chip, performed well and scaled compared to the smaller chip below it. So why didn't AMD release something bigger?
2) They're already on 7nm, so it's not even like they're moving to a new node and have to redo everything.
3) Does AMD hate money? Why sit on a 2080 Ti competitor all these months (a year?) when the price was so high? If they can make a 5800 XT that's around the 2080 Ti, do they think there would have been no demand at $1000 a year ago? Six months ago?
4) New versions of Ryzen CPUs seem to come out all the time, so it's not like the company has a problem with customers doing frequent upgrades.

If this is true, then AMD really should think about changing the marketing team, because they are idiots. Now the most they can charge for a 2080 Ti competitor is $500, and $700 for a 3080 competitor, and it wouldn't come out until the end of October at the earliest, giving NV another month to monopolize GPU sales. They've screwed their shareholders and board partners out of tons of money and left consumers without any competition on the high end. The 3080 memes shouldn't be about 2080 Ti owners, they should be about AMD, because Big Navi's value just dropped in half. WTF AMD?
 

blckgrffn

Diamond Member
May 1, 2003
9,298
3,440
136
www.teamjuchems.com

:tearsofjoy:

AMD loves money and has been prioritizing its efforts and available wafers based on margins. Evidently consumer GPUs are at the bottom of the barrel. This is exactly what their shareholders expect them to do: make lots of money.

That's why Intel has been so "valuable" for so long. Their stock price largely reflects their profitability this quarter/this year. It's not exactly an endorsement of their current product hotness.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136

Devil's advocate mode on.

But now they are going to make that big chip to come out after Nvidia has already significantly driven down the price it can command, which makes for a worse value/profit proposition... /DA mode.

IMO the reason they didn't make Big Navi sooner was because a high-end chip without ray tracing and other advanced features would soon have become an expensive albatross. They didn't want to make a chip that big and expensive and have a short selling window for it.

So now that it can come with all the RDNA2 features, this is the right time for that big, expensive chip, which can have a much longer selling window.
 

randomhero

Member
Apr 28, 2020
184
251
136
After the RTX 3080 reviews, I am even more confident in my opinion that AMD will have an RTX 3090 challenger. I must say the stars (pun intended!) have aligned pretty well for AMD.

Not much longer to wait now; the end of October is near. Till then we can enjoy more bonkers rumours, like that 128MB cache. :D
Knowing AMD, it could turn out to be true. :D
 

blckgrffn

Diamond Member
May 1, 2003
9,298
3,440
136
www.teamjuchems.com
But now they are going to make that big chip to come out after Nvidia has already significantly driven down the price it can command...

Pretty sure that the market for Epyc CPUs ridiculously dwarfs the market for $1,000 consumer GPUs...

But you're not really wrong; in a perfect world they would have had the manpower and resources to pull it off. Consider how important standalone consumer GPUs are to Nvidia vs AMD and you'll see how their focus might differ currently.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136

That's an argument for not making Big Navi at all, not an argument for making it now, as opposed to earlier.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
In case you guys don't know, NV makes tons of money from their GPUs, way more than AMD makes from CPUs and GPUs combined. NV's market value is around $300B, Intel's only $211B, AMD's $88B. And before you say NV is overpriced, they have a P/E of 90 compared to 145 for AMD. That means even at the crazy price of NV stock, their price-to-earnings ratio is way more sane than AMD's.
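
Just to make the P/E comparison concrete, here are the implied earnings from those round numbers; the figures are the rough ones above, not exact market data.

Code:
# Implied trailing earnings = market cap / P/E, using the round numbers above.
companies = {          # (market cap in $B, P/E)
    "Nvidia": (300, 90),
    "AMD":    (88, 145),
}
for name, (cap_bn, pe) in companies.items():
    print(f"{name}: ~${cap_bn / pe:.1f}B earnings")
# Nvidia: ~$3.3B, AMD: ~$0.6B, which is the point about the P/E being the saner of the two.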
 

blckgrffn

Diamond Member
May 1, 2003
9,298
3,440
136
www.teamjuchems.com
That's an argument for not making Big Navi at all, not an argument for making it now, as opposed to earlier.

I don't follow. They were busy creating APUs for a new generation of consoles. I am sure that required a lot of engineering resources. They were able to wrap that up and get these standalone GPUs ready to go in the last couple of years. I'd think current output reflects business conditions 2-3 years ago, and from what I've read here the GPU portion of AMD wasn't doing great then.

Clearly they feel like they've created something that can scale and be competitive up the stack?

If not, I expect we'll be back to hearing about the "sweet spot" and how we don't really need to consider video cards over some dollar amount ($500 now?) because the volume on those parts is so low.

I already wrote a lengthy post on how I believe RDNA v1 is a stopgap product, so we don't disagree.
 

Konan

Senior member
Jul 28, 2017
360
291
106
This is why I have an issue with using the consoles to compare against and evaluate RDNA2 on the desktop.
(A screenshot, not mine, but one I have seen and it is real, from a principal engineer at Sony; it has since been deleted.)



I asked a few pages back about scaling: why isn't the XSX clocked higher?
52CU Xbox Series X GPU @ 1.825GHz
36CU PS5 GPU can reach 2.23GHz

If we are looking at consoles for speculation then I feel the XSX is the best comparison, specifically on clocks. You've got an XSX 52CU part at 1.8GHz. I seriously doubt you will get an RDNA2 desktop card that operates at 2.3-2.5GHz plus as standard all the time. I think we'll get boosts to around 2.2GHz, but not as a permanent thing.
As for the PS5, I believe it will be powerful and compete well, and the design is more of a "cousin" to the XSX. Sure it has speed, but maybe that is because it is a cousin and not a full v2 approach.
I also don't think it is effective to look at PS5 power, for example, and extrapolate how a desktop RDNA2 card will behave, because of this "hybrid".
 

moinmoin

Diamond Member
Jun 1, 2017
5,064
8,032
136
Going by the gfx10xy listing, e.g.:
It's in the Linux GPU drivers. So each RDNA product has an identifier along the lines of gfx10XY. The X is like a family of products, and the Y is the number of the product within the family, i.e. the order in which the design was created. It's fundamentally the same as the Navi1X/2X scheme, but it only includes dies that are past a certain stage of becoming a real product. That is to say, for example, Navi11 (a dead project) does not have a gfx number.

Navi10Lite - gfx1000 (PS5)
Navi14Lite - gfx1001 (Lockhart?)
Navi10 - gfx1010 (5700XT/5700/5600XT)
Navi12 - gfx1011 (Unknown, but 40CUs and HBM2)
Navi14 - gfx1012 (5500XT/5500M/5300M)
Navi21Lite - gfx1020 (Xbox Series X)
Navi21 - gfx1030 (Rumour: ~500mm^2)
Navi22 - gfx1031 (Rumour: ~250mm^2)
Navi23 - gfx1032 (?)
VanGogh - gfx1033
VanGoghLite - gfx1040

So what does all of that mean? Well, for starters, anything "Lite" is semi-custom. That should be fairly obvious given that the PS5 and Xbox SoCs are both there.

Next thing you might notice is the pattern there.

gfx1000, 1020 and 1040 are all used for semi-custom projects.

gfx1010 is RDNA1.

gfx1030 is RDNA2.

It's worth noting that, as semi-custom projects, they likely have features their gfx names don't let on. For example, the PS5 is closer to RDNA2 in terms of its performance, given how high it can clock, though that might be after later revisions (Oberon, Flute, etc.). The RTRT functionality was probably brought over from RDNA2 right from the beginning, but no clue as to the rest.
...work on the PS5's GPU started alongside RDNA1. The XSX's joined later alongside RDNA2, so it having a more modern base would be natural. So the latter should be more comparable with the upcoming consumer RDNA2 GPUs.
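
As a quick illustration of the gfx10XY scheme described above, here is a tiny decoder; the table is just the quoted list, and the family/index split is exactly the X/Y reading given there.

Code:
# Decode gfx10XY: digit X = family of related designs, digit Y = order within that family.
GFX_IDS = {
    "gfx1000": "Navi10Lite (PS5)",
    "gfx1001": "Navi14Lite (Lockhart?)",
    "gfx1010": "Navi10 (5700 XT / 5700 / 5600 XT)",
    "gfx1011": "Navi12 (40 CU, HBM2)",
    "gfx1012": "Navi14 (5500 XT / 5500M / 5300M)",
    "gfx1020": "Navi21Lite (Xbox Series X)",
    "gfx1030": "Navi21",
    "gfx1031": "Navi22",
    "gfx1032": "Navi23",
    "gfx1033": "VanGogh",
    "gfx1040": "VanGoghLite",
}

def decode(gfx_id):
    family, index = int(gfx_id[5]), int(gfx_id[6])
    return family, index, GFX_IDS.get(gfx_id, "unknown")

print(decode("gfx1030"))   # (3, 0, 'Navi21'): first member of the RDNA2 family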
 

Paul98

Diamond Member
Jan 31, 2010
3,732
199
106
I have more confidence now in AMD executing and not over-promising like they have in the past, and in where they are starting from.
They have done a far better job lately of delivering what they promise than they did during the GCN revisions.
I can't imagine that they would make the same mistakes again, especially given what we have seen so far from RDNA1.
They should be able to reach a good bit higher transistor density with RDNA2; the 5700 XT was quite low compared to where it could be now.
Unlike GCN, where even the basic math looked bad for scaling it up past something like the 290X, RDNA looks to be a far better path forward. Now we just have to wait and see on the execution and whether there are any major deficiencies when scaling up.
We will have to wait and see obviously, but it seems far more realistic than in the recent past to expect something that performs well.
 

uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
This post feels like it's bordering on bait territory, but screw it, I'll bite.

I think you diehards on the red side are setting yourselves up for disappointment again. A couple of things don't make sense to me about expecting Navi to scale linearly.
1) The 5700 was a relatively small chip, performed well and scaled compared to the smaller chip below it. So why didn't AMD release something bigger?

What would be the point in wasting time and resources on a larger die when you're working to get a new generation out with a 50% perf/W uplift within 18 months (and it's looking like ~15 months at that)?

And that's ignoring the fact that AMD still definitely did not have a great uArch in terms of efficiency, and that 7nm prices were still rather high.

2) They're already on 7nm, so it's not even like they're moving to a new node and have to redo everything.

Yeah, just like Nvidia did with Maxwell.

Huge improvements on the same node are most definitely feasible.


3) Does AMD hate money? Why sit on a 2080 Ti competitor all these months (a year?) when the price was so high? If they can make a 5800 XT that's around the 2080 Ti, do they think there would have been no demand at $1000 a year ago? Six months ago?

This is literally the same as your first point. Like I said, it wouldn't have been very competitive launching even later than Navi10 did and closer to Ampere. Navi10 was already too late for my liking, honestly.
It's the same reason I argue against Nvidia pulling a "Super" refresh on 7nm. It'd be a waste of time and resources: while they put manpower into a GPU that would have a minor effect on the market, their competitor would be well on their way to their next-generation products and ready to absolutely clap them with real improvements.

4) New versions of Ryzen CPUs seem to come out all the time, so it's not like the company has a problem with customers doing frequent upgrades.
You mean all the ones using the exact same die?

Lol.

If this is true, then AMD really should think about changing the marketing team, because they are idiots. Now the most they can charge for a 2080 Ti competitor is $500, and $700 for a 3080 competitor, and it wouldn't come out until the end of October at the earliest, giving NV another month to monopolize GPU sales. They've screwed their shareholders and board partners out of tons of money and left consumers without any competition on the high end. The 3080 memes shouldn't be about 2080 Ti owners, they should be about AMD, because Big Navi's value just dropped in half. WTF AMD?
Yes, I'm sure it's AMD's marketing that decided when their products would be ready for launch.
 

Mopetar

Diamond Member
Jan 31, 2011
8,113
6,768
136
Without knowing what RDNA2 is, it's difficult to say how far or close it is to whatever is in the PS5. However, that's just one console, and the Xbox seems to be close to RDNA2 based on what's been said publicly.

But I'm not sure it really matters. I think we should assume that AMD wouldn't make RDNA2 worse than what's in either console, each of which is some kind of hybrid product to whatever degree. Console comparisons are just a baseline from which AMD could improve, in that case.
 

Tup3x

Golden Member
Dec 31, 2016
1,086
1,084
136
I wouldn't be so surprised if the Xbox turned out to be closer to PC RDNA2. It's likely that ~1.8 GHz is the sweet spot and that after that the perf/W starts to deteriorate quickly. Also, no one knows how things scale past 52 CUs. I'd imagine ~2 GHz being realistic (that's one way to make use of the improved perf/W: clock it higher before things get out of hand), and definitely closer to 2 GHz than 2.5 GHz.
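
A toy model of why that sweet spot matters: dynamic power goes roughly with f x V^2, and voltage has to climb as frequency does. The voltage/frequency points below are invented purely to show the shape of the curve, not measured RDNA2 behaviour.

Code:
# Made-up voltage/frequency points to illustrate perf/W falling off past a sweet spot.
points = [(1.6, 0.85), (1.8, 0.90), (2.0, 0.98), (2.2, 1.08), (2.4, 1.20)]  # (GHz, V)

base_f, base_v = points[0]
base_power = base_f * base_v ** 2
for f, v in points:
    rel_perf  = f / base_f
    rel_power = (f * v ** 2) / base_power
    print(f"{f:.1f} GHz: +{(rel_perf - 1) * 100:3.0f}% perf, +{(rel_power - 1) * 100:3.0f}% power")
# Past ~2 GHz the power cost of each extra 100 MHz grows much faster than the performance gain.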
 

Glo.

Diamond Member
Apr 25, 2015
5,803
4,777
136
If we are looking at consoles for speculation then I feel the XSX is the best comparison, specifically on clocks. You've got an XSX 52CU part at 1.8GHz. I seriously doubt you will get an RDNA2 desktop card that operates at 2.3-2.5GHz plus as standard all the time. I think we'll get boosts to around 2.2GHz, but not as a permanent thing.
If it is in between RDNA1 and RDNA2, with physical optimisation but without the architectural advancements that increase core clocks, then I think RDNA2 is shaping up in way better form than we initially thought ;).

It may also explain why Paul from RedGamingTech, in one of his videos discussing the PS5, said that the PS5 had problems clocking past 2.25 GHz because of the inherent nature of the CU design, which would not allow clocking past that without problems. That is an interesting point of view.

I wouldn't worry about it. A less efficient architecture like Vega in the Renoir APUs clocks to 2.1 GHz and overclocks to 2.4 GHz without problems, in a design that has 60 million xTors/mm2 (and as we know, the more xTors/mm2, the harder it is to clock high).
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136

Take a deep breath and slowly exhale. You might need to do this a couple of times to feel the true effect.

The gap from Navi to Big Navi is what, a little over a year? I'd imagine the focus was on console development and Zen 3.

Maybe it's just me, but I believe Lisa Su knows what's best for AMD in the long run.
 

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
The XB Series X shows nearly identical transistor density to the 5700 XT, so I wouldn't expect a big change.

That means nothing though. MS could have done that as the lower density version was the best price/performance/yield/power density compromise for their requirements.

We already know 7nm can make huge 60M+/mm² dies, because GA100 is built on it with ~54B transistors.
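
Plugging the commonly reported numbers into that density comparison; the transistor counts and die areas below are approximate public figures, not official AMD/MS statements in every case.

Code:
# Transistor density from approximate public figures: (billions of transistors, die area in mm^2).
chips = {
    "Navi10 (5700 XT)":  (10.3, 251),
    "Xbox Series X SoC": (15.3, 360),
    "GA100":             (54.2, 826),
}
for name, (xtors_bn, area_mm2) in chips.items():
    print(f"{name}: {xtors_bn * 1000 / area_mm2:.1f} MTr/mm^2")
# ~41 vs ~42.5 vs ~65.6: the console SoC is packed about as loosely as Navi10,
# while GA100 shows 7nm supports far denser (and much larger) dies.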
 