Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,631
5,943
146
All die sizes are within 5mm^2. The poster here has been right on some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

blckgrffn

Diamond Member
May 1, 2003
9,126
3,066
136
www.teamjuchems.com
Was the 5700XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video cards guy myself, I'm not sure how true this is or whether it's a meme?

I believe it was. I am sure there was a lot of pressure from MS & Sony to see functional versions of RDNA. And the year in the wild surely helped lay a more solid foundation for the driver software as well.

I've also read accounts of a fair amount of errata with costly overhead (expensive in terms of performance due to software workarounds), which seems to indicate that maybe it wasn't quite fully baked.

Its lack of full DX12 Ultimate support will likely age it very quickly. I've also read that the bits are there, but AMD is choosing to focus on the future rather than fully enabling things like RT on RDNA. Maybe it would take a lot of work, maybe the performance would be bad, maybe that's not true, but the writing on the wall is that there will likely be 150M RDNA2 devices in consoles alone over the next 5-7 years, and I am sure they have a PC forecast as well.

Given that all integrated graphics have been Vega based, and it sounds like the next laptop chips are based on RDNA2 (rumor? I feel like I read this too), the RDNA install base is going to be a teeny tiny footprint in the grand scheme of things, right? 5 SKUs total between $150 and $400?

It's looking like mission accomplished, right? Drivers are solid, lots of low-hanging-fruit wins in terms of refining the physical layout, and MS and Sony had proof of the RDNA concept in the flesh for nearly a year before their consoles rolled out.

Man, I need an ETH miner to buy my 5700xt right now before the dead end of this particular architecture branch becomes obvious to everyone and not just me 😂. Because I am not crazy, right? 🤔

I guess the 290x will ride again until November!
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,356
2,848
106
The Series X GPU is drawing roughly 135-140W, and the 16 GB of GDDR6 alone is likely to draw around 40W. That puts the GPU itself at 95-100W. That's very efficient for a 12 TF RDNA2 GPU, which is likely to perform very close to a 2080 Ti.
I won't bother with the power consumption; I already said in this thread what I think about a >100% increase in performance/W.
What I will comment on is the performance.
RX 5700 XT: 1887 MHz on average (TechPowerUp), that's 9.66 TFLOPS, so 12 TFLOPS is a 24% increase in TFLOPS, not in actual gaming performance, so let's say 18% in actual performance. On the other hand, the RTX 2080 Ti is 50% faster at 4K than the RX 5700 XT.
The difference is then 27% (150 vs. 118). So what now? You will bring up the IPC improvement card, right? 10% better IPC -> 118 * 1.1 ≈ 130, and now the difference is only 15%, which is still not close to the RTX 2080 Ti. You would need 25% better IPC to be 48% faster than the RX 5700 XT, and only then would it be very close to the RTX 2080 Ti.
What's the likelihood of 25% better IPC? My conclusion is that it won't perform close to the RTX 2080 Ti.
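For anyone who wants to follow the arithmetic, here is a rough sketch in Python. The 9.66 TFLOPS figure assumes the 5700 XT's 2560 shaders at TechPowerUp's 1887 MHz average clock; the "~75% of the TFLOPS gain shows up in games" factor and the 50%-faster 2080 Ti figure are taken from the post above, not measured.

```python
# Back-of-envelope for the Series X vs 2080 Ti argument above (sketch only).
shaders_5700xt = 2560
avg_clock_ghz = 1.887
tflops_5700xt = 2 * shaders_5700xt * avg_clock_ghz / 1000   # ~9.66 TFLOPS
tflops_xsx = 12.0                                           # marketed Series X figure

tflop_gain = tflops_xsx / tflops_5700xt - 1                 # ~24% more TFLOPS
perf_gain = 0.75 * tflop_gain                               # assume ~75% shows up in games -> ~18%
perf_2080ti = 1.50                                          # 2080 Ti relative to 5700 XT = 1.0

for ipc_gain in (0.0, 0.10, 0.25):
    xsx_perf = (1 + perf_gain) * (1 + ipc_gain)
    gap = perf_2080ti / xsx_perf - 1
    print(f"IPC +{ipc_gain:.0%}: Series X ~{xsx_perf:.2f}x 5700 XT, "
          f"2080 Ti still ~{gap:.0%} ahead")
```

With no IPC gain the 2080 Ti stays ~27% ahead, with +10% IPC ~15%, and only at +25% IPC does the gap close to a couple of percent, which is the chain of reasoning in the post.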
 
Last edited:
  • Like
Reactions: Konan

lightmanek

Senior member
Feb 19, 2017
387
754
136
I won't bother with the power consumption; I already said in this thread what I think about a >100% increase in performance/W.
What I will comment on is the performance.
RX 5700 XT: 1887 MHz on average (TechPowerUp), that's 9.66 TFLOPS, so 12 TFLOPS is a 24% increase in TFLOPS, not in actual gaming performance, so let's say 18% in actual performance. On the other hand, the RTX 2080 Ti is 50% faster at 4K than the RX 5700 XT.
The difference is then 27% (150 vs. 118). So what now? You will bring up the IPC improvement card, right? 10% better IPC -> 118 * 1.1 ≈ 130, and now the difference is only 15%, which is still not close to the RTX 2080 Ti. You would need 25% better IPC to be 48% faster than the RX 5700 XT, and only then would it be very close to the RTX 2080 Ti.
What's the likelihood of 25% better IPC? My conclusion is that it won't perform close to the RTX 2080 Ti.

On PC, yes, but in console land, gains from software optimization and the lack of extra layers of abstraction can easily win you back 10-30%. So in next-gen games I expect the Series X to offer very similar raster performance to the 2080 Ti, but in PC land, RDNA2 will need a bit more clock or CUs.
 
  • Like
Reactions: Tlh97

Timorous

Golden Member
Oct 27, 2008
1,613
2,766
136
I'd imagine the GPU cores are drawing 200-210 watts for the new consoles.

Impossible; the Series X PSU is 315W, but it is split between the mainboard and the daughterboard. Raghu has the numbers.

I rewatched the DF video on the Gears 5 demo. At the end, they talk about comparing the Series X @ locked 4K Ultra in the Gears 5 benchmark to a 3950X + 2080 @ locked 4K Ultra, and the performance was similar. The 2080 is 29% faster than the 5700XT in Gears 5. TPU had the average 5700XT clock at around 1.89 GHz, so going from 40 CU RDNA at that clock speed to 52 CU RDNA2 @ 1.825 GHz is roughly linear scaling, from a pretty low baseline.

Transistor count estimates would put 80 CU RDNA2 at 3080 level, power scaling estimates do the same, and the Series X vs. 5700XT comparison shows similar. All signs at the moment are pointing to something that is going to be playing in 3080 territory.
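A minimal sketch of that Series X vs. 5700XT scaling claim, using the CU counts and clocks quoted above; the 29% delta is the Gears 5 figure from the DF comparison, and note the raw compute works out to ~+26% rather than a flat +24%, since 52 CU x 1.825 GHz is ~12.15 TFLOPS rather than an even 12.

```python
# Does the Series X Gears 5 result look like roughly linear scaling from the 5700 XT?
tflops_5700xt = 40 * 64 * 2 * 1.887 / 1000   # 40 CU @ ~1.887 GHz -> ~9.66 TFLOPS
tflops_xsx    = 52 * 64 * 2 * 1.825 / 1000   # 52 CU @ 1.825 GHz -> ~12.15 TFLOPS

compute_gain = tflops_xsx / tflops_5700xt - 1
observed_gain = 0.29                          # Series X ~ 2080 ~ 5700 XT + 29% (Gears 5, per the post)

print(f"Compute: +{compute_gain:.0%}, observed in Gears 5: +{observed_gain:.0%}")
# If that near-linear trend held up to 80 CUs at similar clocks, you would land
# around 2x the 5700 XT, which is the basis of the '3080 territory' argument.
```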

Was the 5700XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video cards guy myself, I'm not sure how true this is or whether it's a meme?

Yes. RDNA feels very much like VLIW4 as the stepping stone between VLIW5 and GCN.

I won't bother with the power consumption; I already said in this thread what I think about a >100% increase in performance/W.
What I will comment on is the performance.
RX 5700 XT: 1887 MHz on average (TechPowerUp), that's 9.66 TFLOPS, so 12 TFLOPS is a 24% increase in TFLOPS, not in actual gaming performance, so let's say 18% in actual performance. On the other hand, the RTX 2080 Ti is 50% faster at 4K than the RX 5700 XT.
The difference is then 27% (150 vs. 118). So what now? You will bring up the IPC improvement card, right? 10% better IPC -> 118 * 1.1 ≈ 130, and now the difference is only 15%, which is still not close to the RTX 2080 Ti. You would need 25% better IPC to be 48% faster than the RX 5700 XT, and only then would it be very close to the RTX 2080 Ti.
What's the likelihood of 25% better IPC? My conclusion is that it won't perform close to the RTX 2080 Ti.

AMD's 50% perf/watt metric is probably product to product, like it was going from GCN to RDNA (V64 vs. 5700XT; that was actually exceeded, and V56 vs. 5700XT had a 49% perf/watt increase).

If you reduce clocks and voltage, then getting a 100% perf/W increase in specific scenarios seems entirely possible. Look at NV claiming 1.9x perf/watt for Ampere over Turing.
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,356
2,848
106
On PC, yes, but in console land, gains from software optimization and the lack of extra layers of abstraction can easily win you back 10-30%. So in next-gen games I expect the Series X to offer very similar raster performance to the 2080 Ti, but in PC land, RDNA2 will need a bit more clock or CUs.
Yes, that's precisely the problem: the different platform. raghu78 is comparing a console GPU to a PC GPU and saying it will perform close to it, yet that's not because the GPU itself is so capable; it's because of the extra work programmers put in to gain every single extra % of performance out of the console. So I have to ask: what's the point of such a comparison?
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,356
2,848
106
AMD's 50% perf/watt metric is probably product to product, like it was going from GCN to RDNA (V64 vs. 5700XT; that was actually exceeded, and V56 vs. 5700XT had a 49% perf/watt increase).

If you reduce clocks and voltage, then getting a 100% perf/W increase in specific scenarios seems entirely possible. Look at NV claiming 1.9x perf/watt for Ampere over Turing.
The problem is that AMD is claiming only a 50% increase, yet here some user(s) have no problem saying it's double that while the clock speed is supposedly over 2 GHz, which is more than RDNA1. Now let's include the conclusion of some users about the console GPUs: that a 36 CU GPU at 2.23 GHz (boost) consumes more than 52 CUs at 1.825 GHz. A 100% increase in perf/W is simply not possible with clocks over 2 GHz when a card with 44% more CUs consumes less power just because it has an ~18% lower clock speed.
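To illustrate why clock speed dominates power here, a crude toy model: assume power per CU scales roughly with f x V^2 and that voltage roughly tracks frequency, so power goes roughly with f^3. Real silicon won't follow this exactly; it is only meant to show the direction of the argument.

```python
# Toy model: GPU power ~ CU_count * f^3 (voltage assumed to scale ~linearly with clock).
# This is a rough illustration, not a measurement.
def relative_power(cu_count, clock_ghz):
    return cu_count * clock_ghz ** 3

narrow_and_fast = relative_power(36, 2.23)    # 36 CU @ 2.23 GHz
wide_and_slow   = relative_power(52, 1.825)   # 52 CU @ 1.825 GHz

print(f"36 CU @ 2.23 GHz vs 52 CU @ 1.825 GHz: "
      f"{narrow_and_fast / wide_and_slow:.2f}x the power in this toy model")
# ~1.26x: even though the other part has 44% more CUs, the higher-clocked one
# draws more in this model, which is the point being made about >2 GHz clocks
# and the claimed perf/W gains.
```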
 
Last edited:
  • Like
Reactions: kurosaki

AtenRa

Lifer
Feb 2, 2009
14,001
3,357
136
The problem is that AMD is claiming only a 50% increase, yet here some users have no problem saying it's double that while the clock speed is supposedly over 2 GHz, which is more than RDNA1.

AMD talked about 50% higher perf/watt

What some people are saying is double the performance of the 5700XT for the Big Navi chip.

The two are not the same.

Also, to point out: RDNA2 may have 50% higher perf/watt at the same performance or at the same power vs. RDNA1, but how efficient the top Big Navi card will be is another thing.
So it's not impossible that a Big Navi card, in order to reach RTX 3080 performance, ends up with a perf/watt gain way lower than the 50% AMD claims.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,356
2,848
106
AMD talked about 50% higher perf/watt

What some people are saying is double the performance of the 5700XT for the Big Navi chip.

The two are not the same.
Then please check what Glo. posted more than once about his "expected" performance and TBP of RDNA2 GPUs.

BTW, if we say a 52 CU GPU at 1.825 GHz consumes 140W at max including memory, and in a PC such a GPU performs like a 2080 (Super), then that's also ~2x better perf/W compared to the RX 5700 XT, but at least the clock speed is not very high.
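Rough numbers behind that ~2x figure; the ~225W board power for the RX 5700 XT and the ~1.25x positioning of the 2080 Super at 4K are my assumptions (roughly in line with TPU-style figures), not something stated in the post.

```python
# Rough perf/W comparison for the hypothetical 52 CU @ 1.825 GHz / 140W part above.
perf_5700xt, power_5700xt = 1.00, 225.0      # assumed: 5700 XT board power ~225W
perf_hypo, power_hypo = 1.25, 140.0          # assumed: ~2080 Super level at 140W

ratio = (perf_hypo / power_hypo) / (perf_5700xt / power_5700xt)
print(f"perf/W vs 5700 XT: ~{ratio:.1f}x")   # ~2.0x
```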
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,356
2,848
106
Yes, precisely. It's not like I would be against it; most likely I would buy a new laptop with RDNA2 and Zen 3 next year if this were true, but to me it simply looks like hype, and after the Polaris and RDNA1 hype I'm very tired of it.
 

Timorous

Golden Member
Oct 27, 2008
1,613
2,766
136
The problem is that AMD is claiming only a 50% increase, yet here some user(s) have no problem saying it's double that while the clock speed is supposedly over 2 GHz, which is more than RDNA1. Now let's include the conclusion of some users about the console GPUs: that a 36 CU GPU at 2.23 GHz (boost) consumes more than 52 CUs at 1.825 GHz. A 100% increase in perf/W is simply not possible with clocks over 2 GHz when a card with 44% more CUs consumes less power just because it has an ~18% lower clock speed.

When AMD compared GCN to RDNA performance per watt, they were comparing two products, namely Vega 64 and the 5700XT. If AMD had cherry-picked the comparison, they could have made a far more outlandish claim, like Nvidia did. As it happens, when you compare Vega to the 5000 series, the only two match-ups that do not provide 50% or better performance-per-watt increases are Vega 56 vs. the 5700XT and the 5500XT (the 5700XT and 5500XT have the same performance per watt), where the gain was a meager 49%.

On the basis that the RDNA -> RDNA2 jump is also product to product (and I think it says something to that effect in the footnotes of one of the slides of one of the presentations, which I am not going to spend an hour or so digging out), and that perf/watt scaling is like the 5700XT and 5500XT, we can do some simple maths: 210W x 2 = 420W, and 420W x 0.66 (i.e. dividing by 1.5 for the 50% perf/watt gain) ≈ 280W ballpark for double the 5700XT's performance.
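The same arithmetic spelled out, for anyone following along; the 210W board power is the figure used above and the 1.5 divisor is AMD's claimed +50% perf/watt, so this is a restatement of the post's estimate rather than new data.

```python
# 'Double the 5700 XT at RDNA2 perf/W' power estimate from the post above.
power_5700xt_w = 210.0          # board power figure used in the post
perf_target = 2.0               # 2x the 5700 XT
perf_per_watt_gain = 1.5        # AMD's claimed +50% perf/W for RDNA2

estimated_power = power_5700xt_w * perf_target / perf_per_watt_gain
print(f"~{estimated_power:.0f}W for 2x 5700 XT performance")   # ~280W ballpark
```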

We can do the same with transistor counts, where 100% more transistors has historically, for AMD, meant between +70% and +139% additional performance. Or we can take the Series X performance in Gears 5 and see that a 24% TFLOP increase is around a 29% performance increase, which, if that trend holds up to 80 CUs, also puts an 80 CU RDNA2 part at around double the 5700XT's performance.

I would not put any stock in PS5 SoC estimates since Sony have not detailed anything.

MS have come out and said what their target is, and the 255W of power going to the SoC + NVMe + GDDR6 is more like 205W after you take off efficiency and longevity headroom, which means the GPU must be pretty damn good at those clocks. It also suggests that 1.825 GHz is on the low end of the voltage/power curve, so for a PC part there is headroom to increase clock speeds; to what, I have no clue.

The real questions are: is 'Big Navi' 80 CUs, and does it scale up to that without losing utilisation the way GCN did at 64 CUs? If it is and it does scale, AMD have a 3080 competitor on their hands; if not, then maybe it will land between the 3070 and 3080.
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,135
1,089
136
When AMD compared GCN to RDNA performance per watt, they were comparing two products, namely Vega 64 and the 5700XT. If AMD had cherry-picked the comparison, they could have made a far more outlandish claim, like Nvidia did. As it happens, when you compare Vega to the 5000 series, the only two match-ups that do not provide 50% or better performance-per-watt increases are Vega 56 vs. the 5700XT and the 5500XT (the 5700XT and 5500XT have the same performance per watt), where the gain was a meager 49%.

On the basis that the RDNA -> RDNA2 jump is also product to product (and I think it says something to that effect in the footnotes of one of the slides of one of the presentations, which I am not going to spend an hour or so digging out), and that perf/watt scaling is like the 5700XT and 5500XT, we can do some simple maths: 210W x 2 = 420W, and 420W x 0.66 (i.e. dividing by 1.5 for the 50% perf/watt gain) ≈ 280W ballpark for double the 5700XT's performance.

We can do the same with transistor counts, where 100% more transistors has historically, for AMD, meant between +70% and +139% additional performance. Or we can take the Series X performance in Gears 5 and see that a 24% TFLOP increase is around a 29% performance increase, which, if that trend holds up to 80 CUs, also puts an 80 CU RDNA2 part at around double the 5700XT's performance.

I would not put any stock in PS5 SoC estimates since Sony have not detailed anything.

MS have come out and said what their target is, and the 255W of power going to the SoC + NVMe + GDDR6 is more like 205W after you take off efficiency and longevity headroom, which means the GPU must be pretty damn good at those clocks. It also suggests that 1.825 GHz is on the low end of the voltage/power curve, so for a PC part there is headroom to increase clock speeds; to what, I have no clue.

The real questions are: is 'Big Navi' 80 CUs, and does it scale up to that without losing utilisation the way GCN did at 64 CUs? If it is and it does scale, AMD have a 3080 competitor on their hands; if not, then maybe it will land between the 3070 and 3080.
If this review of the 3080 is legit, AMD shouldn't have much of a problem equaling the 3080 with Big Navi. The power consumption of the 3080 is painful to look at. Remember, most 2080 Tis sold were RMA'd within a month or two because of overheating issues. The 3080 uses 60-70W more than the 2080 Ti.
 

Glo.

Diamond Member
Apr 25, 2015
5,708
4,552
136
The real questions are: is 'Big Navi' 80 CUs, and does it scale up to that without losing utilisation the way GCN did at 64 CUs? If it is and it does scale, AMD have a 3080 competitor on their hands; if not, then maybe it will land between the 3070 and 3080.
IF it scales, then the 3080 is the bottom tier of what you will achieve with 80 CUs and over 2 GHz clock speeds.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
If this review of the 3080 is legit, AMD shouldn't have much of a problem equaling the 3080 with Big Navi. The power consumption of the 3080 is painful to look at. Remember, most 2080 Tis sold were RMA'd within a month or two because of overheating issues. The 3080 uses 60-70W more than the 2080 Ti.

Most 2080 Tis were RMA'd for the 'space invaders' artifacting, not overheating. But one of the reasons for the fancy (and rather expensive) new cooler is the very high power consumption.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
I think you diehards on the red side are setting yourselves up for disappointment again. A couple of things don't make sense to me about expecting Navi to scale linearly.
1) The 5700 was a relatively small chip, and it performed and scaled well compared to the smaller chip. So why didn't AMD release something bigger?
2) They're already on 7nm, so it's not even like they're moving to a new node and have to redo everything.
3) Does AMD hate money? Why sit on a 2080 Ti competitor all these months (a year?) when the price was so high? If they can make a 5800 XT that's around the 2080 Ti, do they think there would have been no demand at $1000 a year ago? 6 months ago?
4) New versions of Ryzen CPUs seem to come out all the time, so it's not like the company has a problem with customers doing frequent upgrades.

If this is true, then AMD really should think about changing the marketing team, because they are idiots. Now the most they can charge for a 2080 Ti competitor is $500, and $700 for a 3080 competitor, and it wouldn't come out until the end of October at the earliest, giving NV another month to monopolize GPU sales. They screwed their shareholders and board partners out of tons of money, and consumers out of having any competition on the high end. The 3080 memes shouldn't be about 2080 Ti owners; they should be about AMD, because Big Navi's value just dropped in half. WTF, AMD?
 

blckgrffn

Diamond Member
May 1, 2003
9,126
3,066
136
www.teamjuchems.com
I think you diehards on the red side are setting yourselves up for disappointment again. A couple of things don't make sense to me about expecting Navi to scale linearly.
1) The 5700 was a relatively small chip, and it performed and scaled well compared to the smaller chip. So why didn't AMD release something bigger?
2) They're already on 7nm, so it's not even like they're moving to a new node and have to redo everything.
3) Does AMD hate money? Why sit on a 2080 Ti competitor all these months (a year?) when the price was so high? If they can make a 5800 XT that's around the 2080 Ti, do they think there would have been no demand at $1000 a year ago? 6 months ago?
4) New versions of Ryzen CPUs seem to come out all the time, so it's not like the company has a problem with customers doing frequent upgrades.

If this is true, then AMD really should think about changing the marketing team, because they are idiots. Now the most they can charge for a 2080 Ti competitor is $500, and $700 for a 3080 competitor, and it wouldn't come out until the end of October at the earliest, giving NV another month to monopolize GPU sales. They screwed their shareholders and board partners out of tons of money, and consumers out of having any competition on the high end. The 3080 memes shouldn't be about 2080 Ti owners; they should be about AMD, because Big Navi's value just dropped in half. WTF, AMD?

:tearsofjoy:

AMD loves money and has been prioritizing their efforts and available wafers based on margins. Evidently consumer GPU is bottom of the barrel. This is exactly what their shareholders expect them to do - make lots of money.

That's why Intel has been so "valuable" for so long. Their stock price is largely reflective of their profitability this quarter/this year. It's not exactly an endorsement of their current product hotness.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
:tearsofjoy:

AMD loves money and has been prioritizing their efforts and available wafers based on margins. Evidently consumer GPU is bottom of the barrel. This is exactly what their shareholders expect them to do - make lots of money.

That's why Intel has been so "valuable" for so long. Their stock price is largely reflective of their profitability this quarter/this year. It's not exactly an endorsement of their current product hotness.

Devil's advocate mode on.

But now they are going to release that big chip after NVidia has already significantly driven down the price it can get for it, which makes for a worse value/profit proposition... /DA mode.

IMO the reason they didn't make Big Navi sooner is that high-end chips without ray tracing and other advanced features would soon be an expensive albatross. They didn't want to make a chip that big and expensive and have a short selling window for it.

So now, when it can come with all the RDNA2 features, is the right time for that big, expensive chip, which can have a much longer selling window.
 

randomhero

Member
Apr 28, 2020
181
247
86
After the RTX 3080 reviews, I am even more confident in my opinion that AMD will have an RTX 3090 challenger. Must say the stars (pun intended!) have aligned pretty well for AMD.

Not much of a wait anymore; the end of October is near. Till then we can enjoy more bonkers rumours, like that 128MB cache. :D
Knowing AMD it could turn out to be true. :D
 

blckgrffn

Diamond Member
May 1, 2003
9,126
3,066
136
www.teamjuchems.com
But now they are going to release that big chip after NVidia has already significantly driven down the price it can get for it...

Pretty sure that the market for Epyc CPUs ridiculously dwarfs the market for $1,000 consumer GPUs...

But you're not really wrong; in a perfect world they would have had the manpower and resources to pull it off. Consider how important standalone consumer GPUs are to Nvidia vs. AMD and you'll see how their focus might differ currently.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,227
5,228
136
Pretty sure that the market for Epyc CPUs ridiculously dwarfs the market for $1,000 consumer GPUs...

But you're not really wrong; in a perfect world they would have had the manpower and resources to pull it off. Consider how important standalone consumer GPUs are to Nvidia vs. AMD and you'll see how their focus might differ currently.

That's an argument for not making Big Navi at all, not an argument for making it now, as opposed to earlier.
 

DJinPrime

Member
Sep 9, 2020
87
89
51
Pretty sure that the market for Epyc CPUs ridiculously dwarfs the market for $1,000 consumer GPUs...

But you're not really wrong; in a perfect world they would have had the manpower and resources to pull it off. Consider how important standalone consumer GPUs are to Nvidia vs. AMD and you'll see how their focus might differ currently.
In case you guys don't know, NV makes tons of money from their GPUs, way more than what AMD is making from both CPUs and GPUs. NV's market value is at $300B, Intel's only at $211B, AMD's at $88B. And before you say NV is overpriced, they have a P/E of 90 compared to 145 for AMD. That means even at the crazy price of NV stock, their price-to-earnings ratio is way more sane than AMD's.
 

blckgrffn

Diamond Member
May 1, 2003
9,126
3,066
136
www.teamjuchems.com
That's an argument for not making Big Navi at all, not an argument for making it now, as opposed to earlier.

I don't follow. They were busy creating APUs for a new generation of consoles. I am sure that required a lot of engineering resources. They were able to wrap that up and get these standalone GPUs ready to go in the last couple of years. I'd think current output reflects business conditions 2-3 years ago, and from what I've read here, the GPU portion of AMD wasn't doing great then.

Clearly they feel like they've created something that can scale and be competitive up the stack?

If not, I expect we'll be back to hearing about the "sweet spot" and how we don't really need to consider video cards over some dollar amount ($500 now?) because the volume on those parts is so low.

I already wrote a lengthy post on why I believe RDNA v1 is a stopgap product; we don't disagree.
 

Konan

Senior member
Jul 28, 2017
360
291
106
This is why I have an issue with using the consoles to compare against and evaluate desktop RDNA2.
(A screenshot, not mine, but seen and genuine, from a principal engineer at Sony; the post has since been deleted.)

I asked a few pages back about scaling....
Scaling..... Why isn't the XSX higher clocked?
52 CU Xbox Series X GPU @ 1.825 GHz
36 CU PS5 GPU can reach 2.23 GHz

If we are looking at consoles for speculation, then I feel the XSX is the best comparison, specifically the clocks. You've got an XSX 52 CU part at 1.8 GHz. I seriously doubt you will get an RDNA2 desktop card that operates at 2.3-2.5 GHz+ as standard all the time. I think we'll get boosts to ~2.2 GHz, but not as a permanent thing.
As for the PS5, I believe it will be powerful and compete well, and the design is more of a "cousin" to the XSX. Sure, it has speed, but maybe that is because it is a cousin and not a full v2 approach.
I also don't think it is effective to look at PS5 power, for example, and extrapolate how a desktop RDNA2 card will behave, because of this "hybrid".