Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,705
6,427
146
All die sizes are within 5mm^2. The poster here has been right about some things in the past afaik, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
I think AMD has a big window of opportunity here. Due to NVIDIA opting to save money and go with the vastly cheaper Samsung 8nm process (see the EE Times article from summer 2019 citing cost as the driving factor in NVIDIA's switch to Samsung), NVIDIA only saw a 10-15% gain in frames/watt despite a full node jump.

AMD has been delivering on and exceeding its stated efficiency goals since it started talking about them in 2016. They have exceeded every energy efficiency claim since Ryzen 1, and they did it again with RDNA 1 (they guided the public to a 50% increase in energy efficiency).

AMD is again stating that they will see a 50% jump in energy efficiency with RDNA 2. A 50% jump in energy efficiency doesn't mean half the energy used; it's more like a jump from 10 frames/watt to 15 frames/watt. So an RDNA 2 card would use 2/3 (0.67) the power of an RDNA 1 card for the same performance.


The 5700 XT draws 210 watts on average, compared to 322 watts for the 3080 FE, which is just about twice as fast. Keep in mind Navi 10 is 251mm^2 while the 3080 is a 628mm^2 chip, so there is also a ton of die size to work with.

RDNA 2 could crank the wattage to the exact same as the 3080, and if performance scales well: 210 watts x 2 (to reach 3080 performance) x 0.67 (the expected 50% efficiency increase for RDNA 2) gives... 281 watts average gaming power consumption.
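A quick sanity check of that napkin math in Python (just a sketch: the 210W average and the ~2x performance gap are the review figures quoted above, and linear performance-to-power scaling is my own assumption):

```python
# Napkin math: power for an RDNA 2 card to match an RTX 3080, assuming
# performance scales linearly with power and AMD's claimed +50% perf/watt
# for RDNA 2 over RDNA 1 holds.

power_5700xt = 210        # W, average gaming draw of the 5700 XT (review figure)
perf_ratio_3080 = 2.0     # the 3080 is roughly twice as fast as a 5700 XT
perf_per_watt_gain = 1.5  # AMD's stated +50% perf/watt for RDNA 2

# Scale the 5700 XT up to 3080 performance, then apply the efficiency gain.
rdna2_power = power_5700xt * perf_ratio_3080 / perf_per_watt_gain
print(f"Estimated RDNA 2 power at 3080 performance: {rdna2_power:.0f} W")
# -> 280 W (the x0.67 rounding above gives 281 W), vs 322 W for the 3080 FE
```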

So AMD could equal 3080 performance for about 40 fewer watts. Yes, AMD has not been at the ultra high end for a few gens, but they DO have a window of opportunity here if they decide to release the full 80 CU "big" Navi to gamers. They could even fall short of their stated 50% efficiency gain and still be just about level with NVIDIA.

Of course, this isn't all GPU magic. AMD is simply using a superior process in TSMC 7nm versus Samsung's revised 10nm node (8nm). But if RDNA 2 can improve that much over RDNA on what is essentially a slightly more mature node, then hats off to AMD! I can't wait to see who is going to get my $400-$500 this fall to replace my 1070 Ti :cool:
 

Saylick

Diamond Member
Sep 10, 2012
3,531
7,858
136
Here's something new, I think. Sony officially lists the max power usage for the PS5: 350W for the PS5, 340W for the PS5 Digital Edition.

That's some 30W higher than what the PSU in the Xbox Series X is rated at, I believe. The CPUs between the two consoles are roughly the same, so I wouldn't be surprised if there's 20-30W more power from the GPU.

Xbox Series X GPU: 52 CUs @ 1.825 GHz, 130W-140W
PS5 GPU: 36 CUs @ 2.23 GHz, 150W-170W

Seems to jibe with the rough napkin math I did earlier last month:
130W * (2.23 GHz / 1.825 GHz)^3 * (36 CUs / 52 CUs) = 164W
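If anyone wants to reproduce that, here it is as a script (cube-law clock scaling is an assumption, not a measured fit, and the 130W starting point is my GPU-only estimate from above):

```python
# Scale the Series X GPU power estimate to PS5 specs, assuming power grows
# with the cube of clock speed and linearly with CU count.

xsx_power = 130                      # W, assumed Series X GPU power
xsx_clock, ps5_clock = 1.825, 2.23   # GHz
xsx_cus, ps5_cus = 52, 36

ps5_power = xsx_power * (ps5_clock / xsx_clock) ** 3 * (ps5_cus / xsx_cus)
print(f"Estimated PS5 GPU power: {ps5_power:.0f} W")  # -> 164 W
```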
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
Here's something new, I think. Sony officially lists the max power usage for the PS5: 350W for the PS5, 340W for the PS5 Digital Edition.

That's some 30W higher than what the PSU in the Xbox Series X is rated at, I believe. The CPUs between the two consoles are roughly the same, so I wouldn't be surprised if there's 20-30W more power from the GPU.

Xbox Series X GPU: 52 CUs @ 1.825 GHz, 130W-140W
PS5 GPU: 36 CUs @ 2.23 GHz, 150W-170W

Seems to jibe with the rough napkin math I did earlier last month:
130W * (2.23 GHz / 1.825 GHz)^3 * (36 CUs / 52 CUs) = 164W
I think you are overstating how much 8 Zen 2 cores at very modest clocks (~3.5 GHz) are consuming. The CPU elements are probably drawing 60 watts or so at maximum. Zen 2 is extremely efficient in the mid-3 GHz range when boost clocks aren't cranked up.

Also remember Sony is using a different storage method and a faster PCIe bus, which should draw several more watts than MS's solution. I'd imagine the GPU cores are drawing 200-210 watts in the new consoles.
 

blckgrffn

Diamond Member
May 1, 2003
9,298
3,440
136
www.teamjuchems.com
The power brick for the new Series X is rated for 2.2 amps at 200-220V vs the One X at 1.3 amps at the same voltage. That's a ~70% overall power usage increase.

I have no idea what that Jaguar CPU was pulling for wattage, but it probably wasn't much. It seems like the One X's power consumption was driven by the GPU, what with a heatsink revision and anecdotally high failure rates.

I mean, the PSU for the Series X, if I am doing the math right, is 440 to 480W? That doesn't seem right, but I can't find better information at the moment. Shouldn't the power supply for this thing have UL verification or something?
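For the curious, here's that math worked through (a sketch; note that the amp figures are input-side ratings, which is why the volt-amp number overstates what the console can actually draw):

```python
# The brick's amp rating is input current, so amps x volts gives apparent
# input power (VA) at the wall, not the DC wattage the console can draw.

one_x_amps, series_x_amps = 1.3, 2.2   # rated input current at 200-220 V
print(f"Rated input increase: {series_x_amps / one_x_amps - 1:.0%}")  # -> 69%

for volts in (200, 220):
    print(f"Apparent input power at {volts} V: {series_x_amps * volts:.0f} VA")
# -> 440 and 484 VA at the wall; the DC side is rated much lower (see replies)
```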
 
  • Like
Reactions: Tlh97

Karnak

Senior member
Jan 5, 2017
399
767
136
The PS4 Pro PSU is rated at 310W, yet power consumption is around what, 165W for the entire console?

I'd say the PS5 with its 350W PSU will be around 220W, and the XSX with its 315W PSU somewhere around 190W. Maybe a bit more here and there, but not by much.

No surprise though that the PS5 will consume more power with its >2.2GHz clocks, IMO. RDNA2 can evidently reach those clock speeds, but a chip with more CUs and lower clocks will always be more efficient than one with fewer CUs and higher clocks.
 

Saylick

Diamond Member
Sep 10, 2012
3,531
7,858
136
The power brick for the new Series X is rated for 2.2 amps at 200-220V vs the One X at 1.3 amps at the same voltage. That's a ~70% overall power usage increase.

I have no idea what that Jaguar CPU was pulling for wattage, but it probably wasn't much. It seems like the One X's power consumption was driven by the GPU, what with a heatsink revision and anecdotally high failure rates.

I mean, the PSU for the Series X, if I am doing the math right, is 440 to 480W? That doesn't seem right, but I can't find better information at the moment. Shouldn't the power supply for this thing have UL verification or something?
Nah, it has two 12V rails, one at 21.25A (255W) and the other at 5A (60W). If I'm not mistaken, the former is for the mainboard (SoC) and the latter is for the IO board.

Here. You can scope it out for yourself: Digital Foundry's Xbox Series X Teardown
 

Saylick

Diamond Member
Sep 10, 2012
3,531
7,858
136
I think you are overstating how much 8 Zen 2 cores at very modest clocks (~3.5 GHz) are consuming. The CPU elements are probably drawing 60 watts or so at maximum. Zen 2 is extremely efficient in the mid-3 GHz range when boost clocks aren't cranked up.

Also remember Sony is using a different storage method and a faster PCIe bus, which should draw several more watts than MS's solution. I'd imagine the GPU cores are drawing 200-210 watts in the new consoles.
The SoC is probably ~200W total: ~130-140W for the GPU, ~50W for the CPU, and the remainder for everything else. Here, Digital Foundry even shows you the Xbox Series X's PSU: Digital Foundry's Xbox Series X Teardown

So, in total, that's a 315W PSU: 255W for the mainboard and 60W for the IO board. Throw in some loss for inefficiencies and the total wattage is in the right ballpark.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The power brick for the new Series X is rated for 2.2 amps at 200-220V vs the One X at 1.3 amps at the same voltage. That's a ~70% overall power usage increase.

I have no idea what that Jaguar CPU was pulling for wattage, but it probably wasn't much. It seems like the One X's power consumption was driven by the GPU, what with a heatsink revision and anecdotally high failure rates.

I mean, the PSU for the Series X, if I am doing the math right, is 440 to 480W? That doesn't seem right, but I can't find better information at the moment. Shouldn't the power supply for this thing have UL verification or something?

Console PSUs are never run anywhere close to their max rated wattage. The Xbox One X had a 245W PSU and drew around 172W in Gears of War 4 in AnandTech's review. There is good reason for that: these consoles have long lifetimes (in some cases 10 years or more), and PSU output is likely to degrade with capacitor aging due to heat. These consoles pack a lot of compute into a small form factor and run much hotter than your typical PC.


The Xbox Series X has dual +12V power rails: 21.25A on the rail that connects to the mainboard (which houses the SoC, GDDR6, NVMe and HDMI connector) and 5A on the rail connected to the daughter I/O board. 12 x 21.25 + 12 x 5 = 255 + 60 = 315W.


The Zen 2 cores at 3.6 GHz draw 55W. The onboard NVMe draws 4-5W, and there is provision for another NVMe expansion drive. The entire Series X system is likely to draw 210-220W, which includes the Blu-ray drive and other I/O ports. The Series X GPU is drawing roughly 135-140W, and the 16 GB of GDDR6 alone is likely to draw around 40W. That puts the Series X GPU cores at 95-100W. That's very efficient for a 12 TF RDNA2 GPU that is likely to perform very close to a 2080 Ti.
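Summing those estimates up (a sketch; every line item below is a rough guess from this post, not a measurement, and the misc figure is my own placeholder):

```python
# Rough Series X power budget from the estimates above (all watts).
budget = {
    "Zen 2 CPU @ 3.6 GHz": 55,
    "16 GB GDDR6": 40,
    "GPU cores": 97,                  # midpoint of the 95-100 W estimate
    "NVMe SSD": 5,
    "Blu-ray, fans, I/O, misc": 20,   # placeholder for everything else
}
total = sum(budget.values())
print(f"System estimate: {total} W")        # -> 217 W, inside the 210-220 W guess
print(f"Headroom on the 315 W PSU: {315 - total} W")
```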
 

A///

Diamond Member
Feb 24, 2017
4,351
3,158
136
Was the 5700XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video cards guy myself, I'm not sure how true it is or whether it's just a meme.
 

SPBHM

Diamond Member
Sep 12, 2012
5,059
413
126
Was the 5700XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video cards guy myself, I'm not sure how true it is or whether it's just a meme.

The 5700XT didn't use a large die, so there is plenty of potential for more from AMD on that basis alone.
Also, with RDNA2 being at feature parity with the new consoles, it should stay relevant for much longer.
 
  • Like
Reactions: lightmanek

blckgrffn

Diamond Member
May 1, 2003
9,298
3,440
136
www.teamjuchems.com
Was the 5700XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video cards guy myself, I'm not sure how true it is or whether it's just a meme.

I believe it was. I am sure there was a lot of pressure from MS & Sony to see functional versions of RDNA. And the year in the wild surely helped put the driver software on a more solid footing as well.

I've also read accounts that there is a fair amount of errata with costly overhead (expensive in terms of performance, due to software workarounds), which seems to indicate that maybe it wasn't quite fully baked.

Its lack of full DX12 Ultimate support will likely age it very quickly. I've also read that the bits are there but AMD is choosing to focus on the future rather than fully enabling things like RT on RDNA. Maybe it would take a lot of work, maybe the performance would be bad, maybe that's not true, but the writing on the wall is that there will likely be 150M RDNA2 devices in consoles alone over the next 5-7 years, and I am sure they have a PC forecast as well.

Given that all integrated graphics have been Vega-based, and it sounds like the next laptop chips are based on RDNA2 (rumor? I feel like I read this too), the RDNA install base is going to be a teeny tiny footprint in the grand scheme of things, right? 5 SKUs total between $150 and $400?

It's looking like mission accomplished, right? Drivers are solid, there were lots of low-hanging-fruit wins in refining the physical layout, and MS and Sony had proof of the RDNA concept in the flesh for nearly a year before their consoles rolled out.

Man, I need an ETH miner to buy my 5700 XT right now, before the dead end of this particular architecture branch becomes obvious to everyone and not just me 😂. Because I am not crazy, right? 🤔

I guess the 290x will ride again until November!
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
The Series X GPU is drawing roughly 135-140W, and the 16 GB of GDDR6 alone is likely to draw around 40W. That puts the Series X GPU cores at 95-100W. That's very efficient for a 12 TF RDNA2 GPU that is likely to perform very close to a 2080 Ti.
I won't bother with the power consumption; I already said in this thread what I think about a >100% increase in performance/W.
What I will comment on is the performance.
RX 5700 XT: 1887MHz on average (TechPowerUp), which is 9.66 TFLOPS, so 12 TFLOPS is a 24% increase in TFLOPS, not in actual gaming performance; let's say 18% in actual performance. On the other hand, the RTX 2080 Ti is 50% faster at 4K than the RX 5700 XT.
The difference is then 27%. So what now? You will bring up the IPC improvement card, right? 10% better IPC -> 118 x 1.1 ≈ 130, and now the difference is only 15%, which is still not close to the RTX 2080 Ti. You would need 25% better IPC to be 48% faster than the RX 5700 XT, and only then would it be very close to the RTX 2080 Ti.
What's the likelihood of 25% better IPC? My conclusion is that it won't perform close to the RTX 2080 Ti.
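Here's the same arithmetic as a script, if anyone wants to play with the inputs (the ~75% TFLOPS-to-FPS scaling and the 50% 4K gap are my assumptions from above):

```python
# TFLOPS and IPC arithmetic, RX 5700 XT = 100.
tflops_5700xt = 40 * 64 * 2 * 1.887 / 1000    # 40 CUs, 128 FLOP/CU/clock -> 9.66
raw_gain = 12.0 / tflops_5700xt               # ~1.24x in TFLOPS
perf_xsx = 100 * (1 + (raw_gain - 1) * 0.75)  # ~118 if ~75% of the gain shows up
perf_2080ti = 150                             # ~50% faster at 4K (TechPowerUp)

print(f"Gap, no IPC gain: {perf_2080ti / perf_xsx - 1:.0%}")           # -> 27%
print(f"Gap, +10% IPC:    {perf_2080ti / (perf_xsx * 1.1) - 1:.0%}")   # -> 15%
print(f"Gap, +25% IPC:    {perf_2080ti / (perf_xsx * 1.25) - 1:.0%}")  # -> 2%
```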
 
Last edited:
  • Like
Reactions: Konan

lightmanek

Senior member
Feb 19, 2017
429
916
136
I won't bother with the power consumption; I already said in this thread what I think about a >100% increase in performance/W.
What I will comment on is the performance.
RX 5700 XT: 1887MHz on average (TechPowerUp), which is 9.66 TFLOPS, so 12 TFLOPS is a 24% increase in TFLOPS, not in actual gaming performance; let's say 18% in actual performance. On the other hand, the RTX 2080 Ti is 50% faster at 4K than the RX 5700 XT.
The difference is then 27%. So what now? You will bring up the IPC improvement card, right? 10% better IPC -> 118 x 1.1 ≈ 130, and now the difference is only 15%, which is still not close to the RTX 2080 Ti. You would need 25% better IPC to be 48% faster than the RX 5700 XT, and only then would it be very close to the RTX 2080 Ti.
What's the likelihood of 25% better IPC? My conclusion is that it won't perform close to the RTX 2080 Ti.

On PC, yes, but in console land, gains from software optimization and the lack of extra abstraction layers can easily win back 10%-30%. So in next-gen games I expect the Series X to offer very similar raster performance to a 2080 Ti, but in PC land RDNA2 will need a bit more clock or more CUs.
 
  • Like
Reactions: Tlh97

Timorous

Golden Member
Oct 27, 2008
1,748
3,240
136
I'd imagine the GPU cores are drawing 200-210 watts in the new consoles.

Impossible; the Series X PSU is 315W, and it is split between the mainboard and the daughter board. Raghu has the numbers.

I rewatched the DF video on the Gears 5 demo. At the end they talk about comparing the Series X at locked 4K Ultra in the Gears 5 benchmark to a 3950X + 2080 at locked 4K Ultra, and the performance was similar. The 2080 is 29% faster than the 5700 XT in Gears 5. TPU had the average 5700 XT clock at around 1.89GHz, so going from 40 CU RDNA at that clock speed to 52 CU RDNA2 @ 1.825GHz is linear scaling from a pretty low baseline.

Transistor count estimates would put 80 CU RDNA2 at 3080 level, power scaling estimates do the same, and Series X vs 5700 XT shows similar. All signs at the moment are pointing to something that is going to be playing in 3080 territory.
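For reference, the raw-throughput ratio behind that linear-scaling remark (a sketch using the clocks quoted above):

```python
# Raw shader throughput: Series X (52 CU @ 1.825 GHz) vs
# 5700 XT (40 CU @ ~1.89 GHz average, per TPU).
ratio = (52 * 1.825) / (40 * 1.89)
print(f"Raw throughput ratio: {ratio:.2f}x")  # -> 1.26x
# The 2080 is ~29% faster than a 5700 XT in Gears 5, so matching it takes
# only roughly linear scaling from that clock-for-clock RDNA baseline.
```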

Was the 5700XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video cards guy myself, I'm not sure how true it is or whether it's just a meme.

Yes. RDNA feels very much like VLIW4: the stepping stone between VLIW5 and GCN.

I won't bother with the power consumption; I already said in this thread what I think about a >100% increase in performance/W.
What I will comment on is the performance.
RX 5700 XT: 1887MHz on average (TechPowerUp), which is 9.66 TFLOPS, so 12 TFLOPS is a 24% increase in TFLOPS, not in actual gaming performance; let's say 18% in actual performance. On the other hand, the RTX 2080 Ti is 50% faster at 4K than the RX 5700 XT.
The difference is then 27%. So what now? You will bring up the IPC improvement card, right? 10% better IPC -> 118 x 1.1 ≈ 130, and now the difference is only 15%, which is still not close to the RTX 2080 Ti. You would need 25% better IPC to be 48% faster than the RX 5700 XT, and only then would it be very close to the RTX 2080 Ti.
What's the likelihood of 25% better IPC? My conclusion is that it won't perform close to the RTX 2080 Ti.

AMD's 50% perf/watt metric is probably product to product, like it was going from GCN to RDNA (V64 vs 5700 XT; that was actually exceeded, and V56 vs 5700 XT saw a 49% perf/watt increase).

If you reduce clocks and voltage, then getting a 100% perf/W increase in specific scenarios seems entirely possible. Look at NV claiming 1.9x perf/watt for Ampere over Turing.
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
On PC, yes, but in console land, gains from software optimization and the lack of extra abstraction layers can easily win back 10%-30%. So in next-gen games I expect the Series X to offer very similar raster performance to a 2080 Ti, but in PC land RDNA2 will need a bit more clock or more CUs.
Yes, that's precisely the problem: the different platform. raghu78 is comparing a PC GPU to a console GPU and saying the latter will perform close to the former, yet that's not because the GPU itself is so capable; it's because of the extra work programmers put in to squeeze every single extra % of performance out of the console. So I have to ask: what's the point of such a comparison?
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
AMD's 50% perf/watt metric is probably product to product, like it was going from GCN to RDNA (V64 vs 5700 XT; that was actually exceeded, and V56 vs 5700 XT saw a 49% perf/watt increase).

If you reduce clocks and voltage, then getting a 100% perf/W increase in specific scenarios seems entirely possible. Look at NV claiming 1.9x perf/watt for Ampere over Turing.
The problem is that AMD is claiming only a 50% increase, yet some user(s) here have no problem saying it's double that while the clock speed is supposedly over 2GHz, which is more than RDNA1. Now add the conclusion some users reached about the console GPUs: that a 36 CU GPU at 2.23GHz (boost) consumes more than 52 CUs at 1.825GHz. A 100% increase in perf/W is simply not possible with clocks over 2GHz when a card with 44% more CUs consumes less power just because it has an ~18% lower clock speed.
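Reusing the cube-law clock scaling assumption from earlier in the thread, the comparison looks like this (a sketch, not a measurement):

```python
# Relative GPU power, cube-law clocks and linear CU scaling (Series X = 1.0).
ps5_vs_xsx = (36 / 52) * (2.23 / 1.825) ** 3
print(f"PS5 GPU power vs Series X: {ps5_vs_xsx:.2f}x")  # -> 1.26x
# ~31% fewer CUs still burn ~26% more power at the higher clock, which is
# why >2 GHz clocks and a 2x perf/watt jump are hard to square.
```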
 
Last edited:
  • Like
Reactions: kurosaki

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
The problem is that AMD is claiming only a 50% increase, yet some users here have no problem saying it's double that while the clock speed is supposedly over 2GHz, which is more than RDNA1.

AMD talked about 50% higher perf/watt.

What some people are saying is double the performance of the 5700 XT for the Big Navi chip.

The two are not the same.

Also, to point out: RDNA 2 may have 50% higher perf/watt at the same performance, or at the same power, vs RDNA1, but how efficient the top Big Navi card will be is another matter.
So it's not impossible for a Big Navi card that reaches RTX 3080 performance to have a perf/watt gain well below the 50% that AMD quotes.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,522
3,037
136
AMD talked about 50% higher perf/watt.

What some people are saying is double the performance of the 5700 XT for the Big Navi chip.

The two are not the same.
Then please check what Glo. posted more than once about his "expected" performance and TBP of RDNA2 GPUs.

BTW, if we say a 52 CU GPU at 1.825GHz consumes at most 140W including memory, and in a PC such a GPU performs like a 2080 (Super), then that's also ~2x better perf/W compared to the RX 5700 XT, but at least the clock speed is not very high.
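Worked through (a sketch: the 140W and 2080-class figures are the assumptions above; the ~29% gap over the 5700 XT and its 210W draw are the TPU-based numbers quoted earlier in the thread):

```python
# Perf/W: hypothetical 52 CU @ 1.825 GHz GPU vs the RX 5700 XT.
perf_52cu, power_52cu = 1.29, 140        # ~RTX 2080-class, assumed 140 W
perf_5700xt, power_5700xt = 1.00, 210    # baseline, average gaming draw

ratio = (perf_52cu / power_52cu) / (perf_5700xt / power_5700xt)
print(f"Perf/W vs 5700 XT: {ratio:.1f}x")  # -> 1.9x, i.e. close to 2x
```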
 
Last edited: