Question Speculation: RDNA2 + CDNA Architectures thread

Page 68

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,573
146
All die sizes are within 5mm^2. The poster here has been right about some things in the past AFAIK, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even still, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

Glo.

Diamond Member
Apr 25, 2015
5,658
4,416
136
No question 80 CUs could beat the 3080 if you ignore power usage.

But we have no idea how much AMD will have to throttle 80 CUs to keep power in check.

You can't simply double Navi 10 without considering that doing so also doubles power usage, to 450W. That's a lot of power to trim with minimal help from the process.

Marketing and rumors are not ironclad truth to answer that question either.

The final proof comes when the cards are finalized and in third-party hands. This is all going to come down to power usage and how it is kept in check.
We don't have to. We have already seen the consoles and their much more efficient physical design. We have seen the Renoir APUs and their efficiency, and we have also seen how high those APUs' iGPUs clock.

There is no reason why we cannot do some maths and, with pretty good accuracy, extrapolate how RDNA2 GPUs will perform from the data we already have, knowing how RDNA1 scales.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,222
5,224
136
We don't have to. We have already seen the consoles and their much more efficient physical design. We have seen the Renoir APUs and their efficiency, and we have also seen how high those APUs' iGPUs clock.

There is no reason why we cannot do some maths and, with pretty good accuracy, extrapolate how RDNA2 GPUs will perform from the data we already have, knowing how RDNA1 scales.

There's a 300W PSU in the new consoles, and the underclocked Ryzen sips power, so most of it will be for the GPU. Not much info in that either.

We need to see the real card in third-party hands to get the real answer.
 

Tup3x

Senior member
Dec 31, 2016
944
925
136
We don't have to. We have already seen the consoles and their much more efficient physical design. We have seen the Renoir APUs and their efficiency, and we have also seen how high those APUs' iGPUs clock.

There is no reason why we cannot do some maths and, with pretty good accuracy, extrapolate how RDNA2 GPUs will perform from the data we already have, knowing how RDNA1 scales.
The Renoir APU has exactly as much to do with RDNA2 efficiency as Pascal does with Ampere. We don't even know for sure that the consoles are 100% the same as the upcoming cards; judging by what Cerny said, that might not be the case.

You are often so sure about the things you say, yet more often than not you are wrong. I would refrain from being so sure about everything if I were you.

It's realistic to expect around RTX 3080 performance from AMD's top card. I'd expect the second-best card to beat the RTX 3070. NVIDIA will definitely launch SUPER/Ti variants after that, though.
 

Glo.

Diamond Member
Apr 25, 2015
5,658
4,416
136
The Renoir APU has exactly as much to do with RDNA2 efficiency as Pascal does with Ampere. We don't even know for sure that the consoles are 100% the same as the upcoming cards; judging by what Cerny said, that might not be the case.

You are often so sure about the things you say, yet more often than not you are wrong. I would refrain from being so sure about everything if I were you.

It's realistic to expect around RTX 3080 performance from AMD's top card. I'd expect the second-best card to beat the RTX 3070. NVIDIA will definitely launch SUPER/Ti variants after that, though.
It has so much to do with it that the same physical design team which made that efficiency possible is right now working on the RDNA2 GPUs.

If the Vega iGPU clocks to 2100 MHz at stock on desktop and overclocks to 2.4 GHz, I see absolutely no problem for a big GPU like N21 to clock to 2.2 GHz.
There's a 300W PSU in the new consoles, and the underclocked Ryzen sips power, so most of it will be for the GPU. Not much info in that either.

We need to see the real card in third-party hands to get the real answer.
Yeah, we have seen. The power draw is the same as the Xbox One GPU's: 52 CUs at 1.825 GHz sipping 130-140W of power.

No need to downplay.
 

pj-

Senior member
May 5, 2015
481
249
116

Up until now, our artists have always had to create approximations of reflections by hand to achieve that effect – but the ray tracing architecture in PlayStation 5 gives us “more power” to provide true reflections in real time. In DMC5SE, players have the choice to turn on ray tracing and prioritize resolution (targeting 4K @ 30fps), or frame rate (targeting 1080p @ 60fps).

For players who prefer to allocate their horsepower to frame rates over graphics, PS5 also gives console players the ability to play their games at frame rates up to 120fps (with a compatible display, of course), which is enabled in DMC5SE when players turn on High Framerate Mode.

RT reflections in a last-gen port cut the framerate in half?
 
Reactions: Olikan

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
The consoles will be close enough to RDNA2 to make a good comparison. Neither Sony nor MS is going to bankroll any kind of custom solution that deviates too much; at that point, why not just make your own chip if you don't want to leverage someone else's IP? Most of the customization is minor tweaks, or sits more on the API side of things.

I don't know if NVidia will have as much leeway for any SUPER cards as they did before. The OC reviews for the 3080 suggest it's already past the point of efficiency in terms of throwing more power at the card, and I even question why they pushed it as far as they did when 50W less wasn't even a 5% performance hit. The 3070 seems to be pretty close to the full die already, so all a 3070 SUPER could hope for is silicon improvements. There's at least room for a 3080 SUPER to have more shaders, but since the 3080 is on GA102, it butts heads with the 3090 pretty quickly. Unless the Samsung process improves considerably, the SUPER cards are going to see a minor boost to clock speed, very few additional shaders, and mainly an increase in the amount of memory.

If AMD actually comes out with a real competitor to NVidia, I think NVidia is going to be on the ropes in a way they haven't been for almost a decade. They're at the mercy of the Samsung process, and moving everything to TSMC would be expensive and could alienate customers who bought the initial cards.

The pricing alone suggests that NVidia is expecting a lot more from AMD than AMD has been able to deliver for several generations. How much AMD actually manages to deliver remains to be seen, though.
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,028
136
www.teamjuchems.com



RT reflections in a last-gen port cut the framerate in half?

Haha, have you seen what RT does in Minecraft? I don't think the generation of the title has nearly so much to do with it; RT is just really expensive.

Oh heck yeah, I'll take 1080p/60FPS/RT on. Sounds great!
 
Reactions: Olikan

GoodRevrnd

Diamond Member
Dec 27, 2001
6,803
581
126
The consoles will be close enough to RDNA2 to make a good comparison. Neither Sony nor MS is going to bankroll any kind of custom solution that deviates too much; at that point, why not just make your own chip if you don't want to leverage someone else's IP? Most of the customization is minor tweaks, or sits more on the API side of things.

I don't know if NVidia will have as much leeway for any SUPER cards as they did before. The OC reviews for the 3080 suggest it's already past the point of efficiency in terms of throwing more power at the card, and I even question why they pushed it as far as they did when 50W less wasn't even a 5% performance hit. The 3070 seems to be pretty close to the full die already, so all a 3070 SUPER could hope for is silicon improvements. There's at least room for a 3080 SUPER to have more shaders, but since the 3080 is on GA102, it butts heads with the 3090 pretty quickly. Unless the Samsung process improves considerably, the SUPER cards are going to see a minor boost to clock speed, very few additional shaders, and mainly an increase in the amount of memory.

If AMD actually comes out with a real competitor to NVidia, I think NVidia is going to be on the ropes in a way they haven't been for almost a decade. They're at the mercy of the Samsung process, and moving everything to TSMC would be expensive and could alienate customers who bought the initial cards.

The pricing alone suggests that NVidia is expecting a lot more from AMD than AMD has been able to deliver for several generations. How much AMD actually manages to deliver remains to be seen, though.
With where they are on the efficiency curve, it seems like their only option is a Ti refresh on a refined process. Maybe they could get SUPERs out if 8nm tightens up, but it doesn't seem like there's any room to maneuver there.
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
I think AMD has a big window of opportunity here. Because NVIDIA opted to save money and go with the vastly cheaper Samsung 8nm process (see the EE Times article from summer 2019 citing cost as the driving factor in NVIDIA's switch to Samsung), NVIDIA only saw a 10-15% gain in frames/watt despite a full node jump.

AMD has been delivering on and exceeding its stated efficiency goals since they started talking about them in 2016. They have exceeded every energy-efficiency claim since Ryzen 1, and they did it again with RDNA 1 (they guided the public to a 50% increase in energy efficiency).

Now AMD is again stating they will see a 50% jump in energy efficiency with RDNA 2. A 50% jump in energy efficiency doesn't mean half the energy used; it's more like a jump from 10 frames/watt to 15 frames/watt. So an RDNA 2 card would use 2/3 (0.67×) the power of an RDNA 1 card for the same performance.

The 5700 XT draws 210 watts on average, compared to 322 watts for the 3080 FE, which is just about twice as fast. Keep in mind Navi 10 is 251mm^2 while the 3080 is a 628mm^2 chip, so we have a ton of die size to work with as well.

If performance scales well, RDNA 2 could match the 3080 at... 210 watts × 2 (to reach 3080 performance) × 0.67 (the expected 50% efficiency increase for RDNA 2), and you get... 281 watts average gaming power consumption.

So AMD could equal 3080 performance for 40 fewer watts. Yes, AMD has not been at the ultra high end for a few generations, but they DO have a window of opportunity here if they decide to release the full 80 CU "big" Navi to gamers. They could even fall short of their stated 50% efficiency gain and still be just about level with NVIDIA.
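As a sanity check on that arithmetic, here is a minimal Python sketch using the post's own assumptions (210 W average for the 5700 XT, the 3080 at roughly twice its performance, AMD's claimed 50% perf/W uplift); none of these figures are official:

```python
# Napkin math from the post above: estimated power for an RDNA 2 card
# matching an RTX 3080, given RDNA 1 figures and AMD's +50% perf/W claim.

navi10_power_w = 210      # 5700 XT average gaming draw (post's figure)
perf_multiple = 2.0       # RTX 3080 ~= 2x a 5700 XT (post's figure)
perf_per_watt_gain = 1.5  # AMD's claimed +50% perf/W for RDNA 2

# Doubling RDNA 1 performance naively doubles power...
naive_w = navi10_power_w * perf_multiple
# ...but 1.5x perf/W means the same work takes 1/1.5 (~0.67x) the power.
rdna2_w = naive_w / perf_per_watt_gain

print(f"Naive doubled RDNA 1: {naive_w:.0f} W")  # 420 W
print(f"RDNA 2 estimate:      {rdna2_w:.0f} W")  # 280 W vs. 322 W for the 3080 FE
```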

Of course, this isn't all GPU magic. AMD is simply using a superior process in TSMC 7nm versus Samsung's revised 10nm node (8nm). But if RDNA 2 can improve that much over RDNA on what is essentially a slightly more mature node then hats off to AMD! I can't wait to see who is going to get my $400-$500 this fall to replace my 1070 Ti :cool:
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
Here's something new, I think. Sony officially lists the max power usage for the PS5: 350W for the PS5, 340W for the PS5 Digital Edition.

That's some 30W higher than what the PSU in the Xbox Series X is rated at, I believe. The CPUs in the two consoles are roughly the same, so I wouldn't be surprised if there's 20-30W more power going to the GPU.

Xbox Series X GPU: 52 CUs @ 1.825 GHz, 130W-140W
PS5 GPU: 36 CUs @ 2.23 GHz, 150W-170W

Seems to jibe with the rough napkin math I did sometime earlier last month:
130W × (2.23 GHz / 1.825 GHz)^3 × (36 CUs / 52 CUs) ≈ 164W
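That formula amounts to a simple scaling rule: power grows roughly with the cube of clock speed (dynamic power scales with frequency times voltage squared, and voltage rises with frequency) and linearly with CU count. A minimal sketch of it, using the Series X figures above as the baseline:

```python
# Rough GPU power scaling: linear in CU count, ~cubic in clock speed
# (dynamic power ~ f * V^2, and V must rise along with f).
def scale_gpu_power(base_w: float, base_cus: int, base_ghz: float,
                    target_cus: int, target_ghz: float) -> float:
    return base_w * (target_ghz / base_ghz) ** 3 * (target_cus / base_cus)

# Series X GPU (52 CUs @ 1.825 GHz, ~130 W) scaled to the PS5's 36 CUs @ 2.23 GHz.
print(f"{scale_gpu_power(130, 52, 1.825, 36, 2.23):.0f} W")  # ~164 W
```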
 

swilli89

Golden Member
Mar 23, 2010
1,558
1,181
136
Here's something new, I think. Sony officially lists the max power usage for the PS5: 350W for the PS5, 340W for the PS5 Digital Edition.

That's some 30W higher than what the PSU in the Xbox Series X is rated at, I believe. The CPUs in the two consoles are roughly the same, so I wouldn't be surprised if there's 20-30W more power going to the GPU.

Xbox Series X GPU: 52 CUs @ 1.825 GHz, 130W-140W
PS5 GPU: 36 CUs @ 2.23 GHz, 150W-170W

Seems to jibe with the rough napkin math I did sometime earlier last month:
130W × (2.23 GHz / 1.825 GHz)^3 × (36 CUs / 52 CUs) ≈ 164W
I think you are overstating how much 8 Zen 2 cores at very modest clock rates (3.5 GHz) are consuming. The CPU elements are probably drawing 60 watts or so at most. Zen 2 is extremely efficient in the mid-3 GHz range when boost clocks aren't cranked up.

Also remember Sony is using a different storage method and a faster PCIe bus, which should draw several more watts than MS's solution. I'd imagine the GPU cores are drawing 200-210 watts in the new consoles.
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,028
136
www.teamjuchems.com
The power brick for the new Series X is rated for 2.2 amps at 200-220V, vs the One X at 1.3 amps at the same voltage. That's a 70% overall increase in power usage.

I have no idea what that Jaguar CPU was pulling for wattage, but it probably wasn't much. It seems like the One X's power consumption was driven by the GPU, what with a heatsink revision and anecdotally high failure rates.

I mean, the PSU for the Series X, if I am doing the math right, is 440 to 484W? That doesn't seem right, but I can't find better information at the moment. Shouldn't the power supply for this thing have UL verification or something?
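For what it's worth, the wall-side arithmetic does land in that range. A minimal check; the key caveat is that a brick's input-current rating is its worst-case draw from the mains, not its DC output, which is why the number looks inflated:

```python
# Series X brick input rating per the post: 2.2 A at 200-220 V mains.
amps = 2.2
for volts in (200, 220):
    print(f"{volts} V x {amps} A = {volts * amps:.0f} VA")  # 440 VA, 484 VA
# Input VA is worst-case wall draw; the DC output rating is lower
# (the +12 V rail figures quoted later in the thread sum to 315 W).
```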
 
Reactions: Tlh97

Karnak

Senior member
Jan 5, 2017
399
767
136
The PS4 Pro PSU is rated at 310W, yet power consumption is around, what, 165W for the entire console?

I'd say the PS5 with its 350W PSU will be around 220W, and the XSX with its 315W PSU somewhere around 190W. Maybe a bit more here and there, but not by much.

No surprise, though, that the PS5 will consume more power with its >2.2GHz clocks, IMO. RDNA2 can evidently reach those clock speeds, but a chip with more CUs and lower clocks will always be more efficient than one with fewer CUs and higher clocks.
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
The power brick for the new Series X is rated for 2.2 amps at 200-220V, vs the One X at 1.3 amps at the same voltage. That's a 70% overall increase in power usage.

I have no idea what that Jaguar CPU was pulling for wattage, but it probably wasn't much. It seems like the One X's power consumption was driven by the GPU, what with a heatsink revision and anecdotally high failure rates.

I mean, the PSU for the Series X, if I am doing the math right, is 440 to 484W? That doesn't seem right, but I can't find better information at the moment. Shouldn't the power supply for this thing have UL verification or something?
Nah, it has two 12V rails, one at 21.25A (255W) and the other at 5A (60W). If I'm not mistaken, the former is for the mainboard (SoC) and the latter is for the IO board.

Here. You can scope it out for yourself: Digital Foundry's Xbox Series X Teardown
 

Saylick

Diamond Member
Sep 10, 2012
3,084
6,184
136
I think you are overstating how much 8 Zen 2 cores at very modest clock rates (3.5 GHz) are consuming. The CPU elements are probably drawing 60 watts or so at most. Zen 2 is extremely efficient in the mid-3 GHz range when boost clocks aren't cranked up.

Also remember Sony is using a different storage method and a faster PCIe bus, which should draw several more watts than MS's solution. I'd imagine the GPU cores are drawing 200-210 watts in the new consoles.
The SoC is probably ~200W total: ~130-140W for the GPU, ~50W for the CPU, and the remainder for everything else. Here, Digital Foundry even shows you the Xbox Series X's PSU: Digital Foundry's Xbox Series X Teardown

So, in total, that's a 315W PSU: 255W for the mainboard and 60W for the IO board. Throw in some loss for inefficiency and its total wattage is in the ballpark.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
The power brick for the new Series X is rated for 2.2 amps at 200-220V, vs the One X at 1.3 amps at the same voltage. That's a 70% overall increase in power usage.

I have no idea what that Jaguar CPU was pulling for wattage, but it probably wasn't much. It seems like the One X's power consumption was driven by the GPU, what with a heatsink revision and anecdotally high failure rates.

I mean, the PSU for the Series X, if I am doing the math right, is 440 to 484W? That doesn't seem right, but I can't find better information at the moment. Shouldn't the power supply for this thing have UL verification or something?

Console PSUs are never run anywhere close to their max rated wattage. The Xbox One X had a 245W PSU and drew around 172W in GoW4 in AnandTech's review. There is good reason for that: these consoles have a long lifetime (in some cases 10 years or more), and the PSU's wattage output is likely to fall with capacitor aging due to heat. These consoles pack a lot of compute into a small form factor and run much hotter than a typical PC.

The Xbox Series X has dual +12V power rails: 21.25A on the rail that connects to the mainboard (which houses the SoC, GDDR6, NVMe and HDMI connector) and 5A on the rail connected to the daughter I/O board. 12 × 21.25 + 12 × 5 = 255 + 60 = 315W.

The Zen 2 cores at 3.6 GHz draw 55W. The onboard NVMe draws 4-5W, and there is provision for another NVMe expansion drive. The entire Series X system is likely to draw 210-220W, which includes the Blu-ray drive and other I/O ports. The Series X GPU figure of roughly 135-140W includes the 16 GB of GDDR6, which alone is likely to draw around 40W; that puts the GPU itself at 95-100W. That's very efficient for a 12 TF RDNA2 GPU that is likely to perform very close to a 2080 Ti.
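Restated as arithmetic, under the post's own estimates (the component figures are the poster's, not official numbers):

```python
# Xbox Series X power budget, using the estimates from the post above.
rail_main_w = 12 * 21.25  # mainboard rail: 255 W (SoC, GDDR6, NVMe, HDMI)
rail_io_w = 12 * 5        # I/O daughterboard rail: 60 W
print(f"PSU output: {rail_main_w + rail_io_w:.0f} W")  # 315 W

gpu_incl_mem_w = 140  # upper estimate for the "GPU" figure (includes GDDR6)
gddr6_w = 40          # estimated draw of the 16 GB of GDDR6
cpu_w = 55            # 8 Zen 2 cores @ 3.6 GHz
print(f"GPU alone:   {gpu_incl_mem_w - gddr6_w} W")    # ~100 W
print(f"CPU+GPU+mem: {gpu_incl_mem_w + cpu_w} W")      # 195 W of a ~210-220 W system
```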
 

A///

Diamond Member
Feb 24, 2017
4,352
3,154
136
Was the 5700 XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video card guy myself, I'm not sure how true it is. Or is it a meme?
 

SPBHM

Diamond Member
Sep 12, 2012
5,056
409
126
Was the 5700 XT a stopgap for AMD between GCN and RDNA2? I keep seeing this statement, and not being a video card guy myself, I'm not sure how true it is. Or is it a meme?

The 5700 XT didn't use a large die, so there is plenty of potential for more from AMD on that alone.
Also, with RDNA2 being at feature parity with the new consoles, it should stay relevant for much longer.
 
Reactions: lightmanek