
Question Speculation: RDNA2 + CDNA Architectures thread

Page 3

Guru

Senior member
May 5, 2017
I don't think they are going to use a 384-bit memory interface; there's no point. GDDR6 at 256-bit, even just the higher-range 16Gbps chips, can feed as many cores as you throw at it.

I think their professional cards, on the other hand, are going to use HBM2+ in 32GB, 24GB and 16GB capacities.

I think AMD are going to stay with a small package and small die size, keeping it basic and compact. I basically expect much higher clocks: we've seen the PS5 tech presentation, and that GPU is capable of running at 2200+MHz sustained at 150W or less, which is impressive.

Even the RX 5700 XT is not the full CU count, so add in the additional CUs, +400MHz core, +400MHz memory, plus further architecture improvements, and you have a 2080 Super competitor without even touching the CU structure or adding more of it.

Though I do expect a Big Navi that is going to have a lot more CUs too.
 

Glo.

Diamond Member
Apr 25, 2015
I don't think they are going to use a 384-bit memory interface; there's no point. GDDR6 at 256-bit, even just the higher-range 16Gbps chips, can feed as many cores as you throw at it.

I think their professional cards, on the other hand, are going to use HBM2+ in 32GB, 24GB and 16GB capacities.

I think AMD are going to stay with a small package and small die size, keeping it basic and compact. I basically expect much higher clocks: we've seen the PS5 tech presentation, and that GPU is capable of running at 2200+MHz sustained at 150W or less, which is impressive.

Even the RX 5700 XT is not the full CU count, so add in the additional CUs, +400MHz core, +400MHz memory, plus further architecture improvements, and you have a 2080 Super competitor without even touching the CU structure or adding more of it.

Though I do expect a Big Navi that is going to have a lot more CUs too.
You underestimate the memory bandwidth requirements of RTRT.
 

Glo.

Diamond Member
Apr 25, 2015
If there is a 240mm² RDNA2 GPU, the entire RDNA1 lineup is dead. All in one go. Worthless.
What else makes even a little sense from the bits of information we got?

RDNA1 for the lowest end, with massive price cuts, and RDNA2 with ray tracing and higher performance at higher margins, beating ALL of the RDNA1 stack.

Otherwise - what else makes sense?
 

TESKATLIPOKA

Senior member
May 1, 2020
If the price is set right, then I don't see a problem with having RDNA1 alongside RDNA2; it's unlikely Navi 23 would be sold under $350. Maybe the rumors about RT capability being limited to the high end were because RDNA1 will stay and only RDNA2 will have it.
 

randomhero

Member
Apr 28, 2020
If the price is set right, then I don't see a problem with having RDNA1 alongside RDNA2; it's unlikely Navi 23 would be sold under $350. Maybe the rumors about RT capability being limited to the high end were because RDNA1 will stay and only RDNA2 will have it.
Current RDNA1 GPUs are almost a year old, built on a process that is in its third year. Also, GDDR6 has come down substantially in price. Prices can be slashed by 30-40% and still be profitable.
Remember, Polaris 10 and Navi 10 have similar die sizes and memory configurations. And the RX 480 was $250 at launch.
So yeah, they could have their place in the new lineup.
 

NTMBK

Diamond Member
Nov 14, 2011
You underestimate the memory bandwidth requirements of RTRT.
AMD are building a product to fit a certain price point. If they massively over-engineer the memory system just to improve RT performance, they need to cut back elsewhere in the chip to hit the same price, meaning worse performance in 99.9% of games.

Also remember that most games with RT will be built to target the next-gen consoles, both of which have fairly conventional memory systems. Expect lots of clever optimisations to get RTRT running well within console memory bandwidth, not bandwidth-hungry monsters.
 

uzzi38

Golden Member
Oct 16, 2019
What else makes even a little sense from the bits of information we got?

RDNA1 for the lowest end, with massive price cuts, and RDNA2 with ray tracing and higher performance at higher margins, beating ALL of the RDNA1 stack.

Otherwise - what else makes sense?
I don't see a point in keeping RDNA1 around at all. The largest RDNA1 die would probably pull power on par with the mid-range RDNA2 die while performing worse than the lowest end one at best. It's still on 7nm, and so is just as costly to produce as the lowest end RDNA2 die.

That would be like Nvidia rebranding the 970 after they already released the entire Pascal line.

You have to remember, AMD are still up against Nvidia and will probably try to beat them - just slightly - in terms of value. Navi23 would not be priced any higher than Ampere's version of the 2080 Super (which will likely become the -70 GPU next generation), and ~2070 Super performance will likely end up in the -60 tier, for which it makes more sense to sell a cut down die as opposed to Navi10 (as Navi10 still performs notably worse).
 

Glo.

Diamond Member
Apr 25, 2015
I don't see a point in keeping RDNA1 around at all. The largest RDNA1 die would probably pull power on par with the mid-range RDNA2 die while performing worse than the lowest end one at best. It's still on 7nm, and so is just as costly to produce as the lowest end RDNA2 die.

That would be like Nvidia rebranding the 970 after they already released the entire Pascal line.

You have to remember, AMD are still up against Nvidia and will probably try to beat them - just slightly - in terms of value. Navi23 would not be priced any higher than Ampere's version of the 2080 Super (which will likely become the -70 GPU next generation), and ~2070 Super performance will likely end up in the -60 tier, for which it makes more sense to sell a cut down die as opposed to Navi10 (as Navi10 still performs notably worse).
Then what is the point of keeping the RX 570, 580 and 590 around? Especially since the RX 5500 XT offers the same or higher performance.

I don't believe people will care how, for example, an RX 5600 XT stacks up against Ampere/Nvidia products if it costs $199, even if there is a better GPU above it.
 

maddie

Diamond Member
Jul 18, 2010
Then what is the point of keeping the RX 570, 580 and 590 around? Especially since the RX 5500 XT offers the same or higher performance.

I don't believe people will care how, for example, an RX 5600 XT stacks up against Ampere/Nvidia products if it costs $199, even if there is a better GPU above it.
The WSA (AMD's Wafer Supply Agreement with GlobalFoundries)? Have we forgotten about this?
 

maddie

Diamond Member
Jul 18, 2010
The 570 and 580 are only really still being produced thanks to the WSA. Those are gonna start dying out in the next few months as well.
I agree with the WSA reason, but there are countries/areas besides North America and Europe. Polaris is still considered good enough in those places.
 

Glo.

Diamond Member
Apr 25, 2015
Agree with WSA reason, but there are countries/areas besides North America and Europe. Polaris is still considered good enough in those places.
Here in Poland, the RX 580 and 570 are STILL the best-selling Radeon GPUs.
 

Glo.

Diamond Member
Apr 25, 2015
Yes, Uzzi is correct that a 240 mm² die completely wipes out the point of offering RDNA1 GPUs, but these are the first questions that pop to my mind:

1) Would they completely retire a GPU architecture after only 12 months of manufacturing?
2) If the 240 mm² die has 48 CUs, then it MUST mean that below this GPU there is a sub-200mm² die with 32 CUs and a 128-bit GDDR6 memory bus, right? So far we haven't seen any clues pointing towards this scenario, however.

From a value perspective, if N23 is the smallest RDNA2 die, the only real scenario that makes sense is that RDNA1 GPUs serve the same purpose as the GTX 16XX GPUs served in the Turing lineup.
 

TESKATLIPOKA

Senior member
May 1, 2020
There is no need to design something smaller than Navi 23 when they already have RDNA1; it would be a waste of money to make something with similar performance to RDNA1. The advantages would be a better perf/W ratio than RDNA1 plus RT capability, but the latter would be too weak to be of any real use in games.
 

Glo.

Diamond Member
Apr 25, 2015
There is no need to design something smaller than Navi 23 when they already have RDNA1; it would be a waste of money to make something with similar performance to RDNA1. The advantages would be a better perf/W ratio than RDNA1 plus RT capability, but the latter would be too weak to be of any real use in games.
Everything depends on the IPC uplift AMD got with RDNA2 and how high they are able to clock this architecture. If we are talking about a 50% performance-per-watt improvement across the board, it means a 32 CU chip would still be faster than the Navi 10 GPU, even with a 128-bit GDDR6 memory bus.
 

TESKATLIPOKA

Senior member
May 1, 2020
Everything depends on the IPC uplift AMD got with RDNA2 and how high they are able to clock this architecture. If we are talking about a 50% performance-per-watt improvement across the board, it means a 32 CU chip would still be faster than the Navi 10 GPU, even with a 128-bit GDDR6 memory bus.
Hopefully AMD won't clock RDNA2 to the extreme and hurt the perf/W ratio.

That statement about 50% improvement in perf/W could mean many things:
1.) 50% more performance and the same power consumption
2.) same performance but only 2/3 of power consumption
3.) 75% of the original performance with 1/2 of power consumption

So for example, if RDNA2 has 25% better IPC than RDNA1 and we want to keep the same performance, then it could look like this:
1.) A 28CU 2.06GHz RDNA2 chip will perform like a 1.8GHz Navi 10
2.) A 32CU 1.8GHz RDNA2 chip will perform like a 1.8GHz Navi 10
3.) A 40CU 1.44GHz RDNA2 chip will perform like a 1.8GHz Navi 10
4.) A 48CU 1.2GHz RDNA2 chip will perform like a 1.8GHz Navi 10
Which one of these chips would have the lowest power consumption, and with it the best perf/W ratio? The one with the lowest CU count, the highest, or something in between? What do you people think? :cool:
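Under a crude dynamic-power model (an assumption for illustration, not anything AMD has published), the trade-off above can be sketched in Python: performance scales with CU count × clock, while power scales roughly with CU count × clock³ once voltage tracks frequency near the top of the V/f curve.

```python
# Toy model (assumption): perf ~ CUs * clock, dynamic power ~ CUs * clock * V^2,
# and voltage rises roughly linearly with clock near the top of the V/f curve,
# so power ~ CUs * clock^3. Real silicon also has leakage and a voltage floor.
configs = {
    "28CU @ 2.06GHz": (28, 2.06),
    "32CU @ 1.80GHz": (32, 1.80),
    "40CU @ 1.44GHz": (40, 1.44),
    "48CU @ 1.20GHz": (48, 1.20),
}

# Sort from lowest to highest modelled power at equal throughput.
for name, (cus, ghz) in sorted(configs.items(), key=lambda kv: kv[1][0] * kv[1][1] ** 3):
    perf = cus * ghz        # ~57.6 for all four configs, i.e. equal performance
    power = cus * ghz ** 3  # relative units, not watts
    print(f"{name}: perf {perf:.1f}, relative power {power:.0f}")
```

Under this model the wide, low-clocked 48 CU part draws the least power for the same throughput, which is why consoles favour wide-and-slow configurations; in practice leakage and the minimum-voltage floor pull the optimum back toward the middle.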
 

Guru

Senior member
May 5, 2017
You underestimate the memory bandwidth requirements of RTRT.
Didn't the Xbox tech team say they are going to use ray tracing a lot via the overall compute power of the GPU, not necessarily the hardware-specific cores? I think the whole Xbox 4 is built to compute ray tracing at the GPU core level, and DX12 is designed that way; I think dedicated hardware units are going to be used to give an additional boost of performance.

I don't expect miracles. I think it's going to be maybe 30% better than Nvidia's current RT cores, and I'm guessing Nvidia's next gen is also going to be similarly better, unless they go all in with some monstrosity of a chip. But it's going to be hard for Nvidia to go with a big chip again on 7nm, since 7nm wafers cost more than the 12nm process they used for Turing.
 

Glo.

Diamond Member
Apr 25, 2015
Hopefully AMD won't clock RDNA2 to the extreme and hurt the perf/W ratio.

That statement about 50% improvement in perf/W could mean many things:
1.) 50% more performance and the same power consumption
2.) same performance but only 2/3 of power consumption
3.) 75% of the original performance with 1/2 of power consumption

So for example, if RDNA2 has 25% better IPC than RDNA1 and we want to keep the same performance, then it could look like this:
1.) A 28CU 2.06GHz RDNA2 chip will perform like a 1.8GHz Navi 10
2.) A 32CU 1.8GHz RDNA2 chip will perform like a 1.8GHz Navi 10
3.) A 40CU 1.44GHz RDNA2 chip will perform like a 1.8GHz Navi 10
4.) A 48CU 1.2GHz RDNA2 chip will perform like a 1.8GHz Navi 10
Which one of these chips would have the lowest power consumption, and with it the best perf/W ratio? The one with the lowest CU count, the highest, or something in between? What do you people think? :cool:
A 32 CU die with a 1.8 GHz core clock and a 128-bit GDDR6 bus would be around 75-90W total power draw.

So that would fit the bill. The RX 5700 XT reference is consuming around 190W of power.
 

TESKATLIPOKA

Senior member
May 1, 2020
A 32 CU die with a 1.8 GHz core clock and a 128-bit GDDR6 bus would be around 75-90W total power draw.

So that would fit the bill. The RX 5700 XT reference is consuming around 190W of power.
1.) That's too low. AMD talked about 50% better perf/W, but a 75-90W total power draw would mean a >100% better perf/W ratio than the 5700 XT. Not to mention the much weaker 5500 XT consumes ~117W on average (TDP: 130W), which is still a lot more than your numbers.
5500XT power consumption

2.) Just 128-bit GDDR6 for 5700 XT performance? I'd say that's unrealistic. The 5700 XT uses 256-bit 14Gbps GDDR6; that is only half the bandwidth, or a bit more if you use 16Gbps GDDR6 instead.

3.) According to TechPowerUp it's 219W for the 5700 XT (TDP: 225W).
5700XT power consumption
In this review it's 218W.
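The arithmetic behind point 1 is easy to check; the 219W board-power figure is the one from the TechPowerUp review cited above, and the rest is just ratios:

```python
# Sanity check: what does a given power figure imply at equal performance?
ref_power_w = 219.0   # measured RX 5700 XT board power (TechPowerUp)
claimed_uplift = 1.5  # AMD's stated "+50% perf/W" for RDNA2

# Power needed for 5700 XT-level performance at exactly +50% perf/W:
expected_w = ref_power_w / claimed_uplift
print(f"{expected_w:.0f}W")      # 146W, well above the proposed 75-90W

# Perf/W uplift implied if the same performance really fit in 90W:
implied_uplift = ref_power_w / 90.0
print(f"{implied_uplift:.2f}x")  # 2.43x, i.e. more than double
```

In other words, 75-90W at equal performance would require well over twice the RDNA1 perf/W, not the 1.5x AMD claimed.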
 

Glo.

Diamond Member
Apr 25, 2015
1.) That's too low. AMD talked about 50% better perf/W, but a 75-90W total power draw would mean a >100% better perf/W ratio than the 5700 XT. Not to mention the much weaker 5500 XT consumes ~117W on average (TDP: 130W), which is still a lot more than your numbers.
5500XT power consumption

2.) Just 128-bit GDDR6 for 5700 XT performance? I'd say that's unrealistic. The 5700 XT uses 256-bit 14Gbps GDDR6; that is only half the bandwidth, or a bit more if you use 16Gbps GDDR6 instead.

3.) According to TechPowerUp it's 219W for the 5700 XT (TDP: 225W).
5700XT power consumption
In this review it's 218W.
The 36 CU RX 5600 XT at its initial clocks (1.35 GHz base, 1.56 GHz boost, 288 GB/s bandwidth) was consuming around 120W of power, just 3-5W more than the 1.845 GHz RX 5500 XT, which has 22 CUs.

RDNA2 GPUs will clock to 2.2 GHz, based on the PS5 clocks alone. So a 1.8 GHz chip is heavily downclocked, and can be heavily undervolted relative to what it is really capable of, regardless of CU count.

The Xbox Series X SoC consumes, based on rough estimates, around 120W for the GPU alone, and it has 52 CUs at 1.8 GHz.

Based on this, a 32 CU chip clocked at 1.8 GHz can easily fit into a sub-90W thermal envelope with GDDR6 memory, IMO, especially if those GPUs are designed to run at 2.2 GHz; at 1.8 GHz they are heavily downclocked.
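Scaling the rough Series X estimate linearly by CU count at the same clock (a simplification that ignores memory, uncore and board power) gives a ballpark for the 32 CU part:

```python
# Linear CU scaling from the rough Series X figure quoted above.
# Assumption: same clock and voltage, so core power scales ~linearly with CUs;
# memory and board overhead are ignored, which biases the estimate low.
xsx_gpu_power_w = 120  # rough estimate for the Series X GPU portion
xsx_cus = 52
target_cus = 32

core_estimate_w = xsx_gpu_power_w * target_cus / xsx_cus
print(f"~{core_estimate_w:.0f}W for the GPU cores alone")  # ~74W
```

That leaves some headroom for GDDR6 and board power under a 90W envelope, which is roughly the claim being made here.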
 
Mar 11, 2004
I'm not sure why people are speculating on whether RDNA1 will continue. Didn't AMD themselves outright say that? I think Lisa Su said it during one of the investor calls this year, or maybe at CES in interviews, but I'm almost completely certain AMD has explicitly said there will be RDNA 1 and 2 this year.

Yep:

In 2019, we launched our new architecture in GPUs, it's the RDNA architecture, and that was the Navi based products. You should expect that those will be refreshed in 2020 - and we'll have a next generation RDNA architecture that will be part of our 2020 lineup.
Unfortunately, I have a bad hunch that means we'll be getting almost entirely RDNA1 in mobile. I also hope that doesn't mean OEMs pair Renoir with the mobile 5700 instead of the 2070/2080 Super to save money.

I'd love to find out that the small RDNA2 is intended for higher-end laptops, but if it is, it might be for a pretty select few (MacBook Pro).

I also have a bad feeling that we might not see many price changes; rather, they'll feel the refresh warrants keeping prices pretty much where they are.
 

uzzi38

Golden Member
Oct 16, 2019
I'm not sure why people are speculating on whether RDNA1 will continue. Didn't AMD themselves outright say that? I think Lisa Su said it during one of the investor calls this year, or maybe at CES in interviews, but I'm almost completely certain AMD has explicitly said there will be RDNA 1 and 2 this year.

Yep:



Unfortunately, I have a bad hunch that means we'll be getting almost entirely RDNA1 in mobile. I also hope that doesn't mean OEMs pair Renoir with the mobile 5700 instead of the 2070/2080 Super to save money.

I'd love to find out that the small RDNA2 is intended for higher-end laptops, but if it is, it might be for a pretty select few (MacBook Pro).

I also have a bad feeling that we might not see many price changes; rather, they'll feel the refresh warrants keeping prices pretty much where they are.
When talking to investors, the meaning of the word "refresh" is entirely different than when talking to enthusiasts.

Refresh in that sense just means they'll be releasing a new lineup. It doesn't detail what is in that lineup. It could be a refresh as we know it, or it could be brand new GPUs from top to bottom.
 
Mar 11, 2004
When talking to investors, the meaning of the word "refresh" is entirely different than when talking to enthusiasts.

Refresh in that sense just means they'll be releasing a new lineup. It doesn't detail what is in that lineup. It could be a refresh as we know it, or it could be brand new GPUs from top to bottom.
It literally cannot be. They already brought out new Navi/RDNA1 products this year; they just brought out the mobile versions. But you guys can keep going in circles debating whether they'll overhaul their entire GPU stack with RDNA2 this year. At least the talk about APUs going chiplet and/or jumping to the latest GPU architecture has some reason to consider it.
 
