Question Speculation: RDNA3 + CDNA2 Architectures Thread



Trumpstyle

Member
Jul 18, 2015
76
27
91
240 CUs seems extremely unlikely given AMD probably had a good idea of the fab capacity constraints coming before Zen 3 and RDNA2 were announced last October.

Much more likely is a 10-13% clock and 10-13% IPC (FPS per MHz per CU) boost for each GCD chiplet.

Multiplied by 2x the CUs, that gives more or less a 2.5x performance increase over the RDNA2 flagship GFX card.

Obviously this is not counting overheads, and they may be comparing using a favourable game engine.

I would not at all be surprised to see RT performance gain by more than 25% per CU though. Given how early we are in the RT HW saga and how much low-hanging fruit is likely left to be picked with a base µArch to build on, it seems guaranteed that 2.5x would be conservative on that score if they can manage so much with raster gfx.
The problem is you don't get perfect scaling when increasing CUs and clocks. We can see this comparing the 5700 XT and 6900 XT: the 6900 XT is only about 2x faster despite double the CUs and 1.18x higher clocks.

A 1.13x IPC and 1.13x clock gain would land in the 2.1x area for average performance increase, so to me 240 CUs makes more sense.
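
To make the arithmetic behind those two estimates explicit, here is a minimal sketch (calibrated to the 6900 XT vs 5700 XT comparison above; the implied scaling efficiency is an illustration, not a leaked figure):

```python
# Back-of-envelope RDNA3 scaling estimate (all inputs are assumptions).
def perf_multiplier(ipc_gain, clock_gain, cu_ratio, scaling_eff):
    """Relative performance vs the baseline part.
    ipc_gain / clock_gain: per-CU gains (1.13 = +13%)
    cu_ratio:              CU count relative to the baseline (2.0 = double)
    scaling_eff:           fraction of the extra CUs that shows up as FPS
    """
    return ipc_gain * clock_gain * (1 + (cu_ratio - 1) * scaling_eff)

# 6900 XT vs 5700 XT: 2x the CUs and ~1.18x clocks ends up only ~2x faster,
# implying a multi-CU scaling efficiency of roughly 0.7.
eff = (2.0 / 1.18 - 1) / (2.0 - 1)

print(perf_multiplier(1.13, 1.13, 2.0, eff))  # ~2.2x -> roughly the "2.1x area" for two 80 CU GCDs
print(perf_multiplier(1.13, 1.13, 3.0, eff))  # ~3.0x -> comfortably past 2.5x with 240 CUs
```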


A 240 CU RDNA3 GPU would be significantly larger than a 240 CU CDNA GPU, and there's no chance AMD would sell a larger die for a lower price to gamers vs selling the insanely high profit margin datacenter GPUs.

So far I haven't seen Navi32 anywhere, so I don't know what it is.
You need a brand to make money, but I still don't see what MI200 has to do with RDNA3. AMD is selling gaming cards right now despite having higher margins on MI100.

I get 800-1000mm² for a 240 CU monolithic die using 0.6-0.7x scaling for 5nm; one chiplet should be around 300mm².
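
As a sanity check on that area estimate, a naive extrapolation from Navi 21 (taking roughly 520mm² for 80 CUs at 7nm as an assumed baseline) lands in the same ballpark; it overshoots slightly because it also scales the non-CU area that would likely move to an IO/cache die:

```python
# Rough die-size extrapolation for a 240 CU part (baseline figures are assumptions).
NAVI21_AREA_MM2 = 520.0   # approximate Navi 21 die size at 7nm
NAVI21_CUS      = 80

def scaled_area(cus, n5_scaling):
    """Linear CU scaling from the 7nm baseline, shrunk by an assumed 5nm area factor."""
    return NAVI21_AREA_MM2 * (cus / NAVI21_CUS) * n5_scaling

for factor in (0.6, 0.7):
    print(f"240 CU monolith @ {factor}x area scaling: ~{scaled_area(240, factor):.0f} mm²")
    print(f"  80 CU chiplet @ {factor}x area scaling: ~{scaled_area(80, factor):.0f} mm²")
```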
 

uzzi38

Platinum Member
Oct 16, 2019
2,627
5,929
146
I seriously doubt they would do that. Implementing the same RDNA 3 IP in two incompatible designs seems like a lot of extra effort for questionable gain. Time, money, manpower. Especially since after Rembrandt and Raphael the low-end GPU market might be completely decimated.
Somehow I missed this.

To explain my reasoning, first we need to be on the same page. So to try and get my point across, I'd like you to think about why Polaris never really helped AMD gain significant market share on the desktop, despite being much better value in its later years (GTX 1060 vs RX 580).

Your hint is: you can draw parallels to the current desktop market as well.
 

Ajay

Lifer
Jan 8, 2001
15,448
7,858
136
Somehow I missed this.

To explain my reasoning, first we need to be on the same page. So to try and get my point across, I'd like you to think about why Polaris never really helped AMD gain significant market share on the desktop, despite being much better value in its later years (GTX 1060 vs RX 580).

Your hint is: you can draw parallels to the current desktop market as well.
Are you excluding the terrible initial RX 580 drivers (initial impressions have a big impact)? How about the lame marketing by AMD compared to NV? AMD has made good gains on the former and small gains on the latter, IMHO.
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
You guys are assuming the CUs are getting larger along with the CU count. I am thinking they simplify the design and optimize it for high clocks. If you read through the driver source code for RDNA2, you can see that they were heading in that direction, and there are some other indications that this will continue. Don't be surprised if game clocks and boost clocks exceed 3.5-3.75 GHz next gen.

A part of me also wants to question a 40 CU chiplet design. Seems like it would be more cost-effective to go with 20 CUs. Going with smaller chiplets can help with cooling as well…which also helps with higher clocks.
 

uzzi38

Platinum Member
Oct 16, 2019
2,627
5,929
146
Are you excluding the terrible initial RX 580 drivers (initial impressions have a big impact)? How about the lame marketing by AMD compared to NV? AMD has made good gains on the former and small gains on the latter, IMHO.

I did say a while after launch. Polaris really shone in value shortly before and for a long time after Turing, by which point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.
 

Ajay

Lifer
Jan 8, 2001
15,448
7,858
136
I did say a while after launch. Polaris really shone in value shortly before and for a long time after Turing, by which point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.
Well, the mining craze hit again around that time frame. I know that I paid 15% above MSRP for my 1070 'on sale' (haha) after waiting a few months to find one. IIRC, the hashrate was higher on the 580.
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
The problem is you don't get perfect scaling when increasing CUs and clocks. We can see this comparing the 5700 XT and 6900 XT: the 6900 XT is only about 2x faster despite double the CUs and 1.18x higher clocks.

A 1.13x IPC and 1.13x clock gain would land in the 2.1x area for average performance increase, so to me 240 CUs makes more sense.
I was talking about average performance over RDNA2 when I made those points, not absolute.

As for RDNA2 currently being slower per clock/CU than RDNA1, this is not greatly surprising, as RDNA2 is much more recent and AMD tend to lag a bit on optimising their drivers properly for new µArchs, going back at least as far as the early GCN era.

Give it another six months before making such judgements.
I get 800-1000mm² for a 240 CU monolithic die using 0.6-0.7x scaling for 5nm; one chiplet should be around 300mm².
Is each chiplet 80 CUs in this scenario?

If so, regardless of it being chiplets or not, that is still a huge 900mm² at a state-of-the-art fab node even without the IO/cache die - yield improvements from smaller compute dies only decrease the price so much.

This is beyond even Threadripper 3 at 7nm - which is what, 600mm² minus the 12nm IOD on the 3990X?

Such a 3 GCD SKU would be insanely expensive.

Likely eclipsing the 3990X launch price by at least $1000.

This is not a viable market strategy considering how much people are complaining about high end GPU costs already.
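
To put a crude number on that yield point, here is a minimal sketch using a simple Poisson defect model (the defect density and wafer cost are illustrative assumptions, not known TSMC figures):

```python
import math

# Chiplet vs monolith silicon cost sketch (Poisson yield model; all inputs assumed).
DEFECT_DENSITY = 0.001   # defects per mm² (~0.1/cm², an illustrative mature-node figure)
WAFER_COST     = 17000   # assumed cost of a 5nm-class wafer, USD
WAFER_AREA     = math.pi * (300 / 2) ** 2 * 0.95   # usable mm² of a 300mm wafer

def cost_per_good_die(area_mm2):
    """Silicon cost of one good die, ignoring packaging, test and edge losses."""
    yield_rate = math.exp(-DEFECT_DENSITY * area_mm2)
    dies_per_wafer = WAFER_AREA / area_mm2
    return WAFER_COST / (dies_per_wafer * yield_rate)

one_big_die    = cost_per_good_die(900)       # hypothetical 900mm² monolith (ignores the reticle limit)
three_chiplets = 3 * cost_per_good_die(300)   # three 300mm² GCDs
print(f"monolith: ~${one_big_die:.0f}, three chiplets: ~${three_chiplets:.0f}")
```

With those assumed numbers the chiplets roughly halve the compute-silicon cost, but it is still several hundred dollars of leading-edge silicon per card before the IO/cache die, packaging and memory are added, which is the point about yield savings only going so far.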
 

Glo.

Diamond Member
Apr 25, 2015
5,707
4,551
136
I did say a while after launch. Polaris really shone in value shortly before and for a long time after Turing, by which point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.
In my European country, there are PLENTY of RX 6700 XTs in stock, available for order, but with a pretty hefty markup - however, nowhere near as high as Nvidia GPUs have.

Like, I mean, seriously, you have to pay 50€ more to jump from an RTX 3060 to an RX 6700 XT, and you don't have to wait a few weeks for it!
 

Gideon

Golden Member
Nov 27, 2007
1,628
3,658
136
In my European country, there are PLENTY of RX 6700 XTs in stock, available for order, but with a pretty hefty markup - however, nowhere near as high as Nvidia GPUs have.

Like, I mean, seriously, you have to pay 50€ more to jump from an RTX 3060 to an RX 6700 XT, and you don't have to wait a few weeks for it!
It's the exact same here.
The 6700 XT used to be around 1150€ at release and has gone down to 1000€, widely available. From Nvidia you maybe have a couple of 3060s for 950€ and nothing above that, except a few 3090s for 3000€ every now and then.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
In my European country, there are PLENTY of RX 6700 XTs in stock, available for order,

Can't say the same. In fact, the pure opposite. The biggest shop will open orders for some GPU models tomorrow. I looked at the prices. Ridiculous. Banging my head that I didn't get a 3060 Ti last December. 6700 XTs are listed for >$900, the RTX 3060 for >$600, and the RTX 2060, sic, for $400.
 

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Polaris really shone in value shortly before and for a long time after Turing, by which point the media had covered it several times over and praised the value it brought on several occasions. Yet still no progress.

Well, because it wasn't a lot better than, say, a 290(X) in terms of pure performance. It was for sure not an upgrade route for many previous "high end" owners, where high end meant a $250-$300 price point; hence the 580 was often worse in terms of performance/$ due to the 290X fire sales in the year(s) before. The only thing going for it was the reduced power use. The issue is the RX 580 looks good now because of the current insane pricing, but when it launched it was mostly "meh" in performance/$. Either you had something a lot worse than a 290(X) and then you went with a 570, or you went with something better than a 580 (= Nvidia).

I did neither due to what I thought were terrible offers. Well, I look like a fool now and am just hoping my 290X will last at least another year, else I'm looking at paying more for the same performance than I did 6 years ago. Think about that.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Well, because it wasn't a lot better than, say, a 290(X) in terms of pure performance. It was for sure not an upgrade route for many previous "high end" owners, where high end meant a $250-$300 price point; hence the 580 was often worse in terms of performance/$ due to the 290X fire sales in the year(s) before. The only thing going for it was the reduced power use. The issue is the RX 580 looks good now because of the current insane pricing, but when it launched it was mostly "meh" in performance/$. Either you had something a lot worse than a 290(X) and then you went with a 570, or you went with something better than a 580 (= Nvidia).

I did neither due to what I thought were terrible offers. Well, I look like a fool now and am just hoping my 290X will last at least another year, else I'm looking at paying more for the same performance than I did 6 years ago. Think about that.

The 290X was $549 at launch. The RX 480 launched at $200 for the 4GB model and $239 for the 8GB. Polaris was never marketed as a high-end card, but it was a rock-solid mid-range card. And when it came to performance per watt, it blew the 290X/390X out of the water.

Yes, the 290X/390X did drop in price after the mining crash, but those cheap used cards should not be considered when comparing prices of a new product.
 

Trumpstyle

Member
Jul 18, 2015
76
27
91
I was talking about average performance over RDNA2 when I made those points, not absolute.

As for RDNA2 currently being slower per clock/CU than RDNA1, this is not greatly surprising, as RDNA2 is much more recent and AMD tend to lag a bit on optimising their drivers properly for new µArchs, going back at least as far as the early GCN era.

Give it another six months before making such judgements.

Is each chiplet 80 CUs in this scenario?

If so, regardless of it being chiplets or not, that is still a huge 900mm² at a state-of-the-art fab node even without the IO/cache die - yield improvements from smaller compute dies only decrease the price so much.

This is beyond even Threadripper 3 at 7nm - which is what, 600mm² minus the 12nm IOD on the 3990X?

Such a 3 GCD SKU would be insanely expensive.

Likely eclipsing the 3990X launch price by at least $1000.

This is not a viable market strategy considering how much people are complaining about high end GPU costs already.
It might not matter; eek2121 is suggesting we could see very high clocks for RDNA3, and then it's possible to hit that +2.5x performance target.

I'm still at 80 CUs per chiplet, despite there being some speculation on small chiplets like 8x20 or 4x40 CU for Navi31, because of the macOS leak.

Gamers are willing to pay over $2000 for a GPU, we're seeing it now :)
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
As long as we still buy them, we can complain all we want. :p
True, but the question is: will the number of people willing to shell out for such an insane SKU justify reducing the number of chiplets available for lower-priced SKUs that will attract far more buyers?
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,026
136
The 290X was $549 at launch. The RX 480 launched at $200 for the 4GB model and $239 for the 8GB. Polaris was never marketed as a high-end card, but it was a rock-solid mid-range card. And when it came to performance per watt, it blew the 290X/390X out of the water.

Yes, the 290X/390X did drop in price after the mining crash, but those cheap used cards should not be considered when comparing prices of a new product.

It kind of makes you wonder why AMD didn't keep Polaris alive. Maybe push it to TSMC 10nm (or they could've just left it on 14nm). They could have released it as a low-end Radeon 6300 or something. Add 8GB to it. It might've helped their capacity issues.
 

Tup3x

Senior member
Dec 31, 2016
963
948
136
Can't say the same. In fact, the pure opposite. The biggest shop will open orders for some GPU models tomorrow. I looked at the prices. Ridiculous. Banging my head that I didn't get a 3060 Ti last December. 6700 XTs are listed for >$900, the RTX 3060 for >$600, and the RTX 2060, sic, for $400.
I did buy a 3060 Ti when it was available immediately after launch. I didn't think 559€ was good value (at least this MSI Gaming X Trio is cool and quiet), but considering how much this thing costs now and how bad the availability is, it was the deal of the century.

I have noticed that Radeons have better availability. Or to be more specific: people do not want to buy them. I do have my reasons why I would prefer green team but that doesn't necessarily apply to everyone, so I do wonder...
 

soresu

Platinum Member
Dec 19, 2014
2,662
1,862
136
I have noticed that Radeons have better availability. Or to be more specific: people do not want to buy them. I do have my reasons why I would prefer green team but that doesn't necessarily apply to everyone, so I do wonder...
Not necessarily.

It could just be that far more people than usual are still cash-strapped at this point in a release cycle, and those that do want them/have the cash may have simply given up waiting and looking for supply.

As someone who has watched stock market prices go up and down I can sympathize with the monotony of the waiting game.
 

YBS1

Golden Member
May 14, 2000
1,945
129
106
I have noticed that Radeons have better availability. Or to be more specific: people do not want to buy them. I do have my reasons why I would prefer green team but that doesn't necessarily apply to everyone, so I do wonder...
How do you figure? I went to both Newegg and Amazon to look for available stock of 3080/90 and 6800/6900 cards, in stock and sold by a 1st party at or around MSRP, not marketplace sellers... nothing from either vendor. Newegg did have two Radeon models in stock, a 6800 XT for double MSRP and a 6900 XT Red Devil for $2619. I don't call that better availability.

In any case, pretty much everything is available right now if you're willing to pay enough for it. Better mining performance is going to cause the Nvidia models to be snapped up at a higher price than the Radeons, and no one can deny that fact, but that's not by gamers. People buying for games are pretty much still snapping up whichever they can get at the price point they are willing to pay at this point. I know I wasn't brand particular this go around; I would have jumped on a 3090, 3080 or 6800, any would do.

In fact, my kick-myself moment was back when, just after the 3090 was released, Newegg had the Gigabyte Aorus 3090 eGPU external gaming box available for MSRP briefly and I passed on it thinking "I'll just wait for a normal 3090"... big mistake.
 

Tup3x

Senior member
Dec 31, 2016
963
948
136
How do you figure? I went to both Newegg and Amazon to look for available stock of 3080/90 and 6800/6900 cards, in stock and sold by a 1st party at or around MSRP, not marketplace sellers... nothing from either vendor. Newegg did have two Radeon models in stock, a 6800 XT for double MSRP and a 6900 XT Red Devil for $2619. I don't call that better availability.

In any case, pretty much everything is available right now if you're willing to pay enough for it. Better mining performance is going to cause the Nvidia models to be snapped up at a higher price than the Radeons, and no one can deny that fact, but that's not by gamers. People buying for games are pretty much still snapping up whichever they can get at the price point they are willing to pay at this point. I know I wasn't brand particular this go around; I would have jumped on a 3090, 3080 or 6800, any would do.

In fact, my kick-myself moment was back when, just after the 3090 was released, Newegg had the Gigabyte Aorus 3090 eGPU external gaming box available for MSRP briefly and I passed on it thinking "I'll just wait for a normal 3090"... big mistake.
Well, that's the situation here in Finland.
 

Mopetar

Diamond Member
Jan 31, 2011
7,837
5,992
136
With the Computex announcements of AMD using stacked SRAM on their CPUs, I'm wondering what implications this has for Infinity Cache in future AMD GPUs. It seems like they'll be able to scale the size of Infinity Cache considerably, or even utilize stacking to produce chips with a smaller die size without sacrificing cache size.
 

maddie

Diamond Member
Jul 18, 2010
4,740
4,674
136
With the Computex announcements of AMD using stacked SRAM on their CPUs, I'm wondering what implications this has for Infinity Cache in future AMD GPUs. It seems like they'll be able to scale the size of Infinity Cache considerably, or even utilize stacking to produce chips with a smaller die size without sacrificing cache size.
The stacked cache was designed to cover only the lowest-power part of the CPU chiplet die, and not by accident. I don't think those copper vias can handle the thermals from logic loads. The total area of the copper vias must be only a few mm². How do you dissipate 100+ W through this at a low temperature delta?
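
For a sense of scale, here is a crude heat-flux comparison (a minimal sketch; the via area, power figure and die area below are assumptions for illustration, not measured V-Cache numbers):

```python
# Rough heat-flux comparison for pushing core power through a small bonded area.
# All figures are illustrative assumptions, not measured V-Cache data.
core_power_w = 100.0   # assumed power of the logic that would sit under a stacked die
via_area_mm2 = 5.0     # assumed total cross-section of the copper vias / bond pads
die_area_mm2 = 70.0    # assumed area of a whole CPU chiplet, for comparison

flux_through_vias = core_power_w / via_area_mm2   # ~20 W/mm²
flux_over_die     = core_power_w / die_area_mm2   # ~1.4 W/mm²

print(f"heat flux concentrated through the vias: ~{flux_through_vias:.0f} W/mm²")
print(f"heat flux spread over the whole die:     ~{flux_over_die:.1f} W/mm²")
```

An order of magnitude higher flux concentrated through the bond region is exactly the kind of thing that forces a large temperature delta, which is presumably why the stack sits over the low-power L3 region rather than the cores.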