Speculation: RDNA3 + CDNA2 Architectures Thread


uzzi38

Platinum Member
Oct 16, 2019
2,617
5,867
146

eek2121

Platinum Member
Aug 2, 2005
2,930
4,025
136
No way was a 3090 anywhere near $1200 until basically recently. Most AIBs would have originally priced their cards above the $1,499 MSRP. Maybe if you got it off the back of a truck?

I am interested to see if nVidia does a Fake MSRP like they did with the 3080 and 2080 Ti.


I had a friend who paid $1,199 + tax for a 3090 at launch. They stayed at that price until the shortage hit. I regret not driving 4 hours to Micro Center to pick one up myself, since I later bought one for something like $1,600-$1,700.

I am hoping to be able to go AMD this time around for better Linux compatibility. Hopefully their RT performance can match or exceed the 3090.
 
  • Like
Reactions: scineram

Kaluan

Senior member
Jan 4, 2022
500
1,071
96
N31 is rumored to be coming out sometime in November, so pretty unlikely. Although you might see something from AMD before the 4090's release, depending on how deep into October that release is.
Well "together we advance--" Ryzen 7000 event was announced on August 16th and retail launch is on September 27th. Almost a month and a half starting ~September 20th lines up just right with the begging of November.

Anyway, did anyone catch this? (from AMD's Senior Gaming Marketing suit, obviously in reference to 4 slot/600mm2+ Lovelace):


 
  • Like
Reactions: Tlh97 and scineram

Frenetic Pony

Senior member
May 1, 2012
218
179
116
Well "together we advance--" Ryzen 7000 event was announced on August 16th and retail launch is on September 27th. Almost a month and a half starting ~September 20th lines up just right with the begging of November.

Anyway, did anyone catch this? (from AMD's Senior Gaming Marketing suit, obviously in reference to 4 slot/600mm2+ Lovelace):



Feels slightly dumb. "Bigger isn't better" is just... really, that's the quote you go with? I thought you were supposed to be a PR professional.

Anyway, I wouldn't be surprised if AMD's own flagship draws less power than Nvidia's. AMD was already ahead in perf/W and they kept that going with RDNA3. Combined with the bandwidth limitations, I'd hardly be surprised at a 400W-ish card from AMD vs. 450W+ for that Nvidia behemoth.

But max power for consumers is 675W now. I do wonder if there's a growing market for "Uber GPUs" that you could put in render farms/datacenters (vGPU it and split it up into a dozen workloads or more?) and the just-appearing holographic art displays (no, seriously: https://www.lightfieldlab.com/ and the like). Maybe a 600-800W HBM3 GPU could find a big enough audience to make it worth it, at least for AMD and their chiplet arch. Make a new HBM memory controller chiplet (or just adapt CDNA 3's) and drop N extra workgroup chiplets in... I can see it working. $3k for the base consumer version, $5k for the 128GB, 800W "pro/datacenter" version? Even if you were limited to the tens of thousands of units, that's tens to hundreds of millions in sales at high profit margins. Maybe not a "priority" market, but I could see it as a "release it a year+ later" market.
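Quick back-of-the-envelope on that niche, using only the hypothetical prices and volumes from this post (none of these are real figures):

Code:
# Rough sizing of the hypothetical "Uber GPU" niche described above.
# All prices and unit counts are the guesses from this post, not real data.
price_consumer = 3_000       # $ for the base consumer version
price_datacenter = 5_000     # $ for the 128GB / 800W "pro/datacenter" version
for units in (10_000, 100_000):                  # "tens of thousands of units"
    low = units * price_consumer
    high = units * price_datacenter
    print(f"{units:>7} units -> ${low/1e6:.0f}M to ${high/1e6:.0f}M in revenue")
# 10,000 units  -> $30M to $50M
# 100,000 units -> $300M to $500M, i.e. tens to hundreds of millions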
 

Kaluan

Senior member
Jan 4, 2022
500
1,071
96
Feels slightly dumb. "Bigger isn't better" is just... really, that's the quote you go with? I thought you were supposed to be a PR professional.
That's not the quote now, is it?
It's also just a tweet, not a freakin' super bowl commercial. So... who cares?

As I see it, it has served its purpose just fine. Low-key teasing their strengths.
The vast majority of people are not taking the quad-slot/660W madness nVidia is pushing very well. If AMD can deliver something more sensible, they already have a leg up on nVidia, even before any other consideration.
 

fleshconsumed

Diamond Member
Feb 21, 2002
6,483
2,352
136
Eh, I'm too lazy to dig out exact die figures, but from what I remember the ATI 3870/4870/5870 dies were much smaller and more efficient than Nvidia's while being nearly as fast. So yeah, bigger is not always better. Personally I'm hoping AMD can repeat history.

EDIT: by all reports the power consumption of next-gen nVidia cards is totally bonkers; if AMD can pull another 5870 out of the hat, I'd totally buy that instead.
 
Last edited:

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
Eh, I'm too lazy to dig out exact die figures, but from what I remember the ATI 3870/4870/5870 dies were much smaller and more efficient than Nvidia's while being nearly as fast. So yeah, bigger is not always better. Personally I'm hoping AMD can repeat history.
HD3870-4870 were a lot smaller, but weren't more efficient and were not nearly as fast.
What you said is true for HD5870.
Data from TPU reviews.
Link                         HD 3870               GeForce 8800 GTS (G92)
Die size (process)           192 mm2 (55nm)        324 mm2 (65nm)
Performance                  100%                  119%
Peak power (whole system)    217 W                 239 W

Link                         HD 4870               GTX 280
Die size (process)           256-282? mm2 (55nm)   576 mm2 (65nm)
Performance                  100%                  113%
Peak power (card only)       151 W                 171 W

Link                         HD 5870               GTX 480
Die size (process)           334 mm2 (40nm)        529 mm2 (40nm)
Performance                  100%                  108%
Peak power (card only)       144 W                 257 W

I don't see a repeat of this being possible with RDNA3.
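For what it's worth, here's a quick script that turns those numbers into perf-per-area and perf-per-watt ratios. The data is taken straight from the table above (the 4870 row uses the 256 mm2 end of the die size range):

Code:
# (name, die size mm^2, relative performance %, peak power W), per the table above.
# Note: the HD 3870 / 8800 GTS power figures are whole-system, the rest card-only.
rows = [
    (("HD 3870", 192, 100, 217), ("8800 GTS (G92)", 324, 119, 239)),
    (("HD 4870", 256, 100, 151), ("GTX 280",        576, 113, 171)),
    (("HD 5870", 334, 100, 144), ("GTX 480",        529, 108, 257)),
]

for (a_name, a_mm2, a_perf, a_w), (n_name, n_mm2, n_perf, n_w) in rows:
    perf_per_area = (a_perf / a_mm2) / (n_perf / n_mm2)   # AMD relative to Nvidia
    perf_per_watt = (a_perf / a_w) / (n_perf / n_w)
    print(f"{a_name} vs {n_name}: {perf_per_area:.2f}x perf/area, {perf_per_watt:.2f}x perf/W")

# HD 3870 vs 8800 GTS (G92): 1.42x perf/area, 0.93x perf/W
# HD 4870 vs GTX 280:        1.99x perf/area, 1.00x perf/W
# HD 5870 vs GTX 480:        1.47x perf/area, 1.65x perf/W

Which matches the point: the 3870/4870 only won on area, while the 5870 won on both.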
 
Last edited:
  • Like
Reactions: bearmoo and RnR_au

fleshconsumed

Diamond Member
Feb 21, 2002
6,483
2,352
136
HD3870-4870 were a lot smaller, but weren't more efficient and were not nearly as fast.
What you said is true for HD5870.
Data from TPU reviews.
Link                         HD 3870               GeForce 8800 GTS (G92)
Die size (process)           192 mm2 (55nm)        324 mm2 (65nm)
Performance                  100%                  119%
Peak power (whole system)    217 W                 239 W

Link                         HD 4870               GTX 280
Die size (process)           256-282? mm2 (55nm)   576 mm2 (65nm)
Performance                  100%                  113%
Peak power (card only)       151 W                 171 W

Link                         HD 5870               GTX 480
Die size (process)           334 mm2 (40nm)        529 mm2 (40nm)
Performance                  100%                  108%
Peak power (card only)       144 W                 257 W

I don't see a repeat of this being possible with RDNA3.
Thanks for looking up the numbers. Yes, the 5870 was a beast for its time. The thing is, for the past 5 years AMD has consistently delivered, and not only that, they've overdelivered compared to what they promised. If they're to be believed, and they do have that track record, RDNA3 sounds really good. We'll see, it won't be too long now; personally, I'm hopeful RDNA3 will deliver as promised.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,780
7,108
136
Eh, I'm too lazy to dig out exact die figures, but from what I remember the ATI 3870/4870/5870 dies were much smaller and more efficient than Nvidia's while being nearly as fast. So yeah, bigger is not always better. Personally I'm hoping AMD can repeat history.

EDIT: by all reports the power consumption of next-gen nVidia cards is totally bonkers; if AMD can pull another 5870 out of the hat, I'd totally buy that instead.

- No. I was arguing this on the forums back in the day from the opposite perspective: the small die strategy really ****ed AMD long term. Retail users care about performance numbers on reviewer charts. FPS is king. It's why NV will go for the halo spot at the expense of absolutely everything else. It's why AMD has begun adopting this strategy as well.

For enthusiasts such as those on this forum, the small die strat made sense because we tend to look at a few more dimensions of a product than the average retail user. As a business strategy, it was a huge misstep.

Let's count the ways the small die strategy ****ed AMD:

- It cemented AMD as the "value brand" in the GPU market. Never as fast or feature-rich as NV, always cheaper.
- A race to the bottom in terms of revenue. AMD was so marketshare-driven that they seemed to have forgotten that they need to actually make money to keep bankrolling their operation. You don't do that by going small and cheap, especially when you're competing against someone like NV.
- CUDA. By going small but never winning the performance crown or going feature-rich, AMD allowed NV to solidify CUDA as the de facto GPGPU programming language during the small die era. This is a mistake that has butt****ed AMD to this very day.

AMD needs to go Halo. Dump whatever you need to into your top SKU to either get or convincingly contest the performance crown at the top. You can always pare back from the top, but you cannot add more to an inherently underdeveloped die to get the crown and win the hearts and minds of retail users.

Think of it like the hot/cold thing: I can always put on more clothes to warm up if it's cold, but there are only so many clothes I can take off to cool down when it's hot. You can always overdesign an arch and then scale it back to hit power and die size targets, but there is only so much clockspeed you can throw at a small design before you hit the limits of physics.

Edit: Found it, my OG thread from ye olde days https://forums.anandtech.com/threads/was-amds-small-die-strategy-a-huge-mistake.2257367/
 
Last edited:

eek2121

Platinum Member
Aug 2, 2005
2,930
4,025
136
That's not the quote now, is it?
It's also just a tweet, not a freakin' super bowl commercial. So... who cares?

As I see it, it has served its purpose just fine. Low-key teasing their strengths.
The vast majority of people are not taking the quad-slot/660W madness nVidia is pushing very well. If AMD can deliver something more sensible, they already have a leg up on nVidia, even before any other consideration.

It wouldn't surprise me if the AMD cards end up being significantly more efficient than the NVIDIA parts. I'm not saying they will or won't beat NVIDIA in terms of absolute performance, but something tells me we'll see them hit the perf/watt/$ right on the nose, coming out in front of NVIDIA for those metrics at least.

EDIT: Also thanks to the chiplet design, the cards should be slimmer as well.
 
Last edited:
  • Like
Reactions: Kaluan

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
It wouldn't surprise me if the AMD cards end up being significantly more efficient than the NVIDIA parts. I'm not saying they will or won't beat NVIDIA in terms of absolute performance, but something tells me we'll see them hit the perf/watt/$ right on the nose, coming out in front of NVIDIA for those metrics at least.

EDIT: Also thanks to the chiplet design, the cards should be slimmer as well.
I am not sure about RDNA3 having significantly better power efficiency than Ada, at least not with RT enabled, but we will see.
What little info we have is from Bondrewd, who said N33 consumes less than 160-170W in games.
That looks great, but we don't know the actual performance. Supposedly, N33 is attacking RX 6900 XT levels of performance at 1080p, but that's game dependent. Performance could be anywhere from RX 6800 to RX 6900 XT depending on the game; that's still very good from a 6nm chip using less than 160W.
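If those rumors held up, the perf/W math against the card it's supposedly matching would look roughly like this (the 300W figure is the RX 6900 XT's reference board power; the 160W N33 figure is just the rumor above):

Code:
# Hypothetical perf/W comparison based on the rumor above:
# N33 matching an RX 6900 XT at 1080p while drawing under ~160W.
power_6900xt = 300   # W, reference RX 6900 XT board power
power_n33 = 160      # W, rumored upper bound for N33

# At equal performance, perf/W scales inversely with power draw.
gain = power_6900xt / power_n33
print(f"~{gain:.2f}x the 6900 XT's perf/W")   # ~1.88x, well beyond a +50% improvement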

P.S. Nvidia's announcement of Ada is only 3 days away. AMD shouldn't be too far away with RDNA3's announcement.
 
Last edited:
  • Like
Reactions: Tlh97 and Kaluan

Kaluan

Senior member
Jan 4, 2022
500
1,071
96
Well, if AMD can bring RX 6800 to 6900 XT performance (possibly with much better ray tracing capabilities) on a similar node to the one those are made on, with < 180W consumption (for < $400, preferably), then that can only mean great things for their N5-based stuff (Navi32/31).

That is, if the chiplet design doesn't incur a big power cost of some sort (I don't see this being a thing, but just saying).
 
  • Like
Reactions: Tlh97
Mar 11, 2004
23,070
5,545
146
Well, if AMD can bring RX 6800 to 6900 XT performance (possibly with much better ray tracing capabilities) on a similar node to the one those are made on, with < 180W consumption (for < $400, preferably), then that can only mean great things for their N5-based stuff (Navi32/31).

That is, if the chiplet design doesn't incur a big power cost of some sort (I don't see this being a thing, but just saying).

I think chiplets will limit how far they can push down the minimum power use (although perhaps not, if they can shut off entire chiplets at low utilization), but they will be beneficial for limiting max power use. So at idle it might use a chunk more than a single monolithic GPU, but at higher utilization it could significantly improve things. And by spreading the heat/power density out, it might be beneficial for the HSF as well, since at non-max/non-low loads the GPU can balance work across the chiplets (i.e. rely on running more compute units at lower, more efficient clock speeds).
 
  • Like
Reactions: Kaluan

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
I think chiplets will limit how far they can push down the minimum power use
That's indeed the case. Think of it this way: a big part of a chip's power usage is transferring and holding data. Transferring data over short distances within the same die is always the best case. The longer the traces and the more jumps between different materials (e.g. die and substrate) and signaling techniques (e.g. parallel and serial), the more energy is spent on translation and transportation of the data. That's usually measured in picojoules per bit transferred. And holding data in memory and caches costs energy as well.
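As a rough illustration of why those jumps matter: link power is just energy-per-bit times traffic. The pJ/bit values below are generic ballpark assumptions for on-die versus die-to-die links, not AMD figures.

Code:
# Power spent purely on moving data: (joules per bit) * (bits per second).
# The pJ/bit values here are generic ballpark assumptions, not AMD numbers.
PJ = 1e-12

def link_power_watts(pj_per_bit: float, gb_per_s: float) -> float:
    bits_per_s = gb_per_s * 1e9 * 8
    return pj_per_bit * PJ * bits_per_s

traffic = 1000  # GB/s of cache/memory traffic, purely for illustration
print(link_power_watts(0.1, traffic))  # ~0.8 W for short on-die wires
print(link_power_watts(1.0, traffic))  # ~8 W over a die-to-die link on the substrate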
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
:eek:
I don't think he meant after OC, but that would be even more shocking.
Even if we were talking only about 3.6GHz, that's still 29% higher than the 6500 XT at 2.8GHz, and we all know that chip was the least efficient of the RDNA2-based SKUs.
Now the big question is, could they improve perf/W by >=50% even with such high clocks?
The second question is if N33 has comparable clocks to its 5nm siblings or not.

edit: If true, then it's peak boost for sure.
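A toy way to frame that question is the usual first-order model where performance scales with clock and dynamic power scales with capacitance x frequency x voltage squared. The voltages and capacitance factor below are made-up illustrative values, not leaks:

Code:
# First-order model: perf ~ f, dynamic power ~ C * f * V^2, so perf/W ~ 1 / (C * V^2).
# Voltages and the capacitance factor are illustrative guesses, not leaked figures.
def perf_per_watt(freq_ghz: float, volts: float, cap_rel: float = 1.0) -> float:
    return freq_ghz / (cap_rel * freq_ghz * volts ** 2)

old = perf_per_watt(2.8, 1.10)                 # 6500 XT-style operating point
new = perf_per_watt(3.6, 0.95, cap_rel=0.85)   # higher clock, but lower V and less C per unit work
print(f"{new / old:.2f}x perf/W in this toy example")   # ~1.58x

In other words, the clock by itself doesn't decide perf/W; whether AMD can hold voltage and switched capacitance down at those clocks is the real question.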
 
Last edited:
  • Like
Reactions: Mopetar

Saylick

Diamond Member
Sep 10, 2012
3,124
6,291
136
  • Like
Reactions: Joe NYC and Mopetar

beginner99

Diamond Member
Jun 2, 2009
5,210
1,580
136
Even if we were talking only about 3.6GHz, that's still 29% higher than the 6500 XT at 2.8GHz, and we all know that chip was the least efficient of the RDNA2-based SKUs.
Yeah, I'm not sure why such high clocks should be a good thing. I fear high power usage and possibly, in typical AMD fashion, clocks pushed way too far above the sweet spot. Wouldn't one generally prefer a lower clock if you could hit your performance target that way?
 

DisEnchantment

Golden Member
Mar 3, 2017
1,601
5,779
136

AMD's Sam Naffziger seems to hint at something.

Contributing to this energy-conscious design, AMD RDNA™ 3 refines the AMD RDNA™ 2 adaptive power management technology to set workload-specific operating points, ensuring each component of the GPU uses only the power it requires for optimal performance. The new architecture also introduces a new generation of AMD Infinity Cache™, projected to offer even higher-density, lower-power caches to reduce the power needs of graphics memory, helping to cement AMD RDNA™ 3 and Radeon™ graphics as a true leader in efficiency.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
Yeah, I'm not sure why such high clocks should be a good thing. I fear high power usage and possibly, in typical AMD fashion, clocks pushed way too far above the sweet spot. Wouldn't one generally prefer a lower clock if you could hit your performance target that way?
In my opinion, they need such high clocks to meet their performance target.
They want to save die space, so they use high clocks. Power consumption will certainly suffer, but they have officially claimed at least 50% better power efficiency.
 

leoneazzurro

Senior member
Jul 26, 2016
919
1,450
136
:eek:
I don't think he meant after OC, but that would be even more shocking.
Even if we were talking only about 3.6GHz, that's still 29% higher than the 6500 XT at 2.8GHz, and we all know that chip was the least efficient of the RDNA2-based SKUs.
Now the big question is, could they improve perf/W by >=50% even with such high clocks?
The second question is if N33 has comparable clocks to its 5nm siblings or not.

edit: If true, then it's peak boost for sure.

Yes, we are probably speaking about peak boost here (and the same was true for the 2.8GHz of the 6500 XT). Later in that discussion another "serious" leaker said it was referring to a Sapphire Pulse, that is, the value proposition (slight to no OC).

Big, if true. I'm still keeping my expectations low and only assuming that the expected sustained boost clock is in the low 3 GHz range, or in other words still roughly 75-80 TFLOPS. 3.5+ GHz, if it can happen, will either be the peak boost and/or possible only with an OC.

It all depends on the power/clock management. We will see what AMD manages to do here. But yes, the sustained clock should generally be lower than the peak boost.
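For reference, the ~75-80 TFLOPS figure quoted above falls straight out of the usual FMA math, assuming the 12,288 FP32-lane configuration the N31 rumors imply (an assumption, not a confirmed spec):

Code:
# Peak FP32 throughput: lanes * 2 ops per clock (FMA) * clock (GHz) / 1000 = TFLOPS.
# The 12,288-lane count is the rumored N31 configuration, not a confirmed spec.
def tflops(fp32_lanes: int, clock_ghz: float) -> float:
    return fp32_lanes * 2 * clock_ghz / 1000

print(tflops(12_288, 3.0))    # ~73.7 TFLOPS
print(tflops(12_288, 3.25))   # ~79.9 TFLOPS, i.e. "roughly 75-80 TFLOPS" at low-3GHz clocks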

Yeah, I'm not sure why such high clocks should be a good thing. I fear high power usage and possibly, in typical AMD fashion, clocks pushed way too far above the sweet spot. Wouldn't one generally prefer a lower clock if you could hit your performance target that way?

These high clocks are there because RDNA3 seems to aim for high area efficiency, that is, using fewer shaders but at higher clocks. What AMD seems to have achieved in recent years, however, is a magical combination of high clocks and reasonable power demands, a trend followed by both their CPU and, more recently, GPU divisions.
 
  • Like
Reactions: Tlh97 and moinmoin

eek2121

Platinum Member
Aug 2, 2005
2,930
4,025
136
Yeah, I'm not sure why such high clocks should be a good thing. I fear high power usage and possibly, in typical AMD fashion, clocks pushed way too far above the sweet spot. Wouldn't one generally prefer a lower clock if you could hit your performance target that way?

High clocks do not necessarily mean high power consumption. See: EPYC or even Threadripper. Note that AMD is able to keep die area down because they moved memory controllers onto MCDs. This alone would allow for higher clocks, to say nothing of being on a node with 40% better power consumption.
 
  • Like
Reactions: Tlh97 and Kaluan

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
....
What AMD seems to have achieved in recent years, however, is a magical combination of high clocks and reasonable power demands, a trend followed by both their CPU and, more recently, GPU divisions.
Is it really magical?
On desktop, RDNA2 is more efficient than the competition in raster, if we exclude the 6500 XT and 6700 XT.
In mobile, they are clocked too high, so power consumption is also high and performance is not that great compared to Nvidia, from what I saw.

I have high expectations for RDNA3 in mobile.
 
Last edited:
  • Like
Reactions: Joe NYC