Question Speculation: RDNA3 + CDNA2 Architectures Thread


uzzi38

Platinum Member
Oct 16, 2019
2,552
5,527
146

Tup3x

Senior member
Dec 31, 2016
933
920
136
Interesting... Exynos 2400 will indeed use RDNA3 instead of RDNA2.

The bad stuff? It apparently can't beat Snapdragon 8 Gen 2:
 

coercitiv

Diamond Member
Jan 24, 2014
6,111
11,527
136
I don't understand why Samsung wasted transistors on RDNA3 CUs instead of just using more RDNA2 CUs.
N33 has almost 40% higher density over N23, even though N6 is marketed as having ~18% density over N7.

N23 is ~237mm2. N33 die is ~204mm2. How much smaller do you reckon N23 would be on N6 if you think RDNA3 CUs are a waste of transistors?
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,314
2,799
106
N33 has almost 40% higher density over N23, even though N6 is marketed as having ~18% density over N7.

N23 is ~237mm2. N33 die is ~204mm2. How much smaller do you reckon N23 would be on N6 if you think RDNA3 CUs are a waste of transistors?
As you said, N33 has 40% higher density.
N23: 46.67 MTr/mm2
N33: 65.19 MTr/mm2
N23 could be 201mm2 in the best case, if it used N6.
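The arithmetic in these two posts can be sketched as follows. The transistor counts (~11.06B for N23, ~13.3B for N33) are assumptions reconstructed from the quoted densities and die sizes, and the ~18% figure is TSMC's marketed N6-over-N7 density gain:

```python
# Die sizes from the thread; transistor counts (in millions) are the
# published figures these densities imply -- treat them as assumptions.
n23_mm2, n33_mm2 = 237.0, 204.0
n23_mtr, n33_mtr = 11_060.0, 13_300.0

n23_density = n23_mtr / n23_mm2        # ~46.67 MTr/mm^2
n33_density = n33_mtr / n33_mm2        # ~65.19 MTr/mm^2
gain = n33_density / n23_density - 1   # ~40% overall density gain

# Best case for a straight N23 shrink: only the process contributes,
# so apply N6's marketed ~18% density gain over N7.
n23_on_n6 = n23_mm2 / 1.18             # ~201 mm^2
```

The gap between the ~18% process gain and the ~40% observed gain is the part attributable to RDNA3's design density, which is the crux of the disagreement above.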
 

Panino Manino

Senior member
Jan 28, 2017
809
1,009
136
N33 has almost 40% higher density over N23, even though N6 is marketed as having ~18% density over N7.

N23 is ~237mm2. N33 die is ~204mm2. How much smaller do you reckon N23 would be on N6 if you think RDNA3 CUs are a waste of transistors?
RDNA3 is more area efficient than RDNA2.

All wonderful in theory.
In practice it delivers last year's performance. I noticed that the official blog post confirming RDNA3 talked about every kind of performance except GPU performance.
 

gdansk

Golden Member
Feb 8, 2011
1,905
2,228
136
All wonderful in theory.
In practice it delivers last year's performance. I noticed that the official blog post confirming RDNA3 talked about every kind of performance except GPU performance.
RDNA3 is more performance per area. Even on same process. That was the goal and at least they achieved that. Total performance isn't amazing but oh well. It's a good design for iGPUs if they can get it to clock right eventually.
 

Kemano

Junior Member
Sep 5, 2023
5
3
41
RDNA3 is more performance per area. Even on same process. That was the goal and at least they achieved that. Total performance isn't amazing but oh well. It's a good design for iGPUs if they can get it to clock right eventually.
I think better utilization of wafers (or wafer cost) by leveraging chiplets and 3D stacking was also among the goals of RDNA3.

In an economy where demand for GPUs is unlimited, optimizing your architecture for area seems a good idea, but then crypto crashed and as wafer supply became abundant, all the benefits coming from intense area efficiency suddenly became less advantageous.
As high density and high Fmax are often mutually exclusive (see Zen4 vs Zen4c), I'm not quite sure they can carry over the area efficiency they achieved with RDNA3 to future generations.

And we have yet to see what benefit 3D stacked IC would have provided.

To me the achievements of RDNA3 seem a bit useless or dead end.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
6,577
6,794
136
RDNA3 would have had to be in the design phase well before the crypto boom, so I don't think "unlimited GPU demand" was the motivating factor.

Large monolithic dies are just too damn expensive on newer processes, and AMD doesn't have the professional market, or even enough of the consumer market, to go toe-to-toe with NV on big dies, so chiplets keep the individual die cost down.

They'd also prefer to spend their wafer allocation on their core business, which is CPUs.

It would have been a coup if N31 even came within striking distance of AD102, it just didn't and now AMD looks kinda silly.
 
  • Like
Reactions: Mopetar and Tlh97

biostud

Lifer
Feb 27, 2003
18,111
4,542
136
RDNA3 is more performance per area. Even on same process. That was the goal and at least they achieved that. Total performance isn't amazing but oh well. It's a good design for iGPUs if they can get it to clock right eventually.
The major problem being if your competition is even better: 379mm^2 vs 304+225mm^2.

It would be interesting to know the cost to build a RTX 4080 vs a 7900XTX.
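A back-of-envelope way to frame that cost question is the classic dies-per-wafer formula plus a simple Poisson yield model. Everything numeric below (defect density, wafer prices, the 6 x 37.5mm2 MCD split) is a placeholder assumption for illustration, not a known figure:

```python
import math

def dies_per_wafer(die_mm2, wafer_d_mm=300):
    """Standard dies-per-wafer approximation with an edge-loss term."""
    r = wafer_d_mm / 2
    return int(math.pi * r**2 / die_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

def yield_rate(die_mm2, d0_per_cm2=0.09):
    """Poisson yield model; d0 is an assumed defect density."""
    return math.exp(-(die_mm2 / 100) * d0_per_cm2)

def cost_per_good_die(die_mm2, wafer_cost):
    return wafer_cost / (dies_per_wafer(die_mm2) * yield_rate(die_mm2))

# Placeholder wafer prices: ~$17k for an N5-class wafer, ~$10k for N6.
ad103 = cost_per_good_die(379, 17_000)
# N31 = one 304mm2 GCD on N5 plus six ~37.5mm2 MCDs on N6.
n31 = cost_per_good_die(304, 17_000) + 6 * cost_per_good_die(37.5, 10_000)
```

Under these assumptions the two end up in the same ballpark per unit of silicon; the small MCDs yield extremely well, which is the whole chiplet argument, but packaging cost (not modeled here) eats back some of the advantage.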
 

dr1337

Senior member
May 25, 2020
293
489
106
It would have been a coup if N31 even came within striking distance of AD102, it just didn't and now AMD looks kinda silly.
RTX 4080 vs a 7900XTX.
It all really depends on what you're benchmarking, and even then who is doing it. The 7900 XTX beats the 4090 in games like Far Cry, AC, and CoD, but will be slower than a 4080 in others like Halo and Rainbow Six. And even then, sites like TechSpot and TechPowerUp will disagree on the exact placing of the cards in the same games and settings.

I kinda wish someone would do a GPU roundup where they take the 10 best and worst games for each card and compare them. It's kinda weird to me how much controversy there was over the supposed Starfield FSR exclusivity, yet nobody is upset that AMD apparently pays developers to make some games run worse on a 4090 than on a card with nearly half its effective shader count. Computationally, based purely on specs, a 4090 should never perform below a 7900 XTX, yet sometimes it somehow does.
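For reference, the on-paper gap behind that claim works out roughly as below. A sketch only: clocks are the official boost figures, and counting RDNA3's dual-issue at full rate is an optimistic assumption the hardware rarely sustains in practice:

```python
def peak_fp32_tflops(shaders, boost_ghz, ops_per_clock=2):
    # 2 ops/clock = one fused multiply-add per lane per cycle
    return shaders * boost_ghz * ops_per_clock / 1000

rtx4090   = peak_fp32_tflops(16384, 2.52)     # ~82.6 TFLOPS
xtx_dual  = peak_fp32_tflops(6144, 2.5, 4)    # ~61.4 TFLOPS if dual-issue always hits
xtx_single = peak_fp32_tflops(6144, 2.5)      # ~30.7 TFLOPS without dual-issue
```

So even in the most generous accounting the 7900 XTX sits at roughly three quarters of the 4090's peak FP32 throughput, which is why results where it wins outright stand out.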
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,314
2,799
106
It's kinda weird to me how much controversy there was over the supposed Starfield FSR exclusivity, yet nobody is upset that AMD apparently pays developers to make some games run worse on a 4090 than on a card with nearly half its effective shader count.
So now AMD doesn't just pay developers to exclude DLSS from games, but also pays to directly cripple the competition? :eek:
[Image: far-cry-6-3840-2160.png — Far Cry 6 4K results]
[Image: rt-far-cry-6-3840-2160.png — Far Cry 6 4K ray tracing results]

I will write to TPU to stop testing a game, which clearly shows AMD is paying developers to cripple the competition. ;)
BTW, what will be the next conspiracy? :D

P.S. If there were more games like Far Cry 6, then RDNA3 wouldn't be considered as a flop. :p

edit: this post is intended to be sarcastic.
 
Last edited:

biostud

Lifer
Feb 27, 2003
18,111
4,542
136
It all really depends on what you're benchmarking, and even then who is doing it. The 7900 XTX beats the 4090 in games like Far Cry, AC, and CoD, but will be slower than a 4080 in others like Halo and Rainbow Six. And even then, sites like TechSpot and TechPowerUp will disagree on the exact placing of the cards in the same games and settings.

I kinda wish someone would do a GPU roundup where they take the 10 best and worst games for each card and compare them. It's kinda weird to me how much controversy there was over the supposed Starfield FSR exclusivity, yet nobody is upset that AMD apparently pays developers to make some games run worse on a 4090 than on a card with nearly half its effective shader count. Computationally, based purely on specs, a 4090 should never perform below a 7900 XTX, yet sometimes it somehow does.
On average they perform close to equal when you exclude ray tracing. Obviously you can always cherry-pick games to favor one or the other, and in the end it is only the games you actually play where the performance matters. :)

The total die area of N31 (304+225mm^2) is larger than AD103's 379mm^2, and if you enable ray tracing in heavy titles the 4080 wins over the 7900XTX.

Personally I think that nvidia has the best technology in almost every aspect, but I'm not willing to pay the premium for it, which is why I bought a 6800XT.
 

blackangus

Member
Aug 5, 2022
61
82
51
So now AMD doesn't just pay developers to exclude DLSS from games, but also pays to directly cripple the competition? :eek:
You mean like NVidia has done for years?
Maybe, just maybe, FC was optimized for consoles and AMD is in both major consoles so it just makes better use of AMD hardware.
Both cases are just "The way it's meant to be played" in different wrappers.
With AMD owning the gaming market (which is consoles) this will get to be more and more the case as developers optimize for AMD because they are console.
Smart move by AMD, to get the console market for the long term, as it helps them in the PC space as well.
It is now in developers' best interest to optimize for AMD de facto, because that means optimizing for both consoles and, where applicable, PC.
 
  • Haha
Reactions: TESKATLIPOKA

candasulas

Junior Member
Sep 30, 2020
24
1
41
Everyone is drowning in pages and pages of technical details. As a hardware lover for about 25 years, there are a few questions I would like to raise.

Why does AMD, which did great work with RDNA 2 on the console side, fall behind its rival on the PC side? Part of the blame lies with disgracefully optimized games. Graphics have reached a point where there is no longer a huge difference between a game running at the lowest setting and the highest. And most importantly: does ray tracing improve texture quality at all, beyond how light hits surfaces (puddles, walls and glass, sun rays in the sky, light from fixtures)? Does it add depth to the games' textures? No.

The main problem is that AMD is behind its rival in ray tracing performance. That is why they added AI cores to the 7000 series, but the software side is still not established. I will not dwell on the DLSS/FSR debate: both sides invented these upscaling technologies to squeeze out extra FPS at a time when their cards' raw power was not mature. Much like Hyper-Threading (HT), which presents more processor cores than the CPU physically has, interpolated frame generation makes the FPS number look higher than the hardware really delivers. That seems a bit fraudulent to me. I used Nvidia for about 5 years, and although DLSS/FSR is useful for the player, I rarely used it. Frankly, it is little more than cheating.

It is a software trick that emerged when the cards' real computing power was not enough, invented to get more FPS. That is why DLSS/FSR comparisons are not important to me.

As someone who switched from an RTX 3070 Ti to an RX 7800 XT, and who used legendary ATI cards such as the X1950 XTX in the past, my only expectation from AMD is further improved software optimization: more performance in third-party applications (3ds Max, Photoshop, V-Ray, AutoCAD, etc.), the way Nvidia's CUDA delivers, plus more stability in games and full use of the card's hardware features.

When it achieves this, the Radeon series will truly compete with its rival.

As for RDNA 3, it is a really good GPU, but AMD needs to provide better optimization in order to catch the RTX 4000 series (in FSR, frame generation and ray tracing).
 

Tigerick

Senior member
Apr 1, 2022
497
415
96
AMD makes the 6750 GRE official in the US as well:


10GB version costs $269 and 12GB version costs $289.

Hmm, I think it is time for the 7600 8GB to drop in price.
 

jpiniero

Lifer
Oct 1, 2010
14,409
5,117
136
AMD makes the 6750 GRE official in the US as well:


10GB version costs $269 and 12GB version costs $289.

Hmm, I think it is time for the 7600 8GB to drop in price.

It's only getting a DIY release in China.