
Question Speculation: RDNA3 + CDNA2 Architectures Thread

It looks to me like we will have the following:

RX 7900 XTX vs RTX 4080 16GB

MSRP: $999 vs $1199 = 17% lower
VRAM: 24GB vs 16GB = 50% more
Raster performance (4K): 25-30% higher
Ray tracing performance (4K): 25-30% slower
TDP: 355W vs 320W

Which one would you choose?
And don't forget there is a 3rd way: wait for the 4080 Ti 😎
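For what it's worth, the deltas in that comparison check out; a few lines of Python (MSRPs as announced, the performance ranges being the poster's estimates) confirm the arithmetic:

```python
# Sanity-check the spec deltas: announced MSRPs and VRAM sizes.
# The raster/RT percentages in the post are estimates, not measurements.
xtx_price, rtx4080_price = 999, 1199
xtx_vram, rtx4080_vram = 24, 16  # GB

price_delta = (rtx4080_price - xtx_price) / rtx4080_price * 100
vram_delta = (xtx_vram - rtx4080_vram) / rtx4080_vram * 100

print(f"Price: {price_delta:.0f}% lower")  # 17% lower
print(f"VRAM: {vram_delta:.0f}% more")     # 50% more
```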
 
Fair enough. Maybe in your case I would wait for N32, but who knows when they will release it, and at what price.
The 7900 XT is what I'd go for right now; a Sapphire Nitro+ 7900 XT is my end goal, unless the pricing on the XTX is so good I'll spring for that. It'll be enough for my needs and good for all the old games I play. I'll pick it up around the time the X3D processors hit the floor.
 
It looks to me like we will have the following:

RX 7900 XTX vs RTX 4080 16GB

MSRP: $999 vs $1199 = 17% lower
VRAM: 24GB vs 16GB = 50% more
Raster performance (4K): 25-30% higher
Ray tracing performance (4K): 25-30% slower
TDP: 355W vs 320W

Which one would you choose?
I saw comments expecting the RTX 4080 16GB to be discounted, because it's "unsellable" at the current price.
I have a question for you all.
If Nvidia discounts it significantly to $999, which one would you choose?
 
Just because you have no interest in RT doesn't make RDNA3 a fantastic product. Maybe for some it is, but as a complete product it is not.
Beauty is in the eye of the beholder. As a player of twitchy multiplayer games, a large portion of the features that make up the Nvidia feature portfolio is just not relevant to me. So why should I pay their premium for a 'complete product'?
BTW, those fake frames are coming to RDNA3, too. 😛
And I won't be using them 😉

Don't get me wrong. I think frame interpolation is nice tech for single-player 'pretty' games, and I'm happy that AMD will have it in its portfolio at some point next year.
 
For older people with bad eyesight and corrective lenses it doesn't make a huge difference. My eyes aren't the same as a 20-year-old's, or even a 30- or 40-year-old's. I can make out major details in RT scenes; anything beyond that, my eyes won't pick up on. I want good graphics with good frame rates. I'll leave stellar graphics at lower frame rates to the young.

That's a slippery slope: if your eyes aren't that great at discerning minor details anyway, surely the solution is to lower settings or use FSR? Otherwise I'm not sure what the logic is in spending $1k on a new GPU.
 
It looks to me like we will have the following:

RX 7900 XTX vs RTX 4080 16GB

MSRP: $999 vs $1199 = 17% lower
VRAM: 24GB vs 16GB = 50% more
Raster performance (4K): 25-30% higher
Ray tracing performance (4K): 25-30% slower
TDP: 355W vs 320W

Which one would you choose?
You forget that the RTX 4080 Ti and RTX 4090 Ti will launch next year. The RTX 4090 is not even the full die, while the RX 7900 XTX is, so Nvidia has not even launched its real flagship and AMD has clearly shown the white flag for this generation.
 
Beauty is in the eye of the beholder. As a player of twitchy multiplayer games, a large portion of the features that makes up the NVidia feature portfolio is just not relevant to me. So why should I pay their premium to have a 'complete product' ?
OK, in your case RT is not worth it, and RDNA3 is way better at that price.
And I won't be using them 😉

Don't get me wrong. I think frame interpolation tech is a nice tech for single player 'pretty' games and I'm happy that AMD will have it in its portfolio at some point next year.
I wouldn't use it, either. Even DLSS (or FSR) is not my cup of tea.
 
The reason AMD did not compare its products with the RTX 4090 is that it is not competing with it. When AMD says you should buy its products because of the power connector, that means they have no performance case to convince you over the RTX 4090.
 
The reason AMD did not compare its products with the RTX 4090 is that it is not competing with it. When AMD says you should buy its products because of the power connector, that means they have no performance case to convince you over the RTX 4090.
Reading AMD RTG slides is like reading tea leaves. No one really knows what they mean. In fact they might just say nothing.
 
You forget that the RTX 4080 Ti and RTX 4090 Ti will launch next year. The RTX 4090 is not even the full die, while the RX 7900 XTX is, so Nvidia has not even launched its real flagship and AMD has clearly shown the white flag for this generation.

They chose to compete up to the $999 market.
 
That's a slippery slope, if your eyes aren't that great at discerning minor details anyway, surely the solution is to lower settings/FSR? Otherwise I am not sure what logic is there in spending $1k for a new GPU?
If I wanted a high-resolution slideshow that made me sick, I'd visit a bar in Tijuana that starred a burro and a one-legged woman.
 
WGPs +20%, ROPs +50%, TMUs stayed the same per CU. Dual-issue shaders, 50% wider bus, -25% IC?
58B transistors (+116%) for this? Are you kidding?
Don't be angry, mate, wait for RDNA4.
RDNA4, I promise, will be awesome. 😊

Jokes aside,
The GCD is too small to compete at the top.
If you take out the interconnect area, the GCD is only around ~265mm². Not sure what the thought process was there.

They have the capability to go big, but they did not. Why do chiplets at all with such a small die? A monolithic design could have been around 450mm² with all that interconnect logic cut out.
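The arithmetic behind that complaint can be sketched with the commonly reported Navi 31 figures (GCD ~304mm² on N5, six MCDs at ~37.5mm² each on N6, ~57.7B transistors vs Navi 21's ~26.8B); treat these as approximations, not official numbers:

```python
# Rough Navi 31 silicon budget (reported figures; approximate).
gcd_area = 304.0       # mm^2, N5 graphics compute die
mcd_area = 37.5        # mm^2 per N6 memory/cache die
num_mcds = 6
total_area = gcd_area + num_mcds * mcd_area  # combined silicon

navi21_transistors = 26.8  # billions (monolithic Navi 21)
navi31_transistors = 57.7  # billions (GCD + MCDs)
growth = (navi31_transistors / navi21_transistors - 1) * 100

print(f"Total silicon: {total_area:.0f} mm^2")         # 529 mm^2
print(f"Transistor growth vs Navi 21: {growth:.0f}%")  # ~115%
```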
 
They chose to compete up to $999 market.
Sir, has this AMD strategy ever delivered the desired results? This is why AMD is the second choice. When you yourself admit from the outset that you are not trying to tackle the top tier, people do not care. That is why Nvidia will always increase prices: people who buy their top-tier GPU are not looking at the price, they are looking at the performance.

A price advantage alone has never paid off for AMD in the PC GPU market.
 
What the heck happened to the 7900 XT price? Cut down 15%, 12.5% lower clocks, but priced only 10% less?

I am not paying $1k for a GPU, and the cut-down part seems to have a terrible perf/$ ratio. I guess I'll wait for N32 or get some N2x part.

Unless the 7900 XT is really only 10% slower? That would be terrible scaling on the top die...
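A quick perf/$ sketch under the poster's own assumptions (15% fewer shader resources, 12.5% lower clocks, naive linear scaling, announced MSRPs of $999 and $899) shows why the cut-down part looks unattractive:

```python
# Hypothetical 7900 XT vs XTX value estimate; linear scaling is an
# assumption here, so real results will land somewhat higher.
shader_cut, clock_cut = 0.15, 0.125
xt_perf = (1 - shader_cut) * (1 - clock_cut)  # ~0.74 of the XTX

xtx_price, xt_price = 999, 899  # announced MSRPs
relative_value = (xt_perf / xt_price) / (1.0 / xtx_price)

print(f"Estimated XT performance vs XTX: {xt_perf:.0%}")   # 74%
print(f"XT perf/$ relative to XTX: {relative_value:.0%}")  # 83%
```

If the XT really is only 10% slower, the value proposition flips: 0.90/$899 is almost exactly 1.00/$999.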
 
Reading AMD RTG slides is like reading tea leaves. No one really knows what they mean. In fact they might just say nothing.
When AMD itself is showing the best-case scenario for the RX 7900 XTX, averaging 60% faster, it means the real numbers will be even lower.
 
When AMD itself is showing the best-case scenario for the RX 7900 XTX, averaging 60% faster, it means the real numbers will be even lower.
We don't really know where it lies relative to its actual competition, because AMD decided to announce it before the 4080 launched. Does that indicate the 7900 XTX is worse? The RTX 4080 is $1200, and AMD dare not compare their product to it or price-match Nvidia, because it is inferior.

Halo-product marketing works so well that sometimes I wonder why Nvidia makes anything but halo products. It seems they question this as well.
 
Jokes aside,
The GCD is too small to compete at the top.
If you take out interconnect area the GCD is around ~265mm2 only. Not sure what is the thought process there.

Quite probably cost saving and not much else. They went to save as much 5nm area as they could, as those wafers cost quite a lot. At this point I wonder why they did not go directly to N4, which could have given somewhat better characteristics; Nvidia is already on N4, which this time gives the green team a slight edge in process.
 
I still think it's early days for RT. Looking at the gaming market as a whole, RT is a luxury bonus only really usable at the top end.

For RX 7000 it appears AMD was still in the mainstream mindset, covering the majority of the market and leaving the top to Nvidia. I'd like to think that they tried a multi-GCD approach that would have attacked the top, but realistically the lackluster RT performance would have limited the competitiveness felt by the public anyway.

For RT AMD is still pushing for an alternate approach to Nvidia's brute force one. The latter has little chance to really become mainstream anytime soon, with mainstream meaning including consoles, iGPUs and handhelds. So realistically there has to be more room for scaling with RT rendering beyond silly tech like upscaling and frame interpolation. This is still a pretty major unsolved problem if the gaming industry is supposed to be able to use RT across all possible gaming systems eventually. Nvidia doesn't really seem to be interested in tackling that aspect (though their collaboration with Nintendo over the next decade will be interesting to watch in that regard).

The biggest takeaway for me has been the contrast in RT support between AMD and Intel: Intel didn't cheap out on spending transistors on RT and managed to offer surprisingly high RT performance relative to raster performance on its cards. Though by doing that, the distinction between Intel iGPU and dGPU turned out to be surprisingly stark, repeating the above conundrum.
 