Speculation: RDNA3 + CDNA2 Architectures Thread


uzzi38

Platinum Member
Oct 16, 2019
2,621
5,872
146

GodisanAtheist

Diamond Member
Nov 16, 2006
6,783
7,113
136
I don't think anyone realistically expects full N32 to be as fast as the 7900 XT; the plausible target is the 6900 XT.

-The crux of this whole issue is that the 6900 XT is 7% faster than the 6800 XT. Basically zero movement on price-to-performance, assuming the 7800 XT costs $699 (which it likely will).

But I guess that's just the way this generation is gonna go.
 
  • Like
Reactions: ZGR and biostud

leoneazzurro

Senior member
Jul 26, 2016
919
1,450
136
Yes, but I think they are segmenting differently now, adapting to what Nvidia is doing (the 4090-4080-4070-4060 being essentially very different dies and as such aimed at different performance levels). Probably they aimed a bit too low, or they did not reach their target clocks, so the increase with respect to the previous generation is limited.
 

eek2121

Platinum Member
Aug 2, 2005
2,930
4,025
136
-The crux of this whole issue is that the 6900 XT is 7% faster than the 6800 XT. Basically zero movement on price-to-performance, assuming the 7800 XT costs $699 (which it likely will).

But I guess that's just the way this generation is gonna go.

I suspect the 7800XT will have a perf/$ improvement, if they ever launch it. It will likely perform around the 6900/6950XT and those cards will be discontinued. That isn’t a bad thing. Both the 6900 and 6950XT are faster than the 6800XT.

Remember, the 6900XT was $1,000 at launch. AMD would be offering that performance for $300-$350 less. Also ignore the x50 refresh for a moment. We will likely get another x50 refresh later this year.

Note that AMD has dropped the 7900XT to $849.

That means that pricing for the 7800XT is likely to come in under that. Hopefully for consumers it won’t be $799, but rather $649-$699.

EDIT: Part of me wonders if AMD does have chips binned for 3GHz+ but is saving them for the x50 refresh.
 
Jul 27, 2020
16,153
10,234
106
EDIT: Part of me wonders if AMD does have chips binned for 3GHz+ but is saving them for the x50 refresh.
I bet you're right to wonder. Why sell premium chips first when you can satisfy the current demand from rabid gamers with worse chips, then later show them the better thing, forcing them to upgrade and sell their old cards at a loss? It makes more business sense for them to do that. The only losers are the gamers who want the very best, and for them such losses are just part of the game.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,226
5,228
136
There's a 7900 XT on Newegg for $799 right now. Seems like AMD encouraged a $100 across-the-board price cut. Clearly it's not selling at all.

Selling at $100 more than the 4070 Ti was never going to work out well.

Given the market's propensity for Nvidia, even at the same price it will be a slow mover.
 
  • Like
Reactions: Ranulf

maddie

Diamond Member
Jul 18, 2010
4,738
4,667
136
I suspect the 7800XT will have a perf/$ improvement, if they ever launch it. It will likely perform around the 6900/6950XT and those cards will be discontinued. That isn’t a bad thing. Both the 6900 and 6950XT are faster than the 6800XT.

Remember, the 6900XT was $1,000 at launch. AMD would be offering that performance for $300-$350 less. Also ignore the x50 refresh for a moment. We will likely get another x50 refresh later this year.

Note that AMD has dropped the 7900XT to $849.

That means that pricing for the 7800XT is likely to come in under that. Hopefully for consumers it won’t be $799, but rather $649-$699.

EDIT: Part of me wonders if AMD does have chips binned for 3GHz+ but is saving them for the x50 refresh.
The 7900 XTX at $1,000 will clearly set the eventual prices for the line, if not at launch then soon after. We got 130%-150% of the previous top card's performance (1080p to 4K) at the same price. This is my expectation for a few months out, with the slowly sinking price of the 7900 XT as an example. AMD might try for more, but external forces should prevent that from happening.
 

Saylick

Diamond Member
Sep 10, 2012
3,125
6,294
136
Chips and Cheese dropped another article, this time comparing RT performance on RDNA 2 and 3 vs. Turing and Pascal. Enjoy!

 

moinmoin

Diamond Member
Jun 1, 2017
4,944
7,656
136
Chips and Cheese dropped another article, this time comparing RT performance on RDNA 2 and 3 vs. Turing and Pascal. Enjoy!

Not surprising that Nvidia's approach to RT leans on compute throughput. With that approach, RT will always profit from bigger and fatter GPUs; conversely, it is much harder to scale down to smaller, lower-wattage chips.

Regarding AMD's slower approach, I liked the last two sentences of this summary (bolding is mine):

"AMD BVH makes it more vulnerable to cache and memory latency, one of a GPU’s traditional weaknesses. RDNA 3 counters this by hitting the problem from all sides. Cache latency has gone down, while capacity has gone up. Raytracing specific LDS instructions help reduce latency within the shader that’s handling ray traversal. Finally, increased vector register file capacity lets each WGP hold state for more threads, letting it keep more rays in flight to hide latency. A lot of these optimizations will help a wide range of workloads beyond raytracing. It’s hard to see what wouldn’t benefit from higher occupancy and better caching."

That way non-RT workloads can potentially profit from the improvements as well.
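To make the occupancy point concrete: a traversal loop stalls every time it fetches a BVH node that misses cache, and the only way the hardware hides that stall is by switching to other rays. A toy Python model of the effect (all numbers are made-up illustrations, not RDNA 3 specifics):

```python
# Toy latency-hiding model for BVH traversal.
# MEM_LATENCY and WORK_PER_NODE are assumed, illustrative values.
MEM_LATENCY = 100    # cycles to fetch a BVH node on a cache miss
WORK_PER_NODE = 10   # cycles of intersection-test ALU work per node

def utilization(rays_in_flight: int) -> float:
    """Fraction of cycles spent on useful work.

    While one ray waits on memory, the scheduler runs the ALU work of
    other rays. Enough rays in flight fully hide the memory latency;
    too few leave the unit idle.
    """
    useful_cycles = rays_in_flight * WORK_PER_NODE
    return min(1.0, useful_cycles / (MEM_LATENCY + WORK_PER_NODE))

for rays in (1, 4, 8, 16):
    print(f"{rays:2d} rays in flight -> {utilization(rays):.0%} busy")
```

In this model, 16 rays in flight fully hide a 100-cycle miss, which is exactly why a bigger register file (more threads resident per WGP) and lower cache latency (a smaller denominator) both pay off.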
 

Timorous

Golden Member
Oct 27, 2008
1,605
2,746
136
Not really what I'd call a daily-usage setup

Crazy Unlocked RX 7900XTX Pulls 650W - YouTube

But the result is remarkable

If the N31 bug stuff is true and AMD were aiming for those kinds of clocks at 350-400W then they would have had an entirely different product on their hands.

On the bright side, though, if it is true and they can fix it, the 7950 XTX and XT should offer some pretty big gains.

Could be a redo of the 480 to 580 jump.
 
  • Like
Reactions: Tlh97 and Ranulf

Kaluan

Senior member
Jan 4, 2022
500
1,071
96
Here's a list of scheduled AMD presentations at GDC today:

Schedule | GDC 2023 | The FidelityFX™ SDK (Presented by AMD) (gdconf.com)
Schedule | GDC 2023 | Temporal Upscaling: Past, Present, and Future (Presented by AMD) (gdconf.com)
Schedule | GDC 2023 | Real-time Sparse Distance Fields for Games (Presented by AMD) (gdconf.com)
Schedule | GDC 2023 | AMD Ryzen™ Processor Software Optimization (Presented by AMD) (gdconf.com)
Schedule | GDC 2023 | DirectStorage: Optimizing Load-Time and Streaming (Presented by AMD) (gdconf.com)
Schedule | GDC 2023 | Optimizing Game Performance with the Radeon Developer Tool Suite (Presented by AMD) (gdconf.com)

Don't know if there will be or was anything else on another day, but this is what I knew of for the 23rd.

Edit: Here's their GPUOpen page summing it up + some extras:
AMD at GDC 2023 - AMD GPUOpen

AFAIK, the first scheduled event is just about to start. No chance of them talking about Navi33 for desktop, eh? lol

Edit2: Now the GPUOpen page just says "video coming soon" for all six event thumbnails/descriptions. So I guess now we wait. Will update if something comes up and I'm faster than the usual suspects such as VideoCardz.
 
Last edited:

Saylick

Diamond Member
Sep 10, 2012
3,125
6,294
136

psolord

Golden Member
Sep 16, 2009
1,910
1,192
136
More info about FSR3 is now available. It uses frame interpolation, as many expected.


very nice

also regarding fsr3 vs dlss3 we know this is true
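For the curious, frame generation in the FSR3/DLSS3 sense means synthesizing an intermediate image between two rendered frames. A deliberately naive sketch of the core idea, assuming per-pixel motion vectors are available (the function name and its simple backward-gather warp are illustrative; the shipping implementations add occlusion handling, UI masking, and much more):

```python
import numpy as np

def interpolate_frame(prev: np.ndarray, curr: np.ndarray,
                      motion: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Naive motion-compensated blend between two frames.

    prev, curr: (H, W, 3) float images
    motion:     (H, W, 2) per-pixel motion in pixels, prev -> curr
    t:          temporal position of the generated frame (0=prev, 1=curr)
    """
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Approximate where each intermediate pixel came from in prev by
    # stepping backwards along the motion vector (nearest-neighbour).
    src_x = np.clip(np.rint(xs - motion[..., 0] * t), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys - motion[..., 1] * t), 0, h - 1).astype(int)
    warped_prev = prev[src_y, src_x]
    # Blend the warped history with the real current frame.
    return (1.0 - t) * warped_prev + t * curr
```

Most of the real engineering is in deciding when not to trust the interpolation (disocclusions, HUD elements), which is also where the visible artifacts come from.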

Profanity in tech is still not allowed.


esquared
Anandtech Forum Director
 
Last edited by a moderator:

KompuKare

Golden Member
Jul 28, 2009
1,013
924
136
Speaking of Fake Frames™, I glanced at the Cyberpunk RT-everywhere trailer, and Nvidia are pushing Fake Frames heavily in it.
What I also noticed was that aside from the reflections everywhere, the models were often very low-poly.
 

Saylick

Diamond Member
Sep 10, 2012
3,125
6,294
136
- Fake frames are fake frames and they're garbage, but like all things AMD vs. NVIDIA, part of the fun is seeing how AMD jury-rigs the crap out of their hardware and software to do what NV does with like 5% of the resources and know-how.

Hahaha, at least with AMD fake frames, you the consumer ain't paying extra for silicon that is "needed" to enable that technology. Nvidia loves to tell everyone that DLSS3 can only work on their latest architecture, but it seems more and more to me like a repeat of FreeSync vs. hardware-based G-Sync. I love it when free and "good enough" beats out proprietary solutions.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
Fake frames suck. I would laugh if AMD just cranks up the fake factor to generate even more of them just to make NVidia look bad in a bar chart. Hell, just offer an "extreme" option that blows up the FPS number even if the experience becomes pure garbage just so I can mock all of the people who've made awful arguments defending this.

Just remember that time spent on this pointless garbage is time that wasn't used to make something else or improve existing features.
 

Mopetar

Diamond Member
Jan 31, 2011
7,831
5,980
136
I'm sure this will make its way into the console refreshes and both Sony and Microsoft will be boasting about the 4K 120 FPS capabilities of their consoles.

All of the people who swore up and down that this was just as good (or even that the FSR/DLSS looks better than native) are going to have a hard time justifying why they're buying a $900 GPU to put in a $1000 PC instead of just buying a $500 console that can perform just as well.
 

PJVol

Senior member
May 25, 2020
532
446
106
Just remember that time spent on this pointless garbage is time that wasn't used to make something else or improve existing features.
I find it funny how this particular "garbage" is being pushed to compensate for another "garbage": think of how many resources and how much PR effort it takes to push RT into games and into consumers' heads, compared to the actual improvement in visual perception it provides, only for everything to end up DLSS/FSR'ed anyway.
 
Last edited:

Trovaricon

Member
Feb 28, 2015
28
43
91
I find frame generation to be yet another brute-force approach that keeps the status quo of how frame rendering is done in general: an easy plug-in "solution" for bigger numbers, same as image-scaling techniques. Why aren't variable-rate-shading-like techniques used more often? Technically, VRS can be considered fine-grained sub-image scaling from the generation (rendering) perspective.
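To illustrate that framing: VRS lets the renderer pick a coarser shading rate per screen tile, which is effectively local upscaling decided before shading rather than after. A toy rate-selection pass in Python, assuming a simple contrast heuristic (purely illustrative; not any actual API's logic):

```python
import numpy as np

TILE = 8  # VRS hardware typically operates on small screen tiles

def pick_shading_rates(luma: np.ndarray, threshold: float = 0.02) -> np.ndarray:
    """Choose a per-tile shading rate from local contrast.

    Flat tiles (low variance) are shaded at 2x2 granularity and then
    upscaled, roughly quartering shading cost there; detailed tiles
    keep the full 1x1 rate. Returns 1 (fine) or 2 (coarse) per tile.
    """
    h, w = luma.shape
    tiles = luma[:h - h % TILE, :w - w % TILE].reshape(
        h // TILE, TILE, w // TILE, TILE)
    variance = tiles.var(axis=(1, 3))
    return np.where(variance < threshold, 2, 1)
```

One plausible answer to the "why not more often" question: a pass like this needs per-game tuning to avoid visible blockiness, while upscalers and frame generation bolt onto the end of the pipeline with far less per-title work.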
 
Last edited:

Gideon

Golden Member
Nov 27, 2007
1,619
3,645
136
The more I look at Nvidia's upcoming 4xxx series lineup, the more I wonder how AMD plans to bridge the gap between Navi 33 and Navi 32.

What we already know:


About Navi 33:

  • It should be slightly cheaper to produce than Navi 23, as:
    • It's built on the cheaper 6nm node and is slightly smaller (204mm² vs 237mm²).
    • It's rumored to be board- and pin-compatible.
  • It's roughly 10% faster on average:
    • As notebookcheck has RX 7600S results up, we can compare it against the very similar RX 6700S (the only differences being RDNA2 vs RDNA3 and 14Gbps vs 16Gbps memory).
    • Here are the game results for both. In some games the difference is within the margin of error (2-3%); in some games it's 15% faster, and even 20%+ in one.
    • I know it's only a single sample in a small selection of games, but it still gives a ballpark performance increase.


About Navi 32:

  • Its SKUs will land in the ballpark of the RX 6800 - 6900 XT as far as performance goes.
    • Best case: it's single-digit percentages faster than the RX 6950 XT.
    • Worst case: the top SKU at least competes with the RX 6900 XT.
  • It's almost certainly more expensive to produce than Navi 22, even when cut down (see the rough cost sketch after this list)!
    • It has a ~200mm² 5nm GCD and 3-4x 36.5mm² 6nm MCDs, exotic packaging tech, and a native 256-bit memory bus (cut down to 192-bit with 3 MCDs).
    • Navi 22 is a monolithic 335mm² die on the 7nm process, with a native 192-bit memory bus (cut down to 160-bit).
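Back-of-the-envelope support for that cost claim, using the standard dies-per-wafer approximation and a simple Poisson yield model; the wafer prices and defect density below are pure assumptions for illustration:

```python
import math

def dies_per_wafer(die_mm2: float, wafer_d_mm: float = 300) -> float:
    """Classic dies-per-wafer approximation with edge loss."""
    r = wafer_d_mm / 2
    return (math.pi * r**2 / die_mm2
            - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

def cost_per_good_die(die_mm2: float, wafer_cost: float,
                      d0_per_cm2: float = 0.1) -> float:
    """Poisson yield model: yield = exp(-area_cm2 * defect_density)."""
    yield_rate = math.exp(-(die_mm2 / 100) * d0_per_cm2)
    return wafer_cost / (dies_per_wafer(die_mm2) * yield_rate)

# Assumed wafer prices (illustrative only): ~$17k for 5nm-class, ~$10k for 6/7nm.
gcd = cost_per_good_die(200, 17_000)
mcd = cost_per_good_die(36.5, 10_000)
n22 = cost_per_good_die(335, 10_000)

print(f"Navi 32, 4 MCDs: ~${gcd + 4 * mcd:.0f} + packaging")
print(f"Navi 32, 3 MCDs: ~${gcd + 3 * mcd:.0f} + packaging")
print(f"Navi 22 (mono):  ~${n22:.0f}")
```

With these assumptions, the chiplet part's silicon alone lands slightly above monolithic Navi 22's before the advanced packaging is even counted, which is the core of the argument above.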

What it all (most probably) means:

  • The Navi 33 SKUs (7600 series) will be relatively cheap to produce; even $250 SKUs shouldn't really be a problem, if the RX 6600 is any indication.
  • The Navi 32 SKUs (7800 and possibly 7700 series) probably won't be cheap enough for the 7700 series.
  • The hypothetical best Navi 33 chip (7600 XT?) will at best perform in the ballpark of the 6700 non-XT 10GB at 1080p, and slightly slower at 1440p.
  • The most cut-down (192-bit, 3x MCD) Navi 32 SKU that still makes any sense should still perform in the ballpark of the RX 6800 at 1440p.
  • A 128-bit version with 2x MCDs (but still a 200mm² 5nm GCD) IMO doesn't make any sense against Nvidia's monolithic 146mm² AD107 and 190mm² AD106.

All in all, that leaves quite the gap in the lineup to fill, as:
  1. Navi 33 just doesn't scale to Navi 22 performance levels to be used in the 7700 series.
  2. Navi 32 almost certainly isn't cheap enough to produce to be sold as the 7700 series (at least not in volume).
  3. It isn't reasonable to expect one chip to cover the RX 6900, 6800, and 6700 price brackets.

How will AMD address this gap?


My guess is they will continue selling Navi 22 as the stopgap, either as the RX 6750 XT or rebranded into the RX 7700 (non-XT) series.

It should hold its own against at least the upcoming RTX 4060 (something Navi 33 will probably struggle with). This means it should be competitive at least in the $300-$400 price range.

I don't really see any other alternative, unless AMD just decides we will get no competitive cards at all in that price range (est. $400-$650).



What do you guys think?
 
Last edited: