
Speculation: RDNA3 + CDNA2 Architectures Thread

The 7900 XTX is $200 cheaper than the 4080, has more bandwidth, more memory, more flops, more cache, similar TDP.

It will probably be ahead enough in raster to justify it unless you really do only care about RT games.

But we'll see. And as for Intel achieving node supremacy, or even using Intel nodes for consumer GPUs, I'd say the odds are dubious at best.
 
So, will Navi 32 be released on desktop, or will the 6900 XT just fill that slot?

Navi 21 is a monolithic 520mm^2 die. Even though it's on an older process and gets by with less expensive packaging, Navi 32 is almost certainly going to be cheaper to make, probably by some margin. Navi 32 won't show up on the market until all the Navi 21s have been sold, but I expect Navi 21 production to stop and be replaced by it.
 
So much trolling and doom-pilling in this thread, reminds me of the post-Zen 4 May event 😅

Meanwhile I'm here wondering how great the new media engine is, how good the new AI and software are, when we can see stacked MCDs, and many, many other things.

Pity the actual launch is pretty far away; it would (hopefully) quiet down some of the specu-trolling.

Or shall we move on to an RDNA3+/GFX1150 thread? 😂
 
Yeah, Intel could certainly have a great product in two or three cycles, but AMD could have much better raytracing performance in RDNA 4 or 5 as well. They either didn't try or couldn't manage it this gen, but with development cycles being what they are, all it takes is them not having put enough priority on raytracing 4-5 years ago. Even if Ampere made them take it more seriously, we realistically wouldn't see the dividends for another cycle.

I had been hopeful that it would fall between RTX 3000 and 4000 in raytracing; I'm coming from a 3080, which is merely okay at it. With the 7900s at around 3090-level raytracing, I'm left with nothing to upgrade to but a 4090 if I care about that. Maybe a 4080 Ti when it comes around. I was ready to buy a card before Christmas; now I just don't know.
 
So much trolling and doom-pilling in this thread, reminds me of the post-Zen 4 May event 😅

Meanwhile I'm here wondering how great the new media engine is, how good the new AI and software are, when we can see stacked MCDs, and many, many other things.

Pity the actual launch is pretty far away; it would (hopefully) quiet down some of the specu-trolling.

Or shall we move on to an RDNA3+/GFX1150 thread? 😂

Idk man, it looks like a fine card, but compared to what people in this thread and elsewhere have been saying as recently as last week, this is absolutely a letdown...

FMA throughput is around 3.5x
Pixel fillrate is around 2x
L2/L3 cache bandwidth is around 2x
Memory bandwidth is around 2x

If they can't hit 2x performance in games, it's either driver overhead/CPU bottlenecks or an issue with the architecture's scaling.
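Those multipliers are paper-spec arithmetic. As a rough sketch of where the FP32 number comes from (the shader counts and clocks below are assumptions for illustration, not confirmed specs):

```python
def fp32_tflops(shaders, clock_ghz, ops_per_clock=2):
    """Theoretical FP32 throughput: shaders * ops/clock * clock.

    ops_per_clock = 2 for a plain FMA (one multiply + one add per cycle);
    double it again if every instruction could dual-issue.
    """
    return shaders * ops_per_clock * clock_ghz / 1000  # TFLOPs

# Assumed figures, for illustration only:
n21 = fp32_tflops(5120, 2.25)                   # single-issue FMA
n31 = fp32_tflops(6144, 2.5, ops_per_clock=4)   # ideal dual-issue FMA
print(f"N21 ~ {n21:.1f} TFLOPs, N31 ~ {n31:.1f} TFLOPs, ratio ~ {n31 / n21:.2f}x")
```

The paper ratio only shows up in games if fillrate, geometry, and the CPU keep pace, which is exactly the scaling question above.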

Also, on the eve of what might be a leak with clockspeeds and performance targets, does anyone want to make their final guesses? This pump that dumps fuel into the hype train's engine ain't going to prime itself! 😛

Here, I'll start. My guess for full N31 specs:
- Up to 60% perf/W uplift over N21
- 1.8x - 2x raster uplift
- 2.2x to 2.5x RT uplift
- 3.1 GHz game clock
- 375W TDP
- 24 GB GDDR6 RAM

I think anywhere between 2x to 3x hybrid RT performance over N21 is fair game. Who knows what the average might be across 50 titles. Won't know until it gets benched.
 
Idk man, it looks like a fine card, but compared to what people in this thread and elsewhere have been saying as recently as last week, this is absolutely a letdown...
Man, getting called out like that doesn't feel good, even if it was just a guess.

I must say though. I don't think anyone, ANYONE, expected the clocks to be that low. If AMD were able to get close to 3 GHz, we'd actually get the 2x perf numbers everyone was expecting.
 
Idk man, it looks like a fine card, but compared to what people in this thread and elsewhere have been saying as recently as last week, this is absolutely a letdown...

How is this a letdown? Most of the rumors were spot on.

For the first time in over a decade, IT'S LOWER IN PRICE than the card it replaces.

For the last decade, every generation has been more expensive than the one it replaced. Yes, it can be argued this actually replaces the 6900 XT and is the same price, and that's fair. But that's still better than more, especially considering current inflation, and that nVidia chose to greatly increase their prices, the 4080 being $400 more than the 3080.
 
I must say though. I don't think anyone, ANYONE, expected the clocks to be that low. If AMD were able to get close to 3 GHz, we'd actually get the 2x perf numbers everyone was expecting.


The Chinese forums I frequent had a leaker who was actually right on the money regarding RDNA 3 clocks, as well as hinting that it isn't really an AD102 competitor in terms of overall performance.
 
Just pulled the trigger on a Zotac Amp Extreme RTX 4090 Airo. I waited to see what AMD would offer with RDNA3, but unfortunately it doesn't align with my gaming requirements for my new build.

AMD needs to step up and take ray tracing seriously because that's the future of gaming graphics.
 
Just pulled the trigger on a Zotac Amp Extreme RTX 4090 Airo. I waited to see what AMD would offer with RDNA3, but unfortunately it doesn't align with my gaming requirements for my new build.

AMD needs to step up and take ray tracing seriously because that's the future of gaming graphics.

-Doesn't even matter if it's the future or not. If NV matches or beats them everywhere, then they need to take it seriously, because they'll only ever be the second choice for the vast majority of people this way.

If I can get a card for an extra $100 that performs exactly the same in raster and 1.5x as fast in RT, and I'm already spending $1000... then I might as well get the one with more RT whether it's the future or not.
 
How is this a letdown? Most of the rumors were spot on.

The rumors being bandied about on this very thread were very far from spot-on, for context....

I think we are going to get something along the lines of 1.6 to 1.7x the real-world performance vs a 6900 XT. Ray tracing performance will be greater than 2x.

If the Angstronomics info is correct, whatever AMD did to double the number of shaders was very cheap in terms of silicon, as the Navi 33 die is smaller than Navi 23 yet doubles the shader count.

Combined with the very minor transistor density improvement of TSMC 6nm, I think this architecture is going to be like Ampere, where compute goes up but gaming performance is quite modest relative to the compute increase.

However, power consumption will not increase that much, and much of the extra compute will go unused, as I think AMD has also not widened the rest of the pipeline enough to feed this wider architecture.

I think Navi 33 is going to be 1.2 to 1.3x a Navi 23, but at 120 watts instead of 165 watts.

I think Navi 31 clocks are going to be more like 2.6-2.8 GHz, with power more in the 330 W range, as the cooler AMD has previewed so far does not look like a 400-watt cooler; it's only slightly bigger than two slots, if not a two-slot card. When you add the ray tracing increase, it supports AMD's claim of a greater than 1.5x performance-per-watt increase.

This post turned out to be a very generous estimate of Navi 31's performance and was poo-pooed here in this thread as being grossly conservative...
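The perf/W arithmetic in an estimate like this is easy to sanity-check: the uplift is just the performance ratio divided by the power ratio. A quick sketch using the numbers from the estimate above (all assumed, not measured; the 300 W baseline is the 6900 XT's board power):

```python
def perf_per_watt_uplift(perf_ratio, new_watts, old_watts):
    """Perf/W uplift = performance ratio divided by power ratio."""
    return perf_ratio / (new_watts / old_watts)

# Assumed: ~1.7x raster at 330 W vs a 300 W 6900 XT
print(perf_per_watt_uplift(1.7, 330, 300))  # ~1.55x, just over AMD's >1.5x claim
```

Note how sensitive the claim is to the power figure: the same 1.7x at 375 W would only be about a 1.36x perf/W uplift.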
 
A couple of reliable leakers (who were not wrong with their predictions about tonight) have stated that AMD plans to launch a dual-chiplet version next year. We will see. If so, they may end up dominating the top of the stack after all.
Source? I'm only aware of a Chiphell(?) post by greymon55 apparently talking about a 10+ MCM product next year. No other context.

And greymon55 has deleted their twitter profile... https://twitter.com/greymon55?lang=en
 
-Doesn't even matter if it's the future or not. If NV matches or beats them everywhere, then they need to take it seriously, because they'll only ever be the second choice for the vast majority of people this way.

If I can get a card for an extra $100 that performs exactly the same in raster and 1.5x as fast in RT, and I'm already spending $1000... then I might as well get the one with more RT whether it's the future or not.

What card costs $100 more and has 1.5x the RT performance?
 
Instruction Level Parallelism, differentiated core and shader clocks - seems like AMD is bringing back retro stuff. Don't know how things will pan out. Hopefully the entry-to-mid range offerings deliver on performance and performance/watt.

EFoB packaging has got to be expensive, so I have my doubts about how cheap Navi 31 is going to be to manufacture.
 
Just pulled the trigger on a Zotac Amp Extreme RTX 4090 Airo. I waited to see what AMD would offer with RDNA3, but unfortunately it doesn't align with my gaming requirements for my new build.

AMD needs to step up and take ray tracing seriously because that's the future of gaming graphics.

You didn't even wait for third party reviews, I guess you were just looking for an excuse.
 
First time in history AMD is following Nvidia's tech in hardware.

People forget that AMD mentioned this GPU is FP32 x 2, which means it uses the same method as Nvidia's Ampere.

AMD did not increase Ray Tracing performance.

Good to see that it does not require the new power connector.

So the rumors of 2x performance were wrong, and Hassan Mujtaba was right that the RDNA 3 flagship is slower than the RTX 4090.
 
What card cost $100 more and has 1.5x more RT performance?

-Just trying to illustrate the point that in a halo position, NV can always make an SKU to match raster performance while having an inherent RT advantage.

While AMD definitely has the smarter design for maximizing their 5nm wafer order, they're likely not that far ahead in terms of cost, so NV can always match them in price (or, since NV is NV, get within $100 of AMD and essentially be considered equal due to better-known gaming-adjacent features).

In short, it doesn't matter whether feature X is the future or not, because NV will market that feature like it is the future and people will buy it up.
 
People forget that AMD mentioned this GPU is FP32 x 2, which means it uses the same method as Nvidia's Ampere.
I think it's a bit different. Based on the AnandTech article by Ryan Smith, the 2x FP32 depends on extracting ILP through the compiler and software, whereas Ampere has the hardware to generate the extra FLOPs.
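To illustrate that difference: dual-issue only pays off when two independent instructions can be paired at compile time. A toy model of that pairing constraint (this is a simplification for illustration, not actual compiler or hardware behavior):

```python
def issue_cycles(instrs):
    """Greedily pair consecutive instructions into one cycle when the
    second doesn't read the first's destination register.
    Each instruction is a (dest, srcs) tuple."""
    cycles, i = 0, 0
    while i < len(instrs):
        can_pair = i + 1 < len(instrs) and instrs[i][0] not in instrs[i + 1][1]
        i += 2 if can_pair else 1  # dual-issue two, or fall back to one
        cycles += 1
    return cycles

# Four independent FMAs: every pair can co-issue.
independent = [("v0", ("a", "b")), ("v1", ("c", "d")),
               ("v2", ("e", "f")), ("v3", ("g", "h"))]
# A dependency chain: each result feeds the next, so nothing pairs.
chained = [("v0", ("a", "b")), ("v1", ("v0", "c")),
           ("v2", ("v1", "d")), ("v3", ("v2", "e"))]

print(issue_cycles(independent))  # 2 cycles: ideal ~2x FP32 throughput
print(issue_cycles(chained))      # 4 cycles: no benefit at all
```

In Ampere the second FP32 pipe is fed by the hardware scheduler; here the pairing has to be found ahead of time, which is why the extra FLOPs depend on software.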
 
Looks like price/performance is very close to what current 6900 XT/6800 XT cards are selling for. So no improvement there :/
 
I think it's a bit different. Based on the AnandTech article by Ryan Smith, the 2x FP32 depends on extracting ILP through the compiler and software, whereas Ampere has the hardware to generate the extra FLOPs.
That is my point. Since Turing, Nvidia has gone the way AMD did in 2009-2014, being more hardware-based, while AMD is following Nvidia's old suit in being more software-based.
 