Speculation: RDNA2 + CDNA Architectures thread


uzzi38

Platinum Member
Oct 16, 2019
2,625
5,897
146
All die sizes are within 5mm^2. The poster here has been right about some things in the past AFAIK, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

insertcarehere

Senior member
Jan 17, 2013
639
607
136
After reading the reviews of the 6800 and 6800 XT, it looks OK in my book, but it doesn't look like RDNA2 has any IPC gains, only more CUs and higher clocks.

Indeed, setting aside the hype train some were perpetuating in this very thread (hilarious in hindsight), Big Navi looks pretty competitive against Ampere, which is good for us consumers. Looking forward to seeing how Navi 22/23 fare against the rest of Nvidia's Ampere stack.
 
  • Like
Reactions: Tlh97 and prtskg

TESKATLIPOKA

Platinum Member
May 1, 2020
2,355
2,848
106
RDNA2 Big Navi is a good product. I am personally more interested in the mobile versions of N22, N23, and Van Gogh with its RDNA2 IGP. I expect at least 16MB of Infinity Cache for that APU, so it should help a lot with the missing bandwidth.
 

sandorski

No Lifer
Oct 10, 1999
70,100
5,640
126
Fine Wine and more VRAM make RDNA2 the better choice for those wanting 3+ years from their vidcard, at least among the current selection of products. I'm skipping RDNA2 (I think), as my 5700 XT is sufficient for 1440p UW at this time. I am somewhat tempted though, so I reserve the right to change my mind...
 
  • Like
Reactions: guachi

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
You have been able to buy a 1440p monitor for under $300 for several years now. I'd call that mainstream. It's the GPUs that won't follow in performance. Sadly, RT seems to make 720p the new sweet spot again.

Just because a product is available doesn't mean it is mainstream. Mainstream represents the largest install base, and that is far and away 1080p, by a gigantic margin. For a lot of people, $300 on a monitor is too much when they can get a similarly sized one for $120. They don't know the difference between 1440p and 1080p.

And as long as games keep getting more demanding, GPUs will have to keep getting faster. Technically we had 4K gaming cards five years ago, and they will run games from five years ago, but definitely not today's games. So it should not be a surprise that you need an upper-end card to play on a higher-resolution display. A high-end card may be twice as fast as a low-end card, but a high-end display (4K) has quadruple the pixels of a mainstream display (1080p).
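For what it's worth, the pixel math is easy to check. A quick sketch (plain Python, nothing vendor-specific):

```python
# Pixel counts for common resolutions, relative to 1080p.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} MP ({pixels / base:.2f}x 1080p)")
```

4K really is exactly 4x the pixels of 1080p, so a card "twice as fast" still has half the per-pixel budget.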
 

Panino Manino

Senior member
Jan 28, 2017
820
1,022
136
What's the verdict on Infinity Cache?
I watched just a few reviews, and the performance at 1080p and 1440p is shockingly good but drops hard at 4K. I thought the purpose of the IC was to help at high resolutions?
As for helping with RT, I think it's a bit soon to draw conclusions, with all the RT games so far "more or less optimized" for DXR 1.0 and Nvidia hardware; but still, I'm not seeing it make any difference for RT yet.
 

NTMBK

Lifer
Nov 14, 2011
10,232
5,013
136
What's the verdict on Infinity Cache?
I watched just a few reviews, and the performance at 1080p and 1440p is shockingly good but drops hard at 4K. I thought the purpose of the IC was to help at high resolutions?
As for helping with RT, I think it's a bit soon to draw conclusions, with all the RT games so far "more or less optimized" for DXR 1.0 and Nvidia hardware; but still, I'm not seeing it make any difference for RT yet.

The higher the resolution, the bigger the framebuffers, and the harder it is to keep the working set in cache.
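To put rough numbers on that, here's a hedged sketch of how much render-target data a frame might touch at each resolution. The 32 bytes/pixel figure is purely an assumption (something like a deferred renderer with several G-buffer targets plus depth); real working sets vary wildly by engine.

```python
# Rough render-target footprint vs. a 128 MB cache.
CACHE_MB = 128
BYTES_PER_PIXEL = 32  # assumption: several render targets + depth

for name, (w, h) in [("1080p", (1920, 1080)),
                     ("1440p", (2560, 1440)),
                     ("4K",    (3840, 2160))]:
    mb = w * h * BYTES_PER_PIXEL / 2**20
    print(f"{name}: ~{mb:.0f} MB of render targets ({mb / CACHE_MB:.0%} of cache)")
```

Under that assumption, 1080p render targets fit in the cache with room to spare, while at 4K they alone would overflow 128 MB before you even count textures and geometry.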
 
  • Like
Reactions: Tlh97 and Martimus

leoneazzurro

Senior member
Jul 26, 2016
924
1,451
136
The purpose of IC was to amplify the bandwidth provided by a 256-bit bus with GDDR6 so it could behave more in line with a 320-384-bit bus with GDDR6X, and to help in some graphical workloads like ray tracing (though for those, it is very likely you need appropriate profiles in applications and/or drivers).
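A simple way to model that amplification (my own hedged sketch, not AMD's math): treat cache hits as free and assume only misses consume GDDR6 bandwidth, so effective bandwidth scales roughly as 1/(1 - hit rate). The hit rates below are illustrative guesses, not AMD's published figures.

```python
# Effective bandwidth under a naive "only misses hit DRAM" model.
GDDR6_256BIT = 256 / 8 * 16  # 512 GB/s: 16 Gbps GDDR6 on a 256-bit bus

for res, hit_rate in [("1080p", 0.80), ("1440p", 0.70), ("4K", 0.55)]:
    effective = GDDR6_256BIT / (1 - hit_rate)
    print(f"{res}: {hit_rate:.0%} hit rate -> ~{effective:.0f} GB/s effective")
```

Even at a 55% hit rate the modeled figure clears a 384-bit GDDR6X bus (~936 GB/s), but the amplification collapses quickly as the hit rate falls, which fits the 4K behavior people are seeing.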
 

PJVol

Senior member
May 25, 2020
533
446
106
As I understand it, from what was said in the stream with Scott (AMD), there were multiple reasons for introducing Infinity Cache, such as VRAM power consumption and cost and, as someone said, the ridiculous complexity of the whole memory system beyond a 256-bit bus. The performance drop at 4K may be because 128MB is insufficient to compensate for the misses at that frame size. It looks like it may shine in mobile ASICs for gaming laptops (1080p-1440p), where power consumption is the main priority.
 

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,982
136
What's the verdict on Infinity Cache?
I watched just a few reviews, and the performance at 1080p and 1440p is shockingly good but drops hard at 4K. I thought the purpose of the IC was to help at high resolutions?

It obviously does, but higher resolutions also occupy more of the space and decrease the hit rate relative to lower resolutions. Compare it to the 1080p results, where the resolution is only 25% of 4K, and it's pretty obvious how much of an uplift it gives you when the hit rate is that much better.

If you just look at the raw memory bandwidth of Navi 21 compared to GA102, the Ampere cards have at least ~50% more bandwidth, and the 3090 is getting closer to having double the bandwidth. I don't know if Nvidia still has a lead over AMD when it comes to compression technology, but without the Infinity Cache the Navi 21 cards would be completely starved at 4K.

What I'd really like to see is how much better the card would perform at 4K if it used the same GDDR6X memory that Nvidia put in their GA102 cards. Obviously they'd need to wait until 2GB memory modules become available, or they'd be facing the same dilemma that Nvidia did with the 3080. However, the 19.5 Gbps memory used in the 3090 would give AMD a ~20% increase in bandwidth, which would probably go a long way toward improving performance at 4K.
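The raw numbers are easy to verify (reading the 3090's memory as 19.5 Gbps effective):

```python
# Raw memory bandwidth = bus width (bits) / 8 * data rate (Gbps).
def bw(bus_bits, gbps):
    return bus_bits / 8 * gbps

navi21 = bw(256, 16.0)      # 6800 XT / 6900 XT, GDDR6
rtx3080 = bw(320, 19.0)     # GA102, GDDR6X
rtx3090 = bw(384, 19.5)     # GA102, GDDR6X
navi21_g6x = bw(256, 19.5)  # hypothetical Navi 21 with 19.5 Gbps memory

print(f"Navi 21:            {navi21:.0f} GB/s")
print(f"RTX 3080:           {rtx3080:.0f} GB/s (+{rtx3080 / navi21 - 1:.0%})")
print(f"RTX 3090:           {rtx3090:.0f} GB/s (+{rtx3090 / navi21 - 1:.0%})")
print(f"Navi 21 @ 19.5Gbps: {navi21_g6x:.0f} GB/s (+{navi21_g6x / navi21 - 1:.0%})")
```

That works out to 512 vs. 760 GB/s (+48%) against the 3080 and 936 GB/s (+83%) against the 3090, while the hypothetical GDDR6X swap would be about +22%.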

I suspect that if we do see some kind of mid-generation refresh from AMD, that's what they're most likely to do. Between that and a slight bump in clock speed, it should offer a decent performance uplift.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,628
158
106
What's the verdict on Infinity Cache?
I watched just a few reviews, and the performance at 1080p and 1440p is shockingly good but drops hard at 4K. I thought the purpose of the IC was to help at high resolutions?
As for helping with RT, I think it's a bit soon to draw conclusions, with all the RT games so far "more or less optimized" for DXR 1.0 and Nvidia hardware; but still, I'm not seeing it make any difference for RT yet.
The drop at 4K is the normal drop. The 3080 and 3090 are just freaks with their crazy number of shader cores, which are underutilized at lower resolutions.

The Infinity Cache does its job, but since the 128MB is targeted at 4K, at lower resolutions the 6800 XT and 6800 are the freaks.
 
  • Like
Reactions: Tlh97

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,982
136
What I feared, though... maybe the tables will turn with Super Resolution? If it's performing so well up to 1440p, maybe with SR it'll get above the 3090 with its DLSS?

Anyway, Hangar 21 demo is up:


Honestly, I hope the whole super resolution thing is a fad and it goes away. It's really just an excuse for companies to advertise unacceptable performance as "4K gaming" even though it isn't.

I can see why consoles might want to use it, since they're midrange hardware that needs to pull off 4K for the next five years, but the whole reason to game on PC is that you don't have to accept those kinds of compromises.

I also wish developers would offer more granular control over the use of RT in their games. The all-or-nothing approach just leaves a choice between substandard performance and no RT at all. By controlling the amount, each person could find a happy medium based on their card's performance, the capabilities of their monitor, and which image-quality improvements they feel are worth the performance hit.
 

Elfear

Diamond Member
May 30, 2004
7,097
644
126
The drop at 4K is the normal drop. The 3080 and 3090 are just freaks with their crazy number of shader cores, which are underutilized at lower resolutions.

The Infinity Cache does its job, but since the 128MB is targeted at 4K, at lower resolutions the 6800 XT and 6800 are the freaks.

That's what I was thinking too. Has anyone done a comparison of the 6000 series showing the performance drop-off percentage at each resolution, like Hardware Unboxed did for the 3000 series? Comparing the two architectures that way would lend some insight into whether the Infinity Cache is running out of capacity or the 3000 series cards are just more geared toward higher resolutions due to the "doubled up" cores.
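The comparison itself is trivial to compute once you have the numbers. A sketch of the methodology (the FPS values here are placeholders, not measurements):

```python
# Per-resolution performance drop-off, Hardware Unboxed-style.
# FPS values are placeholders for illustration only.
cards = {
    "RX 6800 XT": {"1080p": 160, "1440p": 130, "4K": 75},
    "RTX 3080":   {"1080p": 150, "1440p": 128, "4K": 82},
}

for card, fps in cards.items():
    to_1440 = 1 - fps["1440p"] / fps["1080p"]
    to_4k = 1 - fps["4K"] / fps["1440p"]
    print(f"{card}: -{to_1440:.0%} going to 1440p, -{to_4k:.0%} going to 4K")
```

If the 6000 series' drop-off between 1440p and 4K turned out much steeper than Ampere's while the 1080p-to-1440p step is similar, that would point at the cache running out of capacity rather than a shader-utilization difference.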
 

Veradun

Senior member
Jul 29, 2016
564
780
136
Honestly, I hope the whole super resolution thing is a fad and it goes away. It's really just an excuse for companies to advertise unacceptable performance as "4K gaming" even though it isn't.

I agree. ML upscaling is just a cheat for people unwilling to drop to lower presets or unable to tune settings manually.

So they can set everything to max and sit back while the ML does its thing, masking reality.

It's basically a blue pill.
 

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,982
136
I suspect it isn't that the 3080/90 are 4K freaks so much as the 6800's Infinity Cache turning it into a 1080p freak. When the cache hit ratio is high it essentially eliminates any bottleneck from memory, and I think that's where the GA102 cards fall behind.

Meanwhile the cache isn't big enough for the same performance boost at 4K, and the wider bus and faster memory of the Ampere cards give them more of a boost than the number of CUDA cores; otherwise the 3090 would beat the 3080 by more than it actually does.

An interesting question will be what AMD does when they move to 5nm. If they double the cache, then it probably performs about as well at 4K as it does now at 1080p and 1440p.

I really hope a few of the AIB cards have some memory overclocks, as I have a feeling that would do a lot for 4K performance.
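For the doubled-cache speculation, one hedged way to ballpark it is the old "square-root rule" of caching, where miss rate scales roughly with 1/sqrt(capacity). The 45% baseline miss rate at 4K is my assumption, not a measured number:

```python
# Square-root rule of thumb: miss_rate ~ 1 / sqrt(cache_size).
from math import sqrt

BASE_SIZE_MB, BASE_MISS = 128, 0.45  # assumed 4K miss rate at 128 MB

for size in (128, 256, 512):
    miss = BASE_MISS * sqrt(BASE_SIZE_MB / size)
    print(f"{size} MB: ~{1 - miss:.0%} modeled 4K hit rate")
```

By that rule of thumb, 256 MB would take the modeled 4K hit rate from 55% to roughly 68%, which combined with faster memory could plausibly close much of the gap.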
 

Hans Gruber

Platinum Member
Dec 23, 2006
2,131
1,088
136
I suspect it isn't that the 3080/90 are 4K freaks so much as the 6800's Infinity Cache turning it into a 1080p freak. When the cache hit ratio is high it essentially eliminates any bottleneck from memory, and I think that's where the GA102 cards fall behind.

Meanwhile the cache isn't big enough for the same performance boost at 4K, and the wider bus and faster memory of the Ampere cards give them more of a boost than the number of CUDA cores; otherwise the 3090 would beat the 3080 by more than it actually does.

An interesting question will be what AMD does when they move to 5nm. If they double the cache, then it probably performs about as well at 4K as it does now at 1080p and 1440p.

I really hope a few of the AIB cards have some memory overclocks, as I have a feeling that would do a lot for 4K performance.
The 3090 isn't even a consumer-level graphics card. It's best to wait for the 3080 Ti. This also means the 6900 XT is going head-to-head with the 3080 Ti.
 

sze5003

Lifer
Aug 18, 2012
14,182
625
126
The 3090 isn't even a consumer-level graphics card. It's best to wait for the 3080 Ti. This also means the 6900 XT is going head-to-head with the 3080 Ti.
The 3080 Ti should be announced in January, according to that rumor site. That's basically where I'm headed; I'm waiting for it after seeing the 6800 XT reviews. The 6800 XT is great, but if you can't use SAM and you want ray tracing, Nvidia is still a bit of a better option.

Of course, AMD's ray tracing performance can hopefully improve with future drivers, as more and more titles begin to use that tech.
 
  • Like
Reactions: Tlh97

Mopetar

Diamond Member
Jan 31, 2011
7,835
5,982
136
The 3090 isn't even a consumer-level graphics card. It's best to wait for the 3080 Ti. This also means the 6900 XT is going head-to-head with the 3080 Ti.

The 3090 is a consumer-level card, just for consumers with more money than sense. It doesn't have Titan drivers, so it's useless as a professional or prosumer card.
 

Tup3x

Senior member
Dec 31, 2016
959
942
136
I agree. ML upscaling is just a cheat for people unwilling to drop to lower presets or unable to tune settings manually.

So they can set everything to max and sit back while the ML does its thing, masking reality.

It's basically a blue pill.
I don't really understand your reasoning. It's clearly better than reducing the resolution manually. Turning down other quality settings causes a bigger drop in image quality.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
I don't really understand your reasoning. It's clearly better than reducing the resolution manually. Turning down other quality settings causes a bigger drop in image quality.
No, not clearly, and not across the board.

And games usually have plenty of settings that **invisibly** cause horrible frame drops. Since they are in fact invisible to the human eye, guess what happens when the ML digests that kind of "effect" :>
 
  • Like
Reactions: Tlh97 and KompuKare