Speculation: RDNA2 + CDNA Architectures thread

Page 187

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,575
146
All die sizes are within 5mm^2. The poster here has been right about some things in the past AFAIK, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,029
136
www.teamjuchems.com
Tom's 6800 was OC stable @ 2.55 GHz.

The Tom's article showed the 6800 *soundly* beating the 3070, which sat back and traded blows with the 2080 Ti.

And from the bias I've seen at Tom's, they don't try too hard to show AMD in a positive light :)

At least most of us are going to be able to get a better look at post-launch games and drivers before having the opportunity to make a decision/purchase :D That's good, right?!?
 

Mopetar

Diamond Member
Jan 31, 2011
7,797
5,899
136
Just when you thought the RDNA2 leaks would slow down now that we've gone through a launch.


Ampere efficiency is actually quite good if you step down the voltage and don't try to push it as hard as the reference settings from Nvidia. Several sources were able to get significantly lower power draw while still getting 95% or more of the performance.

I think the real winners will be the eventual Navi-based APUs which should be a big step up, particularly if they utilize some kind of infinity cache.
 

uzzi38

Platinum Member
Oct 16, 2019
2,565
5,575
146
Mopetar said:
Ampere efficiency is actually quite good if you step down the voltage and don't try to push it as hard as the reference settings from Nvidia. [...]

Doom Eternal loses 12% performance when full board power is dropped to 250W on a 3080.

With Navi21 up there, we're looking at a ~20% drop in clocks (2300MHz -> 1850MHz) at half the per-CU power (which should be roughly 170W TBP? Maybe less.)

I think AMD has the advantage here.
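
As a rough sanity check on those numbers (my own back-of-envelope, not from any leak): with the usual approximation that dynamic power scales with V^2 * f and voltage scales roughly linearly with clock, power goes as roughly f^3, and the "half per-CU power" figure falls right out:

Python:
# Back-of-envelope check of the clock/power claim above.
# Assumption (mine, not the poster's): power ~ f^3, from P ~ V^2 * f
# with V roughly proportional to f.
f_high = 2300  # MHz
f_low = 1850   # MHz

clock_drop = 1 - f_low / f_high
power_ratio = (f_low / f_high) ** 3

print(f"clock drop: {clock_drop:.1%}")           # ~19.6%
print(f"per-CU power ratio: {power_ratio:.2f}")  # ~0.52, i.e. about half

The cubic model lands almost exactly on half per-CU power for a 2300MHz -> 1850MHz drop, so the claim is at least internally consistent.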
 

blckgrffn

Diamond Member
May 1, 2003
9,110
3,029
136
www.teamjuchems.com
Minecraft RT was demoed a while back on the Xbox Series X. And per today's benchmarks, the 6800XT has TERRIBLE frame rates in Minecraft RT, like, unplayably bad. So something is going on driver-wise for that game.

I see. I knew about the demo; I was hoping there was some info about the SX release. Thanks for clarifying the source.

I'm more disappointed that the SX version wasn't available at launch than about an unobtainium card's result at launch. We can be sure there will be some playable config out there once MS finishes baking it.
 

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
7-10% advantage, 16 GB of VRAM vs 8, better behavior in newer titles. Then, as said, it should be at least $40 less, as the 6800XT is the best bang for the buck in the line. And don't undervalue the VRAM amount: 6 GB cards are starting to get VRAM-limited already, and 8 GB cards will be in a couple of years in several titles.
If the only thing you do is play non-ray-tracing and non-DLSS 2.0 games at 1440p, then the 6800 makes sense.

If you want to play games with ray tracing or DLSS 2.0, do video editing, do any professional rendering work, or do any machine learning, the 3070 seems like a no-brainer.
 

leoneazzurro

Senior member
Jul 26, 2016
905
1,430
136
mikegg said:
If the only thing you do is play non-ray-tracing and non-DLSS 2.0 games at 1440p, then the 6800 makes sense. [...]

In fact, AMD is marketing these as pure gaming cards. Generally, if you do rendering work professionally, you go for the professional versions (ex-Quadros). As for ray tracing, I am not convinced the differences we see today will be the differences we will see in the future, as most games will be more and more optimized for AMD's architecture thanks to console development, too. And judging by the way AMD implemented ray tracing in these cards, they need specific optimization (i.e. the creation of BVH structures and their management in the cache is a pure SW optimization). I don't want to say that Dirt 5 will be the reference, but it will certainly not be Minecraft RTX.
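
To make the BVH point concrete (a minimal sketch of my own, not AMD's actual data structures): the tree is plain data that shader code builds and walks, so how nodes are packed and cached is a software decision; only the box/triangle intersection tests map to the ray accelerators.

Python:
# Illustrative sketch only: why BVH layout is a software concern.
from dataclasses import dataclass

@dataclass
class BVHNode:
    bounds_min: tuple                    # AABB min corner (x, y, z)
    bounds_max: tuple                    # AABB max corner (x, y, z)
    left: "BVHNode | None" = None
    right: "BVHNode | None" = None
    triangles: "list | None" = None      # set only on leaf nodes

def hit_aabb(origin, inv_dir, node):
    """Slab test: does a forward ray hit this node's bounding box?
    inv_dir is the precomputed per-component reciprocal of the ray direction."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, node.bounds_min, node.bounds_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(node, origin, inv_dir, hits):
    """Depth-first walk; this control flow is 'just software' -- on RDNA2
    it runs as shader code, with the box test done by the ray accelerator."""
    if node is None or not hit_aabb(origin, inv_dir, node):
        return
    if node.triangles is not None:       # leaf: collect candidate triangles
        hits.extend(node.triangles)
        return
    traverse(node.left, origin, inv_dir, hits)
    traverse(node.right, origin, inv_dir, hits)

How you flatten that tree into memory (node order, width, compression) is what decides whether lookups hit the Infinity Cache, which is presumably where the game-specific tuning comes in.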
 

mikegg

Golden Member
Jan 30, 2010
1,740
406
136
leoneazzurro said:
In fact, AMD is marketing these as pure gaming cards. [...]
Still, I'm a geek/software engineer and I love that the 3070 can do a lot of things well.

Oh, I forgot to mention that the 3070 seems to be better if you're a streamer too.

Hopefully Nvidia does a refresh of the 3070 and adds 2 more GB of RAM.
 

leoneazzurro

Senior member
Jul 26, 2016
905
1,430
136
mikegg said:
Still, I'm a geek/software engineer and I love that the 3070 can do a lot of things well. [...]

Well, of course you have to go for the card which fills all your needs. I, for instance, am waiting for a good mobile card, as for space reasons I cannot have a desktop. BTW, a "refresh of the 3070 adding 2 more GB of RAM" would have to be based on another chip, as GA104 has a 256-bit bus and cannot accommodate 10 GB. It's either 8 GB or 16 GB.
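
The arithmetic behind that, for anyone curious (my sketch): GDDR6 devices each sit on a 32-bit channel and come in 1 GB or 2 GB densities, so the capacity options fall straight out of the bus width:

Python:
# Why a 256-bit bus means 8 GB or 16 GB (illustrative arithmetic).
# GDDR6 devices each occupy a 32-bit channel, in 1 GB or 2 GB densities.
def vram_options(bus_width_bits, densities_gb=(1, 2)):
    chips = bus_width_bits // 32  # one device per 32-bit channel
    return [chips * d for d in densities_gb]

print(vram_options(256))  # [8, 16]  -> GA104's only straightforward options
print(vram_options(320))  # [10, 20] -> a 320-bit chip like GA102 gets to 10 GB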
 

leoneazzurro

Senior member
Jul 26, 2016
905
1,430
136
TESKATLIPOKA said:
After reading the reviews of the 6800 and 6800XT, it looks OK in my book, but it doesn't look like RDNA2 has any IPC gains, only more CUs and higher clocks.

Pure IPC can only be measured with synthetics. Games are difficult to use as IPC measures because you have different bottlenecks in different places in the graphics pipeline, and scaling with the number of CUs is never 1:1.
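
To illustrate the pitfall with made-up numbers (mine, not measurements): the naive proxy perf / (CUs x clock) folds any sub-linear CU scaling into "IPC", so a wider card can look like an IPC regression even if nothing changed per CU:

Python:
# Illustrative only: why perf / (CUs * clock) is a poor IPC proxy.
# All numbers below are invented for the example, not measurements.
def naive_ipc(fps, n_cu, clock_ghz):
    return fps / (n_cu * clock_ghz)

# Hypothetical cards: the wider one scales at ~80% efficiency per added CU,
# typical of games that bottleneck elsewhere in the pipeline.
narrow = naive_ipc(fps=100.0, n_cu=40, clock_ghz=1.9)
wide = naive_ipc(fps=100.0 * 2 * 0.80 * (2.2 / 1.9), n_cu=80, clock_ghz=2.2)

print(f"apparent 'IPC' change: {wide / narrow - 1:+.0%}")  # -20%, scaling alone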
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
leoneazzurro said:
In fact, AMD is marketing these as pure gaming cards. [...]

Any optimisation that relies on the cache won't come from the consoles, of course. AMD has the money to fund some software work now, at least :)

The main comfort from the consoles is likely to be that even the 6800 series is much faster in RT than the consoles (obviously!), so it'll be able to run the same effects as the consoles plus a bit.

I suspect their biggest problem with RT at the moment is actually that it's so demanding that it really needs something like DLSS to bring performance back to a decent level after turning it on.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
leoneazzurro said:
Pure IPC can only be measured with synthetics. [...]
That's why I said no IPC gain rather than an IPC regression, because there are more CUs.
Measuring it is hard, but remember how some users were dreaming about 10% or 20% IPC gains? That turned out to be false hope.
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
[AMD slide: gaming power consumption per CU vs. frequency, RX 5700XT vs. RX 6900XT]


For this slide, here is the endnote:
AMD internal modeling based on graphics-engine-only measured average gaming power consumption and 3dmark11 power consumption vs. frequency for RX 5700XT and RX 6900XT divided by the number of compute units (40 and 80 respectively)
If I understood it correctly, then if we set the clock at ~1900MHz, the RX 5700XT (40 CUs) and RX 6900XT (80 CUs) will have the same power consumption. Of course, they mean the GPU alone.
This could be very interesting info with regard to the mobile versions.
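
Spelling that reading out (my arithmetic; the per-CU wattages are placeholders, only the halving comes from the slide): if an RDNA2 CU at ~1900MHz draws about half what an RDNA CU does at the same clock, doubling the CU count cancels out and total GPU power stays flat:

Python:
# Illustrative arithmetic for the reading above; per-CU figures are
# placeholders, only the 2:1 ratio is taken from the slide's claim.
per_cu_rdna1 = 4.0   # W per CU at ~1900 MHz (hypothetical value)
per_cu_rdna2 = 2.0   # W per CU at ~1900 MHz, assuming half of RDNA1

total_5700xt = 40 * per_cu_rdna1  # GPU only, hypothetical
total_6900xt = 80 * per_cu_rdna2  # GPU only, hypothetical

print(total_5700xt == total_6900xt)  # True: double the CUs at half the power each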
 

TESKATLIPOKA

Platinum Member
May 1, 2020
2,329
2,811
106
[AMD slides: Infinity Cache hit rate vs. cache size at 1080p/1440p/4K]

A 128MB Infinity Cache has a hit rate of 58% at 4K.
Based on this slide, it looks like you need 64MB at 1440p and 32MB at 1080p for a comparable hit rate.
That's why I think we will see a 32MB Infinity Cache for N23 and 64MB for N22.
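
That sizing roughly tracks pixel count, for what it's worth (my back-of-envelope, assuming the hot working set scales with render resolution; the 64MB and 32MB figures are the guesses above, not AMD's):

Python:
# Back-of-envelope: do the proposed cache sizes track pixel counts?
# Assumption: the hot working set scales with render resolution.
pixels = {"4K": 3840 * 2160, "1440p": 2560 * 1440, "1080p": 1920 * 1080}
cache_mb = {"4K": 128, "1440p": 64, "1080p": 32}  # 128 from the slide; rest guessed

for name, px in pixels.items():
    print(f"{name}: {px / pixels['4K']:.2f}x the 4K pixels, {cache_mb[name]}MB proposed")
# 4K: 1.00x / 128MB, 1440p: 0.44x / 64MB, 1080p: 0.25x / 32MB
# Halving the cache per resolution step is in the right ballpark.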
 

Qwertilot

Golden Member
Nov 28, 2013
1,604
257
126
That seems very plausible. I do wonder how well N22 will hold up at 4K if they do that.

The 3070 and so on aren't 'no-compromise 4K cards', but they're definitely plausibly effective there.
 

kurosaki

Senior member
Feb 7, 2019
258
250
86
TESKATLIPOKA said:
A 128MB Infinity Cache has a hit rate of 58% at 4K. [...]
Uuhhrgh. Hope not.
1440p has been with us so long now; it shouldn't cost €1,000 (VAT included) to play mainstream titles on a mainstream 1440p screen.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
kurosaki said:
Uuhhrgh. Hope not. 1440p has been with us so long now; it shouldn't cost €1,000 (VAT included) to play mainstream titles on a mainstream 1440p screen.
1440p is the best compromise, given the distance of the monitor from your eyes on a standard desk and the correspondingly small monitor you need so as not to strain your neck.

But "mainstream" is something else; it means 1080p ;)