
Question - Speculation: RDNA2 + CDNA Architectures thread

uzzi38

Golden Member
Oct 16, 2019
1,305
2,425
96
All die sizes are within 5mm^2. The poster here has been right about some things in the past AFAIK, and to his credit was the first to say 505mm^2 for Navi21, which other people have since backed up. Even so, take the following with a pinch of salt.

Navi21 - 505mm^2

Navi22 - 340mm^2

Navi23 - 240mm^2

Source is the following post: https://www.ptt.cc/bbs/PC_Shopping/M.1588075782.A.C1E.html
 

moinmoin

Platinum Member
Jun 1, 2017
2,222
2,639
106
Of course they won't show everything, but if they were going to bring a tease, then I am sure they brought out the best numbers they could. I am very sure that if they had better-than-3080 numbers to show, they would have shown them.
And turn the upcoming event into a lame, boring showcase of everything that isn't as good as the best numbers they already handpicked as a tease for it? Please.
 

Mopetar

Diamond Member
Jan 31, 2011
5,236
1,850
136
What AMD needs is a DLSS alternative.
I'm one of those types that's generally against the way this type of technology is being used to sell gamers on false numbers to cover for a lack of RT capabilities, so I really don't get the argument.

Part of me thinks that if Nvidia developed a technology that would stab you in the eyes while running one of their cards there would still be people lining up demanding that AMD also implement their own eye-stabbing solution in their cards.

Get better RT hardware so that upscaling isn't necessary. Or just play at a lower resolution. If you show an eagerness to purchase deceit, don't act surprised when you get lied to a lot more in the future.
 

Stuka87

Diamond Member
Dec 10, 2010
5,294
1,075
136
Everything is possible.
19 TFLOPs? That would be, for example, 72 CUs at 2.05 GHz (72 × 64 shaders × 2 FLOPs/clock × 2.05 GHz ≈ 18.9 TFLOPs), which is doable.
I have to wonder: if RDNA2 is really so much better than RDNA1, did AMD skip making a bigger RDNA1 chip to combat the 2080 Ti because of that, or because of some limitation in RDNA1?
I think AMD knew from the outset that RDNA1 was just a stepping stone (which they have mentioned), and with limited 7nm capacity (at that time) they went for the mainstream market, which significantly outsells the high-end market.
 

DisEnchantment

Senior member
Mar 3, 2017
739
1,755
106
So I loaded the amdgpu driver as an Eclipse C++ project to check some things and discovered it actually supports reading the UMC (unified memory controller) error counts via ioctl, and not only through SMI. I didn't know this was exposed via ioctl.

C:
static void gmc_v10_0_set_umc_funcs(struct amdgpu_device *adev)
{
    switch (adev->asic_type) {
    case CHIP_SIENNA_CICHLID:
        adev->umc.max_ras_err_cnt_per_query = UMC_V8_7_TOTAL_CHANNEL_NUM;  // --> 16 ( 2 * 8 )
        adev->umc.channel_inst_num = UMC_V8_7_CHANNEL_INSTANCE_NUM; // 2
        adev->umc.umc_inst_num = UMC_V8_7_UMC_INSTANCE_NUM; // 8
        adev->umc.channel_offs = UMC_V8_7_PER_CHANNEL_OFFSET_SIENNA;
        adev->umc.channel_idx_tbl = &umc_v8_7_channel_idx_tbl[0][0];
        adev->umc.funcs = &umc_v8_7_funcs;
        break;
    default:
        break;
    }
}


gmc_v10_0_early_init() --> gmc_v10_0_set_umc_funcs()
You can actually perform an ioctl to query the HBM for errors. When you issue the ioctl to get the memory error count, this call chain is invoked, which ends in a register read (see the userspace sketch after the Vega20/Arcturus snippet below):
C:
amdgpu_ctx_ioctl() 
    --> amdgpu_ctx_query2() 
        --> amdgpu_ras_query_error_count() 
            --> query_ras_error_count() 
               --> umc_v8_7_query_ras_error_count()  // from gmc_v10_0_set_umc_funcs
Same thing with Vega20 and Arcturus:
C:
static void gmc_v9_0_set_umc_funcs(struct amdgpu_device *adev)
{
    switch (adev->asic_type) {
    case CHIP_VEGA10:
        adev->umc.funcs = &umc_v6_0_funcs;
        break;
    case CHIP_VEGA20:
        adev->umc.max_ras_err_cnt_per_query = UMC_V6_1_TOTAL_CHANNEL_NUM;
        adev->umc.channel_inst_num = UMC_V6_1_CHANNEL_INSTANCE_NUM;
        adev->umc.umc_inst_num = UMC_V6_1_UMC_INSTANCE_NUM;
        adev->umc.channel_offs = UMC_V6_1_PER_CHANNEL_OFFSET_VG20;
        adev->umc.channel_idx_tbl = &umc_v6_1_channel_idx_tbl[0][0];
        adev->umc.funcs = &umc_v6_1_funcs;
        break;
    case CHIP_ARCTURUS:
        adev->umc.max_ras_err_cnt_per_query = UMC_V6_1_TOTAL_CHANNEL_NUM;
        adev->umc.channel_inst_num = UMC_V6_1_CHANNEL_INSTANCE_NUM;
        adev->umc.umc_inst_num = UMC_V6_1_UMC_INSTANCE_NUM;
        adev->umc.channel_offs = UMC_V6_1_PER_CHANNEL_OFFSET_ARCT;
        adev->umc.channel_idx_tbl = &umc_v6_1_channel_idx_tbl[0][0];
        adev->umc.funcs = &umc_v6_1_funcs;
        break;
    default:
        break;
    }
}
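
For anyone who wants to poke at this from userspace, here is a minimal sketch of that query path. Hedge: it assumes an amdgpu render node at /dev/dri/renderD128 and the stock amdgpu_drm.h UAPI header, and the ioctl only reports whether new correctable/uncorrectable RAS errors were counted since the last query; the raw per-channel counts stay inside the kernel (still SMI/sysfs territory).

C:
/* Hedged sketch: query RAS error flags via DRM_IOCTL_AMDGPU_CTX.
 * The device path is an assumption; error handling kept minimal. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/amdgpu_drm.h>

int main(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    union drm_amdgpu_ctx ctx;

    /* QUERY_STATE2 is per-context, so allocate a GPU context first. */
    memset(&ctx, 0, sizeof(ctx));
    ctx.in.op = AMDGPU_CTX_OP_ALLOC_CTX;
    if (ioctl(fd, DRM_IOCTL_AMDGPU_CTX, &ctx)) { perror("alloc"); return 1; }
    __u32 id = ctx.out.alloc.ctx_id;

    /* This is the entry point of the call chain above:
     * amdgpu_ctx_ioctl() -> amdgpu_ctx_query2() -> amdgpu_ras_query_error_count() */
    memset(&ctx, 0, sizeof(ctx));
    ctx.in.op = AMDGPU_CTX_OP_QUERY_STATE2;
    ctx.in.ctx_id = id;
    if (ioctl(fd, DRM_IOCTL_AMDGPU_CTX, &ctx)) { perror("query"); return 1; }

    printf("new correctable RAS errors:   %s\n",
           ctx.out.state.flags & AMDGPU_CTX_QUERY2_FLAGS_RAS_CE ? "yes" : "no");
    printf("new uncorrectable RAS errors: %s\n",
           ctx.out.state.flags & AMDGPU_CTX_QUERY2_FLAGS_RAS_UE ? "yes" : "no");

    /* Clean up the context. */
    memset(&ctx, 0, sizeof(ctx));
    ctx.in.op = AMDGPU_CTX_OP_FREE_CTX;
    ctx.in.ctx_id = id;
    ioctl(fd, DRM_IOCTL_AMDGPU_CTX, &ctx);
    close(fd);
    return 0;
}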
 
Last edited:

Helis4life

Member
Sep 6, 2020
30
48
46
Wow, so it is possible for a YouTuber to understand how hardware works.

*Goes into shock*.

Anyway, here's some recommended watching for those of you willing to put 20 minutes in. The audio's not the best, but it's still audible and gets the job done.

What I thought was interesting was his comment that AMD's RT implementation can be faster/more efficient than Nvidia's if the developer implements it appropriately. Coupled with the fact that both consoles will be using RDNA2's implementation of RT, this might mean we see wider developer adoption of AMD's pathway.

I'm curious how a dev like CD Projekt Red will handle the differing RT pathways, one for the PC and one for the consoles, and whether, for instance, both pathways could be implemented with the appropriate one chosen by the engine at runtime; a sketch of what that selection could look like follows below.
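
Purely illustrative of that last idea: at device-creation time an engine could key its RT path off the adapter's vendor ID. Everything here (rt_path_t, pick_rt_path) is an invented name for the sketch, not anything from CDPR or a real engine; a production engine would also gate this on feature/extension support.

C:
/* Hypothetical sketch: choose an RT code path at startup via Vulkan. */
#include <stdio.h>
#include <vulkan/vulkan.h>

typedef enum { RT_PATH_GENERIC, RT_PATH_AMD_TUNED, RT_PATH_NV_TUNED } rt_path_t;

static rt_path_t pick_rt_path(VkPhysicalDevice dev)
{
    VkPhysicalDeviceProperties props;
    vkGetPhysicalDeviceProperties(dev, &props);
    switch (props.vendorID) {
    case 0x1002: return RT_PATH_AMD_TUNED; /* AMD: reuse console-style tuning */
    case 0x10DE: return RT_PATH_NV_TUNED;  /* NVIDIA: RT-core-oriented path */
    default:     return RT_PATH_GENERIC;
    }
}

int main(void)
{
    VkInstanceCreateInfo ici = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

    uint32_t count = 1;
    VkPhysicalDevice dev;
    vkEnumeratePhysicalDevices(inst, &count, &dev); /* grab the first GPU */
    if (count)
        printf("chosen RT path: %d\n", pick_rt_path(dev));

    vkDestroyInstance(inst, NULL);
    return 0;
}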

The Nvidia "skinworks" comment made me chuckle a bit. I can definitely see something like that in the future.
 

CakeMonster

Senior member
Nov 22, 2012
993
79
91
I played Control in 4K with DLSS (changed monitors around), and going back to 1600p and native rendering was a huge relief.

I'm very sensitive to sharpening and it really bothered me. If sharpening can't be disabled with DLSS in the future, I'm out; I'll hold off on using it for as long as I can.

Also, DLSS messed up small text (signs, etc.): the resolution would fool you into thinking it should be discernible, but it's just a sharpened mess. It's minor, but not pretty.

Thirdly, on high-contrast edges DLSS approaches the effect of MSAA, both in resolution and in the lack of a staircase effect. On low-contrast edges, however, while there's still little aliasing, it's a low-resolution blur and does not look 4K. The huge grey column in the Foundation DLC against a grey background has very blurred edges compared to native.

The film grain effect (for those who like it) is pretty much ruined with DLSS too.

Edit: A good experiment that I recommend everyone try (while standing still in place) is to turn the DLSS resolution way down to study what it does, then turn it gradually back up toward native and compare the effects, and lastly turn it off at native.
 
Last edited:

Justinus

Platinum Member
Oct 10, 2005
2,294
417
126
Just checked TechSpot (Hardware Unboxed), and in Gears 5 at 4K ultra they have the 3080 at 72 fps avg and the 5700 XT at 41 fps. The 6000 series demo is a 78% uplift over the 5700 XT in this review (73 fps ÷ 41 fps ≈ 1.78). TechSpot also tested with the 3950X.

Guru3D also shows a lot of variation in 3080 and 5700 XT numbers, so I wouldn't say the numbers AMD showed were conclusive, as they seem to swing from 90% of a 3080 to on par with one.

If that is 80 CUs @ 2.2 GHz with IPC improvements, it seems really poor to be honest, and it looks like RDNA2 did not fix the scaling issues GCN had. If it is a cut-down 72 CU @ 2 GHz part, then it looks much better.

NV are going to sell every 3080 and 3090 they can put out between now and October 28th (and probably beyond), so if AMD held back the top-tier card in order to show something surprising on October 28th, then it doesn't really matter.
That's exactly my thought. They may be sandbagging as part of their misinformation campaign, knowing so few Ampere cards are even going to be available to purchase before their big announcement in 20 days.
 

kurosaki

Senior member
Feb 7, 2019
257
247
76
Yes, overhyped, as with everything AMD does.
Not to be mean or anything, but aren't you overhyping a bit now? I mean, it won't perform better than a five-year-old hand drawing, and still you proclaim that the 6000 series will perform almost like a video card?
Have you no shame? Do you think you can fool anyone with this? No, the 6900 XT will render as badly as an HTC Hero cut in half with an axe. Everything else is biased hype.

Heard it from a very trustworthy source on YouTube. He talked about it for like 15 mins, so it must be true.
 

Glo.

Diamond Member
Apr 25, 2015
4,649
3,276
136
During the AMD Zen 3 keynote, they demoed Big Navi in Gears 5 getting an average of 73 FPS at 4K, ultra preset.

That is 20% better performance than you get from 52 CUs clocked at 1825 MHz.

So as we can see, this confirms that during the Zen 3 keynote they demoed the largest and fastest version of the GPU. That's all there is.

80 CUs clocked at 2.4 GHz, roughly 20% faster than 52 CUs clocked at 1825 MHz.

Scaling. AMD is so incompetent, they thought they could get away with a 256-bit bus! It's going to be crap, from top to bottom.
 

Glo.

Diamond Member
Apr 25, 2015
4,649
3,276
136
I think this is redacted. No way Big Navi eats more than 3x the power of the consoles. I think Igor lost it. Well, 8 more days.
Btw, TGP is for the entire card. Don't know what the fuck Igor is doing?
Don't say that Igor "lost it". That is shooting the messenger just because we don't like his messages.

And he may just be passing along what the AIBs are feeding him.

320W of power, he says?

I can tell you guys that this might be an indication that AMD has decided to go all out and not leave anything on the table.

When I wrote about the performance targets for the Navi dies, or rather the CU counts, I had information that the initial performance targets were for 250W boards, as in 250W of total power drawn by the board. That was at the time when 2.1 GHz max boost speeds were on the table.

Now we are seeing 2400 MHz clock speeds at 300W board power. If we take this in context, everything starts to make sense.

How is that 2.4 GHz clock speed going to affect performance?

I don't believe anybody looked at it, but yesterday I posted a comparison between the performance AMD demoed during the Zen 3 keynote (assuming it was the full-spec Navi 21 die) and the Xbox Series X, wondering if anybody would spot something.

52 CUs clocked at 1825 MHz = 12.1 TFLOPs.
80 CUs clocked at 2410 MHz = 24.7 TFLOPs.

2x the performance of the Xbox Series X, at least in theoretical maximum throughput. 2.4 GHz at 300W board power means that AMD might be going all out. Straight up for the win.

Will they win? We'll see...
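
For anyone who wants to sanity-check those TFLOPs figures: RDNA FP32 throughput is CUs × 64 ALUs × 2 FLOPs per clock (FMA) × clock speed. A trivial sketch; note the 80 CU / 2410 MHz configuration is the rumor discussed above, not a confirmed spec.

C:
/* Back-of-the-envelope RDNA FP32 throughput:
 * TFLOPs = CUs * 64 ALUs * 2 FLOPs/clock * clock_MHz / 1e6 */
#include <stdio.h>

static double rdna_tflops(int cus, double mhz)
{
    return cus * 64 * 2 * mhz / 1e6;
}

int main(void)
{
    printf("XSX,       52 CU @ 1825 MHz: %.1f TFLOPs\n", rdna_tflops(52, 1825)); /* 12.1 */
    printf("Navi21(?), 80 CU @ 2410 MHz: %.1f TFLOPs\n", rdna_tflops(80, 2410)); /* 24.7 */
    return 0;
}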
 
Last edited by a moderator:

DJinPrime

Member
Sep 9, 2020
87
89
51
RTG recently posted a few videos where he stressed that the consoles are using custom RDNA2 and that you can't directly extrapolate to the desktop parts, because AMD hasn't given out any info. His reasoning seems pretty logical: things were added to the console parts (Sony and MS are spending tons of money on these) and things were taken out (not required by the consoles). So who knows what desktop Big Navi is, other than that it will be competitive.
 

leoneazzurro

Senior member
Jul 26, 2016
332
444
136

The Germans are theorizing that the leaked synthetic results are actually from a successor to the 5700 XT and not a high-end or top-end card, based purely on the fact that the 5700 XT could regularly beat 2080s in synthetics and lose in actual games.
They are ignoring the fact that Igor's Lab gave the edge to Navi21 in the TSE (Time Spy Extreme) test too, and frankly they are calling FSU (Fire Strike Ultra) an "AMD-biased test". Strange how they never called TSE an "Nvidia-biased test" when there was a big controversy years ago about the way it handles async compute. Sigh.
 

Veradun

Senior member
Jul 29, 2016
564
780
136
They are ignoring the fact that Igor's Lab gave the edge to Navi21 in the TSE (Time Spy Extreme) test too, and frankly they are calling FSU (Fire Strike Ultra) an "AMD-biased test". Strange how they never called TSE an "Nvidia-biased test" when there was a big controversy years ago about the way it handles async compute. Sigh.
Everything that works well on Nvidia hardware is "the way it's meant to be played", after all; everything else is obviously AMD-biased.

I look forward to Intel entering the market and seeing what happens to the fanboys in the press.
 

Zoal

Junior Member
Oct 25, 2020
5
4
41
One thing to note about that 6700 XT vs RTX 3070 matchup: I think people should not look at this "battle" that way.

The RX 6700 XT may lose 5-10% to the RTX 3070, but it will also be massively more efficient.
If raster performance is comparable, NV will heavily push the RT difference in 'modern' games.
 
Apr 30, 2020
60
153
66
You guys realize that TSMC has been given little credit for the success of AMD's products in the last few years. I think TSMC should be given much of the credit for their silicon.
What do you mean? Pretty much everyone acknowledges and even raves about TSMC's 7nm process being superior to just about anyone else's right now. But you must keep in mind that 7nm isn't a magic problem solver. Look at the Radeon VII vs the 5700 XT: the Radeon VII is only about 5% faster despite having 50% more CUs (60 vs 40) and consuming significantly more power, and both are on the same 7nm process; per CU, that works out to RDNA delivering roughly 40% more performance than Vega on the same node. Now Navi2 is pushing that efficiency even further. That's AMD's design work, iterating and improving.
 

Stuka87

Diamond Member
Dec 10, 2010
5,294
1,075
136
Keep in mind that AMD did the smart thing and ran all the benchmarks on their new, unreleased 5000 series CPUs, whereas Nvidia's numbers came from Intel systems.

Hence all the AMD numbers had a 15% CPU advantage over Nvidia's.

Still very impressive, and I'm glad that I held off buying the "10GB gimped" 3080.
Per the notes, the Nvidia cards were tested with the same CPUs.
 

lightmanek

Senior member
Feb 19, 2017
264
498
106
I got my card mid-day today, but it took me another 4 hours to disconnect the Radeon VII from the custom loop and re-work my cooling setup, as water blocks for the 6800 are not expected before the end of this month. I'm very positively surprised by how silent this cooler stays compared to the stock Radeon VII :)
The card feels really solid in hand and has a great, stylish look; so much so that my wife wanted to take it and put it on display with her jewellery!

Stock performance in Port Royal was 7600 pts, but I wanted to see what could be done in 30 seconds with a basic OC via the AMD drivers; here are the results:


I was surprised to see that even this non-XT model can be pushed to 2500 MHz (I still need to try 2600; I didn't go for a max OC on my first try, and the same goes for memory, where I played it safe at 2100 MHz). With these clocks and a +15% power limit it's beating a Gigabyte RTX 3080 in Fire Strike Ultra, and it makes my water-cooled Radeon VII OC look silly!
RT is much faster than my laptop's mobile RTX 2060 and the Quadro RTX 4000 I tested last year, but obviously quite a bit behind the RTX 3080.

First impressions are great: the zero-RPM fan mode is a blessing on the desktop, where my PC is dead silent. Idle power consumption is also improved compared to the VII, and I have a feeling this card can be very efficient when playing games locked to a specific refresh rate. That will be tested later; right now I need to update the BIOS and enable SAM!
 
