
Question Speculation: RDNA2 + CDNA Architectures thread


Krteq

Senior member
May 22, 2015
949
604
136
Are you at power limit or not? Someone at B3D said their 6800XT was consuming more power in other games than in this. While Ampere was doing the opposite.
That "someone at B3D" is a @lightmanek :)
Interesting ...
On RDNA2 it pulls less power than some raster games in heavy scenes; normally my card clocks to around the 24xx-25xx MHz range while drawing almost 300 W in games like Doom Eternal or 3DMark Time Spy. Here in Q2RT, the card has enough power headroom to hit 2650 MHz+ all the time while drawing around 250-280 W, with the average closer to 260 W.
 

Geranium

Member
Apr 22, 2020
77
98
61
Quake 2 RTX is an Nvidia-sponsored RTX game, not a standard DXR or Vulkan ray-tracing game. It will definitely perform worse on AMD hardware. The same is true for all the other RTX-sponsored games like Control, Shadow of the Tomb Raider, and Metro Exodus.
When was the last time an Nvidia-pushed game worked well on AMD hardware?
 

Head1985

Golden Member
Jul 8, 2014
1,853
666
136
I think 3DMark is the only neutral RT test today. Hybrid rendering is where we should expect RDNA2 RT performance to land in future neutral games: the 6800 XT should be slightly faster than the 3070, and the 6800 slightly slower. What AMD needs is a DLSS alternative.

 

TESKATLIPOKA

Senior member
May 1, 2020
349
357
96
If the ~335 mm² fully enabled Navi 22 on TSMC 7nm really can't beat the partially enabled ~390 mm² GA104 (3060 Ti) on Samsung 8nm, that's not a good endorsement for the architecture.
If N22 has 96 MB of Infinity Cache and a 192-bit GDDR6 bus, then you shouldn't be surprised about the size, and 40 CUs is not a lot, so you can't expect much performance from it. I think for 40 CUs they could have used only 64 MB of IC and a 128-bit GDDR6 bus (half of Big Navi); they would save a lot of space that way with minimal impact on performance.
 

GodisanAtheist

Platinum Member
Nov 16, 2006
2,885
1,396
136
If N22 has 96 MB of Infinity Cache and a 192-bit GDDR6 bus, then you shouldn't be surprised about the size, and 40 CUs is not a lot, so you can't expect much performance from it. I think for 40 CUs they could have used only 64 MB of IC and a 128-bit GDDR6 bus (half of Big Navi); they would save a lot of space that way with minimal impact on performance.
- Then why do an 80 CU / 40 CU stack with nothing in between if the 40 CU N22 isn't going to punch up? That leaves a huge gaping hole in AMD's line-up that they cannot permanently fill with a heavily cut-down N21 die.

I suspect the slightly inflated IC/bus-to-CU ratio plus the clock speeds will allow N22 to remain very competitive against the 3060 Ti.
 
  • Like
Reactions: Tlh97 and raghu78

Mopetar

Diamond Member
Jan 31, 2011
5,651
2,411
136
What AMD needs is a DLSS alternative.
I'm one of those types that's generally against the way this type of technology is being used to sell gamers on false numbers to cover for a lack of RT capabilities, so I really don't get the argument.

Part of me thinks that if Nvidia developed a technology that would stab you in the eyes while running one of their cards there would still be people lining up demanding that AMD also implement their own eye-stabbing solution in their cards.

Get better RT hardware so that upscaling isn't necessary. Or just play at a lower resolution. If you show an eagerness to purchase deceit, don't act surprised when you get lied to a lot more in the future.
 

MrTeal

Diamond Member
Dec 7, 2003
3,098
774
136
I'm one of those types that's generally against the way this type of technology is being used to sell gamers on false numbers to cover for a lack of RT capabilities, so I really don't get the argument.

Part of me thinks that if Nvidia developed a technology that would stab you in the eyes while running one of their cards there would still be people lining up demanding that AMD also implement their own eye-stabbing solution in their cards.

Get better RT hardware so that upscaling isn't necessary. Or just play at a lower resolution. If you show an eagerness to purchase deceit, don't act surprised when you get lied to a lot more in the future.
It really doesn't matter, we all know AMD won't be able to get anywhere near as many Gigastabs/s as the 4080 Ti. They just need to be competitive so I can get my green card cheaper.
 

TESKATLIPOKA

Senior member
May 1, 2020
349
357
96
- Then why do an 80 CU / 40 CU stack with nothing in between if the 40 CU N22 isn't going to punch up? That leaves a huge gaping hole in AMD's line-up that they cannot permanently fill with a heavily cut-down N21 die.

I suspect the slightly inflated IC/bus-to-CU ratio plus the clock speeds will allow N22 to remain very competitive against the 3060 Ti.
I think N22 could be competitive against the 3060 Ti at best.
That IC/bus is a bit of overkill for only 40 CUs, even with higher clocks. With only 64 MB and 128-bit you could save ~45-50 mm² of die space, and I don't think we would sacrifice a lot of performance.
So the question is whether it's worth the die space or not. Admittedly, if there were only a 128-bit GDDR6 bus, you would be limited to either 8 GB or 16 GB of VRAM.
I think 64 MB of IC and 192-bit GDDR6 could be a great compromise, but it looks like the amount of IC depends on the bus width (16 MB per 32 bits of width).
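As a back-of-the-envelope sketch, the scaling rule speculated above (16 MB of Infinity Cache per 32 bits of bus width) can be checked against the known and rumored configurations; this is purely illustrative arithmetic, not confirmed die information:

```python
# Sketch of the speculated Infinity Cache scaling rule discussed above:
# 16 MB of IC per 32 bits of memory bus width. Illustration only.

def infinity_cache_mb(bus_width_bits: int, mb_per_32bit: int = 16) -> int:
    """IC size implied by the rule of thumb for a given bus width."""
    return (bus_width_bits // 32) * mb_per_32bit

print(infinity_cache_mb(256))  # Navi 21 ("Big Navi"): 128 MB
print(infinity_cache_mb(192))  # speculated Navi 22: 96 MB
print(infinity_cache_mb(128))  # the hypothetical cut-down config: 64 MB
```

If the rule really is tied to bus width, then the 64 MB IC + 192-bit combination suggested above would break the pattern, which is exactly the compromise being questioned.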
 
Last edited:

moinmoin

Platinum Member
Jun 1, 2017
2,369
2,941
106
I think 3DMark is the only neutral RT test today. Hybrid rendering is where we should expect RDNA2 RT performance to land in future neutral games: the 6800 XT should be slightly faster than the 3070, and the 6800 slightly slower. What AMD needs is a DLSS alternative.

Was going to ask whether it already uses DXR 1.1, but that appears to be the case indeed.
"DirectX Raytracing Tier 1.1
DirectX Raytracing helps developers create realistic reflections, shadows, and other effects that are difficult to achieve with other techniques. DirectX Raytracing Tier 1.1 introduces new features and capabilities that improve efficiency and give developers more flexibility and control. While the details of these improvements are beyond the scope of this guide, you can read more about DXR Tier 1.1 on the Microsoft DirectX Developer Blog and in the DirectX Raytracing Functional Spec. This test uses features from DirectX Raytracing Tier 1.1 to create a realistic ray-traced depth of field effect.
"

I guess we need to wait for the Hangar 21 demo to be released (when will it actually be available? I'm surprised it still isn't) to compare what an optimization for RDNA 2 would look like and how it compares with Nvidia chips.
 

lightmanek

Senior member
Feb 19, 2017
280
512
136
It was power limited: 1.090 V @ 2550 MHz, +5% power limit, 315 W. It runs at the power limit all the time.
Someone here :p

This is surprising, as I'm clocking much higher while pulling less power. Is yours a custom 6800 XT or the reference design?

For comparison's sake, my video below:
 

moinmoin

Platinum Member
Jun 1, 2017
2,369
2,941
106
FWIW the PS5 and XSX/S version of Control will offer two modes: a 60fps Performance Mode and 30fps Graphics Mode (with ray-tracing). The big question is what resolution they manage.


Edit: Here's screenshots of each for direct comparison, looks like Graphics Mode is in a significantly higher resolution (especially visible with the sign to the right):
 
Last edited:

Head1985

Golden Member
Jul 8, 2014
1,853
666
136
Someone here :p

This is surprising, as I'm clocking much higher while pulling less power. Is yours a custom 6800 XT or the reference design?

For comparison's sake, my video below:
Yeah, I'm sure every 6800 XT runs at 2.8 GHz+ and is undervolted at the same time... o_O
Btw, you're pulling more power: GPU-only power draw is up to 280 W, and that's ~345 W total board power (the max reference power limit).
 
Last edited:

lightmanek

Senior member
Feb 19, 2017
280
512
136
Yeah, I'm sure every 6800 XT runs at 2.8 GHz+ and is undervolted at the same time... o_O
Btw, you're pulling more power: GPU-only power draw is up to 280 W, and that's ~345 W total board power (the max reference power limit).
Yes, I think I won the silicon lottery with my Navi 21 :)

As for GPU power, the card hits north of 300 W when doing 3DMark or Doom with identical settings as shown in the video. I think the AMD driver shows GPU + MEM power; how are you monitoring total board power? My card doesn't give that reading in the driver, nor does MSI Afterburner, only GPU + MEM. I haven't looked into other overlay apps in ages, so I might be behind here ...
PS. That video was made after my original B3D comment of 260 W, with my new stable OC of 2757 MHz GPU / 2150 MHz MEM at a set 1050 mV (which, as I explained elsewhere, still allows the full 1150 mV at higher clocks, but lowers the whole voltage curve, making the GPU more efficient on average). I upped the GPU core by 30 MHz compared to the previous setting, but that only partially explains the power increase from 260 W to 280 W, so most of it must come from lowering the quality setting.
I also ran a 2800 MHz clock with the same settings, but after 20 minutes of Quake II RT the GPU crashed and recovered to Windows. I think with the BIOS from a 6900 XT and a 1175 mV maximum setting, I will reach over 2800 MHz stable.
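The gap the posters above are circling around is that the driver's reported GPU (+ memory) power and the total board power (TBP) limit are different quantities: VRM losses, fans, and other board components are not included in the telemetry reading. A small hedged sketch, where the overhead factor is an assumption for illustration rather than a measured value:

```python
# Illustration of the GPU-power vs. total-board-power gap discussed above.
# The telemetry on these cards reports GPU (+ MEM) power; board overhead
# (VRM losses, fans, etc.) is not included. The 1.2x factor here is an
# assumed, illustrative overhead, not a measured one.

def estimate_tbp(reported_gpu_power_w: float, overhead_factor: float = 1.2) -> float:
    """Rough total-board-power estimate from the driver's GPU power reading."""
    return reported_gpu_power_w * overhead_factor

# A reported 280 W GPU reading would imply roughly 336 W at the board,
# in the neighborhood of the 345 W maximum reference limit mentioned above.
print(estimate_tbp(280.0))
```

This is why two posters can talk past each other: one quotes the ~280 W telemetry number, the other the ~345 W board-level limit, and both can be right at the same time.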
 
Last edited:
  • Like
Reactions: Tlh97 and Elfear

fleshconsumed

Diamond Member
Feb 21, 2002
6,062
1,433
136
Well, I wanted to buy a 6000-series card, but 6800s are really hard to come by, much harder than Ampere: my local Microcenter had no 6800s in stock today, and the 6700 XT is going to have a 192-bit bus, which means it's going to be poor at mining. So I got an open-box 3060 Ti from Microcenter for $411 after tax this morning. I figure it's about the same price as a 5700 XT but up to 25% faster at 4K, and because the 3060 Ti is really good at mining for its price, I'm going to mine when not gaming to offset the cost. Still feels bad, man. I installed the drivers, and the Nvidia control panel has not changed in the past 15 years, since 2005; it was like a blast from the past. I didn't like AMD's Adrenalin drivers initially, but compared to them the Nvidia control panel is just horrid. At least it's still a nice upgrade from my current RX 480; it should be 2.5, almost 3 times faster.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
5,651
2,411
136
Edit: Here's screenshots of each for direct comparison, looks like Graphics Mode is in a significantly higher resolution (especially visible with the sign to the right):
Graphics mode is obviously doing more than just adding in some RT since performance mode even removes a few things like the grate to the far right entirely, but the photo does show off what you're getting pretty well. I'd say that it generally does look better, but the reflections of the papers on the ground just seem unnatural so I think there's still some room for improvement. Even in performance mode the faked reflections on the floor look good enough, and even if the far right-side is more accurately done with RT, the fake aesthetic in performance mode just fits better.

I think that the darker spotting on the walls in performance mode looks a little better, but I'm not sure if performance mode is only adding it because it thinks that's what the shadowing should look like. I'm not sure if it's just the resolution, but graphics mode doesn't have the odd whitish pixels along the base of the door and on some other metallic edges or trim, which don't look good at all. I'm not sure if it's just some aliasing that goes away at the higher resolution or if there's some extra processing being done to smooth that out. The grainy sign is pretty unacceptable as well.
 

Hitman928

Diamond Member
Apr 15, 2012
3,432
3,614
136
Hmmmm, a Virtual Super Resolution example in this release:

Virtual Super Resolution is AMD's name for downscaling (i.e. rendering at a higher resolution than your display resolution). Super Resolution is what they are tentatively calling their upscaling feature (i.e. rendering at a lower resolution than the display resolution). It's a terrible naming choice after VSR has already been established, and hopefully they come up with a better name before release.
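The distinction above is easy to garble precisely because the names are so close; as a minimal sketch (resolutions and scale factors chosen purely for illustration):

```python
# Illustration of the naming confusion described above:
#  - Virtual Super Resolution (VSR): render ABOVE the display resolution,
#    then downscale to the display.
#  - "Super Resolution" (the tentative upscaler): render BELOW the display
#    resolution, then upscale to the display.
# The factors below are illustrative, not AMD's actual presets.

def render_resolution(display: tuple[int, int], scale: float) -> tuple[int, int]:
    """Internal render resolution for a display resolution and scale factor."""
    w, h = display
    return (round(w * scale), round(h * scale))

display = (1920, 1080)
print(render_resolution(display, 2.0))  # VSR-style: render 4K, downscale
print(render_resolution(display, 0.5))  # SR-style: render 540p-class, upscale
```

Same display output in both cases; only the direction of the internal scaling differs, which is exactly why giving the two nearly identical names is asking for trouble.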
 

moinmoin

Platinum Member
Jun 1, 2017
2,369
2,941
106
Virtual Super Resolution is AMD's name for downscaling (i.e. rendering at a higher res. than your display resolution). Super Resolution is what they are tentatively calling their upscaling feature (i.e. rendering at a lower resolution than the display resolution). It's a terrible naming choice after VRS has already been established and hopefully they come up with a better name before release.
I suggest Artificial Super Resolution since that's what DLSS is. :p
 
