
Question Speculation: RDNA2 + CDNA Architectures thread

Page 192 - AnandTech community forums

zinfamous

No Lifer
Jul 12, 2006
103,511
18,067
136
I think the only thing that's even more certain than before, at this point, is that the 3090 is just a really, really bad product overall. :D

3080 vs 6800 XT is, at this moment, a decent tossup (basically: which one can you actually buy at any given second? That's the winner).

...But I can't imagine how a "future-proof" advantage in memory, better overall efficiency, demonstrated OC headroom, a lower price, and the consensus that a lot of the disadvantages (RT, the lack of a DLSS-like feature) will actually be addressable in future software/driver updates for current products is what gives the 6800 XT the actual win, when everything else is "basically a tossup"?
 

Ajay

Diamond Member
Jan 8, 2001
7,587
2,742
136
I think what you wrote is absolutely correct. It's too soon to draw conclusions about which product is better; both have their advantages and disadvantages.
As always, the largest differences in performance show up in actual games, and no one can predict a GPU's performance in future games. All this average-frames-per-second-across-19-games stuff is just something for forum users to argue over ad nauseam. Over and over and over again, for every new GPU release.
 

Elfear

Diamond Member
May 30, 2004
6,999
517
126
Yep, it's the only thing I look at. Can't go back to less than 4K, mainly because it allows much larger screens than 1080p.
I used to focus on the 4K results as well, until I started shopping around for a new gaming monitor and realized how few gaming-oriented 4K monitors are out there (at least in sizes over 30"). So until more 4K gaming monitors hit the market, I'll continue to suffer along on my ultrawide (#FirstWorldProblems), which makes the 1440p results more relevant to me. I do understand, though, that the percentage of 4K gamers is much higher among enthusiast-level GPU buyers.
 

moinmoin

Platinum Member
Jun 1, 2017
2,074
2,485
106
...But I can't imagine how [assorted promises for the future] is what gives 6800XT the actual win, when everything else is "basically a toss up"?
AMD Fine Wine™ obviously. ;)
More seriously: the fact that it's the same tech as in the consoles, which are going to stay on the market for at least the next half decade, if not longer. Those consoles will be the target for most new AAA games during that time, so that's where much of the tech will be pushed.
 

lightmanek

Senior member
Feb 19, 2017
233
413
106
I got my card midday today, but it took me another four hours to disconnect the Vega VII from the custom loop and rework my cooling setup, as water blocks for the 6800 aren't expected before the end of this month. I'm very positively surprised by how silent this cooler stays compared to the stock Radeon VII :)
The card feels really solid in hand and has a great, stylish look; so much so that my wife wanted to take it and put it on display with her jewellery!

Stock performance in Port Royal was 7600 pts, but I wanted to see what could be done in 30 seconds with a basic OC via the AMD drivers. Here are the results:


I was surprised to see that even this non-XT model can be pushed to 2500 MHz (I still need to try 2600; I didn't go for a max OC on my first try, and the same goes for the memory, which I played safe at 2100 MHz). With these clocks and a +15% power limit it beats a Gigabyte RTX 3080 in Fire Strike Ultra and makes my watercooled, overclocked Vega VII look silly!
RT is much faster than the RTX 2060 mobile in my laptop and the Quadro RTX 4000 I tested last year, but obviously quite a bit behind the RTX 3080.

First impressions are great. The zero-RPM fan mode is a blessing on the desktop; my PC is dead silent there. Idle power consumption is also improved compared to the VII, and I have a feeling this card can be very efficient when playing games locked to a specific refresh rate. That will be tested later; right now I need to update my BIOS and enable SAM!
 

PhoBoChai

Member
Oct 10, 2017
114
363
106
Die shots of the new console APUs are taking so long!
There's some "heated discussion" right now about the Series X supposedly underperforming, but the possibility that some amount of Infinity Cache on the PS5 is making its presence felt is still open. As on the desktop cards, it can't help much at 4K, but at lower resolutions it works wonders.

I'm curious whether AMD will also put some amount of IC alongside RDNA2 in the next mobile APUs. If they do, they could nullify Intel's challenge with Xe.
Why do people say IC can't help much at 4K?

The 128 MB cache's hit rate at 4K is 58%. That's not just a ~2.1x bandwidth multiplier on VRAM; it's also a performance boost in its own right, since on-die cache access is roughly 10x faster than memory in terms of latency.

If it's cut down to 68 MB for the PS5, it should still have a decent hit rate, since the drop-off isn't linear; it would be around ~35-40%. The PS5 has good memory bandwidth relative to its 36 CUs, so it doesn't need as high a hit rate as the 80-CU N21. It would definitely give the PS5 an edge, especially in ray tracing.
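That bandwidth argument can be sketched as a simple weighted-average model. This is just a back-of-the-envelope illustration: the 58% hit rate comes from the post above, 512 GB/s is Navi 21's GDDR6 bandwidth, and the 2000 GB/s cache figure is an assumed round number, not an AMD spec.

```python
# Back-of-the-envelope model of effective bandwidth with a cache in front of
# VRAM. Assumptions: 58% hit rate at 4K (from the post), 512 GB/s VRAM
# (Navi 21's GDDR6), and an assumed ~2000 GB/s for the on-die cache.

def effective_bandwidth(hit_rate, cache_bw, vram_bw):
    """Hits are served at cache bandwidth, misses at VRAM bandwidth."""
    return hit_rate * cache_bw + (1 - hit_rate) * vram_bw

vram_bw = 512     # GB/s
cache_bw = 2000   # GB/s (illustrative assumption)

eff = effective_bandwidth(0.58, cache_bw, vram_bw)
print(f"effective bandwidth: {eff:.0f} GB/s ({eff / vram_bw:.2f}x VRAM alone)")
```

The exact multiplier depends on the assumed cache bandwidth and the workload's access pattern, which is why quoted figures like ~2.1x vary; the point is that even a moderate hit rate lifts effective bandwidth well above the raw VRAM number.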
 

Panino Manino

Senior member
Jan 28, 2017
279
327
106
Why do people say IC can't help much at 4K? [...]
Well, I can't respond to that, but I keep seeing people much more knowledgeable than me saying this, and from the results I'm seeing, these don't look like 4K cards the way Ampere does.
It sure helps, but it just helps.
 

PhoBoChai

Member
Oct 10, 2017
114
363
106
Well, I can't respond to you, but I keep seeing people much more knowledgeable than me saying this [...]
More knowledgeable people say that? Then they are plain wrong.

They compare 4K results against the 2x-FP32 Ampere designs, but they don't draw the correct conclusion: the 3080 and 3090 only scale well for their TFLOPS at 4K. They're quite inefficient in perf/TFLOPS at 1080p and 1440p.

The 6800 XT is efficient at all resolutions. The performance it achieves for its raw shader power is superior.

If you're a 1440p gamer, the 6800 XT is superior.

If you're a high-refresh 1080p gamer, the 6800 XT is much better.

If you're a 4K gamer, at stock, the 3080 is superior.

Overclocked, the 6800 XT is better.

The only scenario where the 3080 is better is in games with DLSS 2.0 and in NV-sponsored RTX games.

Saying the 3080 is better at RT is misleading, because it ignores the fact that in non-NV-sponsored RT games the 6800 XT competes very well in RT.
 

PhoBoChai

Member
Oct 10, 2017
114
363
106
Once/if there are 40-CU 6000-series cards outperforming a 5700 XT at 1440p/4K with lower bandwidth, people will understand that IC helps a lot at 4K too, and that it just happens to be overkill for lower resolutions.
Correct. Computerbase.de just did a memory OC analysis of the 6800 XT. They found almost no performance gains from the memory OC, which shows clearly that the GPU isn't bandwidth limited. The IC does its job really well across all resolutions.
 

Bouowmx

Golden Member
Nov 13, 2016
1,059
443
146
Radeon RX 6800 series owner
Can you run some tests with AIDA64's GPGPU benchmark?

Run AIDA64 Extreme. Go to Tools > GPGPU Benchmark.
Uncheck all processors except the Radeon RX 6800. Double-click the box next to:
  • Memory copy
  • Single-precision FLOPS
  • 32-bit integer IOPS
Don't worry about [TRIAL VERSION]. Click on Results, select the correct device, and click Save.
Open the saved text file, and paste the contents as a reply.

GeForce GTX 1660, 1770 MHz core, 9900 MT/s memory
Code:
Benchmark                        Result  Run Time  Build Time
-------------------------------------------------------------
Memory Copy                 198397 MB/s   4641 ms          
- 15 MB Block               167317 MB/s      0 ms          
- 32 MB Block               183486 MB/s      0 ms          
- 64 MB Block               190988 MB/s      1 ms          
- 128 MB Block              195241 MB/s      1 ms          
- 256 MB Block              197401 MB/s      3 ms          
- 512 MB Block              198342 MB/s      5 ms          
- 1024 MB Block             198158 MB/s     10 ms          
- 1536 MB Block             198397 MB/s     15 ms          
Single-Precision FLOPS      4961 GFLOPS  10640 ms          
- float1                    4961 GFLOPS    886 ms      969 ms
- float2                    4928 GFLOPS    892 ms      141 ms
- float4                    4909 GFLOPS    896 ms      125 ms
- float8                    4819 GFLOPS    913 ms      141 ms
- float16                   4794 GFLOPS    917 ms      125 ms
32-bit Integer IOPS          4961 GIOPS   9110 ms          
- int1                       4961 GIOPS    887 ms        0 ms
- int2                       4935 GIOPS    891 ms        0 ms
- int4                       4935 GIOPS    891 ms        0 ms
- int8                       4832 GIOPS    910 ms        0 ms
- int16                      4794 GIOPS    917 ms        0 ms
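If several owners post their results, a small script can pull the numbers out of the pasted text. Here's a sketch under the assumption that replies keep the column layout shown above; AIDA64's actual export format may differ.

```python
import re

def parse_aida64(text):
    """Extract (benchmark name, result value, unit) tuples from a pasted
    AIDA64 GPGPU results table. Column layout assumed from the sample above."""
    rows = []
    for line in text.splitlines():
        m = re.match(r"^(?:-\s+)?(.+?)\s{2,}(\d+)\s(MB/s|GFLOPS|GIOPS)", line)
        if m:
            rows.append((m.group(1).strip(), int(m.group(2)), m.group(3)))
    return rows

sample = (
    "Memory Copy                 198397 MB/s   4641 ms\n"
    "- 15 MB Block               167317 MB/s      0 ms\n"
    "Single-Precision FLOPS      4961 GFLOPS  10640 ms"
)
for name, value, unit in parse_aida64(sample):
    print(f"{name}: {value} {unit}")
```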
 

TESKATLIPOKA

Senior member
May 1, 2020
265
298
96
More knowledgeable say that? Then they are plain wrong. [...]
OK. It's possible that anything around a 50% hit rate is enough, and that the reason for losing at 4K is that Ampere makes better use of its higher FP32 count.
I made a quick performance comparison at different resolutions based on Techspot numbers, with the RX 5700 as the baseline:
RX 5700 vs RX 6800 XT vs RTX 3080 vs RTX 3090
1080p average: 100% vs 168% vs 160% vs 167%
1080p minimum: 100% vs 166% vs 155% vs 163%
1440p average: 100% vs 189% vs 184% vs 199%
1440p minimum: 100% vs 184% vs 182% vs 193%
2160p average: 100% vs 211% vs 223% vs 246%
2160p minimum: 100% vs 214% vs 219% vs 241%
Both Ampere cards extend their lead over the RX 5700 much more than the RX 6800 XT does as resolution rises. On the other hand, the size of the IC doesn't hurt the RX 6800 XT's standing against the RX 5700 at higher resolutions. Still, I would love to see a 2x bigger IC and its effect on performance, whether it helps and by how much.
Which RT games are non-NV-sponsored? Dirt 5 is certainly one of them, but then the question is whether that's just because it's heavily AMD-optimized, and whether that has a detrimental effect on Nvidia hardware.

P.S. RTX 2080 Super vs RX 6800 XT shows the same pattern: the performance difference increases with resolution.
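The scaling pattern in those numbers can be made explicit by dividing each card's 4K lead by its 1080p lead (average results only, RX 5700 = 100%, figures copied from the comparison above):

```python
# Relative average performance vs RX 5700 (=100%), from the Techspot-based
# comparison quoted above.
averages = {
    "RX 6800 XT": {"1080p": 168, "2160p": 211},
    "RTX 3080":   {"1080p": 160, "2160p": 223},
    "RTX 3090":   {"1080p": 167, "2160p": 246},
}

for card, r in averages.items():
    growth = r["2160p"] / r["1080p"]
    print(f"{card}: lead over RX 5700 grows {growth:.2f}x from 1080p to 4K")
```

Both Ampere cards grow their lead noticeably more than the 6800 XT as resolution rises, which is exactly the pattern described above.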
 

Guru

Senior member
May 5, 2017
818
331
106
I don't get ray tracing in its current form. I think tracing rays is amazing, it's a fantastic technology, but it's at least 3-4 years too early to do it properly in real time.

Right now you get *slightly* better shadows for a 30% to 60% performance drop. Another game might do reflections, so you get slightly better reflections compared to rasterized reflections, but again at about a 50% performance drop! It's not worth it at all!

Nvidia jumped the gun. They released the utter-garbage 2000 series: uber expensive, with pathetic performance, and they had to sell it basically on "features", so they forced ray tracing way too early. They knew it tanked performance insanely, so they decided to basically run games at 720p, upscale them to 4K, and live with garbage graphics.

So you enable ray tracing to get *supposedly* better graphics, but it runs like garbage, so you enable DLSS and ruin your image quality. You get NONE of the benefits! You essentially have worse graphics running at worse fps, all in order to have mildly better shadows, which actually become worse through DLSS upscaling.
 

PhoBoChai

Member
Oct 10, 2017
114
363
106
OK. It's possible anything around 50% is enough and the reason for losing in 4K is that Ampere better uses its higher FP32 count. [...]
That's correct, those numbers are accurate. They clearly show the 6800 XT scaling with higher resolution, and at 4K it really shines vs the 5700/XT. It's basically scaling as it should.

Ampere, going down in resolution, scales worse and worse for all of its TFLOPS. That's why NV hyped up 8K gaming.

As for a larger IC: you would increase the hit rate, but at the same time you increase latency due to the way cache partitions work. So every hit you get is slightly slower, while you also get a larger die and higher power use. At some point, more cache isn't the solution anymore.

As for RT games on PC that aren't NV-sponsored, so far there are only Dirt 5, Godfall and the WoW Shadowlands expansion. On PS5, we've seen amazing RT results from really modest hardware, only 36 CUs.
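The hit-rate-vs-capacity trade-off can be sketched with the classic square-root rule of thumb for cache miss rates (miss rate roughly halves when capacity quadruples). This is a generic textbook approximation anchored to the 58%-at-128MB figure from earlier in the thread, not AMD data:

```python
import math

def estimated_hit_rate(capacity_mb, anchor_mb=128, anchor_hit=0.58):
    """Square-root rule of thumb: miss rate scales with 1/sqrt(capacity).
    Anchored to the 58% hit rate at 128 MB quoted in the thread."""
    miss = (1 - anchor_hit) * math.sqrt(anchor_mb / capacity_mb)
    return max(0.0, 1.0 - miss)

for mb in (64, 96, 128, 256):
    print(f"{mb:>3} MB -> ~{estimated_hit_rate(mb):.0%} estimated hit rate")
```

By this crude model, doubling the cache to 256 MB would push the hit rate from 58% to only about 70%: diminishing returns, before even counting the extra die area, power, and latency mentioned above.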
 

TESKATLIPOKA

Senior member
May 1, 2020
265
298
96
I couldn't find RT performance charts for Godfall anywhere.
Either game developers will release patches for their RT games to improve AMD performance, or we could very well end up in a situation where, in some games, RT is better on one vendor's hardware and worse on the other.
 

uzzi38

Golden Member
Oct 16, 2019
1,220
2,250
96
I'm confused, is he stating DXR enabled is faster than DXR disabled? Edit: Yeah, that's what the video shows indeed. Hm...
It enables VRS by default, but AFAIK the implementation doesn't play well on Nvidia hardware right now and can cause frequent crashes.
 
