> As always, the largest differences in performance are seen in actual games. One cannot predict their GPU's performance on future games. All this average frames/sec across 19 games crap is just something for forum users to argue over ad nauseam, over and over again for every new GPU release.

I think what you wrote is absolutely correct. It's too soon to draw conclusions about which product is better. Both of them have their advantages and disadvantages.
> I used to focus on the 4K results as well, until I started shopping around for a new gaming monitor and realized how few gaming-oriented 4K monitors are out there (at least in sizes >30"). So until some more gaming 4K monitors hit the market, I'll continue to suffer along on my UW (#FirstWorldProblems), which makes the 1440p results more relevant to me. I do understand, though, that the percentage of 4K gamers is much higher among enthusiast-level GPU buyers.

Yep, it's the only thing I look at. Can't go back to less than 4K, mainly because it allows you to have much larger screens than 1080p.
> AMD Fine Wine™ obviously...

But I can't imagine how [assorted promises for the future] is what gives the 6800 XT the actual win, when everything else is "basically a toss-up"?
Why do people say IC can't help much at 4K?

> Die shots of the new console APUs are taking so long!
There's some "heated discussion" right now about the Series X supposedly underperforming, but the possibility that some amount of Infinity Cache in the PS5 is making its presence felt is still open. As on the desktop parts, it can't help much at 4K, but at lower resolutions it works wonders.
I'm curious whether AMD will also put some amount of IC alongside RDNA2 inside the next mobile APUs. If they do, they could nullify Intel's challenge with Xe.
> Why do people say IC can't help much at 4K?

Once/if there are 40-CU 6000-series cards outperforming a 5700 XT at 1440p/4K with lower bandwidth, people will understand that IC helps a lot at 4K and just happens to be overkill for lower resolutions.
> Why do people say IC can't help much at 4K?

Well, I can't respond to you, but I keep seeing people much more knowledgeable than me saying this, and from the results I'm seeing, these don't look like 4K cards the way Ampere does.
The 128 MB cache's hit rate at 4K is 58%. That's not just a ~2.1x bandwidth multiplier on VRAM; it's a perf boost in itself, since on-die cache access is about 10x faster than memory in terms of latency.
If it's reduced to 68 MB for the PS5, it should still have a decent hit rate, since it's not a linear drop; it would be ~35-40%. The PS5 has good memory bandwidth relative to its 36 CUs, so it doesn't need as high a hit rate as the 80-CU N21. It would definitely give the PS5 an edge, especially in ray tracing.
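A first-order sketch of why a cache hit rate amplifies effective VRAM bandwidth. The 1/(1-h) model below is a simplification that ignores on-die cache bandwidth limits and access patterns; the hit rates are the ones quoted above.

```python
def effective_bandwidth_multiplier(hit_rate: float) -> float:
    """If a fraction `hit_rate` of memory requests is served from the
    on-die cache, only (1 - hit_rate) of the traffic reaches VRAM, so
    VRAM bandwidth is effectively amplified by 1 / (1 - hit_rate)."""
    return 1.0 / (1.0 - hit_rate)

# 58% hit rate at 4K (figure quoted above): ~2.4x in this simple model,
# in the same ballpark as the ~2.1x multiplier mentioned.
print(round(effective_bandwidth_multiplier(0.58), 2))   # 2.38

# Speculative ~35-40% hit rate for a smaller 68 MB cache:
print(round(effective_bandwidth_multiplier(0.375), 2))  # 1.6
```

The model only captures bandwidth amplification; the latency win of an on-die hit is a separate effect on top of it.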
> Well, I can't respond to you, but I keep seeing people much more knowledgeable than me saying this, and from the results I'm seeing, these don't look like 4K cards the way Ampere does.

More knowledgeable people say that? Then they are plain wrong.
It sure helps, but just helps.
> Once/if there are 40-CU 6000-series cards outperforming a 5700 XT at 1440p/4K with lower bandwidth, people will understand that IC helps a lot at 4K and just happens to be overkill for lower resolutions.

Correct. Computerbase.de just did a memory-OC analysis for the 6800 XT. They found almost no performance gains from memory OC, which clearly shows the GPU isn't bandwidth-limited. The IC does its job really well across all resolutions.
> Radeon RX 6800 series owner

Can you do tests with AIDA64 GPGPU?
Benchmark                  Result         Run Time   Build Time
---------------------------------------------------------------
Memory Copy                198397 MB/s    4641 ms
 - 15 MB Block             167317 MB/s    0 ms
 - 32 MB Block             183486 MB/s    0 ms
 - 64 MB Block             190988 MB/s    1 ms
 - 128 MB Block            195241 MB/s    1 ms
 - 256 MB Block            197401 MB/s    3 ms
 - 512 MB Block            198342 MB/s    5 ms
 - 1024 MB Block           198158 MB/s    10 ms
 - 1536 MB Block           198397 MB/s    15 ms
Single-Precision FLOPS     4961 GFLOPS    10640 ms
 - float1                  4961 GFLOPS    886 ms     969 ms
 - float2                  4928 GFLOPS    892 ms     141 ms
 - float4                  4909 GFLOPS    896 ms     125 ms
 - float8                  4819 GFLOPS    913 ms     141 ms
 - float16                 4794 GFLOPS    917 ms     125 ms
32-bit Integer IOPS        4961 GIOPS     9110 ms
 - int1                    4961 GIOPS     887 ms     0 ms
 - int2                    4935 GIOPS     891 ms     0 ms
 - int4                    4935 GIOPS     891 ms     0 ms
 - int8                    4832 GIOPS     910 ms     0 ms
 - int16                   4794 GIOPS     917 ms     0 ms
> More knowledgeable people say that? Then they are plain wrong.

OK. It's possible anything above 50% is enough, and the reason for losing at 4K is that Ampere makes better use of its higher FP32 count.
They compare 4K results against the 2x-FP32 Ampere designs:

But they don't actually say the correct thing: the 3080 & 3090 only scale well for their TFLOPS at 4K. They're quite inefficient in perf/TFLOP at 1080p and 1440p.

The 6800 XT is efficient at all resolutions. The performance it achieves for its raw shader power is superior.

If you're a 1440p gamer, the 6800 XT is superior.
If you're a high-refresh 1080p gamer, the 6800 XT is much better.
If you're a 4K gamer, at stock, the 3080 is superior.
Overclocked, the 6800 XT is better.

The only scenario where the 3080 is better is games that have DLSS 2.0 and NV-sponsored RTX titles.
Saying that the 3080 is better at RT is misleading, because it ignores the fact that in non-NV-sponsored RT games, the 6800 XT competes very well in RT.
> NV22 XT 186-211 W TGP (RX 6700 XT)
> NV22 XTL 146-156 W TGP (RX 6700?)
> 12 GB GDDR6

Very high TGPs for both, in my opinion -> most likely very high frequencies.
> OK. It's possible anything above 50% is enough, and the reason for losing at 4K is that Ampere makes better use of its higher FP32 count.

That's correct; those numbers are accurate. They clearly show the 6800 XT scaling with higher resolution, and at 4K it really shines vs the 5700/XT. It's basically scaling as it should.
I made a quick performance comparison at different resolutions based on Techspot's data, with the RX 5700 as the baseline (RX 5700 vs RX 6800 XT vs RTX 3080 vs RTX 3090):

1080p average: 100% vs 168% vs 160% vs 167%
1080p minimum: 100% vs 166% vs 155% vs 163%

1440p average: 100% vs 189% vs 184% vs 199%
1440p minimum: 100% vs 184% vs 182% vs 193%

2160p average: 100% vs 211% vs 223% vs 246%
2160p minimum: 100% vs 214% vs 219% vs 241%
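For reference, percentages like these are simple normalizations against the baseline card. A sketch of the arithmetic (the FPS values below are invented placeholders, not measured numbers):

```python
# Normalize each card's average FPS to a baseline card, as in the
# comparison above. The FPS values here are made-up placeholders
# chosen only to illustrate the calculation.
fps = {"RX 5700": 60.0, "RX 6800 XT": 126.6, "RTX 3080": 133.8, "RTX 3090": 147.6}

baseline = fps["RX 5700"]
relative = {gpu: round(100 * value / baseline) for gpu, value in fps.items()}
print(relative)  # the baseline card always lands at 100
```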
Both Amperes increase their lead over the RX 5700 much more than the RX 6800 XT does. On the other hand, the size of the IC doesn't negatively affect the RX 6800 XT's performance against the RX 5700 as resolution increases. Still, I would love to see a 2x bigger IC and its effect on performance, whether it helps and by how much.
Which RT games are non-NV-sponsored? Dirt 5 is certainly one of them, but then the question is whether that's not because it's heavily AMD-optimized, and whether it has a detrimental effect on Nvidia's hardware.
P.S. RTX 2080 Super vs RX 6800 XT shows the same pattern: the performance difference increases with resolution.