Question Gaming: Ryzen 7 3800XT or i7-10700k?


Dave3000

Golden Member
Jan 10, 2011
1,353
91
91
Right now I have an i7-4930K with a GTX 1080 Ti, but I want better gaming performance and to be prepared for FS2020. Should I go for the i7-10700K or the Ryzen 7 3800XT? I don't need more than 8 cores for gaming right now, so I see no point in a 3900X or 3950X (even though I can afford those) just for gaming now or any time soon, especially since the upcoming PS5 and Xbox Series X are going to have 8-core/16-thread CPUs.
 

CP5670

Diamond Member
Jun 24, 2004
5,513
590
126
I went back and forth on this recently too and decided on the 10700K, even though the 3800X and 3900X do seem like better value for the money. For general web browsing and productivity, more threads are better in benchmarks but make no practical difference, since any desktop is fast enough and I typically use a laptop or phone anyway. I do occasional content creation but find that performance in things like 7zip or video encoding is not that important, since I can just browse websites or do other things while it's running. On the other hand, 80fps vs 90fps 1% lows in a VR game can affect the experience a lot. To the extent that the CPU matters at all, most games (and also emulators) are still limited by performance on one thread.
 

HurleyBird

Platinum Member
Apr 22, 2003
2,684
1,268
136
Here I am, still sitting on a 4770K, telling myself I should wait for AM5 so as to have a platform with a future (and hoping that AMD's execution remains excellent so it's not too long a wait), and here you guys are debating 3800XT Vs. 10700K on the cusp of a Zen 3 launch that is likely to embarrass both those contenders...
 

epsilon84

Golden Member
Aug 29, 2010
1,142
927
136
Here I am, still sitting on a 4770K, telling myself I should wait for AM5 so as to have a platform with a future (and hoping that AMD's execution remains excellent so it's not too long a wait), and here you guys are debating 3800XT Vs. 10700K on the cusp of a Zen 3 launch that is likely to embarrass both those contenders...

To be fair, if you've held on this long on a 4770K, you've more than got your money's worth. TBH I find the 'future-proof platform' thing a bit overblown. I usually upgrade my CPU and mobo in one go... sure, it's nice that I can possibly run next-gen Ryzen on my current B450 motherboard, and I may just do that, but there is usually some feature or standard that older-gen motherboards don't support, which makes me want to upgrade to a newer chipset anyway.
 
  • Like
Reactions: Tlh97 and SMU_Pony

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
@Makaveli

Makes me wish someone - anyone! - would start selling binned A-die or M-die from Samsung. It's been a while since b-die became the gold standard in DDR4. Sadly, I think that may not happen, since DDR5 is coming in 2021 or so.

Don't they already somewhat bin them? Searching Newegg for 3600 CL14 makes me believe they do; mostly G.Skill offerings show up. I'm not sure to what extent they go as far as binning. Maybe it's just a "will it boot XXXX MHz @ x.xx V" test and they take it from there?

For something that's no longer in production, there looks to be plenty still available on the market. Somewhat of a price premium, but it looks tolerable.

When I was looking I decided to just buy the cheapest 3200 MHz CL14 b-die kit and take my chances on the silicon lottery.

I bought this one about a week before Zen 2 launched. It's currently $100 shipped. Sure, you can get cheaper, but arguably not better RAM. Of course, there's always the possibility they bin tighter now for more profit, so it's a huge YMMV.


I've had my kit all the way up to 4333 MHz playing around, but settled on 3600 CL14 with tightened timings... until Zen 3 drops, most likely.
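
Side note for anyone comparing kits like these: first-word latency is just CAS cycles divided by the memory clock, so you can sanity-check any speed/CL combo in a couple of lines. A quick Python sketch (the kits below are just the ones discussed in this thread):

```python
# First-word latency in nanoseconds: CAS cycles divided by the memory clock.
# "DDR4-3600" is the transfer rate in MT/s; the actual clock is half that.
def cas_latency_ns(transfer_rate: int, cas_cycles: int) -> float:
    clock_mhz = transfer_rate / 2        # e.g. DDR4-3600 -> 1800 MHz
    return cas_cycles / clock_mhz * 1e3  # cycles / MHz = us, so x1000 -> ns

for rate, cl in [(3200, 14), (3200, 16), (3600, 14), (3600, 16)]:
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")

# DDR4-3200 CL14:  8.75 ns
# DDR4-3200 CL16: 10.00 ns
# DDR4-3600 CL14:  7.78 ns  <- why 3600 CL14 kits are the prized bin
# DDR4-3600 CL16:  8.89 ns
```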
 

Makaveli

Diamond Member
Feb 8, 2002
4,723
1,058
136
Sometimes RAM quality does help.

Within the last 2 months, Asus has dropped a lot of BIOS updates for X570 boards.

The most recent 2602 BIOS for my board was released and then pulled; a lot of people were having issues with memory and D.O.C.P.

What I noticed is that everyone using B-die memory has had no issues with the BIOS releases, while those on lesser memory have not been so lucky.
 
  • Like
Reactions: Tlh97 and Elfear

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,856
136
@Makaveli

The mobo OEMs should be looking at "lesser memory" since b-die is no longer being made. Plenty of Micron e-die out there. And Hynix CJR.

@Kenmitch

They bin everything for aftermarket DIMMs. Problem is none of it's A-die or M-die.
 

Makaveli

Diamond Member
Feb 8, 2002
4,723
1,058
136
@Makaveli

The mobo OEMs should be looking at "lesser memory" since b-die is no longer being made. Plenty of Micron e-die out there. And Hynix CJR.

@Kenmitch

They bin everything for aftermarket DIMMs. Problem is none of it's A-die or M-die.

It's possible they are making a lot of changes to the X570 BIOSes now for Zen 3.

It has me curious whether we are going to see increased IF clocks, and also better support for higher-clocked DDR4 sticks.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
They bin everything for aftermarket DIMMs. Problem is none of it's A-die or M-die.

How much cheaper are the A-die and M-die? It's not like you can't get b-die anymore.

What I noticed is that everyone using B-die memory has had no issues with the BIOS releases, while those on lesser memory have not been so lucky.

I've run into a few UEFI updates that didn't play nicely with my b-die at stock clocks (3200 CL14). I don't remember offhand if it was the Asus C6H Hero or my MSI X570, but I'm currently leaning towards the C6H. I do remember clearly that the revision listed improved memory compatibility as one of its new features. Thought it was somewhat ironic when it didn't even like my b-die.
 

Campy

Senior member
Jun 25, 2010
785
171
116
B-die has supposedly been "out of production" for over a year now, yet new kits of B-die with higher clocks and tighter timings keep getting released. I think Samsung are still making the ICs. Perhaps they don't sell their own kits with it anymore, but if supply really were drying up I think we would notice.
 
  • Like
Reactions: lightmanek

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,856
136
How much cheaper are the A-die and M-die? It's not like you can't get b-die anymore.

You can get b-die, but Samsung isn't producing any more b-die ICs. A-die and M-die have to date only shown up in server/workstation-oriented "bare" DIMMs - the major memory OEMs haven't started using them in enthusiast-class products (XMP settings, heatspreaders, etc.).
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
This is an interesting situation. For most gaming scenarios, even a 10600K is faster at stock than the fastest OC-tuned Zen 2 XT.

However, there are a couple of exceptions where the gap is fairly small, and the case of Counter-Strike: Source, where it's basically tied. That seems to be a case where the game is so old (closing in on two decades!) that the most crucial elements of the game actually fit inside the generous Zen 2 cache, bypassing the latency limitations of the Zen chiplet architecture.

Latency being a bit worse on Zen 2 is irrelevant to 99% of things outside of gaming, because you're dealing with predictable, serialized loads (encoding, compression, distributed computing a la Folding/SETI) which are not as sensitive to it.

Both are more than fast enough for all generic light duties of web, office apps, etc., but general-purpose and heavy compute scenarios simply favor Zen 2, often by enormous margins when also considering cost and power consumption.

Circling back to gaming, if you absolutely had to buy something over the next few days and wanted the best overall gaming options, they would be the 10600K, 10700K, and 10900K. The 6C/12T 10600K is the best value of the three, and I find the 10900K a bit silly, as 10C/20T is not enough of an advantage over the 8C/16T 10700K to be worth the exaggerated price and aggressive cooling requirements. With a typical nice AIO or big air cooler like a Noctua NH-D15 chromax, I think the 10600K/10700K would actually be superior in terms of keeping a more consistent all-core OC vs the i9.

BUT, we are only a few days away from FS2020 moving from a limited alpha build to a public beta, which will probably be fairly representative of final performance. Given that it looks to be an extremely sophisticated and demanding title, if you are most interested in building a strong PC for this title specifically, it will be worth waiting to see what the actual performance is like with current hardware.

Questions such as:

Does it scale to very high core counts? If so, it might make a case not only for the i9, but the 3900X/3950X could potentially outpace Intel in this title.

Does it, on the other hand, scale poorly past 6 cores? In that case, a 10600K might make the most sense, or, in the event it runs particularly well on Zen 2, perhaps a 3600/3600X/3700X build.

Is it something like F1, where all the common CPUs run it well past normal refresh rates? Maybe that would let you spend less on a CPU now and plan on a drop-in CPU upgrade in a few years.

Is it GPU- or CPU-bound at your expected display resolution and refresh rate? It's a world of difference attempting 1440p/144 vs 4K/60, for example. With Freesync/G-Sync, a situation with fluctuations between 110 and 130 fps is totally fluid, while the same GPU might struggle to lock 4K/60, which is VERY jarring when you have drops under 60 and can experience tearing and input lag. The ramifications of these limitations drastically change where you need performance.

At 4K/60 you need an abundance of GPU grunt but not much in the way of CPU to keep the most demanding titles at smooth framerates, while at 1440p with VRR you actually need both solid GPU and solid CPU performance to hit those goals.

Take two PCs doing 4K/60 with a 2080 Super, one with a Ryzen 3300X and one with an overclocked 10900K: you would not be able to tell the difference between them in virtually any gaming scenario. As long as the CPU side of the minimums is beyond 60 fps, you would never be able to see any reduction from having a $120 CPU vs a $700(?) one. But change that to 1440p, mixed settings, and 144 Hz, and suddenly you see big differences, as even a 9900K/10700K/10900K at a max tuned OC is STILL too slow to maintain 144 fps in the most demanding titles, no matter how strong your GPU is. Lower the settings to 800x600 and everything at minimum and these heavy games still can't maintain 144 with current CPUs.
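
The underlying model is simple enough to write down: the fps you actually get is roughly the lower of what the CPU can simulate and what the GPU can render at your resolution. A toy sketch, with every number invented purely for illustration:

```python
# Toy bottleneck model: delivered fps is the lower of the CPU's simulation
# rate (mostly resolution-independent) and the GPU's render rate (which
# falls as resolution rises). All numbers below are made up.
cpu_fps = {"Ryzen 3300X": 95, "10900K OC": 165}
gpu_fps = {"1440p": 170, "4K": 62}   # hypothetical 2080 Super

for cpu, c in cpu_fps.items():
    for res, g in gpu_fps.items():
        bound = "CPU" if c < g else "GPU"
        print(f"{cpu} @ {res}: ~{min(c, g)} fps ({bound}-bound)")

# At 4K both systems sit at ~62 fps (GPU-bound) and are indistinguishable.
# At 1440p/144 the 3300X caps out at ~95 fps while the 10900K clears 144.
```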


Just as an update, since we couldn't have known this before: FS2020 turns out to be interesting.

SIGNIFICANT CPU bottleneck at 1080p and 1440p, even with a top-end 2080 Ti.

Poor CPU multithreading: although it will use more than 4 cores, 4 cores take the overwhelming brunt of the load.

Because of the horrific CPU bottleneck, it's not terribly hard on the GPU at sub-4K. In most scenarios you'll be waiting on your CPU to do its work while your GPU idles. Hence, e.g., a 51 fps CPU limit locks you down whether you run minimum or high details with a 2060/5600 or better.

With a stock 3.6 GHz 9900K and god-awful 3200 CL16 RAM, the 9900K is 10%+ faster at 1080p/1440p than the Ryzen 3800XT (the fastest Ryzen CPU for gaming available so far).


For FS2020, neither the 9900K (the 10700K is more or less identical, though with a more aggressive stock turbo) nor the 10900K nor the 3700/3800/3900/3950 makes sense for the title. Scaling beyond 6C/12T and even 4C/8T is so flat that even a 10100 or Ryzen 3100/3300X would give nearly the same performance, though the 3700X and 10600K make more sense.

Given OC headroom, a 5.2 GHz all-core 10600K with DDR4-3600 or better is probably about the optimal CPU for this title, boosting that ~10% lead seen in the benchmarks closer to 15-17%.
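
For the curious, napkin math along these lines gets you to that range; the stock all-core clock and the fraction of clock gain that shows up as fps are assumptions, not measurements:

```python
# Napkin math behind the 15-17% estimate. The stock all-core clock and
# the clock-to-fps scaling factor are assumptions for illustration.
stock_clock_ghz = 4.5      # hypothetical 10600K all-core at stock
oc_clock_ghz    = 5.2
stock_lead      = 0.10     # ~10% lead over the 3800XT in the benchmarks

clock_gain = oc_clock_ghz / stock_clock_ghz - 1       # ~15.6% more clock
# Games rarely scale 1:1 with clock; assume roughly 40% of it shows up.
oc_lead = (1 + stock_lead) * (1 + 0.4 * clock_gain) - 1
print(f"clock gain ~{clock_gain:.1%}, estimated lead ~{oc_lead:.1%}")
# -> clock gain ~15.6%, estimated lead ~16.8%
```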

It's an interesting, and seemingly VERY unoptimized, title so far. The fact that it scales so horribly across multiple cores is disappointing. It's also shown itself to be unstable for quite a number of users on the subreddit so far.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
@Arkaign

Any sign that L3 cache size has an impact on performance?

Hi! I saw a Reddit post where a guy swapped between a 3300X and a 3700X with basically zero performance difference, and that's 16MB vs 32MB, so the game doesn't seem particularly sensitive there. Perhaps the 8MB in the APU models might be a bridge too far, but those are as yet hard to come by in the more professional reviewer landscape.

Hopefully a big Digital Foundry deep dive is on the way. Even more hopefully, there is optimization to come that will significantly improve things and take more advantage of high core counts.

As a tip that seems to be backed up by many: if you have multiple displays, or run windowed mode, snapping your map to a different location and moving it back (or leaving it there) often seems to double performance (!!!). That sounds like a bug to me for sure, and could lead to very large improvements when fixed. I take it with a certain level of skepticism for now, but it's at least something to be hopeful about.

I guess this is another good example of why building/speccing for an unreleased title is always a gamble, and it shows just how important it is to gather real-world testing of component choices with the released product before actually buying anything. Someone wanting to build a dedicated FS2020 rig would (unless something fairly drastic changes) find a 9900K/10700K/10900K/3700X/3800X/3900X/3950X completely underutilized by this software. It's even barely utilizing 6C/12T over 4C/8T, which shocks me. 😔
 

CropDuster

Senior member
Jan 2, 2014
366
45
91
This mirrors what I saw with the alpha over the last few months. My 1080 Ti would sit around 50% usage with my 5820K at 4.4 GHz at 1080p. Very little difference between HT on and off; it seemed to reduce some stutters but didn't increase FPS with the extra threads.
 
  • Like
Reactions: Arkaign

DrMrLordX

Lifer
Apr 27, 2000
21,640
10,856
136
Hi! I saw a Reddit post where a guy swapped between a 3300X and a 3700X with basically zero performance difference, and that's 16MB vs 32MB

Unfortunately, Zen 2 is restricted in L3 amount per CCX, so if the game is really only using 4 threads then it will only have locality to one 16MB block of L3. I was thinking more of the 10600K vs the 10900K (12MB vs 20MB).
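
To put rough numbers on that, here's the L3 a 4-thread game can actually reach on each part, using the published cache sizes; assuming the hot threads stay on a single CCX is the simplification:

```python
# L3 actually reachable by a game whose hot threads sit together.
# Zen 2 splits its L3 per CCX; Comet Lake shares one L3 across all cores.
l3_visible_mb = {
    "Ryzen 3 3300X":  16,  # one CCX with 16 MB - all of it visible
    "Ryzen 7 3700X":  16,  # 2 CCXs x 16 MB, but 4 threads sit in one CCX
    "Core i5-10600K": 12,  # monolithic L3, shared by every core
    "Core i9-10900K": 20,  # monolithic L3, shared by every core
}
for cpu, mb in l3_visible_mb.items():
    print(f"{cpu}: ~{mb} MB L3 visible to a 4-thread game")

# 3300X vs 3700X: same effective cache, so the null result makes sense.
# 10600K vs 10900K: 12 vs 20 MB, so a cache effect could show up there.
```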
 
  • Like
Reactions: Tlh97 and Arkaign

TheGiant

Senior member
Jun 12, 2017
748
353
106
Just as an update, since we couldn't have known this before: FS2020 turns out to be interesting.

SIGNIFICANT CPU bottleneck at 1080p and 1440p, even with a top-end 2080 Ti.

Poor CPU multithreading: although it will use more than 4 cores, 4 cores take the overwhelming brunt of the load.

Because of the horrific CPU bottleneck, it's not terribly hard on the GPU at sub-4K. In most scenarios you'll be waiting on your CPU to do its work while your GPU idles. Hence, e.g., a 51 fps CPU limit locks you down whether you run minimum or high details with a 2060/5600 or better.

With a stock 3.6 GHz 9900K and god-awful 3200 CL16 RAM, the 9900K is 10%+ faster at 1080p/1440p than the Ryzen 3800XT (the fastest Ryzen CPU for gaming available so far).


For FS2020, neither the 9900K (the 10700K is more or less identical, though with a more aggressive stock turbo) nor the 10900K nor the 3700/3800/3900/3950 makes sense for the title. Scaling beyond 6C/12T and even 4C/8T is so flat that even a 10100 or Ryzen 3100/3300X would give nearly the same performance, though the 3700X and 10600K make more sense.

Given OC headroom, a 5.2 GHz all-core 10600K with DDR4-3600 or better is probably about the optimal CPU for this title, boosting that ~10% lead seen in the benchmarks closer to 15-17%.

It's an interesting, and seemingly VERY unoptimized, title so far. The fact that it scales so horribly across multiple cores is disappointing. It's also shown itself to be unstable for quite a number of users on the subreddit so far.
Well, flight simulators through the ages have always performed badly at release, even on top-end hardware.
X-Plane, as an example, is pretty much single-threaded.
The test requires more data; I don't see low-percentile FPS.
This is a game where the new cores/Ryzens will show their potential.
 
  • Like
Reactions: Tlh97 and Arkaign

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Well, flight simulators through the ages have always performed badly at release, even on top-end hardware.
X-Plane, as an example, is pretty much single-threaded.
The test requires more data; I don't see low-percentile FPS.
This is a game where the new cores/Ryzens will show their potential.

True, I'd like to see a lot more info gathered, and of course that will be extra challenging with the live weather and related online components, so those may need to be disabled for some tests to get 100% repeatable results. And of course bug fixes and optimizations will be crucial to keep up with, as the game may become more effective with newer GPU drivers and more efficient in its multithreading.

It does feel like a beta release: widespread crashing, failures to install, and inconsistent performance. Incredibly ambitious, but I wish it had come out of the gate at least slightly more polished.
 
  • Like
Reactions: scannall

tamz_msc

Diamond Member
Jan 5, 2017
3,821
3,643
136
Everything I've seen so far suggests that Flight Simulator 2020 is much more demanding on the GPU than the CPU. At 4K max settings on a 2080 Ti, performance is below 30 FPS when flying over big cities like NY. It also uses a ton of RAM at these settings, 20+ GB to be precise.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
Everything I've seen so far suggests that Flight Simulator 2020 is much more demanding on the GPU than the CPU. At 4K max settings on a 2080 Ti, performance is below 30 FPS when flying over big cities like NY. It also uses a ton of RAM at these settings, 20+ GB to be precise.

It CAN be, at ultra settings and 4K; however, it slams into a CPU wall at 1080p and 1440p, even with a 2080 Ti. Even the very best current AMD and Intel CPUs choke on it 😑
 

CP5670

Diamond Member
Jun 24, 2004
5,513
590
126
I guess this is another good example of why building/speccing for an unreleased title is always a gamble, and it shows just how important it is to gather real-world testing of component choices with the released product before actually buying anything. Someone wanting to build a dedicated FS2020 rig would (unless something fairly drastic changes) find a 9900K/10700K/10900K/3700X/3800X/3900X/3950X completely underutilized by this software. It's even barely utilizing 6C/12T over 4C/8T, which shocks me. 😔

I think a lot of games are like this. Even if they make use of 4-8 or more cores, they never load down all the threads continuously the way rendering or encoding does, and the real bottleneck is still on one thread.
 
  • Like
Reactions: Arkaign

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
I think a lot of games are like this. Even if they make use of 4-8 or more cores, they never load down all the threads continuously the way rendering or encoding does, and the real bottleneck is still on one thread.

That's a fair observation, and it's why the 3300X and 7700K are so good in so many titles. There are a few exceptions, but it's far more common to see heavy 4C/8T use as the practical limit for scaling.

An ambitious late-2020 title from a company with deep experience in 8C CPU optimization (Xbox) surprised me with just how rough it is, essentially idling most cores while being CPU limited. On the FS2020 subreddit you see people with a 3900X and a 2080 Super testing at minimum details and resolution and still CPU-bottlenecking hard, even though the overall gauge shows 50% or less, even down to 30%. Which is technically accurate; it's just that the game isn't well coded enough to do much past the first 4 cores lol.
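
The gauge behavior is just averaging arithmetic: Windows reports utilization across all logical processors, so a game saturating only 4 threads barely moves the needle on a wide chip. A quick sketch (the 4-busy-thread load is the assumption; the thread counts are the real SMT totals):

```python
# Overall CPU gauge when a game pins ~4 threads and leaves the rest idle.
# The gauge averages busy time across all logical processors.
busy_threads = 4
for cpu, logical in [("Ryzen 3 3300X", 8), ("Core i5-10600K", 12),
                     ("Ryzen 9 3900X", 24)]:
    overall = busy_threads / logical
    print(f"{cpu}: gauge reads ~{overall:.0%} while fully CPU-bound")

# 3300X: ~50%, 10600K: ~33%, 3900X: ~17% - "50% or less, even down to 30%"
# is exactly what a hard 4-thread bottleneck looks like on wider CPUs.
```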
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,628
158
106
That's a fair observation, and it's why the 3300X and 7700K are so good in so many titles. There are a few exceptions, but it's far more common to see heavy 4C/8T use as the practical limit for scaling.

The benchmark environment is ideal for lower core/thread-count CPUs.
A regular user will most likely have a few programs/apps running in the background; something might decide to update or run a scan in the middle of your game, and then the extra resources that are useless on a clean, optimized system will have a use.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,377
126
The benchmark environment is ideal for lower core/thread-count CPUs.
A regular user will most likely have a few programs/apps running in the background; something might decide to update or run a scan in the middle of your game, and then the extra resources that are useless on a clean, optimized system will have a use.

That's something I've always noticed, and I was glad to see Gamers Nexus cover it. All the bloat and crapware on OEM systems can really drag them down. And even for stuff you might want, it really doesn't need to be running all the time. Everything seems to want to autostart with Windows, but disabling autostart for excess stuff and opening things only when you need them is far more efficient for sure.