Article "How much does the CPU count for gaming?" - The answer is suprising.


AnitaPeterson

Diamond Member
Apr 24, 2001
6,022
561
126
Engadget actually went off the beaten path and pitted a Ryzen 3 3300X against an i9-10900.
Lo and behold, the quad-core budget CPU holds up quite well against the 10-core beast when running on otherwise similar specs (motherboard, RAM, GPU).
The conclusion? "If you’re building a gaming PC, unless you’re aiming for ultra-high framerates over everything else, you may be better off putting that money towards a better GPU."

 

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Here are my results using BF5, Skylake CPUs, and 1% minimums from the chart above. I weighted an SMT thread as 0.25 of a core. I'll go through and see if I can find an SMT on/off test later to make this more accurate. My results show very high correlation with small error between 1% lows and core count compared to much weaker correlation and high error between 1% lows and frequency.


CPU      All-core turbo (GHz)   1% min (fps)
7600K    4.0                    50
7700K    4.4                    77
9400F    3.9                    77
9600K    4.3                    84
8700K    4.3                    114

Correlation, r: 0.5116
Regression R^2: 0.2618
Regression, std error: 24.338




CPU      Effective cores (SMT thread = 0.25 core)   1% min (fps)
7600K    4                                          50
7700K    5                                          77
9400F    6                                          77
9600K    6                                          84
8700K    7.5                                        114

Correlation, r: 0.9615
Regression R^2: 0.9894
Regression, std error: 3.324
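For anyone who wants to poke at these numbers themselves, here is a minimal sketch of the simple fits involved (assuming Python with numpy; only the five rows shown above are typed in, so the output won't necessarily match the spreadsheet figures exactly):

```python
# Correlation and simple linear fit of 1% lows against one predictor at a time.
# Data typed in from the two tables above; approximate, not the original spreadsheet.
import numpy as np

def simple_fit(x, y):
    """Pearson r, R^2, and residual standard error for y ~ a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    b, a = np.polyfit(x, y, 1)                       # slope, intercept
    resid = y - (a + b * x)
    see = np.sqrt(np.sum(resid**2) / (len(y) - 2))   # standard error of the estimate
    return r, r**2, see

lows = [50, 77, 77, 84, 114]                 # 7600K, 7700K, 9400F, 9600K, 8700K
all_core_turbo = [4.0, 4.4, 3.9, 4.3, 4.3]   # GHz
eff_cores = [4, 5, 6, 6, 7.5]                # SMT thread weighted as 0.25 core

print(simple_fit(all_core_turbo, lows))      # weak correlation, large error
print(simple_fit(eff_cores, lows))           # strong correlation, small error
```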

From a statistical standpoint:
1) We analyzed different chips on different test beds. So we do have different results.
2) Both cores and frequency scale with 1% lows. Determining which one contributes most of the difference requires more analysis than separate Pearson r / R^2.
3) I wouldn't assume that SMT = 25% of a regular core in Battlefield V without first confirming it, because it may affect your results substantially in the second chart. Introducing error in the input data should be avoided if possible, because it will produce error in your results.

From a test setup (result) standpoint:
1) IMO 1% min in relation to average fps is more important. If your 1% min is 50fps but your average is 100fps, that's a bad experience. If your 1% min is 50fps and your average is 60fps, that's far less stuttery. There is no good data on this, however; it has only been my personal experience and the anecdotal experience of others that suggest the difference between 1% mins and average fps is more important than raw 1% mins or raw average fps.
2) As above, the chips you included and the chips I included are different, hence there may be different results. And we both had very limited numbers of chips. I will post my data and results this evening, for clarity.

I will (as I said before) try to dig around for a more comprehensive look at 1% lows on Skylake chips (e.g. 7600, 7700, 8700, 9100, 9400, 9700, 9900, 10400, 10600, 10700), and I would also like to find more data on AMD Zen2 chips re: the same, but with 3100, 3300X, 3500X, 3600, 3600X, 3700X, 3800X - which I think would be a great, comprehensive view.
 
  • Like
Reactions: krumme and Elfear

Hitman928

Diamond Member
Apr 15, 2012
6,695
12,370
136
From a statistical standpoint:
1) We analyzed different chips on different test beds. So we do have different results.
2) Both cores and frequency scale with 1% lows. Determining which one contributes most of the difference requires more analysis than separate Pearson r / R^2.
3) I wouldn't assume that SMT = 25% of a regular core in Battlefield V without first confirming it, because it may affect your results substantially in the second chart. Introducing error in the input data should be avoided if possible, because it will produce error in your results.

From a test setup (result) standpoint:
1) IMO 1% min in relation to average fps is more important. If your 1% min is 50fps but your average is 100fps, that's a bad experience. If your 1% min is 50fps and your average is 60fps, that's far less stuttery. There is no good data on this, however; it has only been my personal experience and the anecdotal experience of others that suggest the difference between 1% mins and average fps is more important than raw 1% mins or raw average fps.
2) As above, the chips you included and the chips I included are different, hence there may be different results. And we both had very limited numbers of chips. I will post my data and results this evening, for clarity.

I will (as I said before) try to dig around for a more comprehensive look at 1% lows on Skylake chips (e.g. 7600, 7700, 8700, 9100, 9400, 9700, 9900, 10400, 10600, 10700), and I would also like to find more data on AMD Zen2 chips re: the same, but with 3100, 3300X, 3500X, 3600, 3600X, 3700X, 3800X - which I think would be a great, comprehensive view.

1) Ok, I'll wait to see your data and where it was pulled from.
2) I'll wait to see how you try to tackle this, but because the frequencies of each of the chips are actually very similar and yet we have drastic differences in 1% lows, on the surface it seems pretty clear. Obviously, if you include an 8 core chip severely underclocked to a really low frequency, it's going to have bad lows, but I think everyone would agree that's outside the scope of the discussion.
3) Like I said, that was a ballpark figure, but no one has really tested for it much so it's hard to find data on. It will also depend on how many real cores you have to begin with. SMT on BF5 will likely benefit a lot more on a 2 core CPU than an 8 core CPU because at 8 real cores, you're pretty much already saturating what the game engine can use. I did find one video of the 8700K with SMT on vs off, but you can just watch the fps counter and spot the minimums. It shows a 16.3% advantage in minimums for SMT on for the 8700K, but it's hardly exhaustive testing.

1) This is not so cut and dry, especially with high refresh monitors and VRR. Today, most gamers who want game smoothness fall into 2 camps. First are those with cheaper 60 Hz monitors, where you just want to have minimums above 60 fps so you can turn on V-sync and lock your framerate to 60fps. Second are those with high refresh rate monitors or wide-range VRR capable monitors, where you are just trying to have the highest minimums possible to take advantage of the smoother experience provided by the monitor tech. Either way, the most important thing will be how high you can get your lows; failing that, a secondary consideration is how large your fps swings are. For example, if your fps range is 30 fps to 36 fps, you have really low variance, but still a really crappy experience in terms of smoothness. If you have a 60 fps to 100 fps range, you're going to have a good experience no matter which group you fall into.

2) I included every Intel chip in the chart that wasn't essentially a rebrand of another chip. I don't think you'll find much more meaningful data, but I will wait to see what you find. Later today I'll throw in the AMD chips to see how it may change things.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
3) Like I said, that was a ballpark figure, but no one has really tested for it much so it's hard to find data on.
HTT can contribute anywhere from 0% to 100% depending on the game and what it runs.
Very much like you said, if the game already gets saturated by real cores, HTT/SMT isn't going to do anything.
3) It will also depend on how many real cores you have to begin with. SMT on BF5 will likely benefit a lot more on a 2 core CPU than an 8 core CPU because at 8 real cores, you're pretty much already saturating what the game engine can use.
On a dual core with HTT, BF5 runs 3 threads and the rest is negligible in comparison, so it will not benefit any further from it.
In BF4 you could adjust the number of threads, at least in the beginning, but after some update it, and every BF game that followed, adjusts itself, so you first have to determine how many threads the game actually runs for a given number of available logical/real cores.

Also, the number one reason for low minimums is how the games are developed for the consoles: they tune it just so that all secondary threads are done by the time the main thread wants to continue.
They don't code it into the threads, though; they just rely on the physical limitations of the cores. The main core can only run so fast, so everything else has to finish in that much time.

On Windows you have to tell the OS to put the main thread on the lowest priority, or at least lower than any other thread of the game; that way Windows will finish all secondary threads before continuing with the main thread.
https://www.youtube.com/watch?v=5W7Ebz6RXuw
"Normal" results at 2:07 avg CPU= 64.5FPS avg min CPU 29.8FPS
"Fixed" results at 4:07 avg CPU=107.9FPS avg min CPU 42.1FPS
 
  • Like
Reactions: Elfear

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
1) Ok, I'll wait to see your data and where it was pulled from.
2) I'll wait to see how you try to tackle this, but because the frequencies of each of the chips are actually very similar and yet we have drastic differences in 1% lows, on the surface it seems pretty clear. Obviously, if you include an 8 core chip severely underclocked to a really low frequency, it's going to have bad lows, but I think everyone would agree that's outside the scope of the discussion.
3) Like I said, that was a ballpark figure, but no one has really tested for it much so it's hard to find data on. It will also depend on how many real cores you have to begin with. SMT on BF5 will likely benefit a lot more on a 2 core CPU than an 8 core CPU because at 8 real cores, you're pretty much already saturating what the game engine can use. I did find one video of the 8700K with SMT on vs off, but you can just watch the fps counter and spot the minimums. It shows a 16.3% advantage in minimums for SMT on for the 8700K, but it's hardly exhaustive testing.

1) This is not so cut and dry, especially with high refresh monitors and VRR. Today, most gamers who want game smoothness fall into 2 camps. First are those with cheaper 60 Hz monitors, where you just want to have minimums above 60 fps so you can turn on V-sync and lock your framerate to 60fps. Second are those with high refresh rate monitors or wide-range VRR capable monitors, where you are just trying to have the highest minimums possible to take advantage of the smoother experience provided by the monitor tech. Either way, the most important thing will be how high you can get your lows; failing that, a secondary consideration is how large your fps swings are. For example, if your fps range is 30 fps to 36 fps, you have really low variance, but still a really crappy experience in terms of smoothness. If you have a 60 fps to 100 fps range, you're going to have a good experience no matter which group you fall into.

2) I included every Intel chip in the chart that wasn't essentially a rebrand of another chip. I don't think you'll find much more meaningful data, but I will wait to see what you find. Later today I'll throw in the AMD chips to see how it may change things.
Here is the updated data; I included the 10600K and 10700K as well, because they were freshly tested.
Effective cores = each physical core counts as 1.0, each additional SMT thread counts as 0.15 of a core.
I also decided not to analyze Delta because it doesn't really matter to you, and for me it was purely anecdotal.

Chip      Cores   Threads   EffCores   Base (GHz)   Boost (GHz)   Avg FPS   1% Low   Delta (1% low / avg)
9900K     8       16        9.2        3.6          5.0           167       136      0.814371
10700K    8       16        9.2        3.8          4.7           167       136      0.814371
9700K     8       8         8          3.6          4.9           170       126      0.741176
10600K    6       12        6.9        4.1          4.8           167       125      0.748503
8700K     6       12        6.9        3.7          4.7           162       121      0.746914
9600K     6       6         6          3.7          4.6           153       97       0.633987
7700K     4       8         4.6        4.2          4.5           150       94       0.626667
9400F     6       6         6          2.9          4.1           143       85       0.594406
9100F     4       4         4          3.6          4.2           130       61       0.469231

Multiple regression of 1% lows on Effective Cores and Boost gives the following result:

R Square: 0.930
Coefficients:
Effective Cores: 8.072
Boost Freq: 39.489

When you run it including cores, threads, base, boost, and effective cores, the result is always the same: the largest coefficient is the boost clock.

I'm not sure how else to slice it. Take out all chips with more than 8 threads? Boost is still the largest coefficient (Effective Cores 8.137, Boost 37.339).

I think where it's confusing is that when you run separate correlations (effective cores vs. 1% lows, and boost frequency vs. 1% lows) the cores seem to have a tighter correlation by Pearson r and R^2. But that's why multiple regression analysis is done: to determine whether something else is a larger contributor in the overall context of ALL of the independent variables.
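A minimal sketch of that multiple regression (assuming Python with numpy; the data is typed in from the table above, so the coefficients will only approximately match the spreadsheet output):

```python
# 1% low ~ b0 + b1*effective_cores + b2*boost, via ordinary least squares.
import numpy as np

# Chip order: 9900K, 10700K, 9700K, 10600K, 8700K, 9600K, 7700K, 9400F, 9100F
cores   = np.array([8, 8, 8, 6, 6, 6, 4, 6, 4])
threads = np.array([16, 16, 8, 12, 12, 6, 8, 6, 4])
eff     = cores + 0.15 * (threads - cores)          # the "effective cores" weighting
boost   = np.array([5.0, 4.7, 4.9, 4.8, 4.7, 4.6, 4.5, 4.1, 4.2])
low1    = np.array([136, 136, 126, 125, 121, 97, 94, 85, 61])

X = np.column_stack([np.ones(len(low1)), eff, boost])   # intercept + 2 predictors
coefs, *_ = np.linalg.lstsq(X, low1, rcond=None)

resid = low1 - X @ coefs
r2 = 1 - resid @ resid / np.sum((low1 - low1.mean()) ** 2)
print("intercept, eff_cores, boost:", np.round(coefs, 3))
print("R^2:", round(r2, 3))
```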

Now, we could go really deep. Do we include L1, L2, L3 cache size when comparing with Zen2? Do we include latencies, both intercore and from chip to memory? And so on. Who knows, those things might play huge roles.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,075
3,904
136
It's interesting, but really most game engines are pretty good at dispatching jobs; it's just that their main target has half the IPC at a 2 GHz clock. So to hit something around 60fps you would only need around 16 GHz of Zen2/Skylake-class throughput. Now obviously some engines have unequal scaling in terms of thread requirements, and that's really all that is being tracked here; if game houses really cared about scaling performance as high as the PC master race wants, we would probably see better scaling results.

But moving forward is going to be more interesting, because we are going to need something like 36 GHz of Zen2 throughput, including SMT, just to reach parity with the consoles. They "only" have 8 SMT cores @ 3 GHz, but they have hardware decompression engines for Kraken and have to decompress up to 5.5 GB/s of data. According to Mark Cerny that's worth about 3 dedicated Zen2 cores. Epic especially made the point that it's the I/O engine of the PS5 that made the current demo possible.
If anyone links the whole laptop-running-in-developer-mode thing I'm going to slap them.....

So it's looking like, if we get a cross-platform game that can max the console CPU / I/O engines consistently, the minimum for playing those games will be a 3800X / i7-10700K. That's going to be pretty crazy!
 

VirtualLarry

No Lifer
Aug 25, 2001
56,586
10,225
126
So it's looking like, if we get a cross-platform game that can max the console CPU / I/O engines consistently, the minimum for playing those games will be a 3800X / i7-10700K. That's going to be pretty crazy!
I've kind of wondered about that. Thinking of doing a June or July "platform refresh" (new AM4 mobo + CPU). Mostly to upgrade to on-board 2.5GbE-T, and hopefully fix some of my freezing issues, but also to better prepare for console ports, from these new consoles. Currently have an RX 5700 reference card; that will stay in until new cards come out.

Edit: It's almost frightening (from a PCMR perspective), that console game horsepower might increase by such a big leap, that by next year, even a 6C/12T Zen2 CPU and RX 5700 (XT), won't be enough to satisfactorily run the ports @ 60FPS. That we might need 12C/24T, and Navi 20. To say nothing of the future uselessness of all of the "budget" gaming PCs that will be built this season, with 3300X Zen2 4C/8T and B550 mobos. At least with those, a 3900XT will likely be a drop-in upgrade.
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
6,695
12,370
136
Here is the updated data; I included the 10600K and 10700K as well, because they were freshly tested.
Effective cores = each physical core counts as 1.0, each additional SMT thread counts as 0.15 of a core.
I also decided not to analyze Delta because it doesn't really matter to you, and for me it was purely anecdotal.

Chip      Cores   Threads   EffCores   Base (GHz)   Boost (GHz)   Avg FPS   1% Low   Delta (1% low / avg)
9900K     8       16        9.2        3.6          5.0           167       136      0.814371
10700K    8       16        9.2        3.8          4.7           167       136      0.814371
9700K     8       8         8          3.6          4.9           170       126      0.741176
10600K    6       12        6.9        4.1          4.8           167       125      0.748503
8700K     6       12        6.9        3.7          4.7           162       121      0.746914
9600K     6       6         6          3.7          4.6           153       97       0.633987
7700K     4       8         4.6        4.2          4.5           150       94       0.626667
9400F     6       6         6          2.9          4.1           143       85       0.594406
9100F     4       4         4          3.6          4.2           130       61       0.469231

Multiple regression of 1% lows on Effective Cores and Boost gives the following result:

R Square: 0.930
Coefficients:
Effective Cores: 8.072
Boost Freq: 39.489

When you run it including cores, threads, base, boost, and effective cores, the result is always the same: the largest coefficient is the boost clock.

I'm not sure how else to slice it. Take out all chips with more than 8 threads? Boost is still the largest coefficient (Effective Cores 8.137, Boost 37.339).

I think where it's confusing is that when you run separate correlations (effective cores vs. 1% lows, and boost frequency vs. 1% lows) the cores seem to have a tighter correlation by Pearson r and R^2. But that's why multiple regression analysis is done: to determine whether something else is a larger contributor in the overall context of ALL of the independent variables.

Now, we could go really deep. Do we include L1, L2, L3 cache size when comparing with Zen2? Do we include latencies, both intercore and from chip to memory? And so on. Who knows, those things might play huge roles.

Did you check the correlation between independent variables when you did the multivariate regression? If your independent variables are too highly correlated (0.6+ or so), it's most likely going to screw everything up and make determining the significance of the independent variables impossible (https://statisticsbyjim.com/regression/multicollinearity-in-regression-analysis/).
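A quick way to run that check on the data above (a minimal sketch, assuming Python with numpy; with only two predictors, the variance inflation factor follows directly from their correlation):

```python
# Pairwise correlation between the two predictors used in the regression,
# plus the variance inflation factor (VIF); values typed in from the table.
import numpy as np

eff_cores = np.array([9.2, 9.2, 8, 6.9, 6.9, 6, 4.6, 6, 4])
boost     = np.array([5.0, 4.7, 4.9, 4.8, 4.7, 4.6, 4.5, 4.1, 4.2])

r = np.corrcoef(eff_cores, boost)[0, 1]
vif = 1 / (1 - r**2)        # with exactly two predictors, both share this VIF
print(f"predictor correlation r = {r:.2f}, VIF = {vif:.2f}")
```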

With that said, your frequencies are all wrong. Those are single core boost frequencies, but that's not what they'll run at with any remotely modern game. Frequencies should be more like this:


Chip      Boost (GHz)
9900K     4.7
9100F     4.0
10700K    4.7
9700K     4.6
10600K    4.5
8700K     4.3
9600K     4.3
7700K     4.4
9400F     3.9

Additionally, you're still calculating how much of an impact 8c/16t CPUs have on core scaling, but the engine doesn't scale that high, and that wasn't the point. The point was that game engines are starting to be able to scale beyond 8t, and in any games that do, a 4c or 4c/8t CPU will have a hard time keeping up with any consumer 6c/12t or higher CPU, especially in minimums. I offered BF5 as an example of a game that has already been able to scale beyond 4c/8t, and the graph shows that. If you calculate correlation from 4c to 6c/12t (or 8c/8t) you'll see very strong correlation in minimum increases. Of course you'll see correlation with frequency too; the question is whether a 4c/8t will be able to keep up on frequency alone.

If your interpretation is correct and frequency is strongly the determining factor in minimums, then let's run a sanity check (I'm a huge fan of always running a sanity check). From your numbers, a 4c/4t 9100F, which operates at 4 GHz loaded, is getting 61 for its 1% min. An 8c/8t 9700K at 4.6 GHz loaded is getting 126 for its 1% min. So which do you think is more likely: that a 15% increase in clock speed accounted for the bulk of a 106.6% increase in minimums, or that having 100% more cores was the bulk of the 106.6% increase in minimums? Clearly the clock speed difference can only account for a small part of that difference. Now, this is obviously one example, but the same will hold true if you want to check others when you frame it with the understanding of the limits of the game engine to scale beyond ~6c/12t or 8c/8t.
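The same sanity check as quick arithmetic, using the numbers above:

```python
# 9100F (4c/4t, ~4.0 GHz loaded) vs 9700K (8c/8t, ~4.6 GHz loaded), 1% minimums.
clock_9100f, clock_9700k = 4.0, 4.6
low_9100f, low_9700k = 61, 126

print(f"clock speed increase: {clock_9700k / clock_9100f - 1:.1%}")   # ~15%
print(f"1% minimum increase:  {low_9700k / low_9100f - 1:.1%}")       # ~107%
```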

Edit: I accidentally closed my browser tab and ended up with a mishmash of post drafts so I fixed that.
 
Last edited:

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
Hitman is onto something. I just went through a 7700K 'de-build' and part-out. It was under a huge AIO and was too slow for BFV even at 4.7 GHz with no offset and 3600 MHz DDR4 at CL15.
 

TheELF

Diamond Member
Dec 22, 2012
4,027
753
126
I've kind of wondered about that. Thinking of doing a June or July "platform refresh" (new AM4 mobo + CPU). Mostly to upgrade to on-board 2.5GbE-T, and hopefully fix some of my freezing issues, but also to better prepare for console ports, from these new consoles. Currently have an RX 5700 reference card; that will stay in until new cards come out.

Edit: It's almost frightening (from a PCMR perspective), that console game horsepower might increase by such a big leap, that by next year, even a 6C/12T Zen2 CPU and RX 5700 (XT), won't be enough to satisfactorily run the ports @ 60FPS. That we might need 12C/24T, and Navi 20. To say nothing of the future uselessness of all of the "budget" gaming PCs that will be built this season, with 3300X Zen2 4C/8T and B550 mobos. At least with those, a 3900XT will likely be a drop-in upgrade.
You can only use as much CPU as the GPU allows. The PS5 is supposed to be 4K, and even if it ends up with a lot of 1080p games, look at the 2080 Ti reviews: there is so much bottlenecking going on at 4K that every CPU looks the same, and, while not to the same degree, still a lot of it at 1080p. If you use more CPU than that, you are just wasting compute for no reason; not that this doesn't happen.

It would not surprise me if they are only going to use the additional horsepower for ray tracing and to prevent slowdowns due to the OS doing stuff.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
You can only use as much CPU as the GPU allows. The PS5 is supposed to be 4K, and even if it ends up with a lot of 1080p games, look at the 2080 Ti reviews: there is so much bottlenecking going on at 4K that every CPU looks the same, and, while not to the same degree, still a lot of it at 1080p. If you use more CPU than that, you are just wasting compute for no reason; not that this doesn't happen.

It would not surprise me if they are only going to use the additional horsepower for ray tracing and to prevent slowdowns due to the OS doing stuff.

I think you're on to something there. We also have to remember that Sony and Microsoft both want to make sure they're not far off from one another, so as not to be disadvantaged. And probably greater than half the reason they went with what they did was: it was available. They needed 8 physical x86 cores to make the transition as easy as possible, with full out-of-box compatibility with the massive 8th-gen library and games under development, so that knocked out consideration of 4 or 6 core parts. And 7nm Zen2 is incredibly efficient up to the high-3GHz range in terms of TDP, so it wouldn't have saved them basically anything to go below 3GHz. Hence: we get what we get, which is extremely powerful and honestly probably the opposite of 8th gen, where the HD7790+ (OG X1) and HD7850/70 (OG PS4) went up in combo with the horrific Jaguar: 8 terrible ~1.6/1.75GHz cores, which almost certainly affected game design because they were SO SO bad core for core. Certain game engines even made 2D games choppy lol.

I think minimum FPS should be way more reliable with 9th-gen consoles. Though your point about chasing 4K means fewer 60fps titles than we'd like, outside of devs who can employ clever upscaling options for a 'performance' mode. Something I'll choose 100 times out of 100 if it means 60fps vs 30.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
32,029
32,494
146
Hitman is onto something. I just went through a 7700K 'de-build' and part-out. It was under a huge AIO and was too slow for BFV even at 4.7 GHz with no offset and 3600 MHz DDR4 at CL15.
I will recommend someone to watch, if you do not already. Introduction: he is a chatty Kathy, belabors points trying to avoid the inevitable comments-section derpery, and has a haughty voice that sounds like he is going to ask if you have any Grey Poupon. But the guy who runs the YouTube channel Tech Deals does something few reviewers do: he plays a CPU-demanding game, usually one that is poorly optimized to boot, for 20 minutes or so. While everyone else was gushing over the new 3100 and 3300X, he was pointing out they are not ideal for a game like Ghost Recon Breakpoint. Playable, sure, but not a buttery smooth experience. 4/8 is Grrrreat! Except when it's not.

And it grinds my gears that reviewers show hardware in its very best light. I don't want to see that budget CPU on an expensive board with expensive RAM and a $100+ cooler. Set it up the way someone buying that level of kit would. I understand the 2080 Ti for bottleneck elimination, but do a follow-up with a more appropriate GPU; THAT would help your viewers looking to buy. I like Bryan from TechYesCity, but his review of the 10400 is trash for said reasons. Another thing that grinds my gears: reviewers' hyperbole over 3-5 percent gains. Anand always pointed out BITD that 3 percent is margin of error. So why are we excited about something 2-3 percent faster than margin of error that requires a new board? More marketing hype, and something to rally the troops, but it won't be prying my wallet open.
 

Arkaign

Lifer
Oct 27, 2006
20,736
1,379
126
Oh man, you nailed it so hard lol. It IS interesting to see what the ultimate potential of a CPU is if everything is ultra tuned and matched with high end surrounding components, but a budget to mid-range build would make a lot more sense to me with all but the highest end CPUs.

E.g., 10400 vs 9400 vs 3600 using entry-level or sub-$100 B-series mobos, budget 3200 RAM, the stock cooler, and maybe a 1660 Super or base 2060 plus RX 5500 / 5600 class cards. Then several runs of benches: 1080p medium eSports for Fortnite, Overwatch, etc., and 1440p medium for AAA stuff. I think it would really open some eyes to see just how set you are with basically any competent 6C CPU these days. 4C/4T and 4C/8T are obviously a mixed bag. Sometimes fine, but sometimes hitchy even with high FPS (as I just experienced in BFV multi with a highly tuned + OCed 7700K lol).
 
  • Like
Reactions: DAPUNISHER

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
32,029
32,494
146
Oh man, you nailed it so hard lol. It IS interesting to see what the ultimate potential of a CPU is if everything is ultra tuned and matched with high end surrounding components, but a budget to mid-range build would make a lot more sense to me with all but the highest end CPUs.

E.g., 10400 vs 9400 vs 3600 using entry-level or sub-$100 B-series mobos, budget 3200 RAM, the stock cooler, and maybe a 1660 Super or base 2060 plus RX 5500 / 5600 class cards. Then several runs of benches: 1080p medium eSports for Fortnite, Overwatch, etc., and 1440p medium for AAA stuff. I think it would really open some eyes to see just how set you are with basically any competent 6C CPU these days. 4C/4T and 4C/8T are obviously a mixed bag. Sometimes fine, but sometimes hitchy even with high FPS (as I just experienced in BFV multi with a highly tuned + OCed 7700K lol).
You know, I should point out that tech tubers do get around to the real sauce; it is just frustrating that they have to kowtow to the companies first. They learned long ago that the algorithms favor titles like "$250 or $300 or $400 complete PC build." Hence, the info will be available for the inexperienced DIYer, but it would be great if more reviewers did it from the get-go. Now of course it makes no difference, because C-19 has nerfed availability of most everything.
 

jpiniero

Lifer
Oct 1, 2010
16,818
7,258
136
I will recommend someone to watch, if you do not already. Introduction: he is a chatty Kathy, belabors points trying to avoid the inevitable comments-section derpery, and has a haughty voice that sounds like he is going to ask if you have any Grey Poupon. But the guy who runs the YouTube channel Tech Deals does something few reviewers do: he plays a CPU-demanding game, usually one that is poorly optimized to boot, for 20 minutes or so. While everyone else was gushing over the new 3100 and 3300X, he was pointing out they are not ideal for a game like Ghost Recon Breakpoint. Playable, sure, but not a buttery smooth experience. 4/8 is Grrrreat! Except when it's not.

I dunno, in the benchmarks I saw of Ghost Recon Breakpoint under Vulkan, the 7700K was able to hit the GPU limit at 1080p max settings. It was only doing 80-something minimums, even with a 2080 Ti.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
32,029
32,494
146
I dunno, in the benchmarks I saw of Ghost Recon Breakpoint under Vulkan, the 7700K was able to hit the GPU limit at 1080p max settings. It was only doing 80-something minimums, even with a 2080 Ti.
Good point there. What render path was used? Eye candy turned up? So many conditions to be controlled for. Even what was benchmarked. Did they do a custom 60 second run? Canned bench? Average over 10+ mins of game play in stressful area/s? Not debating, only stating my preference. I like to see extended game play in stressful areas, so I know before I go, if you will.

Odyssey was mentioned. Loved my OC'd 4770K HTPC right up to that game. The TV does not have FreeSync/G-Sync and it was a bad time as the game went on. None of the reviews I saw made it look that bad; I had to play a good number of hours to hit the 4770K crusher. Fighting 5 mercenaries plus the citizens put that thing in the hurt locker.
 

LightningZ71

Platinum Member
Mar 10, 2017
2,524
3,216
136
It's interesting, but really most game engines are pretty good at dispatching jobs; it's just that their main target has half the IPC at a 2 GHz clock. So to hit something around 60fps you would only need around 16 GHz of Zen2/Skylake-class throughput. Now obviously some engines have unequal scaling in terms of thread requirements, and that's really all that is being tracked here; if game houses really cared about scaling performance as high as the PC master race wants, we would probably see better scaling results.

But moving forward is going to be more interesting, because we are going to need something like 36 GHz of Zen2 throughput, including SMT, just to reach parity with the consoles. They "only" have 8 SMT cores @ 3 GHz, but they have hardware decompression engines for Kraken and have to decompress up to 5.5 GB/s of data. According to Mark Cerny that's worth about 3 dedicated Zen2 cores. Epic especially made the point that it's the I/O engine of the PS5 that made the current demo possible.
If anyone links the whole laptop-running-in-developer-mode thing I'm going to slap them.....

So it's looking like, if we get a cross-platform game that can max the console CPU / I/O engines consistently, the minimum for playing those games will be a 3800X / i7-10700K. That's going to be pretty crazy!
You're overlooking one very important thing: the consoles are hard-limited to 16GB of TOTAL RAM, ~3GB of which is taken by the OS, and at least 4GB is going to be in use for the GPU side of things, like active textures, etc. This leaves a total of ~9GB of RAM for the program to run in and store decompressed game data. Of course it's going to have to run through a mountain of data coming from the SSD; it'll constantly have to be swapping scenery and texture data in and out to keep up with living in those "tight" quarters. On a PC port, it would just require some retuning of how the game engine caches texture and scenery data in RAM to keep up with those demands. Yes, it'll likely benefit from having a core or two dedicated to managing what's in RAM, but given how little processing that will take, and how most games only really heavily utilize a few threads and cores, even an 8-core should have ample time to spend working on that. So a normal system that has 16GB of RAM is going to have roughly two to three times as much RAM available just for local texture and data caching. And none of what I just said addresses the fact that most video cards capable of keeping up with the demands of 4K/8K display are going to have 6-8+ GB of VRAM; that's double the available texture RAM on the card as well. While this isn't night-and-day different, it's enough of a difference to help cover the significant reduction in data throughput that the storage subsystem will have.

That one decompression chip, for a minimal addition of cost to the processor, enables the consoles to achieve a significant reduction in the cost of materials by roughly halving the amount of RAM that they have to have on board, and given the type of RAM that they use, that's not a trivial cost reduction. That's simply a problem that PCs don't have to contend with.

That is why I feel that even a 3700X with 16GB of RAM should easily manage to keep up with any of the next-gen console games without any noticeable issues, as long as it has a decent video card, an X570 or B550 board, and a pair of decent M.2 PCIe storage devices, one for the OS and another for the game data.

(edited to reflect that I had the wrong numbers for the next gen consoles RAM sizes)
 
Last edited:

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,576
96
Most worry about their maximum frame rate, while I figure out how much BOINC can use before it tanks my processor. Best so far is 76% in BOINC with BF4 going, dumping well over 100 fps. Hit a nice 4 GHz on all 24 threads while sitting closer to about 95-97% GPU usage. Need to see what GTA V does on the same settings; I bet it will be just as good. Can't wait to see what a top-of-the-line 4000 series will let me do!
 

Hitman928

Diamond Member
Apr 15, 2012
6,695
12,370
136
Good point there. What render path was used? Eye candy turned up? So many conditions to be controlled for. Even what was benchmarked. Did they do a custom 60 second run? Canned bench? Average over 10+ mins of game play in stressful area/s? Not debating, only stating my preference. I like to see extended game play in stressful areas, so I know before I go, if you will.

Odyssey was mentioned. Loved my OC'd 4770K HTPC right up to that game. The TV does not have FreeSync/G-Sync and it was a bad time as the game went on. None of the reviews I saw made it look that bad; I had to play a good number of hours to hit the 4770K crusher. Fighting 5 mercenaries plus the citizens put that thing in the hurt locker.

Even though I wasn't a big fan of Kyle, I do miss HardOCP's reviews for that reason. They would pick a spot in the game they thought was heavily taxing and play it for a good 10-20 minutes to get their data. Doing this meant they would only test a few games due to time restrictions, but if every reviewer did this for at least 1-2 games per review and tried to differentiate from each other in game selection, you'd have a huge collection of actual gameplay data to analyze.

I remember how GameGPU's Doom (2016) benchmarks were first to release and how every CPU and GPU was running such high frame rates that everyone thought you could run it at high settings on a potato. Then I checked their recorded benchmark playthrough and it was literally them walking around a small empty room and then shooting a barrel. Maybe there was 1 zombie. I was shocked that their benchmark scene was akin to staring at a wall for 20 seconds. Jump into an actual wide-open battle space with dozens of monsters and your frame rate is going to drop by like 50%. Granted, the game does run great on a wide range of hardware, but not nearly like they were showing. I wish all reviewers would state explicitly what they are benching for their numbers, but very few do.
 
  • Like
Reactions: OTG and DAPUNISHER

amrnuke

Golden Member
Apr 24, 2019
1,181
1,772
136
Did you check the correlation between independent variables when you did the multivariate regression? If your independent variables are too highly correlated (0.6+ or so), it's most likely going to screw everything up and make determining the significance of the independent variables impossible (https://statisticsbyjim.com/regression/multicollinearity-in-regression-analysis/).

With that said, your frequencies are all wrong. Those are single core boost frequencies, but that's not what they'll run at with any remotely modern game. Frequencies should be more like this:


Chip      Boost (GHz)
9900K     4.7
9100F     4.0
10700K    4.7
9700K     4.6
10600K    4.5
8700K     4.3
9600K     4.3
7700K     4.4
9400F     3.9

Additionally, you're still calculating how much of an impact 8c/16t CPUs have on core scaling, but the engine doesn't scale that high, and that wasn't the point. The point was that game engines are starting to be able to scale beyond 8t, and in any games that do, a 4c or 4c/8t CPU will have a hard time keeping up with any consumer 6c/12t or higher CPU, especially in minimums. I offered BF5 as an example of a game that has already been able to scale beyond 4c/8t, and the graph shows that. If you calculate correlation from 4c to 6c/12t (or 8c/8t) you'll see very strong correlation in minimum increases. Of course you'll see correlation with frequency too; the question is whether a 4c/8t will be able to keep up on frequency alone.

If your interpretation is correct and frequency is strongly the determining factor in minimums, then let's run a sanity check (I'm a huge fan of always running a sanity check). From your numbers, a 4c/4t 9100F, which operates at 4 GHz loaded, is getting 61 for its 1% min. An 8c/8t 9700K at 4.6 GHz loaded is getting 126 for its 1% min. So which do you think is more likely: that a 15% increase in clock speed accounted for the bulk of a 106.6% increase in minimums, or that having 100% more cores was the bulk of the 106.6% increase in minimums? Clearly the clock speed difference can only account for a small part of that difference. Now, this is obviously one example, but the same will hold true if you want to check others when you frame it with the understanding of the limits of the game engine to scale beyond ~6c/12t or 8c/8t.

Edit: I accidentally closed my browser tab and ended up with a mishmash of post drafts so I fixed that.
Re: the Battlefield V limit, as I said in my post, I removed all the chips with more than 8 threads, and the result came out the same. I could add back the chips with 12 threads but I'm not sure the results would be much different. I'll check on that.


Some interesting data points (further sanity checks) on chips where it seems that some variables are held constant, while others of interest are different:

9900K vs 10700K
same cores/threads
9900K peak boost 5 GHz vs 4.7 GHz for 10700K
same all-core boost
same 1% lows
same avg FPS
-- Looks like peak boost has little effect in this case.

8700K vs 9600K
same base freq
same all-core boost
SMT on vs SMT off (6 threads vs 12 threads)
+20-25% 1% lows for 8700K
+5-10% avg FPS for 8700K
-- Looks like adding threads helps quite a bit (note boost freq minimally different <5%).

9400F vs 9600K
same cores/threads
9600K peak boost 4.6 GHz vs 4.1 GHz for 9400F
9600K all-core boost 4.3 GHz vs 3.9 GHz for 9400F
1% lows 97 vs 85 (10-15% difference)
avg FPS 153 vs 143 (5-10% difference)
-- Looks like increasing all-core/peak boost helps as well, not as much.


Some other interesting cases:

7700K vs 9400F
4/8 (4.6 effective cores) vs 6/6
7700K all-core 4.4 GHz, 9400F all-core 3.9 GHz
1% lows 94 vs 85
It's possible that BF5 uses SMT very efficiently only at low thread counts; in that case each SMT thread would confer ~55% of the performance of a "real" core, resulting in a 4c/8t chip beating a 6c/6t chip by 10-15%, as we see. We also note an all-core boost difference of 10-15%. Both are probably playing roles.

9700K (8/8) vs 10600K (6/12) and 8700K (6/12)
1% low: 126 vs 125 vs 121 (<5% differences)
It seems there is a break point beyond which frequency becomes more important. I'm not sure if that point is at 8 or 10 threads; we don't have a 10-thread chip to compare.


There is DEFINITELY a relationship between core counts, thread counts, all-core boost, and peak boost on the one hand, and 1% lows on the other. I don't think we can easily dismiss any of these factors, as they will play different roles depending on how many cores and threads there are.

Statistically, you are right: core count, effective core count, and boost speed (as well as all-core boost speed) are tightly related, meaning it is very difficult to sort out which plays the biggest role.

In the end it seems like both play a role, and it will be hard to "prove" which is the largest contributor, though I think you're correct that cores/threads do play a major role, and all-core/peak boost seems like it doesn't play as large a role, especially for the majority of chips installed in gaming rigs at this point.
 
Last edited:
  • Like
Reactions: Hitman928

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
233
106
Fun yes, but I think anything with 4 threads will do nicely today :) Last time I ran it I had to turn down physics cause it kept crashing my game LOL. Anything above low would crash the game and sometimes freeze the system.
These problems are mostly fixed now (1.8.5); a 2070 Super can handle PhysX on high in semi-passive mode just fine. For older GPUs, a separate PhysX card would be highly beneficial (e.g. ~7870 + ~GT640). However, the game is poorly coded to begin with and is mostly bound by its main thread; it can use more threads, but I haven't looked into that yet. Wish I had an i7-5775C w/ L4 cache around to test it with, or something with more L2 cache, or Ryzen. Most reviewers only test newer games and rarely analyze the past.
 
Last edited:
  • Like
Reactions: Arkaign

Rigg

Senior member
May 6, 2020
710
1,805
136
I will recommend someone to watch, if you do not already. Introduction: he is a chatty Kathy, belabors points trying to avoid the inevitable comments-section derpery, and has a haughty voice that sounds like he is going to ask if you have any Grey Poupon. But the guy who runs the YouTube channel Tech Deals does something few reviewers do: he plays a CPU-demanding game, usually one that is poorly optimized to boot, for 20 minutes or so. While everyone else was gushing over the new 3100 and 3300X, he was pointing out they are not ideal for a game like Ghost Recon Breakpoint. Playable, sure, but not a buttery smooth experience. 4/8 is Grrrreat! Except when it's not.

And it grinds my gears that reviewers show hardware in its very best light. I don't want to see that budget CPU on an expensive board with expensive RAM and a $100+ cooler. Set it up the way someone buying that level of kit would. I understand the 2080 Ti for bottleneck elimination, but do a follow-up with a more appropriate GPU; THAT would help your viewers looking to buy. I like Bryan from TechYesCity, but his review of the 10400 is trash for said reasons. Another thing that grinds my gears: reviewers' hyperbole over 3-5 percent gains. Anand always pointed out BITD that 3 percent is margin of error. So why are we excited about something 2-3 percent faster than margin of error that requires a new board? More marketing hype, and something to rally the troops, but it won't be prying my wallet open.
I subscribe to both YouTubers. I agree with pretty much everything you wrote in this post. It bugs me when realistic hardware configs aren't tested, or at least discussed, in reviews.

Too many YouTube reviewers over-focus on benchmarking. Don't get me wrong, I love Gamer's Nexus and Hardware Unboxed as much as the next guy, but it's nice to have someone who shares the actual user experience that doesn't always show up on charts.

The only thing that bugs me about Tech Deals is that he doesn't have a clue about motherboard power delivery and is obsessed with putting 32GB of RAM in everything. His awkwardness in live streams makes me cringe sometimes too. He seems like a nice guy with a nice family though. I hope his wife's PC knowledge gets better. His streams would be better if someone was able to challenge some of his takes.
 
  • Like
Reactions: DAPUNISHER

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,576
96
These problems are mostly fixed now (1.8.5); a 2070 Super can handle PhysX on high in semi-passive mode just fine.

Yeah, that was a few months back; it may have been a driver issue, but whatever, I had my fun with an insane fps bump. I think one of my friends recently played and mentioned no more crashing. I loved the game but I have moved on past it.