Article "How much does the CPU count for gaming?" - The answer is surprising.

AnitaPeterson

Diamond Member
Apr 24, 2001
5,599
18
81
Engadget actually went off the beaten path and pitted a Ryzen 3300X against an i9-10900.
Lo and behold, the quad-core budget CPU holds up quite well against the 10-core beast when running on otherwise similar specs (motherboard, RAM, GPU).
The conclusion? "If you’re building a gaming PC, unless you’re aiming for ultra-high framerates over everything else, you may be better off putting that money towards a better GPU."

 

ozzy702

Golden Member
Nov 1, 2011
1,000
407
136
1% lows are going to be dramatically different between a 3300X and a higher core count CPU. Add in running other apps while gaming, streaming, etc., and unless you're on a very strict budget there's no reason to settle for four cores, especially given the direction gaming will take over the next few years. The 7700K and 3300X will not age well.
 

Hitman928

Platinum Member
Apr 15, 2012
2,450
1,489
136
1% lows are going to be dramatically different between a 3300X and a higher core count CPU. Add in running other apps while gaming, streaming, etc., and unless you're on a very strict budget there's no reason to settle for four cores, especially given the direction gaming will take over the next few years. The 7700K and 3300X will not age well.
Depends on the game. I've been disappointed by how many modern games still don't benefit from more than 4 cores 8 threads. However, there are definitely games out there where you really need at least 6 cores 12 threads to have a really smooth experience and as you said, with the next gen consoles having 8 modern cores, I think anything less than 6 cores 12 threads will start to suffer over the next few years. At least I hope so for the sake of progress.
 

amrnuke

Senior member
Apr 24, 2019
797
883
96
1% lows are going to be dramatically different between a 3300X and a higher core count CPU. Add in running other apps while gaming, streaming, etc., and unless you're on a very strict budget there's no reason to settle for four cores, especially given the direction gaming will take over the next few years. The 7700K and 3300X will not age well.
I would have tended to agree. But this review and this review don't seem to show any consistent differences. It seems like 1% lows are somewhat less dependent on core/thread count and more dependent on clock speed. In fact, the AMD chip with the widest spread between average and 1% low fps is the 3900X, which makes no sense if 1% lows depend on thread count.

I'd love to see more data on it, of course.
 

Hitman928

Platinum Member
Apr 15, 2012
2,450
1,489
136
I would have tended to agree. But this review and this review don't seem to show any consistent differences. It seems like 1% lows are somewhat less dependent on core/thread count and more dependent on clock speed. In fact, the AMD chip with the widest spread between average and 1% low fps is the 3900X, which makes no sense if 1% lows depend on thread count.

I'd love to see more data on it, of course.
Like I said, it depends on the game and often even on the scene you are testing in the game. A lot of reviewers will test scenes with very few characters and almost nothing happening. They do this for repeatability, but it raises the question of whether the test reflects actual playthrough performance.


 

ozzy702

Golden Member
Nov 1, 2011
1,000
407
136
I would have tended to agree. But this review and this review don't seem to show any consistent differences. It seems like 1% lows are somewhat less dependent on core/thread count and more dependent on clock speed. In fact, the AMD chip with the widest spread between average and 1% low fps is the 3900X, which makes no sense if 1% lows depend on thread count.

I'd love to see more data on it, of course.
For most games I would agree. BFV is a game I've played quite a bit on many different systems, and 1% lows are dramatically different between an overclocked 7700K and a stock 9900K. The chart above doesn't even do it justice for 64-man multiplayer.
 

Arkaign

Lifer
Oct 27, 2006
20,451
856
126
As in all things in life, balance is key.

For someone building a new gaming rig in 2020, unless they're doing it for a specific game or set of games (such as eSports, Fortnite, etc.) that absolutely runs well on it, 4C/8T is reaching a point where it just feels extra rough in many titles, especially if you don't have a good VRR display to help smooth out some of the nastier dips.

AMD's price cuts to the Ryzen 3600 that were just reported, along with the i5-10400F, make for what I consider the ideal affordable gaming CPU combos if someone wants to play AAA multiplats during the 9th-gen era. Stepping up to a 3700X or 10700K is even better of course, but that's a LOT more money, and thus probably paired with a better GPU, faster RAM, etc. in any situation that makes sense.
 

VirtualLarry

Lifer
Aug 25, 2001
47,653
4,823
126
As in all things in life, balance is key.

For someone building a new gaming rig in 2020, unless they're doing it for a specific game or set of games (such as eSports, Fortnite, etc.) that absolutely runs well on it, 4C/8T is reaching a point where it just feels extra rough in many titles, especially if you don't have a good VRR display to help smooth out some of the nastier dips.

AMD's price cuts to the Ryzen 3600 that were just reported, along with the i5-10400F, make for what I consider the ideal affordable gaming CPU combos if someone wants to play AAA multiplats during the 9th-gen era. Stepping up to a 3700X or 10700K is even better of course, but that's a LOT more money, and thus probably paired with a better GPU, faster RAM, etc. in any situation that makes sense.
Be brutally frank with me: did I make a mistake buying some i5-4570/4590 SFF boxes (4C/4T Haswell quads), dropping in 16GB DDR3, a GTX 1650 LP, and a 480GB SSD (the refurb came with a 500GB HDD too), "for Gaming"?

I know that spec probably wouldn't run BF V that well, but for GTA V, Fortnite, etc?

Considering the price too, I paid maybe $400 for the parts, as opposed to a bespoke rig with a Ryzen R5 3600, 32GB of 3200/3600 RAM, and a GTX 1660 Super/Ti / RX 5700, which would probably run closer to $800-900 (given prices on AM4 mobos and PSUs, maybe higher).

I guess my problem is that I don't advertise. I need to find a couple of buyers for a couple of these rigs.
 
  • Like
Reactions: geokilla and Glo.

Arkaign

Lifer
Oct 27, 2006
20,451
856
126
Ah definitely not, as long as they're focused on appropriate gaming. There's definitely a big difference between the kind of gamer who will be waiting for Cyberpunk 2077, AC Valhalla, etc, and the kind who will be mainly playing Minecraft, Fortnite, Overwatch, FNAF, Gmod, Brick Rigs, etc.

And on price, well, of course the difference is night and day; even a pretty minimal new build is expensive vs. what you can get in a used build. I shoot for 6C/12T E5 1650/1660 Xeons as the base level for used gaming PCs, but even 4C/4T can be totally fine depending on the context.
 

amrnuke

Senior member
Apr 24, 2019
797
883
96
Like I said, it depends on the game and many times even the scene you are testing in the game. A lot of reviewers will test scenes with very few characters and almost nothing happening. They do this for repeatability but then it raises the question if the test reflects actual play through performance.
I just went ahead and tossed TechSpot's data -- not from this review, but from their 10900K and 3100/3300X reviews (which use the same system setup across all systems) -- into Excel and used the Data Analysis add-in for some basic evaluation.

Setup
Independent variables were cores, threads, base freq, boost freq.
Dependent variables were average FPS, 1% lows, and what I call "delta%".
Delta% is the gap between average FPS and 1% lows, represented as a percentage - 1% lows divided by average FPS. The higher this percentage, the closer the 1% lows are to the average FPS, and the "smoother" the game would likely be.
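The metric is simple enough to state as a one-liner; a minimal sketch (the helper name and sample numbers are made up for illustration):

```python
# Delta% as defined above: 1% lows divided by average FPS.
# Higher values mean the lows sit closer to the average, i.e. smoother pacing.
def delta_pct(avg_fps: float, low_1pct: float) -> float:
    return low_1pct / avg_fps

# Made-up example: 120 fps average with 90 fps 1% lows
print(delta_pct(120, 90))  # 0.75
```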

Results were from Battlefield V on Techspot's reviews listed above.
CPUs looked at were AMD's current Zen2 desktop lineup (except 3500X, 3600X, 3800X) and Intel's 9100F, 9400F, 9700K, 9900K, 10900K, 8700K, 7700K, as well as the 1600, 1600AF, 2600, 2700X. Thus, a broad range of cores, threads, base clocks, boost clocks.
(Could be easily done on multiple of TechSpot's reviewed games too.)

Results
Correlations
The strongest correlation with 1% lows was boost speed, with a Pearson r of 0.794 -- the highest of any pairing except average FPS vs. boost speed, at 0.908.

For delta%, if you want to minimize the gap between the 1% lows and average FPS for smoother gameplay, the tightest correlation is thread count, but cores and boost clock saw similar correlations overall -- Pearson r of 0.597 for threads, 0.522 for cores, 0.510 for boost clock.
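A correlation pass of this kind can be sketched with NumPy's corrcoef; the numbers below are invented toy values purely to show the mechanics, not TechSpot's data:

```python
import numpy as np

# Toy data, one row per CPU: threads, cores, boost clock (GHz), delta%.
data = np.array([
    [ 8,  4, 4.3, 0.72],
    [12,  6, 4.2, 0.78],
    [ 8,  8, 4.9, 0.82],
    [16,  8, 4.7, 0.80],
    [20, 10, 5.3, 0.85],
])

names = ["threads", "cores", "boost"]
r = np.corrcoef(data, rowvar=False)   # pairwise Pearson r matrix

# Correlation of each predictor with delta% (the last column)
for name, val in zip(names, r[:-1, -1]):
    print(f"{name}: r = {val:+.3f}")
```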

Of course, many of the higher-core, higher-thread chips have faster boost clocks. So how does one disentangle whether the higher delta% is due to higher thread counts, core counts, or boost clocks? We have to run a regression analysis to determine which factor is actually contributing to the high 1% lows and good delta%.

Regression
First I standardized the continuous independent variables. Then I checked coefficients for 1% lows and delta%.

As it turns out, the coefficient for boost clock is the best predictor with a coefficient of 0.373, and the thread count is next best, but quite weak in comparison, with a coefficient of 0.056. Cores and base clock exhibit negative coefficients. This tells us it's likely that the strong correlation between thread count and delta% was actually substantially contributed by the higher boost clocks seen on these highly-threaded chips.

However, it should be noted that even using cores, threads, base, and boost, the best R2 is 0.558, which isn't all that great; while boost clock is the largest contributor to that, none of these variables are great predictors of a good delta% score, even when combined.

(Even if you just want to look at pure 1% lows, the coefficient for boost clocks is 123.78, and that for thread count is 8.32, with cores and base clock having negative coefficients; that R2 fit is better at 0.739.)
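The standardize-then-regress step described above can be sketched like this (again with made-up toy values, so the coefficients will not match the post's):

```python
import numpy as np

# Toy predictors, one row per CPU: cores, threads, base clock, boost clock.
X = np.array([
    [ 4,  8, 3.8, 4.5],
    [ 6, 12, 3.6, 4.2],
    [ 8,  8, 3.6, 4.9],
    [ 8, 16, 3.9, 4.7],
    [10, 20, 3.7, 5.3],
    [ 6,  6, 2.9, 4.1],
], dtype=float)
y = np.array([0.74, 0.78, 0.82, 0.81, 0.85, 0.73])  # delta% (toy)

# Standardize each predictor to zero mean and unit variance.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(y)), Xs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2 of the fit
resid = y - A @ coef
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print("intercept:", coef[0])
print("standardized coefficients:", coef[1:])
print("R2:", r2)
```

Because the predictors are standardized, the intercept lands on the mean of y and the coefficients become directly comparable in magnitude, which is the point of the exercise.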

Conclusion
Boost clock carries a moderate coefficient of determination in relation to high delta% and high 1% lows. But the number of cores and threads, based on this data, plays a minimal role when you analyze the data to disentangle multiple possible contributing factors.

Does the conclusion fit the individual data points? Yes. If threads were the primary factor, one would expect the 9700K (8 threads) to perform on par with a 7700K or 3300X which both have 8 threads. However, the 9700K performs between the 2700X/3700X and the 9900K/10900K. If threads were extremely important, one would expect the delta% leaders to be the 3900X and 3950X, but they sit alongside the 3600 and 9700K with respect to delta%.

On the other hand, if cores were of primary importance, then the 3300X should sit behind the 1600AF, 2600, 9400F -- but in fact it sits ahead of them.

For most of these conundrums, the answer lay in the boost frequency -- the 3300X boosts higher than the 1600AF, 2600, and 9400F. The 9700K boosts higher than all the chips except the two that sit ahead of them in the delta% rankings.

Overall, I think the data make sense. But of course, I'd like to see more data as well. And of course, we can talk a lot about whether Battlefield V's benchmark is taxing enough on the CPU as well. I am open to any other benchmarks that have been done that might expose weaknesses in low-thread-count CPUs.

(Caveat emptor: While I have worked on a lot of research projects, I did so with the assistance of statisticians, and my knowledge is based to a great extent on what I learned from them. I have taken stats classes, but nothing like what they were doing/taught me. Thus, my informal education I think is OK, but I would never run a real statistical analysis on a research project without a real statistician involved.)
 

Hitman928

Platinum Member
Apr 15, 2012
2,450
1,489
136
I just went ahead and tossed TechSpot's data -- not from this review, but from their 10900K and 3100/3300X reviews (which use the same system setup across all systems) -- into Excel and used the Data Analysis add-in for some basic evaluation.

Setup
Independent variables were cores, threads, base freq, boost freq.
Dependent variables were average FPS, 1% lows, and what I call "delta%".
Delta% is the gap between average FPS and 1% lows, represented as a percentage - 1% lows divided by average FPS. The higher this percentage, the closer the 1% lows are to the average FPS, and the "smoother" the game would likely be.

Results were from Battlefield V on Techspot's reviews listed above.
CPUs looked at were AMD's current Zen2 desktop lineup (except 3500X, 3600X, 3800X) and Intel's 9100F, 9400F, 9700K, 9900K, 10900K, 8700K, 7700K, as well as the 1600, 1600AF, 2600, 2700X. Thus, a broad range of cores, threads, base clocks, boost clocks.
(Could be easily done on multiple of TechSpot's reviewed games too.)

Results
Correlations
The strongest correlation with 1% lows was boost speed, with a Pearson r of 0.794 -- the highest of any pairing except average FPS vs. boost speed, at 0.908.

For delta%, if you want to minimize the gap between the 1% lows and average FPS for smoother gameplay, the tightest correlation is thread count, but cores and boost clock saw similar correlations overall -- Pearson r of 0.597 for threads, 0.522 for cores, 0.510 for boost clock.

Of course, many of the higher-core, higher-thread chips have faster boost clocks. So how does one disentangle whether the higher delta% is due to higher thread counts, core counts, or boost clocks? We have to run a regression analysis to determine which factor is actually contributing to the high 1% lows and good delta%.

Regression
First I standardized the continuous independent variables. Then I checked coefficients for 1% lows and delta%.

As it turns out, the coefficient for boost clock is the best predictor with a coefficient of 0.373, and the thread count is next best, but quite weak in comparison, with a coefficient of 0.056. Cores and base clock exhibit negative coefficients. This tells us it's likely that the strong correlation between thread count and delta% was actually substantially contributed by the higher boost clocks seen on these highly-threaded chips.

However, it should be noted that even using cores, threads, base, and boost, the best R2 is 0.558, which isn't all that great; while boost clock is the largest contributor to that, none of these variables are great predictors of a good delta% score, even when combined.

(Even if you just want to look at pure 1% lows, the coefficient for boost clocks is 123.78, and that for thread count is 8.32, with cores and base clock having negative coefficients; that R2 fit is better at 0.739.)

Conclusion
Boost clock carries a moderate coefficient of determination in relation to high delta% and high 1% lows. But the number of cores and threads, based on this data, plays a minimal role when you analyze the data to disentangle multiple possible contributing factors.

Does the conclusion fit the individual data points? Yes. If threads were the primary factor, one would expect the 9700K (8 threads) to perform on par with a 7700K or 3300X which both have 8 threads. However, the 9700K performs between the 2700X/3700X and the 9900K/10900K. If threads were extremely important, one would expect the delta% leaders to be the 3900X and 3950X, but they sit alongside the 3600 and 9700K with respect to delta%.

On the other hand, if cores were of primary importance, then the 3300X should sit behind the 1600AF, 2600, 9400F -- but in fact it sits ahead of them.

For most of these conundrums, the answer lay in the boost frequency -- the 3300X boosts higher than the 1600AF, 2600, and 9400F. The 9700K boosts higher than all the chips except the two that sit ahead of them in the delta% rankings.

Overall, I think the data make sense. But of course, I'd like to see more data as well. And of course, we can talk a lot about whether Battlefield V's benchmark is taxing enough on the CPU as well. I am open to any other benchmarks that have been done that might expose weaknesses in low-thread-count CPUs.

(Caveat emptor: While I have worked on a lot of research projects, I did so with the assistance of statisticians, and my knowledge is based to a great extent on what I learned from them. I have taken stats classes, but nothing like what they were doing/taught me. Thus, my informal education I think is OK, but I would never run a real statistical analysis on a research project without a real statistician involved.)
I believe your analysis is flawed for a few reasons. First, by taking every CPU you're including too many variables, which essentially gives you a lot of noise and makes any real significant correlation almost impossible. There are multiple reasons a CPU performs the way it does in any benchmark, so you need to better isolate what you are trying to compare.

Second, you're giving equal weight to cores and SMT threads, which you shouldn't do, as they are not equal in performance. A real physical core will perform much better than an SMT thread on the same CPU. Weighting an SMT thread as ~1/4 of a core would probably be close enough, I think, but that's just a rough estimate from experience, not a statistical analysis of BF5 and SMT. Even then it's tricky, as SMT can give really different benefits depending on the situation, but trying to analyze it to that detail becomes very involved and somewhat complex.

Third, there's a plateau point of adding cores and gaining performance (even in minimums) which needs to be taken into consideration and points a bit back to reason number one.

What you should do to better analyze the argument is consider CPUs within the same architecture and limit the upper bound of cores/threads to 6/12 or maybe 8/8. Basically, once they've reached that threshold, any CPU with more threads is considered the same in terms of cores/threads, as you won't really gain much above that (you might in multiplayer scenarios, but that's not what we have here). Of course clock speeds will still weigh heavily on performance, including minimums, but what I think you'll find is that no matter how fast you clock a CPU (within the realm of realistically achievable frequencies), for some games (like BF5) you'll never reach the same minimums with a 4-core (even 8-thread) CPU as you will with a 6/12 CPU or higher. Comparing the 8700K, 9600K, 7600K, and 7700K would, I think, be a great data set to work with.
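The suggested weighting can be written as a tiny helper; the function name is made up here, and the 0.25 factor is the post's rough, experience-based estimate rather than a measured value:

```python
# Hypothetical helper: count each SMT thread as a fraction of a physical core.
# The 0.25 weight is a rough estimate, not a measured BF5/SMT scaling factor.
def effective_cores(cores: int, threads: int, smt_weight: float = 0.25) -> float:
    smt_threads = max(threads - cores, 0)
    return cores + smt_weight * smt_threads

print(effective_cores(4, 8))    # 7700K-style 4C/8T  -> 5.0
print(effective_cores(6, 6))    # 9400F-style 6C/6T  -> 6.0
print(effective_cores(6, 12))   # 8700K-style 6C/12T -> 7.5
```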
 
Last edited:

ozzy702

Golden Member
Nov 1, 2011
1,000
407
136
Be brutally frank with me: did I make a mistake buying some i5-4570/4590 SFF boxes (4C/4T Haswell quads), dropping in 16GB DDR3, a GTX 1650 LP, and a 480GB SSD (the refurb came with a 500GB HDD too), "for Gaming"?

I know that spec probably wouldn't run BF V that well, but for GTA V, Fortnite, etc?

Considering the price too, I paid maybe $400 for the parts, as opposed to a bespoke rig with a Ryzen R5 3600, 32GB of 3200/3600 RAM, and a GTX 1660 Super/Ti / RX 5700, which would probably run closer to $800-900 (given prices on AM4 mobos and PSUs, maybe higher).

I guess my problem is that I don't advertise. I need to find a couple of buyers for a couple of these rigs.
The i5-4590 is pretty weak and will just barely get it done for Fortnite. I built a budget rig not too long ago for someone with an i5-4590 and 1650 and it was barely adequate.

BFV requires at least an i7-7700K to pull 1% lows above 60 fps, in my experience.
 
  • Like
Reactions: killster1

ozzy702

Golden Member
Nov 1, 2011
1,000
407
136
What you should do to better analyze the argument is consider CPUs within the same architecture and limit the upper bound of cores/threads to 6/12 or maybe 8/8. Basically, once they've reached that threshold, any CPU with more threads is considered the same in terms of cores/threads, as you won't really gain much above that (you might in multiplayer scenarios, but that's not what we have here). Of course clock speeds will still weigh heavily on performance, including minimums, but what I think you'll find is that no matter how fast you clock a CPU (within the realm of realistically achievable frequencies), for some games (like BF5) you'll never reach the same minimums with a 4-core (even 8-thread) CPU as you will with a 6/12 CPU or higher. Comparing the 8700K, 9600K, 7600K, and 7700K would, I think, be a great data set to work with.
With BFV you are absolutely correct. Physical cores mean much more than SMT threads, and no matter how high you clock an i7-7700K, it will never pull the same framerates as an 8700K or 9700K. Look at the 9600K vs 7700K vs 8700K. It's painfully obvious that the 9600K's six physical cores provide an advantage over the 7700K's four hyperthreaded cores. Then look at the 8700K and you see a massive gain in 1% lows. From my experience there is very little benefit moving up from the 8700K, which appears to satisfy the speed and core count that BFV wants before running into a bottleneck with a 2080 Ti.
 

beginner99

Diamond Member
Jun 2, 2009
4,453
801
126
The chart above doesn't even do it justice for 64 man multiplayer.
Which is the main problem: these benches always only test single-player mode. That makes sense, because otherwise it's not repeatable, but it misses the point that a quad-core will struggle in certain multiplayer settings.

Another issue is that benchmark machines have most background stuff disabled. They aren't running a gazillion other things in the background, which also makes sense for repeatability but again hides the advantage a 6-core could have over a quad.

The difference between a 3300X and a 3600 isn't that big. If you are that low on money, better to get a console; it will always give you more bang for the buck, especially with the upcoming ones compared to buying a PC now. In fact, I would rather not skimp at the platform level (CPU, mobo, RAM), because you can keep that while changing GPUs once or twice. Having said that, the real way to save money is getting only a 1080p screen and hence not needing a powerful GPU at all. Several years down the line you can then get a better monitor and GPU if you have the funds, which will work fine if you have a solid platform.
 

amrnuke

Senior member
Apr 24, 2019
797
883
96
I believe your analysis is flawed for a few reasons. First, by taking every CPU you're including too many variables, which essentially gives you a lot of noise and makes any real significant correlation almost impossible. There are multiple reasons a CPU performs the way it does in any benchmark, so you need to better isolate what you are trying to compare.

Second, you're giving equal weight to cores and SMT threads, which you shouldn't do, as they are not equal in performance. A real physical core will perform much better than an SMT thread on the same CPU. Weighting an SMT thread as ~1/4 of a core would probably be close enough, I think, but that's just a rough estimate from experience, not a statistical analysis of BF5 and SMT. Even then it's tricky, as SMT can give really different benefits depending on the situation, but trying to analyze it to that detail becomes very involved and somewhat complex.

Third, there's a plateau point of adding cores and gaining performance (even in minimums) which needs to be taken into consideration and points a bit back to reason number one.

What you should do to better analyze the argument is consider CPUs within the same architecture and limit the upper bound of cores/threads to 6/12 or maybe 8/8. Basically, once they've reached that threshold, any CPU with more threads is considered the same in terms of cores/threads, as you won't really gain much above that (you might in multiplayer scenarios, but that's not what we have here). Of course clock speeds will still weigh heavily on performance, including minimums, but what I think you'll find is that no matter how fast you clock a CPU (within the realm of realistically achievable frequencies), for some games (like BF5) you'll never reach the same minimums with a 4-core (even 8-thread) CPU as you will with a 6/12 CPU or higher. Comparing the 8700K, 9600K, 7600K, and 7700K would, I think, be a great data set to work with.
The original statement I responded to was not qualified by all these very important things you bring up. However, I'd disagree that there is noise -- the findings, when you tabulate everything, are fairly consistent even when including all chips. But I'll entertain the idea and rerun it.

1) Yes, performance increases from 4 to 8 cores, and flattens between 8 and up to 10 cores, and gets worse with 12 and 16 cores. But that could be uarch.
2) No, threads are not worth as much as a core, but it depends on the workload. In CB20 they're worth much more than they are in gaming. The inclusion of threads in the analysis doesn't change the weak correlation between cores and delta% or 1% lows. I will exclude threads from the next analysis because they play little role, but we will still find that core count doesn't matter across a broad range of chips.
3) I will run AMD Zen2 on its own, and also will run Intel's 7xxx-10xxx separately.
4) Upper-bounding the cores wasn't part of the original statement made by @ozzy702, which was "1% lows are going to be dramatically different between a 3300X and higher core count CPU". But I'll bound Intel to exclude the 10900K, and AMD to exclude 3900X and 3950X even though they provide excellent clock speed data.

If we restrict this to CPUs with 10 or fewer cores, break it out by uarch as above, and remove threads from the analysis, here are the results. Regression is based on cores, base, and boost clocks. I'd have liked to exclude base clock, as it's useless as a predictor, but we need a third variable for the regression and you wanted to exclude threads. Perhaps L3 cache size would be a good one for the future?

Intel - 7700K, 8700K, 9100F, 9400F, 9700K, 9900K
R2 for regression 0.888
Intercept: 0.660
Coefficients:
Cores: 0.028
Base: -0.058
Boost: +0.467

AMD - only 4 chips because of the limits - 3100, 3300X, 3600, 3700X
R2 for regression 1 (always suspect you don't have enough data points when this happens; with four CPUs and four fitted parameters, including the intercept, a perfect fit is guaranteed)
Coefficients:
Intercept: 0.700
Cores: -0.000
Base: -0.212
Boost: +0.222

Neither uarch seems to respond well to increased cores when compared to increased frequency. It appears any perceived benefit from more cores is related to the higher boost clocks on those chips.

We need more data. When we decide to selectively exclude high core count chips (even though the argument is that more cores helps) it really limits the data, especially for AMD. Perhaps I can dig through and see if TechSpot has done consistently well-controlled testing of Zen2 chips and add more data.
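As a side note on the AMD subset's perfect fit: with four data points and four fitted parameters (three predictors plus an intercept), ordinary least squares can pass through every point exactly, so R2 = 1 is mechanical rather than informative. A quick demonstration with arbitrary random numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))           # 4 CPUs, 3 predictors
y = rng.normal(size=4)                # any response values at all
A = np.column_stack([np.ones(4), X])  # intercept -> 4 parameters, 4 points

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 6))  # 1.0: a perfect fit regardless of the data
```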
 
  • Like
Reactions: dvsv

LightningZ71

Senior member
Mar 10, 2017
303
190
86
I wonder if a comparison between a 3300X with expensive, high-end RAM that is tuned within an inch of its life and the equivalent amount of money spent on a 3600X with the remaining budget spent on commodity RAM that is similarly optimized would reveal a notable difference in 1% lows? I suspect that the inter-CCX latency will be an issue for the 3600X and that the reduced RAM latency would help the 3300X quite a bit.
In the end, these arguments are always about “bang for the buck” as most of us have limited resources. I feel that this is a more important focus.
 
  • Like
Reactions: DAPUNISHER

Hitman928

Platinum Member
Apr 15, 2012
2,450
1,489
136
1) Yes, performance increases from 4 to 8 cores, and flattens between 8 and up to 10 cores, and gets worse with 12 and 16 cores. But that could be uarch.
It's not uarch, because there are plenty of examples of those chips scaling very well when the program being used can use the additional cores. No real-time game engine today can use 16 cores / 32 threads or even 12 cores / 24 threads, so you won't see any benefit from the additional cores.

2) No, threads are not worth as much as a core, but it depends on the workload. In CB20 they're worth much more than they are in gaming. The inclusion of threads in the analysis doesn't change the weak correlation between cores and delta% or 1% lows. I will exclude threads from the next analysis because they play little role, but we will still find that core count doesn't matter across a broad range of chips.
Obviously it depends on workload, but we are only talking about gaming in this thread, nothing else. Even within gaming it will depend on the game engine; some game engines actually benefit from turning SMT completely off, whereas others (such as BF5) benefit from it being on. I also never suggested excluding threads, just that you can't treat them the same as a core; they don't have the same significance, and treating them the same will mess up the calculation. By excluding threads you're still running a flawed analysis, just in the opposite way.

4) Upper-bounding the cores wasn't part of the original statement made by @ozzy702, which was "1% lows are going to be dramatically different between a 3300X and higher core count CPU". But I'll bound Intel to exclude the 10900K, and AMD to exclude 3900X and 3950X even though they provide excellent clock speed data.
While he didn't explicitly say it, it definitely was implied by the way he framed the statement comparing a 3300x (4 cores 8 threads) and higher core count CPUs. He never said that you will see continued scaling up to a 16c/32t CPU and I'm pretty sure no one has ever made that argument on these forums.

If we restrict this to CPUs with 10 or fewer cores, break it out by uarch as above, and remove threads from the analysis, here are the results. Regression is based on cores, base, and boost clocks. I'd have liked to exclude base clock, as it's useless as a predictor, but we need a third variable for the regression and you wanted to exclude threads. Perhaps L3 cache size would be a good one for the future?

Intel - 7700K, 8700K, 9100F, 9400F, 9700K, 9900K
R2 for regression 0.888
Intercept: 0.660
Coefficients:
Cores: 0.028
Base: -0.058
Boost: +0.467

AMD - only 4 chips because of the limits - 3100, 3300X, 3600, 3700X
R2 for regression 1 (always suspect you don't have enough data points when this happens; with four CPUs and four fitted parameters, including the intercept, a perfect fit is guaranteed)
Coefficients:
Intercept: 0.700
Cores: -0.000
Base: -0.212
Boost: +0.222

Neither uarch seems to respond well to increased cores when compared to increased frequency. It appears any perceived benefit from more cores is related to the higher boost clocks on those chips.

We need more data. When we decide to selectively exclude high core count chips (even though the argument is that more cores helps) it really limits the data, especially for AMD. Perhaps I can dig through and see if TechSpot has done consistently well-controlled testing of Zen2 chips and add more data.
Please post the data you are using because even just viewing the BF5 graph tells me something is wrong with your findings. Either that or you are calculating something that no one was making an argument about.
 

Hitman928

Platinum Member
Apr 15, 2012
2,450
1,489
136
Here are my results using BF5, Skylake-family CPUs, and 1% minimums from the chart above. I weighted an SMT thread as 0.25 of a core. I'll go through and see if I can find an SMT on/off test later to make this more accurate. My results show very high correlation with small error between 1% lows and core count, compared to much weaker correlation and high error between 1% lows and frequency.


CPU | All-core turbo (GHz) | 1% min (fps)
7600K | 4.0 | 50
7700K | 4.4 | 77
9400F | 3.9 | 77
9600K | 4.3 | 84
8700K | 4.3 | 114

Correlation r: 0.512
Regression R²: 0.262
Regression std. error: 24.3




CPU | Effective cores (SMT thread = 0.25) | 1% min (fps)
7600K | 4 | 50
7700K | 5 | 77
9400F | 6 | 77
9600K | 6 | 84
8700K | 7.5 | 114

Correlation r: 0.961
Regression R²: 0.989
Regression std. error: 3.32

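The correlation coefficients in the tables above can be recomputed directly from the listed data points (Pearson r between the 1% lows and all-core turbo, and between the 1% lows and SMT-weighted cores):

```python
# Recompute Pearson r from the table data above.
# SMT-weighted cores: physical cores + 0.25 per SMT thread.
import math

lows  = [50, 77, 77, 84, 114]       # 7600K, 7700K, 9400F, 9600K, 8700K
turbo = [4.0, 4.4, 3.9, 4.3, 4.3]   # all-core turbo, GHz
cores = [4, 5, 6, 6, 7.5]           # SMT-weighted core count

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r (turbo vs 1% low): {pearson_r(turbo, lows):.3f}")  # ~0.512
print(f"r (cores vs 1% low): {pearson_r(cores, lows):.3f}")  # ~0.961
```

Both values match the correlations reported in the tables, so the r figures at least are internally consistent with the listed data.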
 



FaaR

Senior member
Dec 28, 2007
638
71
91
Hmm. As long as someone isn't chasing the bleeding edge of gaming, it feels like almost any semi-modern CPU would suffice. If one settles for a 60 fps cap, even an aging Nehalem chip could do the job, especially with some basic overclocking (many of those hit 4 GHz or more from what I recall - not mine, though).

That said, performance definitely varies from game to game on a particular CPU. I used to do a fair bit of skirmish play in Supreme Commander back in the day, and with a 1000-unit cap on a max-size map my i7-4770K Haswell bogged down quickly. My current i9-7900X with bumped turbo multipliers fares much better: in a quick game a while ago I forgot to ally my enemy AI players, so several of them nuked each other into oblivion, but until that happened the game ran very smoothly even with a lot of active units in play. I couldn't notice any slowdown at all, actually.

Finally tech caught up with a 13-year-old game! ;)
 

ozzy702

Golden Member
Nov 1, 2011
1,000
407
136
Hmm. As long as someone isn't chasing the bleeding edge of gaming, it feels like almost any semi-modern CPU would suffice. If one settles for a 60 fps cap, even an aging Nehalem chip could do the job, especially with some basic overclocking (many of those hit 4 GHz or more from what I recall - not mine, though).

That said, performance definitely varies from game to game on a particular CPU. I used to do a fair bit of skirmish play in Supreme Commander back in the day, and with a 1000-unit cap on a max-size map my i7-4770K Haswell bogged down quickly. My current i9-7900X with bumped turbo multipliers fares much better: in a quick game a while ago I forgot to ally my enemy AI players, so several of them nuked each other into oblivion, but until that happened the game ran very smoothly even with a lot of active units in play. I couldn't notice any slowdown at all, actually.

Finally tech caught up with a 13-year-old game! ;)
I know this isn't a BFV thread, but since we've chatted about it... I tried BFV on a 4 GHz X5670 and the experience was OK, but not great. 1% lows regularly dropped below 60 fps, which was annoying. I'm admittedly an FPS snob though, especially with first-person shooters.
 
Last edited:

loki1944

Member
Apr 23, 2020
52
14
36
Be brutally frank with me, did I make a mistake buying some i5-4570/4590 SFF boxes (4C/4T Haswell quad), dropping in 16GB DDR3, GTX 1650 LP, and 480GB SSD (refurb came with 500GB HDD too), "for Gaming"?

I know that spec probably wouldn't run BF V that well, but what about GTA V, Fortnite, etc.?

Considering the price too, I paid maybe $400 for the parts, as opposed to a bespoke rig with a Ryzen R5 3600, 32GB of 3200/3600 RAM, and a GTX 1660 Super/Ti / RX 5700, which would probably run closer to $800-900 (given prices on AM4 mobos and PSUs, maybe higher).

I guess my problem is that I don't advertise. I need to find a couple of buyers for a couple of these rigs.
Depends what you mean by "well": my 2007 Xeon X3230 can run GTA V "well" on normal settings at 1080p with a GTX 280, and my 4770K can run it "well" at normal settings (and some high) with a GTX 570 at 1080p. My i5-750 can play DayZ "well" at 1080p high with a GTX 1050 Ti.
 

Arkaign

Lifer
Oct 27, 2006
20,451
856
126
I know this isn't a BFV thread, but since we've chatted about it... I tried BFV on a 4 GHz X5670 and the experience was OK, but not great. 1% lows regularly dropped below 60 fps, which was annoying.
That's pretty impressive for tech basically from 2009 :)

I find that the E5-1650/1660 v1/v2 (both with fully unlocked multipliers, overclockable using older versions of XTU, like on my son's S30) run basically all games including BFV smoothly; the only title that really stresses them is AC Odyssey, which can drop under 60 fps in the bigger cities. GPU-wise, with 16GB or more of DDR3-1866 (quad-channel matches dual-channel DDR4-3733 bandwidth, but with lower latency!), they pair well with anything up to a GTX 1070-1080 / Vega 56/64 / RTX 2060 Super. But of course if you're pairing with a better GPU than that, you're probably building an outright beefier unit overall anyway.

But I'm still able to find complete systems lacking only a GPU and SSD for $150-$250, which is unreal value for the case, 600W+ PSU, mobo, RAM, HSF, and a legal Windows 10 Pro license with digital activation. And they have enough PCIe slots that adding 10GbE LAN, USB-C/3.1, etc. is easy if desired. I can't even build a current-socket Pentium or Athlon PC with a similar PSU for the same money, lol. Just buying 16GB of DDR4, a basic mobo, a case, and a respectable 500W+ PSU eats tons of budget in a flash.
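The bandwidth equivalence claimed above is easy to check: peak theoretical DRAM bandwidth is transfer rate × 8 bytes per 64-bit channel × number of channels, so four channels of DDR3-1866 land within rounding of two channels of DDR4-3733. A quick sketch:

```python
# Peak theoretical DRAM bandwidth: MT/s x 8 bytes per 64-bit channel x channels.
def peak_bandwidth_gbs(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000  # GB/s (decimal)

ddr3_quad = peak_bandwidth_gbs(1866, 4)  # quad-channel DDR3-1866
ddr4_dual = peak_bandwidth_gbs(3733, 2)  # dual-channel DDR4-3733
print(f"DDR3-1866 x4: {ddr3_quad:.1f} GB/s")  # ~59.7 GB/s
print(f"DDR4-3733 x2: {ddr4_dual:.1f} GB/s")  # ~59.7 GB/s
```

Real-world throughput will be lower on both platforms, but the theoretical peaks do match.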
 

Mopetar

Diamond Member
Jan 31, 2011
4,670
859
126
I think this has been known for a while. Even before Zen and the explosion in consumer core counts, it wasn't unusual for an i5 to post frame rates similar to vastly more expensive HEDT parts from Intel. If you raised the resolution high enough, even a Celeron could keep up with a bottlenecked GPU in average frame rate, though typically not in the 1% lows.

There are a few games that do take advantage of a larger number of cores, but they tend to be the exception rather than the norm. Fast single-core performance continues to be the main limiting factor for most games when the GPU isn't the bottleneck. Perhaps with the new consoles we'll finally see ~8 cores become the baseline, but that may not happen for a few more years, until we're past the early stages of the lifecycle and more developers are targeting the current-generation hardware from the start of their projects.
 
