Did you check the correlation between your independent variables when you ran the multiple regression? If your independent variables are too highly correlated (roughly 0.6+), multicollinearity is most likely going to screw everything up and make it impossible to determine the significance of the individual variables (https://statisticsbyjim.com/regression/multicollinearity-in-regression-analysis/).
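For what it's worth, here's a quick sketch of that check in Python, using the chips and all-core clocks from the table below (the DataFrame layout is just an illustration, not your actual dataset):

```python
# Quick multicollinearity check: pairwise correlations between the predictors.
# Core/thread counts are the chips' published specs; all-core clocks are the
# values from the table in this post.
import pandas as pd

df = pd.DataFrame(
    {
        "cores":        [8, 4, 8, 8, 6, 6, 6, 4, 6],
        "threads":      [16, 4, 16, 8, 12, 12, 6, 8, 6],
        "all_core_ghz": [4.7, 4.0, 4.7, 4.6, 4.5, 4.3, 4.3, 4.4, 3.9],
    },
    index=["9900K", "9100F", "10700K", "9700K", "10600K", "8700K", "9600K", "7700K", "9400F"],
)

# Anything around 0.6+ here is a red flag for separating individual effects.
print(df.corr().round(2))
```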
With that said, your frequencies are all wrong. Those are single-core boost frequencies, but that's not what these chips will run at in any remotely modern game. The frequencies should be more like this:
| Chip | All-core boost (GHz) |
|------|----------------------|
| 9900K | 4.7 |
| 9100F | 4.0 |
| 10700K | 4.7 |
| 9700K | 4.6 |
| 10600K | 4.5 |
| 8700K | 4.3 |
| 9600K | 4.3 |
| 7700K | 4.4 |
| 9400F | 3.9 |
Additionally, you're still calculating how much impact 8c/16t CPUs have on core scaling, but the engine doesn't scale that high, and that wasn't the point. The point was that game engines are starting to be able to scale beyond 8 threads, and in any game that does, a 4c or 4c/8t CPU will have a hard time keeping up with any consumer 6c/12t or higher CPU, especially in minimums. I offered BF5 as an example of a game that can already scale beyond 4c/8t, and the graph shows that. If you calculate the correlation from 4c up to 6c/12t (or 8c/8t), you'll see a very strong correlation with the increases in minimums. Of course you'll see a correlation with frequency too; the question is whether a 4c/8t chip can keep up on frequency alone.
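As a rough version of that calculation, restricted to chips from 4c/4t up to 6c/12t and 8c/8t, and using the 1% low figures quoted elsewhere in this thread:

```python
# Pearson correlation between thread count and 1% lows for chips within the
# engine's ~6c/12t / 8c/8t scaling range. The 1% low figures are the ones
# quoted in this discussion.
import numpy as np

#                    9100F 9400F 9600K 7700K 9700K 8700K 10600K
threads  = np.array([4,    6,    6,    8,    8,    12,   12])
low_1pct = np.array([61,   85,   97,   94,   126,  121,  125])

r = np.corrcoef(threads, low_1pct)[0, 1]
print(f"Pearson r (threads vs 1% lows): {r:.2f}")  # ~0.85 with these numbers
```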
If your interpretation is correct and frequency is the main determining factor in minimums, then let's run a sanity check (I'm a huge fan of always running a sanity check). From your numbers, a 4c/4t 9100F running at 4 GHz loaded gets 61 for 1% mins. An 8c/8t 9700K at 4.6 GHz loaded gets 126 for 1% mins. So which do you think is more likely: that a 15% increase in clock speed accounted for the bulk of a 106.6% increase in minimums, or that having 100% more cores did? Clearly the clock speed difference can only account for a small part of that gap. This is obviously one example, but the same will hold true for others, as long as you frame it with the understanding that the game engine doesn't scale beyond ~6c/12t or 8c/8t.
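Put as bare arithmetic, that sanity check looks like this:

```python
# The sanity check above, as explicit arithmetic (9100F vs 9700K).
freq_gain = 4.6 / 4.0 - 1   # loaded clock: +15%
core_gain = 8 / 4 - 1       # physical cores: +100%
min_gain  = 126 / 61 - 1    # 1% lows: +106.6%

print(f"clock +{freq_gain:.0%}, cores +{core_gain:.0%}, 1% lows +{min_gain:.1%}")
# Even if minimums scaled linearly with clock, +15% clock would explain only
# a small slice of a +106.6% jump.
```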
Edit: I accidentally closed my browser tab and ended up with a mishmash of post drafts so I fixed that.
Re: the Battlefield V limit, as I said in my post, I removed all the chips with more than 8 threads, and the result came out the same. I could add back the chips with 12 threads but I'm not sure the results would be much different. I'll check on that.
Some interesting data points (further sanity checks) on chips where it seems that some variables are held constant, while others of interest are different:
9900K vs 10700K

- same cores/threads
- 9900K peak boost 5 GHz vs 4.7 GHz for 10700K
- same all-core boost
- same 1% lows
- same avg FPS

-- Looks like peak boost has little effect in this case.
8700K vs 9600K

- same base freq
- same all-core boost
- SMT on vs SMT off (12 threads vs 6 threads)
- +20-25% 1% lows for 8700K
- +5-10% avg FPS for 8700K

-- Looks like adding threads helps quite a bit (note the peak boost frequencies differ by <5%).
9400F vs 9600K

- same cores/threads
- 9600K peak boost 4.6 GHz vs 4.1 GHz for 9400F
- 9600K all-core boost 4.3 GHz vs 3.9 GHz for 9400F
- 1% lows 97 vs 85 (10-15% difference)
- avg FPS 153 vs 143 (5-10% difference)

-- Looks like increasing all-core/peak boost helps as well, though not as much.
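The percentage gaps quoted in those comparisons check out, for what it's worth (the 8700K 1% low figure of 121 is the one given further down in this post):

```python
# The percentage gaps quoted in the comparisons above.
pairs = {
    "8700K vs 9600K, 1% lows": (121, 97),
    "9600K vs 9400F, 1% lows": (97, 85),
    "9600K vs 9400F, avg FPS": (153, 143),
}
for label, (a, b) in pairs.items():
    print(f"{label}: +{a / b - 1:.1%}")
# -> +24.7%, +14.1%, +7.0%
```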
Some other interesting cases:
7700K vs 9400F

- 4/8 (4.6 effective threads) vs 6/6
- 7700K all-core 4.4 GHz, 9400F all-core 3.9 GHz
- 1% lows 94 vs 85
It's possible that BF5 uses SMT very efficiently, but only at low thread counts; in that case each SMT thread would confer ~55% of the performance of a "real" core, which would let a 4c/8t chip beat a 6c/6t chip by the 10-15% we see. We also note an all-core boost difference of 10-15%. Both are probably playing a role.
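As a rough check on that reading, here's a minimal sketch that assumes (crudely) that 1% lows scale with effective cores × all-core boost, and solves for the SMT yield implied by the 94-vs-85 result:

```python
# Crude model: 1% lows ~ effective_cores * all_core_boost, where each SMT
# sibling thread counts as some fraction ("yield") of a physical core.
# Solve (4 + 4*y) * 4.4 / (6 * 3.9) = 94 / 85 for y:
ratio = 94 / 85
implied_yield = (ratio * 6 * 3.9 / 4.4 - 4) / 4
print(f"Implied SMT yield per sibling thread: {implied_yield:.0%}")  # ~47%
```

That lands in the same ballpark as the ~55% eyeballed above, which fits the read that clocks and SMT are both contributing.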
9700K (8/8) vs 10600K (6/12) and 8700K (6/12)

- 1% low: 126 vs 125 vs 121 (<5% differences)
It seems there is a break point, somewhere above 8 threads, past which frequency becomes the more important factor. I'm not sure if that point is at 8 or 10 threads; we don't have a 10-thread chip to compare.
There is DEFINITELY a relationship between core count, thread count, all-core boost, and peak boost on one side and 1% lows on the other. I don't think we can easily dismiss any of these factors, as they play different roles depending on how many cores and threads a chip has.
Statistically, you are right: core count, effective core count, and boost speed (as well as all-core boost speed) are tightly correlated with one another, which makes it very difficult to sort out which one plays the biggest role.
In the end it seems like both factors play a role, and it will be hard to "prove" which is the larger contributor. I think you're correct that cores/threads play a major role, while all-core/peak boost seems to play a smaller one, especially for the majority of chips installed in gaming rigs at this point.