[Part 3] Measuring CPU Draw Call Performance in Fallout 4


ZGR

Platinum Member
Oct 26, 2012
2,058
671
136
Downloaded the .ini's and saves and set my GPU name correctly. Hope this is better!
Spec: 5775C @ 4.2 GHz | 3.8 GHz cache | 1.9 GHz L4 | 32GB 4x8GB 2400 MHz CL10 | 1080 Ti

Corvega:

corvegatop.jpg

49.1 fps - 13,358 draw calls

Diamond City:
dcity3.jpg
63.7 fps - 8,237 draw calls
 

KompuKare

Golden Member
Jul 28, 2009
1,200
1,510
136
ZGR said:
Downloaded the .ini's and saves and set my GPU name correctly. Hope this is better!
Spec: 5775C @ 4.2 GHz | 3.8 GHz cache | 1.9 GHz L4 | 32GB 4x8GB 2400 MHz CL10 | 1080 Ti

Corvega: 49.1 fps - 13,358 draw calls
Diamond City: 63.7 fps - 8,237 draw calls
Interesting!

So it seems that while Broadwell does very well in that AT Civ6 bench, it does less well in Fallout 4.

Now, on the 5800X3D thread when Broadwell was brought up, someone said that the L4 isn't necessarily that fast, though I'm unsure whether they meant latency, bandwidth, or both. In any case, huge caches are no miracle cure for everything. The other thing to consider for Broadwell C and Zen3 3D, though, is that they manage their performance numbers while using fairly little power.

As for your scores:
VSCuIz5.png

Well, in Diamond City your score is close to Zen2.
In Corvega it is a bit low. I note that you got 13,358 draw calls there, so you must have been looking somewhere slightly different from the others.

My chart is mainly what we got on the OCUK thread, so it doesn't have Sandy Bridge scores, but going by earlier posts in this thread and the OP's charts, your Diamond City score is pretty close to SB @ 4.2GHz using DDR4 3000, which is actually pretty good.

Anyway, thanks for running the benches. I think by and large Broadwell C has aged pretty well, and I'd speculate that Zen3 3D will age similarly for as long as 8C/16T is enough.
 

ZGR

Platinum Member
Oct 26, 2012
2,058
671
136
Yeah, that sounds about right. My chip performed on par with or slightly worse than my friend's 3700X in the games he tested.

Now that the dGPU is removed from the motherboard, my 5775C will finally get some rest back at stock clocks and voltage. I finally get to use its fancy iGPU.

The bandwidth and latency of the L4 cache are poor compared to mid-range DDR4, so it really shouldn't perform that well against Skylake or newer with fast RAM. The 5775C's memory controller is also limited to just DDR3-2400, which was kinda slow compared to what Haswell supported. Broadwell C's successor, Skull Canyon, had even more L4 and could utilize it in more ways than just a victim cache. Intel kept that platform locked down tight, unfortunately.

I think Zen 3 3D will age a bit better since its stacked L3 cache is quite fast. Hopefully the cache war will start to heat up!
 

DrMrLordX

Lifer
Apr 27, 2000
22,137
11,827
136
KompuKare said:
Now, on the 5800X3D thread when Broadwell was brought up, someone said that the L4 isn't necessarily that fast, though I'm unsure whether they meant latency, bandwidth, or both.

That was probably me, and I'm pretty sure it's both.


Plenty of Comet Lake and Coffee Lake systems have DDR4 setups outperforming that L4.
 

KompuKare

Golden Member
Jul 28, 2009
1,200
1,510
136
DrMrLordX said:
That was probably me, and I'm pretty sure it's both.

Plenty of Comet Lake and Coffee Lake systems have DDR4 setups outperforming that L4.
Okay, but while AT's benching uses JEDEC memory, which is unfair to (especially) newer CPUs, there is something about Civ6 that really loves Broadwell C. And thanks to @ZGR we can say that the same isn't really the case for Fallout 4 (and by implication Skyrim SE etc.). I would still like to see Civ6 480p benches for the Ryzen 5800X3D eventually.

I guess the lesson is: there is no perfect CPU, and approaches that work really well for some loads don't work that well for others. Design goals matter a lot: Broadwell C seemed to me mainly a mobile-first design trying to feed the iGPU, while Zen3 3D is mainly a server-first design that does really well with some loads (as seen in the Phoronix Milan-X Linux tests) and happens to do well in a lot of games, including Fallout 4. At the cost of silicon, large-cache CPUs seem to do really well in terms of perf/watt, though.
 

BoredErica

Member
Apr 26, 2022
41
64
61
Just registered to say I really appreciate the work here. No braindead messages about how 'omg cpu perf doesn't matter, you play at 4k anyways'. Plus the Corvega test, plus the attempts to share .ini and save files. Good job, everyone. Hope it continues in generations to come. My takeaways so far from the 5800X3D results:
1. FO4 does not seem to benefit exceptionally from extra V-Cache. The 5800X3D is not stomping the 12600K/12900K.
2. However, it still performs on par with those CPUs, so it's not disappointing either.
3. FO4/Skyrim seem to benefit more from single-thread perf increases from gen to gen than reviewer averages suggest. Going by TPU game averages at 720p, I'd have assumed my old 5GHz 7600K -> 5600X would be an 8% increase in FPS, and 5600X -> 12700K/5800X3D 15%. Those numbers turned out to be 30% and seemingly around 50% here for the latter. So reviewer game averages, even at low resolution, seem to consistently underestimate gen-on-gen gains in FO4.

For me, Corvega is just the first step. I get lower FPS in Faneuil Hall with ultra shadow settings than in Corvega, and large settlements make both loads look like a walk in the park. But it's good progress.


I think Zen 4 will beat the 5800X3D in FO4. I wonder if FO4 likes DDR5?
 
Last edited:

KompuKare

Golden Member
Jul 28, 2009
1,200
1,510
136
BoredErica said:
For me, Corvega is just the first step. I get lower FPS in Faneuil Hall with ultra shadow settings than in Corvega, and large settlements make both loads look like a walk in the park. But it's good progress.
Well, credit where credit is due: MajinCry created these bench threads and came up with the saves.
I guess it has to be repeatable and work with the base game.
More complex saves might work as long as they too use the base game.
My Creation Engine poison is heavily modded Skyrim, and while it would be great to get some idea of CPU performance with 2,000 mods and hundreds of NPCs, if the requirement was to download 100GB of Wabbajack modlists first, there wouldn't be many takers.
A simple look at threading using Process Explorer showed me one thread near 100% and a second one at 70% or so. One might be draw calls, the other NPCs, but it's probably not that simple.
We can always hope that Bethesda fix their engine to scale better, but I'm not holding my breath.
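For anyone who wants to repeat that check without Process Explorer, here's a rough Python sketch using psutil - the process name, the one-second window, and the top-5 cutoff are my own choices, not anything the benchmark prescribes:

```python
# Sample per-thread CPU time for Fallout4.exe twice and diff the totals,
# which is roughly what Process Explorer's Threads tab shows.
import time
import psutil

def busiest_threads(name="Fallout4.exe", interval=1.0, top=5):
    proc = next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == name)
    before = {t.id: t.user_time + t.system_time for t in proc.threads()}
    time.sleep(interval)
    after = {t.id: t.user_time + t.system_time for t in proc.threads()}
    # 1.0 means a thread kept one core fully busy for the whole interval
    loads = sorted(((cpu - before.get(tid, 0.0)) / interval, tid)
                   for tid, cpu in after.items())
    for load, tid in reversed(loads[-top:]):
        print(f"thread {tid}: {load:.0%}")

busiest_threads()
```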
 

ZGR

Platinum Member
Oct 26, 2012
2,058
671
136
Even on my old 5775C, I could spawn so many AI in FO4 and Skyrim. The game engine is very good at running a few hundred AI at decent FPS.
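If anyone wants to try the same kind of stress test, the usual route is a console batch file. A sketch (the settler base ID below is the commonly cited one - verify it first with `help settler 4` - and the count is arbitrary):

```
; spawntest.txt - place in the game folder, run from the in-game console with: bat spawntest
; player.placeatme <base ID> <count> spawns actors at the player's position.
player.placeatme 20593 100
```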
 
  • Like
Reactions: USER8000

JoeRambo

Golden Member
Jun 13, 2013
1,814
2,105
136
KompuKare said:
A simple look at threading using Process Explorer showed me one thread near 100% and a second one at 70% or so. One might be draw calls, the other NPCs, but it's probably not that simple.
We can always hope that Bethesda fix their engine to scale better, but I'm not holding my breath.

It's not easy at all. For example, in one of Paradox's games, one developer has been pushing more multithreading, but it's a complex topic. Here's a video with quite a lot of technical detail on the challenges involved:

 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
BoredErica said:
1. FO4 does not seem to benefit exceptionally from extra V-Cache. The 5800X3D is not stomping the 12600K/12900K.

Yes it does. Look at the 5700X/5800X results - the cache increases performance on the 5800X3D by almost 40% to 60%, although the boost clocks (4.5GHz) are lower than on the 5700X/5800X (4.6GHz/4.7GHz). Because of the age of the engine and the age of the game (2015), at least some modern Intel CPUs benefit from optimisations targeting Skylake - a highly tuned Alder Lake CPU is only about 20% to 30% better than a highly tuned Skylake-derived CPU. In 2015 the newest AMD CPUs were of the Bulldozer lineage, the consoles were using Jaguar-derived CPUs, and Zen was a new design.

Starfield uses Creation Engine 2 and will need better optimisations for AMD CPUs.

BoredErica said:
3. FO4/Skyrim seem to benefit more from single-thread perf increases from gen to gen than reviewer averages suggest. Going by TPU game averages at 720p, I'd have assumed my old 5GHz 7600K -> 5600X would be an 8% increase in FPS, and 5600X -> 12700K/5800X3D 15%. Those numbers turned out to be 30% and seemingly around 50% here for the latter. So reviewer game averages, even at low resolution, seem to consistently underestimate gen-on-gen gains in FO4.

Also, Fallout 4 can use more than four threads (but not very well):

fallout4_1080p-1440p-cpu-scaling.jpg


So a newer CPU than the Core i5 7600K will have better single-core performance and more cores. Although a lot of the improvement in Fallout 4 performance will come from the better single-core performance, memory speed, etc., the extra threads also help.
 
Last edited:

BoredErica

Member
Apr 26, 2022
41
64
61
USER8000 said:
Yes it does. Look at the 5700X/5800X results - the cache increases performance on the 5800X3D by almost 40% to 60%, although the boost clocks (4.5GHz) are lower than on the 5700X/5800X (4.6GHz/4.7GHz). Because of the age of the engine and the age of the game (2015), at least some modern Intel CPUs benefit from optimisations targeting Skylake - a highly tuned Alder Lake CPU is only about 20% to 30% better than a highly tuned Skylake-derived CPU.
Also, Fallout 4 can use more than four threads (but not very well):

In my experience, with controlled testing for Oblivion/SSE and going off of memory with FO4 (so less reliable), the 5600X was 30% faster than the 7600K @ 5GHz. I regrettably didn't do controlled testing for FO4, so I'm not going to say my recollection MUST be right. Let me rephrase your points to see if I'm understanding what you are saying:

1. The 12600K is getting 95 FPS.
2. Skylake would be getting 76 FPS (95/1.25).
3. Skylake outperforms the 5600X. (Hard to accept that the 5600X loses to Skylake in FO4 when my 5600X beat my old 5GHz 7600K by 30% in SSE/Oblivion. Somewhere along the line, things aren't making sense for me.)
4. Kaby Lake would be getting 84 FPS (76*(5/4.5)). Alder Lake would then be only 13% faster than Kaby Lake. So the 9900K offered ~0% perf increase over the 7600K.
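To make that chain easy to check, here is the same back-of-envelope as a few lines of Python (all inputs are the figures above, nothing else):

```python
# Points 1-4 as arithmetic.
adl_fps = 95.0                   # 12600K in Corvega (point 1)
skl_fps = adl_fps / 1.25         # if Alder Lake is ~25% ahead of Skylake (point 2)
kbl_fps = skl_fps * (5.0 / 4.5)  # 7600K at 5 GHz vs Skylake at 4.5 GHz (point 4)
print(f"Skylake estimate:   {skl_fps:.0f} fps")            # ~76
print(f"Kaby Lake estimate: {kbl_fps:.0f} fps")            # ~84
print(f"ADL over KBL:       {adl_fps / kbl_fps - 1:.0%}")  # ~13%
```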

As for multicore usage:
Running around FO4 is one thing. Corvega (and the worst parts of FO4 to run) are another. We can try to test this by turning off 2 cores. Even if Fallout 4 did use more cores in Corvega, and disabling cores did decrease FPS, that is not the case for Skyrim and definitely not the case for Oblivion. The perf increase from the 7600K to the 5600X was 30% for Oblivion and Skyrim, so it can't be cores. Or perhaps the situation in Fallout 4 is very different from Skyrim/Oblivion, even if they showed similarly larger-than-expected performance improvements. If more cores helped FO4 in Corvega, it makes no sense to me for Kaby Lake to be on par with the 9900K.

Or an alternative explanation:
I hallucinated the FO4 gains but did get 30% gains in SSE/Oblivion, when in fact FO4 was no faster on the 5600X and actually slower. A hard pill for me to swallow, but possible.
 
Last edited:

Madcap_Magician

Junior Member
Apr 26, 2018
11
31
91
I'm back and have taken the title of King of Corvega Dump3D. Although I'm sure someone with a different motherboard and RAM and/or more skill can best my score.

I performed many tests to see how the 5800x3D would perform with various setups.


In this post I will include the top-scoring setup. In a follow-up post I'll include more details on various setups.

Corvega: 113.7 FPS
Diamond City: 129.5 FPS
 

Attachments

  • FPS 129.5.FO4 DC DiamondC 5800x3d 101.6 1867.jpg
  • FPS 113.7.FO4 DC Corvega 5800x3D 101.6 1867.jpg

Madcap_Magician

Junior Member
Apr 26, 2018
11
31
91
PBO Tuner?

Available in post 13:

Here are more results from some interesting clock configurations. The FPS doesn't tell the whole story, though. With clocks that don't match up, there is a loss of smoothness and stability.

I was going to post more but I don't want to get off topic.

Thanks to MajinCry for coming up with this test. It's been very helpful and useful for us Creation Engine game fans.

I'm going to do at least one more test to see the difference in driver overhead between a 1080 Ti and a 3080 Ti. I have the card; I just wanted to finish this testing before moving on. The next tests will feature lower-speed RAM and otherwise stock settings, because I greatly prefer stability over a few frames.
 

Attachments

  • FPS 127.5.FO4 DC DiamondC 5800x3d2133ram 1900fclk.jpg
  • FPS 123.2FO4 DC DiamondC 5800x3d2133ram 1067fclk.jpg
  • FPS 108.5.FO4 DC Corvega 5800x3d2133ram 1067fclk.jpg
  • FPS 108.4FO4 DC Corvega 5800x3d2133ram 1900fclk.jpg

psolord

Platinum Member
Sep 16, 2009
2,095
1,235
136
Ok, that's interesting.

Can someone point me to these infamous save files and the .ini? No time for a thorough search, but the test seems quick enough, and I'd like to test my 2700K@5GHz+3060 Ti vs the 8600K@5GHz+1070 for the lolz.

Also, has anyone tested with DXVK?
 

ZGR

Platinum Member
Oct 26, 2012
2,058
671
136
psolord said:
Ok, that's interesting.

Can someone point me to these infamous save files and the .ini? No time for a thorough search, but the test seems quick enough, and I'd like to test my 2700K@5GHz+3060 Ti vs the 8600K@5GHz+1070 for the lolz.

Also, has anyone tested with DXVK?


KompuKare helped me find the link, glad I can help too lol.
 

Madcap_Magician

Junior Member
Apr 26, 2018
11
31
91
BoredErica said:
In my experience, with controlled testing for Oblivion/SSE and going off of memory with FO4 (so less reliable), the 5600X was 30% faster than the 7600K @ 5GHz. I regrettably didn't do controlled testing for FO4, so I'm not going to say my recollection MUST be right. Let me rephrase your points to see if I'm understanding what you are saying:

1. The 12600K is getting 95 FPS.
2. Skylake would be getting 76 FPS (95/1.25).
3. Skylake outperforms the 5600X. (Hard to accept that the 5600X loses to Skylake in FO4 when my 5600X beat my old 5GHz 7600K by 30% in SSE/Oblivion. Somewhere along the line, things aren't making sense for me.)
4. Kaby Lake would be getting 84 FPS (76*(5/4.5)). Alder Lake would then be only 13% faster than Kaby Lake. So the 9900K offered ~0% perf increase over the 7600K.

As for multicore usage:
Running around FO4 is one thing. Corvega (and the worst parts of FO4 to run) are another. We can try to test this by turning off 2 cores. Even if Fallout 4 did use more cores in Corvega, and disabling cores did decrease FPS, that is not the case for Skyrim and definitely not the case for Oblivion. The perf increase from the 7600K to the 5600X was 30% for Oblivion and Skyrim, so it can't be cores. Or perhaps the situation in Fallout 4 is very different from Skyrim/Oblivion, even if they showed similarly larger-than-expected performance improvements. If more cores helped FO4 in Corvega, it makes no sense to me for Kaby Lake to be on par with the 9900K.

Or an alternative explanation:
I hallucinated the FO4 gains but did get 30% gains in SSE/Oblivion, when in fact FO4 was no faster on the 5600X and actually slower. A hard pill for me to swallow, but possible.

I'm going to have to look into this SSE/Oblivion testing. Thanks for the heads up.

FYI, on my 3900X I did test performance by dropping cores. I used Process Lasso to remove them from Fallout in real time. The FPS numbers became less stable, with worse lows, before I saw a direct loss of max FPS.

So the performance loss with fewer cores is there; it's just not going to show up in a screenshot of the FPS at one moment in time. We would need a frame time graph to get an accurate impression of smoothness and performance.
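For what it's worth, the same experiment can be scripted instead of clicked through Process Lasso. A minimal psutil sketch - the process name and the four-CPU mask are just assumptions for illustration:

```python
# Pin Fallout4.exe to a subset of logical CPUs, mimicking dropping
# cores in Process Lasso. Revert by passing back the full CPU list.
import psutil

game = next(p for p in psutil.process_iter(["name"])
            if p.info["name"] == "Fallout4.exe")
print("before:", game.cpu_affinity())
game.cpu_affinity([0, 1, 2, 3])  # restrict to the first four logical CPUs
print("after:", game.cpu_affinity())
```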
 

BoredErica

Member
Apr 26, 2022
41
64
61
Madcap_Magician said:
I'm going to have to look into this SSE/Oblivion testing. Thanks for the heads up.

FYI, on my 3900X I did test performance by dropping cores. I used Process Lasso to remove them from Fallout in real time. The FPS numbers became less stable, with worse lows, before I saw a direct loss of max FPS.

So the performance loss with fewer cores is there; it's just not going to show up in a screenshot of the FPS at one moment in time. We would need a frame time graph to get an accurate impression of smoothness and performance.
Thanks for the reply. I'm going to spend a long time now thinking about whether I should get the 5800X3D, wait for next gen, or both. :^)
 

psolord

Platinum Member
Sep 16, 2009
2,095
1,235
136
Here are my results on the 2700K+3060 Ti, with the MSI Afterburner OSD on top (it doesn't seem to affect performance).

However, I am not sure I did everything correctly, because if you launch Fallout4.exe with the provided prefs .ini, the game launches in a tiny window and you can't see jack. So I used the launcher with full-screen 1080p ultra settings. Hope this will suffice.

Will try my 8600K+1070 out of curiosity during the week.

Oh, and DXVK does not work with this game, unfortunately. Yes, I deleted all ENB-related files; I just wanted to get a similar screenshot without the ENB data, of course, just to see the ballpark difference in the MSI AB data.


 
  • Like
Reactions: lightmanek

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
BoredErica said:
In my experience, with controlled testing for Oblivion/SSE and going off of memory with FO4 (so less reliable), the 5600X was 30% faster than the 7600K @ 5GHz. I regrettably didn't do controlled testing for FO4, so I'm not going to say my recollection MUST be right. Let me rephrase your points to see if I'm understanding what you are saying:

1. The 12600K is getting 95 FPS.
2. Skylake would be getting 76 FPS (95/1.25).
3. Skylake outperforms the 5600X. (Hard to accept that the 5600X loses to Skylake in FO4 when my 5600X beat my old 5GHz 7600K by 30% in SSE/Oblivion. Somewhere along the line, things aren't making sense for me.)
4. Kaby Lake would be getting 84 FPS (76*(5/4.5)). Alder Lake would then be only 13% faster than Kaby Lake. So the 9900K offered ~0% perf increase over the 7600K.

As for multicore usage:
Running around FO4 is one thing. Corvega (and the worst parts of FO4 to run) are another. We can try to test this by turning off 2 cores. Even if Fallout 4 did use more cores in Corvega, and disabling cores did decrease FPS, that is not the case for Skyrim and definitely not the case for Oblivion. The perf increase from the 7600K to the 5600X was 30% for Oblivion and Skyrim, so it can't be cores. Or perhaps the situation in Fallout 4 is very different from Skyrim/Oblivion, even if they showed similarly larger-than-expected performance improvements. If more cores helped FO4 in Corvega, it makes no sense to me for Kaby Lake to be on par with the 9900K.

Or an alternative explanation:
I hallucinated the FO4 gains but did get 30% gains in SSE/Oblivion, when in fact FO4 was no faster on the 5600X and actually slower. A hard pill for me to swallow, but possible.

I am not sure where you are getting those numbers from - the best thing is for you to run this benchmark on your Ryzen 5 5600X and your Core i5 7600K and see what data you get. The IPC improvement numbers on TPU are an average and will differ from game to game.

Fallout 4 probably had some optimisations for Skylake (the latest CPU before its launch) - Skylake showed some noticeable improvements over Haswell, especially if you tuned the RAM. Skyrim Special Edition was made internally before Fallout 4 to test the engine on consoles, and Bethesda decided to release it later, so that could explain what you are seeing (it might not have the Skylake optimisations?).

But this is a draw call benchmark - it primarily tests the rendering thread, so it's going to be limited by single-core performance. When you go inside the plant, it's also going to have things such as NPC AI, weapon mechanics, etc. running on different threads. If you run mods like Sim Settlements 2, which has lots of scripts, things will change. Skyrim is a much simpler game mechanically.


VSCuIz5.png


Skylake and every Intel CPU up to Comet Lake use the same core (but with security fixes). Single-core IPC will be the same with no security fixes for both. Alder Lake is a two-generation improvement in IPC over Skylake/Comet Lake, and those two generations have yielded at best a 30% improvement in this benchmark (tuned Core i9 9900K vs tuned Core i5 12600K).

Zen2 is slightly slower than Skylake according to gameplay tests done on here. The improvement in this game is about 10% to 30% overall for Zen3 vs Zen2, which would make it about the same as Skylake. The game really benefits from RAM tuning - you can easily gain 20% to 30% extra performance by tuning the RAM properly. It also benefits from larger caches.

The guy who posted the Core i9 9900K result knows how to tune RAM very well - they have the top Core i5 12600K result. They are also getting a Ryzen 7 5800X3D, so we can see what a tuned one can do.

Regarding the core counts: I have done some informal testing myself. When I run my modded game, there are more than 4 threads in use. Two to three threads appeared to be the most heavily used, with the next few showing less usage. Cutting back to fewer cores, the game seemed a bit choppier. Which you can see in the chart I posted - most of the performance is in the first 4 threads, but the extra threads see usage.

Skyrim Special Edition seemed to use fewer threads even with lots of mods, but it is a less complex game.
 
Last edited:
  • Like
Reactions: psolord

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
psolord said:
Here are my results on the 2700K+3060 Ti, with the MSI Afterburner OSD on top (it doesn't seem to affect performance).

However, I am not sure I did everything correctly, because if you launch Fallout4.exe with the provided prefs .ini, the game launches in a tiny window and you can't see jack. So I used the launcher with full-screen 1080p ultra settings. Hope this will suffice.

Will try my 8600K+1070 out of curiosity during the week.

Oh, and DXVK does not work with this game, unfortunately. Yes, I deleted all ENB-related files; I just wanted to get a similar screenshot without the ENB data, of course, just to see the ballpark difference in the MSI AB data.



If it's in a small window, it's working correctly. Just reduce your desktop resolution a bit and it's easier to see.
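For reference, the window size comes from the [Display] section of the supplied Fallout4Prefs.ini; the relevant keys look like this (the values here are illustrative, not the thread's exact settings):

```ini
[Display]
bFull Screen=0
iSize W=640
iSize H=480
```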
 

psolord

Platinum Member
Sep 16, 2009
2,095
1,235
136
psolord said:
Here are my results on the 2700K+3060 Ti, with the MSI Afterburner OSD on top (it doesn't seem to affect performance).

However, I am not sure I did everything correctly, because if you launch Fallout4.exe with the provided prefs .ini, the game launches in a tiny window and you can't see jack. So I used the launcher with full-screen 1080p ultra settings. Hope this will suffice.

Will try my 8600K+1070 out of curiosity during the week.

Oh, and DXVK does not work with this game, unfortunately. Yes, I deleted all ENB-related files; I just wanted to get a similar screenshot without the ENB data, of course, just to see the ballpark difference in the MSI AB data.



Ok, so I did the test on my 8600K+GTX 1070.

===========
@4.3GHz, which is its normal operating speed, the result is 66.8 fps

in Corvega with the provided .ini, both with the MSI AB OSD enabled and disabled (added the OSD just to have screenshots with more info)





No Diamond City for 4.3GHz, sorry.

===========

5GHz test on Corvega: 73.3 fps





So for a 16.3% clock increase from 4.3 to 5GHz, we get a ~9.7% performance boost.
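As a quick sanity check of those two numbers (the roughly 60% scaling suggests the bottleneck isn't purely core clock):

```python
# Clock scaling on the 8600K in Corvega.
fps_43, fps_50 = 66.8, 73.3
print(f"clock: +{5.0 / 4.3 - 1:.1%}")        # +16.3%
print(f"fps:   +{fps_50 / fps_43 - 1:.1%}")  # +9.7%
```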

Also, I did one more test here with the settings set to straight ultra from the launcher, and I do not see any FPS difference.



The GPU load jumped from 48% to 83%, of course, but it is still perfectly CPU-limited. I totally get why the original creator of the test chose the small window size (to keep it CPU-limited). I just did this as sufficient proof that the above 2700K+3060 Ti tests are valid and I don't have to run them again.

=========

Lastly, the Diamond City test at 5GHz: 85.7 fps





===================

So to recap

Corvega: 8600K@4.3GHz + 1070 = 66.8 fps
Corvega: 8600K@5.0GHz + 1070 = 73.3 fps
Corvega: 2700K@5.0GHz + 3060 Ti = 56.8 fps

Diamond City: 8600K@5.0GHz + 1070 = 85.7 fps
Diamond City: 2700K@5.0GHz + 3060 Ti = 66.8 fps

For me it's impressive that the 2700K managed to surpass some stock 3600 results and the 8600K managed to surpass some 5700X results!
 

ZGR

Platinum Member
Oct 26, 2012
2,058
671
136
I messed up and did the test on a 2017 build of Fallout 4 for the 5775C; it probably doesn't matter too much. I forgot that one of my mods will not run past 1.10.20. I'm just going to zip up the game and redownload it from Steam when I get the chance.
On the new system with the older version I am getting consistently about 3% lower FPS than Madcap. Their RAM is tuned and mine is not, and my CPU is not boosting to 4550 MHz at all. I doubt F4 utilizes the single-core boost that often, but that 100 MHz is still off the table! Waiting on a BIOS update.

For fun I am stress testing AI spawning, which is what I did on the 5775C as well. The 5775C crashed with around 9x more human AI. The 5800X3D is doing over 20x human AI and the game is not crashing! The game will crash after a certain global AI limit, even if FPS is high (player far away); I am assuming this is a game engine limitation, one that was never possible to reach on the old CPU. FPS is unplayable, though, and the AI need a lot more space; many will spawn inside walls. There is a built-in limit on how many AI can perform functions, but that has been increased to 500. This is why I love Fallout 4 - the AI limit is absurd. Most of the AI causing the low framerate are behind buildings.

ai.jpg
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
22,137
11,827
136
ZGR said:
For fun I am stress testing AI spawning, which is what I did on the 5775C as well. The 5775C crashed with around 9x more human AI. The 5800X3D is doing over 20x human AI and the game is not crashing!

That's interesting. It's not entirely the topic of this discussion, but I wonder what would happen if someone did the same thing on a 12900KS?