The Ryzen FuzeDrive got me interested, so I went ahead and downloaded PrimoCache to check out what sort of boosts I could get from a combined SSD+HDD setup. I don't care much about synthetic tests; I'm more about real-world tests that match my workloads, so don't expect to see CrystalDiskMark here.
Anyhow, here are my system specs:
Ryzen 7 1700 SMT Off @ 3.9GHz
2x8GB G.Skill TridentZ at 3200MHz CL14
Gigabyte AX370 Gaming K7
EVGA GTX 1080 Ti SC2
OS Drive: 500GB 960 EVO
Other drives:
120GB Sandisk Plus SSD <-- Used as cache
240GB Transcend SSD
2TB WD Blue
1TB WD Black <-- Used as main storage in tests
The RAM Cache test uses 2GB of RAM dedicated to speeding the HDD up.
The All Cache test uses 2GB of RAM + 112GB of SSD cache for speeding up the HDD.
In both cases the block size was set to 32KB, which incurs about 500MB of RAM overhead.
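To put that overhead figure in context, here's a quick back-of-the-envelope calculation. It assumes (my assumption, not anything stated in the PrimoCache docs) that the RAM overhead scales with the number of cache blocks:

```python
# Rough per-block metadata estimate, assuming the ~500MB RAM overhead
# is spread evenly across all cache blocks (L1 + L2).
KIB, MIB, GIB = 1024, 1024**2, 1024**3

cache_bytes = 2 * GIB + 112 * GIB    # 2GB RAM (L1) + 112GB SSD (L2)
block_bytes = 32 * KIB               # the 32KB block size used in the tests
blocks = cache_bytes // block_bytes

print(blocks)                        # 3735552 blocks to track
print(round((500 * MIB) / blocks))   # ~140 bytes of metadata per block
```

This also suggests why the block size setting matters: doubling it to 64KB would roughly halve the block count and therefore the RAM overhead, at the cost of coarser caching granularity.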
Testing methodology:
I picked 3 games: The Witcher 3, Rainbow Six Siege, and Battlefield 4. For the first two I picked three saved games/situations each, and for the latter a multiplayer map and a singleplayer level, to simulate loading different levels of the same game. In theory a lot of data should be shared between levels, allowing the caching algorithms to provide a speedup even on the first load of a different map after another level has been loaded.
For these tests I ran four different setups. The first was pure HDD loading, letting the OS do its own caching and prefetching. After that I cleared the standby memory to reset any caching the OS did, and used just the RAM Cache to check for speedups there. Then I did the All Cache test, using the RAM as an L1 cache and the SSD as an L2 cache. Finally, I moved the games to the SSD and checked the performance loading purely from the SSD. Battlefield 4 wasn't included in the pure SSD tests, both for time reasons and for reasons that will become clear in the data.
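For anyone unfamiliar with how a two-tier (L1 RAM / L2 SSD) cache behaves, here's a toy sketch of the general idea. This is my own minimal LRU model, not PrimoCache's actual (proprietary) algorithm:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy L1 (RAM) + L2 (SSD) block cache with LRU eviction in each tier."""

    def __init__(self, l1_blocks, l2_blocks):
        self.l1 = OrderedDict()   # small, fast, lost on reboot
        self.l2 = OrderedDict()   # large, slower than RAM, survives reboot
        self.l1_cap, self.l2_cap = l1_blocks, l2_blocks

    def read(self, block, fetch_from_hdd):
        if block in self.l1:                  # L1 hit: fastest path
            self.l1.move_to_end(block)
            return self.l1[block]
        if block in self.l2:                  # L2 hit: still far faster than HDD
            data = self.l2[block]
        else:                                 # miss: pay the HDD access once
            data = fetch_from_hdd(block)
        self._put(self.l1, self.l1_cap, block, data)
        self._put(self.l2, self.l2_cap, block, data)
        return data

    @staticmethod
    def _put(tier, cap, block, data):
        tier[block] = data
        tier.move_to_end(block)
        if len(tier) > cap:
            tier.popitem(last=False)          # evict least-recently-used block

# Repeated reads of the same blocks only ever hit the HDD once each.
hdd_reads = []
def fetch_from_hdd(block):
    hdd_reads.append(block)                   # count the slow HDD accesses
    return f"data{block}"

cache = TwoTierCache(l1_blocks=4, l2_blocks=16)
for b in [1, 2, 3, 1, 2, 3]:
    cache.read(b, fetch_from_hdd)
print(hdd_reads)                              # [1, 2, 3]
```

The key point for these tests is the last line: once a block has been read from the HDD, repeats are served from cache, which is why second loads of a level behave like an SSD.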
Now the point of this entire thing is to save on precious SSD storage, so I recorded how much data was being cached in the SSD at any given test, to compare with just installing the game on the SSD.
Results:
Quick Analysis:
Looking at the pure HDD results, we can see the OS caching in action. Windows 10 makes use of unused memory, caching data in standby memory to speed up future accesses. It is obviously quite effective here, as can be seen in The Witcher 3 and Battlefield 4 results after the first run, which land within the margin of error of the full cache result (and of the SSD, for The Witcher 3).
It isn't nearly as effective with Rainbow Six Siege, however. At best, load times are reduced by 20%, a far cry from the 2.8x and 5.2x(!) speedups on The Witcher 3 and Battlefield 4, respectively.
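Since that paragraph mixes two ways of expressing the same measurement (a percentage reduction in load time vs. an "Nx" speedup), here's the conversion between them, using made-up round numbers rather than my actual timings:

```python
def speedup(t_before, t_after):
    """Speedup factor from two load times (same units)."""
    return t_before / t_after

# A 20% reduction in load time (e.g. 100s -> 80s) is only a 1.25x speedup...
print(speedup(100, 80))                  # 1.25
# ...while a 2.8x speedup means cutting load time by about 64%.
print(round((1 - 1 / 2.8) * 100, 1))     # 64.3
```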
It's important to note that these speedups only last until a system restart, or until the standby memory is evicted in favor of something else.
Moving on to the RAM Cache: there were no observable speedups with The Witcher 3 and Battlefield 4, but Rainbow Six Siege did get a slight boost, about a 16% speedup for the first run and up to 35% for subsequent runs.
And finally, with the entire caching scheme, things get interesting.
There was a small speedup on the very first Witcher 3 run, but it's really nothing worth discussing.
On subsequent runs, the full cache scheme is within the margin of error of the rest of the results. Looking at the cached data usage after all 9 runs, it was just 2.23GB! And we were getting the full speedup of an SSD. For perspective, the full game install is 51GB.
Moving on to Rainbow Six Siege, the first load isn't any faster than the RAM Cache result, but subsequent loads of the same level show the full speedup of an SSD. After that, my theory of game asset sharing comes into effect: loading different levels for the first time sits between the SSD and RAM Cache results in performance, and significantly below the base HDD result, showing a 1.733x speedup before the system even got the chance to fully cache everything. Subsequent loads of the same level show a 2.25x speedup over the mechanical drive. After all 9 runs, the total storage use (which includes the Witcher 3 runs) went up to 6.21GB. The two games together take up 90GB to fully install.
Finally, with Battlefield 4: while I didn't test the SSD due to time constraints, it should be fairly obvious that the caching scheme is highly effective. After the first run it saves significant time, matching an SSD for subsequent loads, without costing much storage. Total usage went up to 8.23GB. Having all three games installed would cost 158GB of storage, which is more than this SSD can hold and about 19x more than the cached result. To be fair, I didn't load ALL the levels, but the testing shows the main bump in storage requirements comes from the very first map, with subsequent levels not adding much. I expect that after everything is cached, the result would be about a 10x storage efficiency gain, down from 19x.
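The storage-efficiency ratios quoted throughout can be recomputed directly from the cached-data and install-size figures above:

```python
# Full install size vs. cumulative SSD cache usage, from the numbers above (GB).
installs_gb = {"Witcher 3 alone": 51, "W3 + R6 Siege": 90, "all three games": 158}
cached_gb = {"Witcher 3 alone": 2.23, "W3 + R6 Siege": 6.21, "all three games": 8.23}

for name, size in installs_gb.items():
    ratio = size / cached_gb[name]                     # e.g. 158 / 8.23 -> ~19.2
    print(f"{name}: {ratio:.1f}x less SSD space than installing outright")
```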
Now, at first glance it might seem like, Rainbow Six aside, the HDD and SSD provide similar performance after the first load. However, as I mentioned previously, the OS cache is far less "sticky": it can be lost to a restart, or even just by using the computer long enough that the cache is evicted from memory. The same can be said for the SSD cache, but the huge difference in capacity means it can hold far more before having to evict anything, and of course nothing is lost after a restart.
To make sure, I evicted everything from memory and tested the effectiveness of the cache. As can be seen, the performance matches the pure SSD runs from the very first run (or arguably slightly faster, depending on the margin of error).
Summary and tl;dr
From my testing I conclude that an SSD caching scheme is a far more efficient use of your SSD storage space. Aside from not having to manage your storage tiers manually, it provides most of the SSD benefit after the first level of a game loads, even when loading different levels. When you load the same level again, you get pure SSD performance with significantly less storage space used.
+1 to PrimoCache; I'll keep using it, and will probably buy it past the 60-day trial.
An alternative would be VeloSSD, but I haven't tried it myself so I can't say anything about it.
Intel users can look here: https://www.intel.com/content/www/us/en/software/intel-cache-acceleration-software-performance.html