BonzaiDuck
Lifer
- Jun 30, 2004
Doesn't the OS already have to cache all reads and writes in RAM anyway?
It sounds to me like RAPID is doing something the OS does anyway, then claiming credit for it by measuring R/W speed on the wrong side of the SATA bus.
The OS reads programs and data from the HDD into RAM, not much different from the old Von Neumann model that has described most computers since the first hard disk was forklifted onto a Mack truck's trailer. But a program suite read into RAM doesn't "stay there" after the program is closed -- that RAM becomes available for something else.
We're basically talking about the same caching strategy you see when the CPU pulls programs and data from RAM: that's why there is L1 cache -- fastest and most expensive, L2 -- larger but slower, and L3 cache -- larger still but a tad slower yet.
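That pyramid of small-fast levels in front of a big-slow store can be sketched as a toy model. Everything here is illustrative -- the level sizes are made up, and real CPU caches operate on cache lines with associativity rules, not Python dicts -- but the promote-on-hit, demote-on-evict pattern is the idea:

```python
from collections import OrderedDict

class TieredCache:
    """Toy cache hierarchy: small fast levels (L1, L2, L3) in front of a
    backing store. Sizes are invented for illustration only."""

    def __init__(self, sizes=(2, 4, 8)):
        self.levels = [OrderedDict() for _ in sizes]  # L1, L2, L3
        self.sizes = sizes
        self.backing = {}  # stands in for the slow bottom of the pyramid

    def _put(self, level, key, value):
        if level == len(self.levels):
            return  # fell off the last level; the backing store still has it
        lru = self.levels[level]
        lru[key] = value
        lru.move_to_end(key)
        if len(lru) > self.sizes[level]:
            old_key, old_value = lru.popitem(last=False)
            self._put(level + 1, old_key, old_value)  # demote the evictee

    def read(self, key):
        # Search fastest-to-slowest; on a hit, promote the item back to L1.
        for i, lru in enumerate(self.levels):
            if key in lru:
                value = lru.pop(key)
                self._put(0, key, value)
                return value, f"L{i + 1} hit"
        value = self.backing[key]  # missed every level
        self._put(0, key, value)
        return value, "miss"
```

The same shape describes HDD-behind-SSD-behind-RAM caching; only the sizes and latencies change.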
With ISRT, we had an SSD of NAND memory possibly four times larger than RAM, but also slower. This SSD was a cache for an HDD with much greater capacity but much slower speed.
Using part of RAM as a cache, there may be 2GB that had been loaded from the NAND SSD into RAM -- available to read back as it's normally used. And I believe these reads between different parts of RAM are "block moves" using some of the basic extended CPU instructions.
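A minimal sketch of that RAM-as-read-cache idea, assuming a plain LRU policy (Samsung doesn't document RAPID's actual policy, and `device_read` and the block granularity here are invented for illustration):

```python
from collections import OrderedDict

def make_cached_reader(device_read, capacity_blocks):
    """Wrap a block-read function with a RAM-resident LRU cache.

    device_read(lba) is a hypothetical callable returning one block's
    bytes; capacity_blocks bounds how much RAM the cache may hold.
    """
    cache = OrderedDict()

    def read(lba):
        if lba in cache:
            cache.move_to_end(lba)       # recently used: keep it resident
            return cache[lba]
        data = device_read(lba)          # miss: go out over the SATA bus
        cache[lba] = data
        if len(cache) > capacity_blocks:
            cache.popitem(last=False)    # evict the LRU block, freeing RAM
        return data

    return read
```

A benchmark that reads the same blocks twice would mostly measure the `cache[lba]` path -- a RAM-to-RAM copy -- which is the "wrong side of the SATA bus" point above.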
And my best guess follows from that: given the relative speeds of RAM and SSD, as compared to electro-mechanical HDDs, a smaller cache is sufficient for similar volumes of progs or data read off the storage device at the "bottom of the pyramid."
In other words, if ISRT using a 60GB SSD cache gave you a three-fold improvement in data throughput over a standalone HDD, then a 500GB SSD might only need 2GB of RAM to double the speed of transfers over a standalone SSD running at its advertised specs.
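That "double the speed" claim can be sanity-checked with hit-rate arithmetic: if a fraction h of the bytes come from the cache at speed r and the rest from the device at speed s, the effective throughput is the harmonic-weighted 1 / (h/r + (1-h)/s). The speeds below are illustrative round numbers, not measured figures:

```python
def effective_speed(hit_rate, cache_speed, device_speed):
    """Effective MB/s when hit_rate of the bytes are served from the cache.

    Time per MB is hit_rate/cache_speed for cached bytes plus
    (1 - hit_rate)/device_speed for the rest.
    """
    time_per_mb = hit_rate / cache_speed + (1 - hit_rate) / device_speed
    return 1 / time_per_mb

# Illustrative numbers: RAM at ~10,000 MB/s, SATA SSD at ~500 MB/s.
# Roughly half the bytes must hit RAM to double throughput over the SSD alone.
print(effective_speed(0.53, 10_000, 500))
```

Because the RAM tier is so much faster than the SSD, the size of the cache matters less than how often the workload re-reads what's already in it -- which is why 2GB can plausibly be enough.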
