Desktop Use: MX100 or Samsung 840 (non-EVO)


BonzaiDuck

Lifer
Jun 30, 2004
15,877
1,548
126
Do you think this is because RAPID is using some form of deferred or lazy write? Windows thinks it's done copying, but the data is still in the RAM cache and needs another x seconds (whatever the deferred write time is; 5 seconds is common in other RAM-cache software) to actually complete. I initially thought this, but dismissed it because it should only apply while writing TO the Samsung drive. Then again??

Edit: After doing some quick tests myself, I think that's exactly what's happening. The longer time is due to the slower medium it's transferred to and the RAM cache being too small to hold the entire file being transferred. I'm going to run some large file transfers with SuperCache using a 4GB cache vs. an 8GB cache to verify.

Pulling the USB drive immediately would simulate a power loss, which is exactly what they warn you about with deferred writes... data corruption.

Edit 2: The deferred write time seems to affect how soon the transfer rate starts to drop. I timed 5-, 10-, and 20-second settings, and each time the rate started dropping exactly 5, 10, or 20 seconds into the transfer. This also seems to affect how long of a "hang" you get at the end of the transfer.

For what it's worth, RAPID's timing matches a 5-second deferred write most closely.
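The behavior described in those edits can be modeled in a few lines. This is a toy sketch, not RAPID's actual algorithm; the cache size, speeds, and the 5-second defer delay are all assumptions for illustration:

```python
# Toy model of a deferred (lazy) write-back cache, illustrating why the
# copy dialog finishes early while the drive keeps writing afterwards.

def transfer_times(file_mb, cache_mb, ram_mbps, disk_mbps, defer_s=5.0):
    """Return (apparent_s, actual_s) for one large sequential copy."""
    if file_mb <= cache_mb:
        # Whole file fits in the RAM cache: Windows sees RAM speed only.
        apparent = file_mb / ram_mbps
    else:
        # Cache fills partway through, then the copy throttles to disk speed
        # (the rate drop observed at the defer mark in the tests above).
        apparent = cache_mb / ram_mbps + (file_mb - cache_mb) / disk_mbps
    # Data is only safe once the disk has written all of it; the flush
    # starts after the deferred-write delay.
    actual = defer_s + file_mb / disk_mbps
    return apparent, max(apparent, actual)

apparent, actual = transfer_times(file_mb=8000, cache_mb=4096,
                                  ram_mbps=5000, disk_mbps=120)
print(f"dialog done after ~{apparent:.0f} s, drive idle after ~{actual:.0f} s")
```

The gap between the two numbers is the end-of-transfer "hang" (or, if the copy dialog has already closed, the window in which pulling the target drive corrupts data).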

I can appreciate the trouble some here are taking in doing these tests. But a caching scheme is just a caching scheme, and if I think I truly notice the difference and the software is stable and reliable, I won't trouble myself to put a magnifying glass on the process. Still, I appreciate the effort of those so inclined.

I would only GUESS that there is a lazy-write feature in the process. This, of course, was the same feature under a different caching scheme in ISRT SSD caching with its "Maximized" setting. You were advised to use the more modest setting -- whatever it was called -- in which writes occurred to both the cache and the disk immediately.
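For anyone unfamiliar with the distinction, here is a minimal sketch of the two policies: write-through (the more modest ISRT-style setting) commits each write to cache and disk together, while write-back (lazy write) acknowledges after the cache alone. The class and method names are purely illustrative, not any vendor's API:

```python
# Illustrative write-through vs. write-back cache policies.

class CachedDisk:
    def __init__(self, write_back=False):
        self.write_back = write_back
        self.cache, self.disk = [], []

    def write(self, block):
        self.cache.append(block)
        if not self.write_back:
            self.disk.append(block)   # write-through: disk is always current

    def flush(self):
        # What a lazy writer does later, after the deferred-write delay.
        self.disk = list(self.cache)

wt = CachedDisk(write_back=False); wt.write("A")
wb = CachedDisk(write_back=True);  wb.write("A")
print(wt.disk, wb.disk)  # write-back disk is still empty until flush()
```

A power cut between `write()` and `flush()` is exactly the data-loss window the deferred-write warnings are about.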

I can't say whether the risk is lower for having no other option with the RAM caching of an SSD -- perhaps someone more confident can comment.

I mentioned in other posts that I just purchased a refurbished laptop -- six-year-old technology -- with a maximum of 4GB (2x2GB) of RAM. I cloned the HDD to a Crucial MX100, whose sequential tests don't exceed 300MB/s. There was a hope that this might extend run-time on a single battery charge, but looking at the lappie's HDD, I'm now thinking any gains are minuscule. Still, performance seems "way up there" just for the 300MB/s.

Should I swap the HDD back into the laptop? Can't say. Sometime this week a package will arrive in the mail, and I'll swap the 2x1GB RAM for a 2x2GB kit. Given the way it works now, I can easily allow 1GB to serve as cache for the PrimoCache product. That brings me to the question: "Is the $30 software purchase worth it?" Curiosity being what it is, I'll stop buying lottery tickets for a few months and buy Romex PrimoCache instead.

. . . Just to find out . . .
 

ctk1981

Golden Member
Aug 17, 2001
1,464
1
81
I think the lazy write feature is pretty useless. Caching commonly used files to RAM for faster read access seems worthwhile and noticeable in day-to-day use. If you want blazing read/write speeds for every file, I guess it's time to step up to M.2 drives like the XP941 or SM951 from Samsung.
 

BonzaiDuck

Lifer
Jun 30, 2004
15,877
1,548
126
I think the lazy write feature is pretty useless. Caching commonly used files to RAM for faster read access seems worthwhile and noticeable in day-to-day use. If you want blazing read/write speeds for every file, I guess it's time to step up to M.2 drives like the XP941 or SM951 from Samsung.

Probably no argument there, but -- really -- given the speeds we get with these things now, I see no hurry in migrating to M.2 or (?) mSATA -- whatever.

Sure, you would like to have a hardware technology that needs no enhancement from software solutions like caching. But various caching schemes and strategies are well established in computing history and evolution.

I just think that pronouncements of RAPID or similar being "fake" or a "lie" are extreme and no more true than saying the opposite. It's a performance enhancement that "learns" based on the user's computing habits. It cannot be as genuine or as performance-enhancing as a hardware development that simply opens up the bottleneck to the same extent.
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
I think the lazy write feature is pretty useless. Caching commonly used files to RAM for faster read access seems worthwhile and noticeable in day-to-day use. If you want blazing read/write speeds for every file, I guess it's time to step up to M.2 drives like the XP941 or SM951 from Samsung.
Well, that depends on what you're using it on. I mentioned in G73S's other thread that there are software & OS limitations that won't let you take full advantage of something like a PCIe drive (also evident in Kristian's 850 Pro review), & it'll take a lot of time for programs to be updated to make use of such high-end storage devices.

DRAM caching software is mainly useful for HDDs, as the cache holds the data & defers the writes -- pushing them to L2 if that's available -- until the drive is idle; as such, a high-capacity HDD (say a 6TB 7200rpm drive) can mimic the performance of a much smaller SSD.

You'll still need copious amounts of RAM & a speedy L2 to build such a system, but with the myriad of software at our disposal it's still a trivial task. The SSD, however, will not see much of a performance benefit from this, & the tests in this thread prove my point.
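The RAM -> L2 (SSD) -> HDD tiering described above can be sketched roughly like this; the block-based model and the tier capacities are assumptions for illustration only:

```python
# Toy three-tier write cache: writes land in RAM, spill to an L2 SSD,
# and only reach the HDD when forced out or when the drive goes idle.

class TieredCache:
    def __init__(self, ram_blocks, l2_blocks):
        self.ram_cap, self.l2_cap = ram_blocks, l2_blocks
        self.ram, self.l2, self.hdd = [], [], []

    def write(self, block):
        self.ram.append(block)
        while len(self.ram) > self.ram_cap:   # RAM full: spill oldest to L2
            self.l2.append(self.ram.pop(0))
        while len(self.l2) > self.l2_cap:     # L2 full too: forced to HDD
            self.hdd.append(self.l2.pop(0))

    def flush_on_idle(self):
        # Drive idle: drain both cache tiers, oldest data first.
        self.hdd += self.l2 + self.ram
        self.ram, self.l2 = [], []

tc = TieredCache(ram_blocks=2, l2_blocks=4)
for b in range(8):
    tc.write(b)
tc.flush_on_idle()
print(tc.hdd)  # every block eventually lands on the HDD
```

Note the HDD only ever sees writes in bursts, which is why the HDD feels faster under such a scheme while an SSD (already fast at absorbing writes) gains little.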
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
No, it is not. Anything showing more than about 120MB/s on 100MB+ sequential writes is wrong. Period. RAPID cannot make the drive write faster than it physically can. Windows, however, can and will report impossibly high speeds in the copy dialog.

We are arguing two different things. I was talking about the write speed to the cache layer, not the time it takes to finalize the write from cache to disk. This is really no different from cache on a RAID controller, which can mask the throughput of the actual disks.
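A quick back-of-envelope, with assumed numbers, shows how both posts can be right at once -- the copy dialog clocks how fast data enters the cache layer, while the drive grinds on at its physical rate:

```python
# Reported vs. real throughput for one copy. All figures are illustrative.

file_mb = 1000
cache_ingest_mbps = 500   # rate at which the RAM cache accepts writes
drive_mbps = 120          # what the drive can physically sustain

dialog_seconds = file_mb / cache_ingest_mbps   # when the copy dialog closes
flush_seconds = file_mb / drive_mbps           # when the data is really on disk
reported_mbps = file_mb / dialog_seconds       # the "impossibly high" figure

print(dialog_seconds, flush_seconds, reported_mbps)
```

The reported figure is a true measurement of the cache layer and an impossible one for the drive -- both statements hold simultaneously.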
 

ctk1981

Golden Member
Aug 17, 2001
1,464
1
81
Well, that depends on what you're using it on. I mentioned in G73S's other thread that there are software & OS limitations that won't let you take full advantage of something like a PCIe drive (also evident in Kristian's 850 Pro review), & it'll take a lot of time for programs to be updated to make use of such high-end storage devices.

DRAM caching software is mainly useful for HDDs, as the cache holds the data & defers the writes -- pushing them to L2 if that's available -- until the drive is idle; as such, a high-capacity HDD (say a 6TB 7200rpm drive) can mimic the performance of a much smaller SSD.

You'll still need copious amounts of RAM & a speedy L2 to build such a system, but with the myriad of software at our disposal it's still a trivial task. The SSD, however, will not see much of a performance benefit from this, & the tests in this thread prove my point.

The XP941 review on AnandTech doesn't seem to show any serious software/OS limitations... the biggest limitation was support for the drive itself. Windows 8+ handles this the best. But price and availability are also a problem.

Whatever the drive's average write speed is, that's all the faster it will ever be. It doesn't matter if the data transfers to RAM at blazing speed and sits there; it still has to be written to the drive. If that drive happens to be a slow HDD, the transfer will go no quicker than if the feature were turned off -- in fact it may take longer, because you're introducing a delay into the write in the first place. With a level-2 cache it's the same thing, though since that's persistent cache I would hope that, in the event of a power failure, the data might still be in the L2 cache after a reboot and finish transferring correctly... or maybe not. Either way, at the end of the cycle it still has to write to that slow HDD, and that speed is never going to exceed the drive's physical limit.

As I said, the RAM cache for reads seems to work, and in day-to-day usage I like it so far. But the delayed write feature appears to be nothing more than marketing hype to make it LOOK like it's faster.
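Quick arithmetic for that point, with illustrative numbers: deferring a write to a slow HDD can never beat writing directly, since the HDD's sustained rate is the floor either way:

```python
# Time until every byte is durably on the HDD, with and without deferral.
# The file size, HDD rate, and defer delay are made-up example values.

def time_to_safe_s(file_mb, hdd_mbps, defer_s=0.0):
    """Seconds until the data is physically on the HDD."""
    return defer_s + file_mb / hdd_mbps

direct = time_to_safe_s(4000, 100)               # write-through
deferred = time_to_safe_s(4000, 100, defer_s=5)  # lazy write
print(direct, deferred)
```

The deferred case is strictly later by the defer delay -- the "speed-up" exists only in how soon the dialog closes, not in when the data is safe.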
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
The XP941 review on AnandTech doesn't seem to show any serious software/OS limitations... the biggest limitation was support for the drive itself. Windows 8+ handles this the best. But price and availability are also a problem.
Well, I was talking about this ~
PCMark 8 also records the completion time of each task in the storage suite, which gives us an explanation as to why the storage scores are about equal. The fundamental issue is that today’s applications are still designed with hard drives in mind, meaning that they cannot utilize the full potential of SSDs. Even though the throughput is much higher with RAPID, the application performance is not because the software has been designed to wait several milliseconds for each IO to complete, so it does not know what to do when the response time is suddenly in the magnitude of a millisecond or two. That is why most applications load the necessary data to RAM when launched and only access storage when it is a must as back in the hard drive days, you wanted to avoid touching the hard drive as much as possible.

It will be interesting to see what the industry does with the software stack over the next few years. In the enterprise, we have seen several OEMs release their own APIs (like SanDisk's ZetaScale) so companies can optimise their server software infrastructure for SSDs and take full advantage of NAND. I do not believe that a similar approach works for the client market as ultimately everything is in the hands of Microsoft.
From the 850 Pro review.
As I said, the RAM cache for reads seems to work, and in day-to-day usage I like it so far. But the delayed write feature appears to be nothing more than marketing hype to make it LOOK like it's faster.
I concur; however, the addition of an L2 drive (say an SSD) & deferred writes gives the HDD much more breathing space. The caching program (say PrimoCache) can then write the data sequentially to the HDD, which will always be faster than random reads/writes ;)

The thing about deferred writes is that they don't peg the cached drive continuously at 100% usage, & that's why they're useful. You can test this simply by doing a number of tasks on the HDD and noting system responsiveness, then doing the same with PrimoCache/SuperCache -- you'll see it's noticeably faster in the latter case. You have to do it right, though; as with any software, you'll have to tinker with it to get the best use out of the app.
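The coalescing benefit can be sketched with a toy cost model -- a deferred flush lets the cache sort its queued writes by block address and stream them out mostly sequentially, instead of paying a seek for every write issued in random order. The seek and streaming costs below are made-up constants for illustration:

```python
# Toy seek-vs-stream cost model for flushing a write-back cache to an HDD.
import random

def flush_cost_ms(blocks, seek_ms=8.0, stream_ms=0.1):
    """Cost to write the queued blocks: pay a full seek on every address jump."""
    cost, prev = 0.0, None
    for b in blocks:
        sequential = prev is not None and b == prev + 1
        cost += stream_ms if sequential else seek_ms
        prev = b
    return cost

random.seed(0)
queued = list(range(200))   # 200 dirty blocks, contiguous on disk...
random.shuffle(queued)      # ...but dirtied in random order

print(flush_cost_ms(queued))           # flushed as issued: seek-dominated
print(flush_cost_ms(sorted(queued)))   # sorted first: one seek, then streaming
```

This is the same reason the HDD isn't pegged at 100%: one sorted burst finishes far sooner than the same writes issued piecemeal, leaving the drive idle in between.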
 