
Boosting file server performance by using SSD as a write-back cache?

Mark R

Diamond Member
I've been looking into assembling a reasonably respectable file server - mainly for streaming media, backups, and work shared among a number of 'power' home users.

One of the problems I've found is that RAID5 arrays choke a bit on writes - especially when batch-processing vast quantities of files.
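To put a rough number on why RAID5 struggles here: each small random write turns into a read-modify-write of both the data block and the parity block, i.e. four disk I/Os per logical write. A back-of-envelope sketch (the per-drive IOPS figure is an assumed ballpark, not a measurement):

```python
# Back-of-envelope RAID5 small-write penalty: each random small write
# costs 4 disk I/Os -- read old data, read old parity, write new data,
# write new parity.

def raid5_write_iops(n_disks: int, iops_per_disk: float) -> float:
    """Approximate random-write IOPS of an n-disk RAID5 array."""
    RMW_COST = 4  # read-modify-write: 2 reads + 2 writes per logical write
    return n_disks * iops_per_disk / RMW_COST

# Assume 7200 RPM drives at ~75 random IOPS each (a typical ballpark).
print(f"4-disk RAID5 random-write IOPS: ~{raid5_write_iops(4, 75):.0f}")
print(f"single drive random-write IOPS: ~75")
```

So a whole 4-disk array can end up with roughly the random-write throughput of a single drive, which is why write-back caching is attractive.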

The obvious solution would be write-back caching for the disks. Easily done with a high-end RAID card, battery module and UPS for the server. However, I was wondering if there was scope for a lower-cost option:

1. Write the changes to an SSD (or a mirrored pair of SSDs); a 32 GB SLC drive would be all that is required - indeed, it may even be total overkill.
2. Lazily write the changes back to the primary array.
3. If the volume is mounted uncleanly, all that is required is to pull the list of pending changes off the SSD and replay it against the array.
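The steps above can be sketched as a toy in-memory model: writes are acknowledged once they hit a fast log (standing in for the SSD), drained lazily to the slow backing store, and replayed on recovery. The class and method names here are illustrative, not a real block-device API:

```python
# Toy sketch of an SSD write-back cache: fast log absorbs writes,
# lazy flusher drains them to the array, recovery replays leftovers.

class WriteBackCache:
    def __init__(self):
        self.log = []        # pending (block, data) entries on the "SSD"
        self.backing = {}    # the slow RAID5 array

    def write(self, block: int, data: bytes) -> None:
        # Fast path: acknowledge once the entry hits the log.
        self.log.append((block, data))

    def flush(self, max_entries: int = 2) -> None:
        # Lazy write-back: drain a few entries per pass.
        for _ in range(min(max_entries, len(self.log))):
            block, data = self.log.pop(0)
            self.backing[block] = data

    def recover(self) -> None:
        # After an unclean mount, replay everything still in the log.
        self.flush(max_entries=len(self.log))

cache = WriteBackCache()
cache.write(0, b"media")
cache.write(7, b"backup")
cache.flush(max_entries=1)   # only block 0 has reached the array so far
cache.recover()              # crash recovery drains the rest
print(sorted(cache.backing)) # [0, 7]
```

A real driver would of course need the log itself to be ordered and durable on the SSD, so that replay after a crash is safe.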

This strikes me as a fairly simple idea, so I wonder if anyone has actually put together a functioning system that uses it (e.g. in the form of a Linux block-device driver). Alternatively, is there some horrible flaw I'm missing?

(Sorry. Had meant to post this in the Storage forum. Dear moderator, Please move. Thx)
 
If it's a heavily used file server, that SSD won't last too long - a couple of years, give or take. For the same money you could just use a higher-capacity RAID 0 along with a good backup solution. I don't know if you can use a RAID 0 array as a write-back cache, but if you can, that's probably a more cost-effective, longer-term solution. Just don't use Seagates. 😛
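Whether the drive really wears out in a couple of years depends heavily on the assumed P/E cycle rating and daily write volume. A hedged back-of-envelope (the cycle count, write volume, and write-amplification factor are assumptions for the sake of the arithmetic, not vendor specs):

```python
# Rough endurance estimate for the proposed 32 GB SLC cache drive.
# All inputs are assumed ballpark figures, not datasheet values.

def years_of_life(capacity_gb: float, pe_cycles: int,
                  writes_gb_per_day: float, write_amp: float = 1.5) -> float:
    total_writable_gb = capacity_gb * pe_cycles   # lifetime write budget
    worn_per_day = writes_gb_per_day * write_amp  # host writes x amplification
    return total_writable_gb / worn_per_day / 365

# SLC is often rated around 100k P/E cycles; assume a heavy 500 GB/day.
print(f"~{years_of_life(32, 100_000, 500):.0f} years")
```

With SLC-class cycle ratings the budget comes out surprisingly generous even under heavy write loads; MLC-class ratings (thousands of cycles rather than ~100k) would shrink the result to the "couple of years" range.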
 
It would be cheaper to just get a better array controller. The HP Smart Array P410 controllers can easily top 250 MB/s read and write speeds when paired with the 512 MB cache, depending on how many drives you use. One of these paired with three 7200 RPM SATA drives will get you about 200 MB/s read and 150 MB/s write.

However, it's unlikely that you will actually need this level of speed in any kind of home scenario - even one with "power users". Enable OS write caching and stick the server on a UPS with automatic shutdown. Regular backups will ensure that data loss due to a power outage is negligible or non-existent.
 
Isn't this exactly what Intel planned to do with Braidwood - throw a small piece of flash on the motherboard to cache I/O?
 
The P410 controllers now have a flash-backed 512 MB or 1 GB write cache, which obviously eliminates the three-day retention limit (the maximum when the battery is new). The question I've yet to find any documentation on is performance: flash is slower than RAM, but 1 GB might still be better than the 512 MB maximum.
 