
Distributed computing and SSDs - bad?

VirtualLarry

No Lifer
Just wondering if the constant trickle of small writes (mostly checkpointing data) is harmful to an SSD, long term? I'm running a pair of OCZ Agility (Indilinx Barefoot) 30GB SSDs in RAID-0, so I don't have the benefit of TRIM.

I've heard that for such cases, you are supposed to provide the SSD a period of "quiet time" (log out for a while), such that it doesn't receive writes, so that it can start its garbage-collection routines.

Since the computer is running 24/7 doing DC, and it is constantly writing in the background, I would think that it wouldn't get the "quiet time" that it needs to perform GC, unless it does really aggressive GC like the Kingston V+100 drives.
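To put some rough numbers on the worry, here's a back-of-the-envelope endurance estimate. Every figure in it is an assumption for illustration (the checkpoint size/frequency, P/E cycle rating, and write-amplification factor are all hypothetical, not measured on these drives):

```python
# Rough wear estimate for constant small DC checkpoint writes.
# All numbers below are illustrative assumptions, not measurements.

DRIVE_GB = 30                # one OCZ Agility 30GB drive
PE_CYCLES = 10_000           # assumed P/E rating for MLC flash of that era
WRITE_AMP = 10               # assumed (pessimistic) write amplification for tiny writes

checkpoint_kb = 64           # hypothetical checkpoint size
checkpoints_per_min = 2      # hypothetical checkpoint frequency

# Host-side writes per day, in GB.
host_gb_per_day = checkpoint_kb * checkpoints_per_min * 60 * 24 / (1024 * 1024)
# Actual NAND writes per day after write amplification.
nand_gb_per_day = host_gb_per_day * WRITE_AMP

# Total bytes the flash can absorb before wearing out, and years to get there.
total_endurance_gb = DRIVE_GB * PE_CYCLES
years_to_wearout = total_endurance_gb / nand_gb_per_day / 365

print(f"host writes/day: {host_gb_per_day:.2f} GB")
print(f"NAND writes/day: {nand_gb_per_day:.2f} GB")
print(f"estimated wear-out: {years_to_wearout:.0f} years")
```

Even with a pessimistic write-amplification guess, checkpoint-sized writes alone come out to well under a few GB of NAND writes per day, so (as the replies below suggest) write amplification from old firmware is the bigger risk than raw write volume.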
 
Make sure you update the firmware to 1.7 (I think that's the most recent). With older firmware, your biggest enemy is write amplification coupled with running 24/7. It slowly and surely killed my 32GB Onyx (similar generation/family) on a 2008 R2 server running as my living room PC. The write amplification is significantly improved with the new firmware so small writes shouldn't use up your flash memory so quickly. Other than that, you're good to go. The old Vertex/Agility 1 is fairly good as a drop-in solution (similar to Intel) requiring little optimization.
 
Just depends on the amount of writes, Larry.

If the system is never idle under heavier usage, or is under constant load, performance will slowly degrade, and you'll know for sure because it will be very slow in benchmarks and quite possibly in perceived usage speed.

Of course the problem with running benchmarks is that it just adds insult to injury and pushes you even quicker towards a fully dirty drive. Double edged sword there, to say the least.

An easier test to run would be using only the 4K R/W test in AS SSD, since it gives you the option of checking/running individual tests. The Indilinx-based drives will suffer hard in that area when fully degraded/dirty. The only way out would be an occasional overnight idle. Additionally, you could also run an occasional free-space cleaner (AS Cleaner) with the "FF option", since it writes 0xFF (all binary ones) to the free space of the drive to effectively tell the controller that those blocks are empty. No TRIM is required for that tool to work; many use it on RAIDs too.
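The free-space trick boils down to writing 0xFF-filled data into free space, syncing it to disk, then deleting it. Here's a simplified sketch of that idea; it's not AS Cleaner itself, and the function name, cap, and chunk size are placeholders (the real tool fills all free space, which you should not do casually on a live system):

```python
import os

def fill_with_ff(directory, max_bytes, chunk_size=1024 * 1024):
    """Write a 0xFF-filled temp file into `directory` (capped at `max_bytes`),
    flush it to the drive, then delete it. A simplified stand-in for what a
    free-space cleaner's "FF option" does across all free space; the cap
    makes this sketch safe to try. Returns the number of bytes written."""
    path = os.path.join(directory, "ff_fill.tmp")
    chunk = b"\xff" * chunk_size
    written = 0
    try:
        with open(path, "wb") as f:
            while written < max_bytes:
                n = min(chunk_size, max_bytes - written)
                f.write(chunk[:n])
                written += n
            f.flush()
            os.fsync(f.fileno())  # make sure the 0xFF blocks actually hit the SSD
    finally:
        if os.path.exists(path):
            os.remove(path)       # release the space once it's on disk
    return written
```

Because this works through ordinary file writes, it needs no TRIM support from the OS or the RAID controller, which is why the trick is popular on RAID-0 arrays like Larry's.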

Also be sure not to overfill those drives with data (keep the volume below 70% filled). You could also leave some extra unallocated space to let the controller maintain greater efficiency, which can help a bit more as well.
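A quick way to keep an eye on that ~70% figure is to check the volume's fill level from a script. The path is a placeholder (on Windows you'd point it at the drive letter, e.g. `"C:\\"`):

```python
import shutil

def fill_percent(path="/"):
    """Return how full the volume containing `path` is, as a percentage."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.used / usage.total * 100

if __name__ == "__main__":
    pct = fill_percent()
    print(f"volume is {pct:.1f}% full")
    if pct > 70:
        print("consider freeing space so the controller has room to work")
```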
 