
RAID-10 vs. RAID-5/Z

RAID 5 "fails" in 2009... yet RAID-Z came out in 2012?

:\

Interesting...
I'd say that article is badly outdated.


RAID 5 is only worth using with a dedicated RAID controller that has a good amount of cache and a battery backup ($1,000 RAID cards; you wonder why people buy them...).
RAID-Z is doable with FreeNAS's software, but it has limitations, like not being able to add another drive to the array. However, you can replace a drive with a larger one without losing redundancy.

RAID-5 and RAID-Z are slow as hell for writes, and no one will argue that. Rebuilding the array after an unexpected power-down can also take a while.
They have their problems, but with failsafes in place, RAID 5 and RAID-Z make excellent secondary safety nets once you've got your UPSes sorted.

If you're after 4 TB of storage, it all depends on how much failsafe you want and how fast you want your array.
RAID 10 is fast: it's two mirrored RAID 1 pairs striped in RAID 0. However, that's a lot of write I/O your controller has to handle.
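For a rough sense of that tradeoff, here's a back-of-envelope sketch of usable capacity and write cost for a 4-drive array. The drive size is a made-up example number, not anything from this thread:

```python
# Back-of-envelope: usable capacity and physical write cost for a
# 4-drive array. Drive size is an assumed example value.

DRIVE_TB = 2          # assumed size of each drive, in TB
N_DRIVES = 4

# RAID 10: stripes across mirrored pairs, so half the raw capacity,
# and every logical write hits 2 disks (both halves of a mirror).
raid10_usable = DRIVE_TB * N_DRIVES // 2
raid10_writes_per_logical = 2

# RAID 5: one drive's worth of parity, so N-1 drives usable, but a
# small write costs read-old-data + read-old-parity + write-data +
# write-parity (the classic 4-I/O small-write penalty).
raid5_usable = DRIVE_TB * (N_DRIVES - 1)
raid5_ios_per_small_write = 4

print(f"RAID 10: {raid10_usable} TB usable, "
      f"{raid10_writes_per_logical} disk writes per logical write")
print(f"RAID 5 : {raid5_usable} TB usable, "
      f"{raid5_ios_per_small_write} disk I/Os per small write")
```

(RAID-Z sidesteps the RAID-5 small-write penalty by always writing full variable-width stripes, which is one reason its write behavior differs from classic RAID 5.)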

I assume you have a dedicated controller? Or are you planning to do this all through an ICH or PCH?
 
My intent was an Intel ICH in JBOD mode and FreeNAS's software RAID. I'm only trying to saturate a Gigabit Ethernet line, not make the LINPACK 500. Also, it's just a backup server for my laptop, desktop, roommate's laptop, and a bunch of DVDs I could rerip if I really need to. So it's "backed up" in the sense that if it dies, well, I get another one and re-backup my other stuff.
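The "saturate Gigabit Ethernet" target is a pretty low bar, which a quick back-of-envelope shows. The per-spindle throughput and protocol-overhead figures below are assumptions for illustration, not measurements:

```python
# Rough ceiling for saturating Gigabit Ethernet with sequential reads.
# The 6% protocol overhead and 100 MB/s per-spindle rate are assumed
# illustrative numbers, not benchmarks.

GBE_BITS_PER_SEC = 1_000_000_000
wire_mb_per_sec = GBE_BITS_PER_SEC / 8 / 1_000_000   # 125 MB/s raw
payload_mb_per_sec = wire_mb_per_sec * 0.94          # minus ~6% overhead

DISK_MB_PER_SEC = 100   # assumed sequential throughput of one spindle

drives_needed = payload_mb_per_sec / DISK_MB_PER_SEC
print(f"Payload ceiling is ~{payload_mb_per_sec:.0f} MB/s, "
      f"about {drives_needed:.1f} drives' worth of sequential throughput")
```

With numbers like these, almost any multi-drive layout can fill the pipe for sequential transfers; the interesting differences show up in small random writes and in rebuilds.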

I did find some benchmarks, and intend to run a few myself before I settle on a config. But it looks like RAID-10 only has a small speed advantage, which may be as related to CPU overhead as anything else.

But I'm just wondering if the parity calculations will become a problem with rebuilds. (The main thrust of the article.) I'll probably do a couple test rebuilds to salve my nerves. (Again, if I need to wipe and start from zero, I can... but I don't really want to if I don't have to.)
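For what a test rebuild might look like timing-wise, here's a rough estimate for a block-level rebuild, where the replacement drive has to absorb a full drive's worth of data. Both the drive size and the sustained rebuild rate are assumed example values:

```python
# Rough rebuild-time estimate for a full-drive (block-level) rebuild,
# as in RAID 5 or RAID 10. Drive size and rebuild rate are assumptions.

drive_gb = 2000            # assumed drive size, in GB
rebuild_mb_per_sec = 50    # assumed sustained rate; parity math and
                           # contention with normal traffic keep it low

hours = drive_gb * 1000 / rebuild_mb_per_sec / 3600
print(f"~{hours:.1f} hours to rebuild")   # ~11.1 hours with these numbers
```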

That said, it seems to be a pretty common sentiment, with server jockeys on other forums saying things like "don't use drives larger than 750GB in a RAID-5" or "RAID-10 4EVA." I don't know if they're just repeating what they've read elsewhere and are full of beans, or if I really shouldn't be doing what I thought I was going to do.
 
RAID-10 is fundamentally no different from RAID-5 or RAID-Z in this regard. If a drive fails, you still have to read a full drive's worth of data to do the rebuild; it's just read from one drive instead of being distributed across many.

RAID-Z is fundamentally different from RAID-5. First off, RAID-Z is done on the object level, not on the volume level, so during a rebuild you only have to actually recover the amount of data that was in use, making rebuilds much faster. Secondly, small objects are actually just mirrored instead of distributed, further reducing the amount of data that has to be rebuilt during a drive failure.
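That object-level difference is easy to put numbers on: a RAID-Z resilver scales with how much data is actually allocated, not with raw drive size. All figures below are assumed example values:

```python
# Why RAID-Z resilver time scales with used space rather than drive
# size: only allocated blocks get rebuilt. All numbers are assumptions.

drive_gb = 2000            # assumed drive size, in GB
used_fraction = 0.3        # assumed pool utilization
rebuild_mb_per_sec = 50    # assumed sustained rebuild rate

full_rebuild_h = drive_gb * 1000 / rebuild_mb_per_sec / 3600
raidz_resilver_h = full_rebuild_h * used_fraction

print(f"block-level rebuild: ~{full_rebuild_h:.1f} h")
print(f"RAID-Z resilver:     ~{raidz_resilver_h:.1f} h")
```

On a mostly empty pool the gap is dramatic; on a nearly full pool the two converge.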

If you're really paranoid, there is always RAID-6 or RAID-Z2 which doubles the amount of parity you have. But really, I wouldn't worry about it.
 