
NAS System Question - ZFS Raid 1+0 or Raid 5?

tomoyo

Senior member
I've been planning out a ZFS raid based mini-itx server with probably 4 hds for the past year and maybe I'll finally pull the trigger sometime post-sandy bridge release.
My original plan was to do a ZFS Raid-5 with 4X1TB drives, but now I'm thinking I could go with 4X2TB with ZFS Raid-10. This would provide more redundancy unless both mirror pairs fail and provide a pretty good amount of storage at 4TB. Does anyone else have experience in this area and have thoughts on what the best plan would be?
 
RAID-1 striped is just hella fast. If you need to back up several machines and play back many streams at once, you won't get the bandwidth with RAID-5. RAID-10 is far less of a liability to me. I was reading the serial # of a dead 146GB SAS drive to HP (for a replacement) and another one popped dead right then and there, maybe 20 minutes later. Thankfully RAID-10 saved me from a monstrous rebuild/restore. You say it won't ever happen; until it does. Then it's a ginormous mess to clean up.
 
ZFS RAID-Z is software RAID, and I would recommend it over hardware RAID for ZFS. RAID-Z is like RAID5 with single parity; RAID-Z2 offers dual parity like RAID6, and RAID-Z3 offers triple parity. Unless it's an actual HBA, motherboards only offer fakeRAID, which often doesn't play nice and is worse than RAID-Z. Go with RAID-Z2. Like RAID-10, it can rebuild from up to 2 drive failures, but with RAID-Z2 it's ANY two drives. RAID-10 only survives 2 failures if one drive fails in each striped mirror; if both drives of a single mirror fail, you've lost the entire array.
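To make those trade-offs concrete, here's a rough sketch (my own illustration, not from the thread) of usable capacity and guaranteed fault tolerance for the layouts mentioned above, assuming the OP's 4 x 2TB disks and ignoring ZFS metadata overhead:

```python
# Illustrative only: names and formulas are the standard textbook ones,
# not anything ZFS-specific. Assumes equal-sized disks.

def usable_tb(layout, n_disks, disk_tb):
    """Rough usable capacity, ignoring ZFS metadata overhead."""
    if layout == "raidz":            # single parity, like RAID5
        return (n_disks - 1) * disk_tb
    if layout == "raidz2":           # double parity, like RAID6
        return (n_disks - 2) * disk_tb
    if layout == "striped-mirrors":  # RAID10-style: striped pairs
        return (n_disks // 2) * disk_tb
    raise ValueError(layout)

def guaranteed_failures(layout):
    """Disk failures the pool survives no matter which disks die."""
    return {"raidz": 1, "raidz2": 2, "striped-mirrors": 1}[layout]

for layout in ("raidz", "raidz2", "striped-mirrors"):
    print(layout, usable_tb(layout, 4, 2), "TB usable,",
          guaranteed_failures(layout), "guaranteed failure(s)")
```

Note that with only 4 disks, RAID-Z2 and striped mirrors give the same usable capacity (4TB), so the choice comes down to failure tolerance versus performance.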
 
Ya, RAID-Z2 definitely is another option. I'm kind of curious about the performance loss from having to calculate and write all the parity data, but it's certainly the best for data redundancy over RAID-Z, or ZFS RAID-10 with the wrong mirror drives dying. So many options, so little certainty on what's the best mix of storage, reliability, read/write speeds, etc. Heck, I'm still trying to figure out the real advantages and disadvantages of iSCSI vs Samba/CIFS.
 
Not only do you want to avoid hardware RAID if you want to run ZFS, you should also not use these controllers in passthrough/JBOD mode, as they would still require TLER disks, and detach/fail your disks on >10sec recovery times; basically on every bad sector your disks encounter.

For ZFS you want ZFS to do the RAID part and ZFS likes to be as close to the disks as possible. That means: no expanders, no port multipliers, no virtualization, no hardware RAID controllers. Just normal HBA with several disks attached.

For that reason, an expensive RAID controller is almost useless for ZFS. A much cheaper HBA would provide all the benefits of that expensive card, with one invaluable advantage: it does not detach/fail/disconnect/punish your disks whenever they scratch their bums, like when dealing with a bad sector. My experience with Areca hardware RAID, for example, is that you need to reboot after each bad sector; the controller would fail/detach any disk that encounters a bad sector or other timeout. This behavior also shows up in JBOD and passthrough mode, so this controller is useless for ZFS.

iSCSI is a SAN protocol: the server does not know what is stored on the volume; only the client does. Only one PC has access to an iSCSI share. NFS/Samba are NAS protocols: only the server knows how to deal with the filesystem, and all clients access it via the network protocol. Windows thinks a ZFS share is actually NTFS, simply because it never has to deal with the actual filesystem; SMB/CIFS is a NAS protocol.

NAS = multiple access, server controls FS
SAN = single access, client controls FS
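That split can be illustrated with a toy model (not real protocol code; all class and path names here are made up): an iSCSI-style target only stores opaque blocks and never interprets them, while an SMB/NFS-style server owns the filesystem and serves files by path:

```python
# Toy illustration of SAN vs NAS semantics; not a real protocol stack.

class BlockTarget:
    """SAN-style: server stores opaque blocks; the client owns the FS."""
    def __init__(self, n_blocks, block_size=512):
        self.blocks = [bytes(block_size) for _ in range(n_blocks)]
    def write_block(self, lba, data):
        self.blocks[lba] = data      # server never interprets the data
    def read_block(self, lba):
        return self.blocks[lba]

class FileServer:
    """NAS-style: server owns the filesystem; clients ask for paths."""
    def __init__(self):
        self.files = {}              # path -> contents
    def write(self, path, data):
        self.files[path] = data
    def read(self, path):
        return self.files[path]

# One client formats and uses the block target; the server itself
# cannot list files, because the filesystem lives client-side.
san = BlockTarget(8)
san.write_block(0, b"client-side filesystem metadata")

# Many clients can share the file server, which understands paths.
nas = FileServer()
nas.write("/backups/host1.img", b"...")
```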
 
Is ZFS RAID-10 actual proper striped pairs?

raid-1 raid-1 raid-1

Stripe all of the above?

So you have disk 1 in cabinet 1 mirrored to disk 1 in cabinet 2, so if a cable or power loss occurs you only lose half of the RAID.

Would RAID-Z2 allow a 50% loss in the event of a defective cable or power supply to half of the drives?
 
Each vdev you add to a ZFS pool is striped.

vdev1: mirror (disk1, disk2)
vdev2: mirror (disk3, disk4)

So disk2 is a copy of disk1, and disk4 of disk3. Both mirrors are striped, so writes perform comparably to a 2-disk RAID0; reads scale even higher and can approach 4-disk RAID0 speeds, since either side of each mirror can serve them.

RAID0 would be:
vdev1: disk1
vdev2: disk2
vdev3: disk3
etc..
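As a rough model of the layouts above (a simplification: real ZFS uses dynamic striping based on each vdev's free space, not strict round-robin), logical blocks alternate between vdevs, and a mirror vdev copies each block to every disk in it:

```python
# Minimal illustration of striping across vdevs; disk names are
# hypothetical and the round-robin placement is a simplification.

def place_blocks(vdevs, n_blocks):
    """Round-robin logical blocks across vdevs; mirrors duplicate."""
    placement = {disk: [] for vdev in vdevs for disk in vdev}
    for blk in range(n_blocks):
        vdev = vdevs[blk % len(vdevs)]  # stripe across vdevs
        for disk in vdev:               # mirror within the vdev
            placement[disk].append(blk)
    return placement

# Two mirror vdevs, as in the layout above:
pool = [("disk1", "disk2"), ("disk3", "disk4")]
print(place_blocks(pool, 4))
# disk1/disk2 each hold blocks 0 and 2; disk3/disk4 hold 1 and 3.
```

This is why each mirror pair holds only half the data: lose both disks of one pair and that half of every stripe is gone.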

RAID-Z2 is double parity, so it is guaranteed to survive 2 failed disks. You do lose protection against bit errors (BER) after losing two disks, which is why some like to think of it as RAID5 plus protection against BER. Note that ZFS metadata is already replicated (ditto blocks), so a bit error on a doubly degraded RAID-Z2 can never corrupt the filesystem itself; only certain files, which you can see in zpool status output and remove or replace accordingly.
 
Not only do you want to avoid hardware RAID if you want to run ZFS, you should also not use these controllers in passthrough/JBOD mode, as they would still require TLER disks, and detach/fail your disks on >10sec recovery times; basically on every bad sector your disks encounter.

I had no interest in using hardware RAID at all. Anything I do with ZFS is definitely going to be purely software-based, probably using Nexenta unless I find something more user-friendly to deal with.

iSCSI is a SAN protocol: the server does not know what is stored on the volume; only the client does. Only one PC has access to an iSCSI share. NFS/Samba are NAS protocols: only the server knows how to deal with the filesystem, and all clients access it via the network protocol. Windows thinks a ZFS share is actually NTFS, simply because it never has to deal with the actual filesystem; SMB/CIFS is a NAS protocol.

NAS = multiple access, server controls FS
SAN = single access, client controls FS

That is really useful information to know. My goal is to basically have a server I can access from any other computer in my own place over likely gigabit connections, so multi-access really isn't that important, but it's clear NFS/Samba is far more flexible in general. I'll probably actually test both protocols out when I build my machine to see what makes sense in the long run.
 
Is ZFS RAID-10 actual proper striped pairs?

raid-1 raid-1 raid-1

Stripe all of the above?

So you have disk 1 in cabinet 1 mirrored to disk 1 in cabinet 2, so if a cable or power loss occurs you only lose half of the RAID.

Would RAID-Z2 allow a 50% loss in the event of a defective cable or power supply to half of the drives?

Yep, it's proper striped pairs in ZFS, as Sub.mesa demonstrated. You'd be safe as long as you don't lose both disks of the same mirrored pair. Sadly that's not quite 2-disk protection, but I feel like it's a nice option if you don't want to do RAID-Z2. Since I'm in no hurry to get a system up fast, I'll probably do some benchmarking on various setups to see what the speed differences are. I'm also considering an extra SSD cache drive, but that might be overkill for my relatively low-end needs.

And it's pretty easy to just think of Raid-Z2 as the ZFS form of Raid-6.
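To put a number on the striped-pairs caveat, a quick enumeration (my own sketch, with hypothetical disk names) shows how many simultaneous 2-disk failures a 4-disk striped-mirror pool survives; RAID-Z2, by construction, survives all of them:

```python
# Enumerate 2-disk failure combinations for a 4-disk striped-mirror
# pool. Disk names are made up; the survival rule is simply that no
# mirror may lose all of its disks.
from itertools import combinations

MIRRORS = [{"disk1", "disk2"}, {"disk3", "disk4"}]

def mirrors_survive(failed):
    """Pool survives if no mirror loses all of its disks."""
    return all(not m <= failed for m in MIRRORS)

disks = {"disk1", "disk2", "disk3", "disk4"}
survived = [set(c) for c in combinations(sorted(disks), 2)
            if mirrors_survive(set(c))]
print(f"{len(survived)} of 6 two-disk failures survived")
# prints: 4 of 6 two-disk failures survived
```

So with 4 disks, striped mirrors survive 4 of the 6 possible double failures, while RAID-Z2 survives 6 of 6 at the same usable capacity.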
 
Yeah, RAID-6 is miserably slow; unusable on ESXi. RAID-5 is bearable with enough cache (4-8GB), and RAID-10, well, rocks.

The delta between RAID-5 with 4-8GB of cache and RAID-10 is oddly not as large as I thought, but RAID-6 really cuts the IOPS big time.
 