ZFS Setup Tips for best redundancy/performance/storage?

GWestphal

Golden Member
Jul 22, 2009
I'm thinking of making a ZFS file server for my digital life — about 3TB and growing, though mostly archival, so redundancy > performance for me, as long as I can get single-drive-level performance of around 50-100 MB/s.

I'm thinking of getting an i7-3770, 64 GB DDR3, an Adaptec PCIe controller with passthrough, 8-10 2TB WD Red SATA drives, and an SSD for L2ARC.

I'd like to have encryption, deduplication, and redundancy (RAIDZ2, or perhaps RAID1+Z1 if that's possible), so Solaris 11 would seem the only choice. But then I'm sort of forced to stick with Solaris, the alternative being FreeBSD with ZFS pool version 28, which has no native encryption.

Now, it seems RAIDZ has an optimal number of devices per vdev — something like 6, 10, or 18 if you're using 4K Advanced Format drives. Then there's drive alignment. Then there's how to arrange the vdevs in the pool for the best performance/redundancy balance, and setting it up so it can expand over time. I guess what I'd like to know is a general (or specific) checklist of what I need to do to set the system up right the first time.
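That 6/10/18 rule of thumb can be sanity-checked with a little shell arithmetic — a sketch only, assuming the default 128K recordsize and RAIDZ2 (two parity disks per vdev):

```shell
#!/bin/sh
# A 128K record is striped across the data disks (total minus 2 parity).
# It divides evenly into whole 4K sectors only for certain vdev widths --
# hence the 6/10/18 rule for RAIDZ2 on Advanced Format drives.
for disks in 6 8 10 18; do
  data=$((disks - 2))
  per_disk=$((128 * 1024 / data))
  if [ $((per_disk % 4096)) -eq 0 ]; then
    echo "${disks}-disk raidz2: $((per_disk / 1024))K per data disk (4K-aligned)"
  else
    echo "${disks}-disk raidz2: ~${per_disk}B per data disk (NOT 4K-aligned)"
  fi
done
```

So 6 disks gives 32K per data disk, 10 gives 16K, 18 gives 8K — all clean multiples of the 4K sector — while an 8-disk RAIDZ2 splits 128K across 6 data disks and doesn't divide evenly.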

1. Get all 4K (Advanced Format) drives.
2. What partition scheme? GPT, presumably?
3. Make sure partitions are aligned to... where? 1 MB?
4. Label disks with stickers based on port number.
5. Install Solaris (maybe to a separate 2-disk mirror), then keep the other disks data-only.
6. vdev/pool setup: 1x 10-disk RAIDZ2 so you have 128K/8 data disks, or perhaps 2x 6-disk RAIDZ2 (128K/4 each) + a mirror, with each 6-disk group as a separate vdev. I understand smaller vdevs can be good with regards to resilvering/scrubbing — is that correct?
7. How does stripe size (recordsize) affect performance? Is bigger better with 4K Advanced Format drives? BSD is limited to 128KB, and I think Solaris can go to 1MB.
8. Enable ZFS features: encryption, dedup, ZIL, L2ARC, hot spares.
9. Set mount points (mount properties, space reservations, etc.).
10. Enable Samba/NFS/what have you.
11. Profit?
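Steps 5-9 on Solaris 11 might look roughly like this — a sketch only, with hypothetical `c0t0d0`-style device names and a made-up `tank/archive` dataset; substitute your actual disks:

```shell
# Step 6: one 10-disk RAIDZ2 vdev (128K record / 8 data disks).
# Device names are placeholders -- check yours with format(1M).
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
                         c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0

# Step 8: one SSD as a dedicated ZIL (SLOG), another as L2ARC,
# plus a hot spare
zpool add tank log c1t0d0
zpool add tank cache c1t1d0
zpool add tank spare c0t10d0

# Dedup and encryption are per-dataset properties; encryption can only
# be turned on at dataset creation time (Solaris 11 only)
zfs create -o encryption=on -o dedup=on tank/archive

# Step 9: mount point and a space reservation
zfs set mountpoint=/export/archive tank/archive
zfs set reservation=100G tank/archive
```

Note that with a single 10-disk RAIDZ2 vdev, expansion later means adding a whole second vdev of similar geometry — you can't grow a RAIDZ vdev disk by disk.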

Is that about right? Did I miss anything or mix anything up? What do you think about the layout of the drives as vdevs, etc.? And how much RAM should I budget for the ARC (especially with dedup on)?

Also, what about ways to optimize sharing data over NFS or SMB? Could directory indexes be cached in RAM somehow, so traversing the directory structure remotely would be lag-free?
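For the sharing side, ZFS already keeps directory metadata in the ARC, so repeated traversal is served from RAM; a few dataset properties can help keep it that way. A sketch, again assuming a hypothetical `tank/archive` dataset:

```shell
# Share over NFS and the Solaris kernel CIFS server
zfs set sharenfs=on tank/archive
zfs set sharesmb=on tank/archive

# Skip access-time updates so remote directory walks don't generate writes
zfs set atime=off tank/archive

# Keep both data and metadata in the ARC (the default); on a RAM-starved
# box, primarycache=metadata would favor directory/metadata caching instead
zfs set primarycache=all tank/archive
```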
 