That advice is not entirely incorrect.
The 'no more than 9 devices in a vdev' advice is outdated and needs nuance. The real issue is that 30 devices in a single RAID-Z3 vdev will still have the random I/O performance of a single drive: IOps scale with the number of vdevs, because ZFS load-balances across them.
In fact, 10 drives in RAID-Z2 is probably the best ZFS pool configuration for many home users who typically store large files and want both good protection and good economy. RAID-Z2 gives you double parity, so pretty good protection, at only 20% overhead: 2 parity drives out of 10 = 80% usable storage.
Furthermore, some ZFS pool configurations are much better suited to 4K Advanced Format drives.
The following ZFS pool configurations are optimal for modern 4K-sector hard drives:
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives
The trick is simple: subtract the number of parity drives and you get:
2, 4, 8, 16, 32 ...
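If you want to sanity-check a layout yourself, here is a minimal Python sketch of that rule. It is not a ZFS tool and the name is_4k_optimal is just made up for illustration:

# Minimal sketch of the rule above, not ZFS code: a RAID-Z vdev is
# "4K-optimal" here when its number of data disks is a power of two.
def is_4k_optimal(total_disks, parity_disks):
    data_disks = total_disks - parity_disks
    return data_disks > 0 and (data_disks & (data_disks - 1)) == 0

# RAID-Z2 (2 parity disks) examples: the listed sizes pass, others do not.
for n in (4, 6, 10, 18, 34, 8, 12):
    print(n, "disks in RAID-Z2:", is_4k_optimal(n, parity_disks=2))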
This has to do with the default recordsize of 128KiB, which gets divided over the data disks. Example for a 3-disk RAID-Z writing a 128KiB record to the pool:
disk1: 64KiB data (part1)
disk2: 64KiB data (part2)
disk3: 64KiB parity
Each disk now gets 64KiB, which is an exact multiple of 4KiB. This means it is efficient and fast. Now compare this with a non-optimal configuration of 4 disks in RAID-Z:
disk1: 42.67KiB data (part1)
disk2: 42.67KiB data (part2)
disk3: 42.67KiB data (part3)
disk4: 42.67KiB parity
Now this is ugly! It will either be rounded down to 42.5KiB or padded up to 43.00KiB, which can vary per disk. Both of these are non-optimal for 4KiB-sector hard drives, because neither 42.5K nor 43K is a whole multiple of 4K. The per-disk write needs to be a multiple of 4K to be optimal.
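To make the arithmetic concrete, here is a small Python sketch of the same calculation. The names per_disk_kib and RECORDSIZE_KIB are just illustrative, not ZFS internals:

# Minimal sketch, not ZFS code: how much data each disk receives for one
# full 128KiB record, and whether that amount lines up with 4KiB sectors.
RECORDSIZE_KIB = 128

def per_disk_kib(total_disks, parity_disks):
    # Parity columns are roughly the same size as the data columns.
    return RECORDSIZE_KIB / (total_disks - parity_disks)

for disks in (3, 4):  # the two RAID-Z examples above
    share = per_disk_kib(disks, parity_disks=1)
    aligned = (share * 1024) % 4096 == 0
    print("%d-disk RAID-Z: %.2fKiB per disk, multiple of 4KiB: %s"
          % (disks, share, aligned))

Running it prints 64.00KiB (aligned) for the 3-disk case and 42.67KiB (not aligned) for the 4-disk case, matching the examples above.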
Hope this helps.
Cheers,
sub.mesa
