I felt that way until I had a second drive fail during a rebuild. Now I give that stance more consideration. How many large (> 8TB) rebuilds have you done?
I'm a SAN admin at a company that makes SANs, so... approximately one grillion?
I have about one drive failure a week. Sometimes two. Drive population is... maybe 500? A lot of them are older. I went about six weeks in 2014 with no failures. It was pretty awesome.
At least if you mean an 8TB RAID group size. (Individual drives in the machines I run are <=4TB, which means, for instance, nine 4TB data drives in a 36TB raid group, plus 2 parity drives and 1 hot spare.) It's all automated. Usually, by the time I requisition the new drive and swap it out, the RAID restripe is already complete, without me having had to touch anything.
I mentioned double-parity, because that's REALLY important. Losing a second drive while restriping your group is common enough that everybody knows somebody who's had it happen, even if they haven't had it happen themselves. But losing a third (and actually hosing your array) is basically "death by lightning strike" odds. Basically, it doesn't happen, hence my comment about FUD. (And if you have 3 failures that close together, the other 9 in the shelf are basically time-bombs.)
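A back-of-the-envelope sketch of why that intuition holds. The 5% annual failure rate, 24-hour restripe window, and independent-failure assumption are all my own illustrative numbers, not anything from the thread:

```python
from math import comb

# Illustrative assumptions (not measured data):
afr = 0.05                 # assumed annual failure rate per drive
rebuild_hours = 24         # assumed restripe duration
# Per-drive probability of dying during the rebuild window:
p_drive = afr * rebuild_hours / (365 * 24)

survivors = 11             # 12-drive group with one drive already dead

# P(at least one more drive dies during the restripe) -- the common case
p_second = 1 - (1 - p_drive) ** survivors

# P(at least two more drives die) -- the "lightning strike" triple failure
p_third = sum(comb(survivors, k) * p_drive**k * (1 - p_drive)**(survivors - k)
              for k in range(2, survivors + 1))

print(f"second-failure odds during restripe: {p_second:.2e}")
print(f"third-failure odds during restripe:  {p_third:.2e}")
```

Under these toy numbers the second failure is a few-in-a-thousand event per rebuild (so across a large fleet, everybody knows somebody), while the third is roughly a thousand times rarer still. And that model assumes independence, which is generous: correlated failures in one shelf are exactly the "time-bomb" scenario above.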
The flipside is that in most mirrored configurations, if you lose one disk, you're fine; lose two disks, you might be fine, but if it's the wrong second disk, you're equally boned. The only saving grace is that rebuilds are way faster. But newer CPUs process parity soooo fast...
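The "wrong second disk" odds are easy to pin down: with N two-way mirrors and one disk already dead, a second random failure is only fatal if it lands on that disk's partner, i.e. 1 chance in (2N - 1). A quick sketch (pool sizes are just example numbers):

```python
# Pool of N two-way mirrors = 2N disks. One disk has already failed;
# of the 2N - 1 survivors, exactly one (its partner) is fatal to lose.
def wrong_disk_odds(n_pairs: int) -> float:
    return 1 / (2 * n_pairs - 1)

for n in (2, 6, 12):
    print(f"{n} mirrored pairs: {wrong_disk_odds(n):.1%} chance "
          f"the second failure is fatal")
```

So the bigger the pool, the more a second failure looks survivable, but it never goes to zero, and unlike double parity there's no configuration where *any* two losses are safe.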
If you're running ZFS, there are also expansion considerations that make it easier to run mirrored vdevs rather than RAIDZ. I'm not quite sure whether you agree or disagree with the rest of the statements; I'm assuming disagree, but I'd love to hear why.
I agree with most of what you posted, actually. It's all pretty common-sense stuff. I was mostly just riffing on the common themes - we do have these sorts of threads a lot, and the responses are pretty predictable.
I'm kind of sour on ZFS in general, since... well... you probably don't want to hear those stories.
::dave tears out some of his remaining hair::
Anyway, I don't have to use ZFS RAID management at work, but I'm familiar with ZFS and some of the "expansion considerations" from home use. I'd be curious what configuration of vdevs you prefer, but mirroring all your vdevs (and straight-up doubling your storage costs) strikes me as a hard sell - whether to the spouse or the purchasing department.
Plus, well, mo' spindles, mo' problems.
I guess it depends on the type of expansion you want to do. If you're just going to set something up and leave it alone for a few years, then RAID-Z(1|2|3) is fine. But it's not for people who want to grow an array incrementally (adding one or two drives at a time) and that's a REALLY common ask for home users or people on a budget. To do that, you'd need to be using mdadm or unRAID. (Or Synology OS, or...)
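A toy sketch of that tradeoff. The 4TB drive size and 6-wide RAIDZ2 vdevs are my own example numbers; the point is just that mirrors grow two drives at a time while a classic RAIDZ vdev can't be widened, so growth comes in whole-vdev jumps:

```python
DRIVE_TB = 4  # assumed drive size for illustration

def mirror_usable(drives: int) -> int:
    # Each two-way mirror pair contributes one drive's worth of space.
    return (drives // 2) * DRIVE_TB

def raidz2_usable(vdevs: int, width: int = 6) -> int:
    # Each RAIDZ2 vdev loses two drives to parity.
    return vdevs * (width - 2) * DRIVE_TB

print("8 drives as mirrors:  ", mirror_usable(8), "TB usable")
print("10 drives as mirrors: ", mirror_usable(10), "TB usable")  # +1 pair
print("one 6-wide RAIDZ2:    ", raidz2_usable(1), "TB usable")
print("two 6-wide RAIDZ2:    ", raidz2_usable(2), "TB usable")   # +6 drives at once
```

With mirrors, the buy-in per expansion step is two drives; with RAIDZ2 it's a whole vdev's worth, which is exactly why incremental growers end up on mdadm, unRAID, and friends instead.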
OTOH, if you deal with shelves of disks instead of individual disks and have a corporate budget, ZFS pool and vdev expansion limitations seem pretty easy to work around.