I would argue with that "limitation". The limitation only exists in that it isn't inherently a "clustered" filesystem meant to be actively controlled by more than one host server. Nothing prevents you from creating a high availability cluster which runs your storage as a clustered service, whether as a virtual machine, a Veritas service, or a ricci/luci/cman service. The idea is that you create your disk pools from disks on your SAN, and have multiple servers connected to the SAN which have access to those disks. If the server currently hosting the pools fails, the HA service notices it is down, fails over to another server (using the zpool export/import commands), brings up the IP address used to serve this data on the new server, and you are done. A rough sketch of such a failover script is below.
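To make that concrete, here's a minimal sketch of the start/stop script the HA service would invoke on failover. The pool name, NIC, and service IP are placeholders I made up for illustration, but zpool export/import and the ip commands are the real mechanism:

```sh
#!/bin/sh
# Hedged sketch of a cluster-managed failover script.
# "tank", eth0, and the VIP are hypothetical placeholders;
# both nodes must see the same SAN LUNs for this to work.
POOL=tank
VIP=192.168.10.50/24
NIC=eth0

case "$1" in
  start)
    # The dead node never exported the pool cleanly, so force the import
    zpool import -f "$POOL"
    # Bring up the service IP that clients use to reach the storage
    ip addr add "$VIP" dev "$NIC"
    ;;
  stop)
    ip addr del "$VIP" dev "$NIC"
    zpool export "$POOL"
    ;;
esac
```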
This isn't even close to the same level of redundancy and high availability as a truly clustered filesystem implementation. There is an *immense* difference between telling my client that a node shut down, but one or more nodes kept the system up and there was no downtime, and telling them a node failed, high availability kicked in, the 300 VMs hosted on it went down, and now we're starting them back up and hoping there wasn't any data loss.
As I just explained, no, it isn't limited to ludicrously expensive RSF-1 deployments for high availability. You can do this with a free Linux distro such as CentOS and set up luci/ricci/cman high availability clustering; the basic moving parts are sketched below. The only cost is the SAN, or some other means of attaching the disks to multiple servers (iSCSI, etc.).
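For what it's worth, on a CentOS 6 box the setup is just a handful of packages plus the cluster/service definitions (done through the luci web UI). The hostnames and service name here are hypothetical, but the packages and commands are the real ones:

```sh
# On each CentOS 6 node (hostnames and service name are placeholders)
yum install -y ricci luci cman rgmanager
service ricci start && chkconfig ricci on

# After defining the cluster and the storage service in the luci web UI,
# verify membership and test a manual failover:
clustat                               # show cluster members and service state
clusvcadm -r zfs-storage -m node2     # relocate the service to the other node
```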
Until ZFS gets the ability to either A: have its controller exist on multiple nodes, or B: fail over seamlessly and automatically within 10 seconds of a node failure, it's just a band-aid when there are indeed other alternatives.
Like I said, I use ZFS now in my home. I love it. I have no real problems with the system. But it doesn't exist in a bubble, and when you mentioned that no other system has such data protections, that's not true anymore. Likewise, Storage Spaces has a real advantage when it comes to system redundancy and high availability. Again, the only thing I know of that gets ZFS close to that is RSF-1, which has some caching special sauce to get failover time down to a couple of seconds, which is acceptable for the vast majority of environments. But whether we like it or not,
a system that is not designed to seamlessly hide failures is not a system designed for today's infrastructure.
Everything is going redundant. Disk systems have been for ages. Disk controllers, servers, and programs are all built these days to exist in active/active groups: no failover, no failures, just work getting shunted to whatever is available to participate. Now high-performance filesystems are starting to get this focus. And whether or not true non-failure behavior is important to your project, there's no doubt that it's important to a lot of stakeholders.
