To address these:
A corrupted OS installation can and will take everything with it. It's also a lot rarer than you would think, unless you do something really stupid. Having a hardware raid controller is no protection against someone accidentally going "rm -rf /" (or the equivalent gui command), or against a broken operating system install doing the same thing. It might be proof against a badly-written software raid implementation - but unless you're using a bleeding-edge untested linux kernel version or a development version of md, that's not a problem. And anyone using bleeding-edge ANYTHING for a server that needs to be reliable deserves what they get. Well-done software raid is very portable - plug the disks into another linux machine, run one command, and the array is again accessible. If a disk dies during the transfer, you can rebuild on the new machine.
Performance: I can easily saturate gigabit with a 4-disk raid 5 array (using md in linux). For sequential writes, it's almost fast enough to saturate two teamed gigabit connections. Unless you're dedicated enough to invest in 10Gb ethernet, it's perfectly adequate for home use. Sure, random IOPS aren't high - but if you care about that, an SSD will do much, MUCH better than any traditional disk setup. Granted, software raid won't be anywhere near as fast as a good hardware controller, but that sort of speed is rarely necessary in a home situation, and not worth the extra money/hassle.
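If you want to sanity-check that kind of sequential throughput yourself, something like this gives a rough number (the device name /dev/md0 is an assumption - use whatever your array shows up as):

    # quick buffered sequential read test of the assembled array
    hdparm -t /dev/md0
    # a longer sequential read that bypasses the page cache (8 GiB)
    dd if=/dev/md0 of=/dev/null bs=1M count=8192 iflag=direct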
Error proofing: First off, RAID is not backup. There are far too many points of failure that will destroy the entire thing (a power supply blowing up from a nasty power surge, someone breaking in and taking the box, an accidental rm -rf or similar, etc.) to count on it alone for protecting important data. I have HAD a computer die (an old VIA motherboard killed itself, taking almost everything with it). I moved the hard disks to a different install of linux on a different computer and it took one command (mdadm --assemble --scan, iirc) and the whole thing was again accessible. I've never had problems with data corruption from software raid, though non-ECC memory isn't perfect. If you DO want ECC memory, the microserver supports it.
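For reference, the reassembly step is roughly this (device names are assumptions - adjust to whatever the disks show up as on the new box):

    # scan for md superblocks and assemble any arrays that are found
    mdadm --assemble --scan
    # or name the member partitions explicitly if auto-detection misses them
    mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1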
I realise anecdotal evidence doesn't tell the whole story, but I've lost two raid arrays to hardware raid controllers malfunctioning. One controller started corrupting everything on the array - I think it was some sort of problem with its onboard memory. Another controller died outright, and I couldn't find a compatible replacement, so although the disks were fine I couldn't recover the data. Software raid, on the other hand, has been reliable - even when the computer running the array died, I was always able to move the disks to a different linux box and reassemble the array.
To address these points:
Your points about portability are valid, but they really don't address any of the arguments I made regarding hardware RAID. As I said before, RAID 0 and 1 are fine on md and the Windows equivalents, but the corruption and write-hole issues are what make RAID 5 and higher unsuitable for software implementations.
Secondly, you are trying to reason anecdotally that UREs rarely happen, but real-world data suggests otherwise.
http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
The above article is based on 1TB and 2TB drives, which is nothing compared to the 3TB and 4TB drives we're using now. Just do a google search and you will find many, many people with RAID 5 corruption issues. Why? Because they were doing software RAID 5 and not scrubbing their array. The article I linked to made the faulty assertion that this will happen regardless of your RAID setup, but this is not true. The whole point of a hardware RAID card with ECC memory is that you can error check and scrub the array regularly. Any UREs that do develop can be fixed before you lose a drive. There's always a possibility of developing yet another URE during a rebuild, but everything carries risk, which brings me to my next point.
You claim that RAID is not a backup, and I honestly don't see why you would bring this up. While it is an important point, it has literally zero to do with my points. If you *really* believed what you posted, why would you do RAID at all? Why not just a giant JBOD? That way, if a drive goes down, you lose the whole thing. But you can always restore from backup, right?? Right? Oh wait, you mean you were trying to ensure you didn't have to restore from backups every time there is a drive failure? That's the *point*. RAID 5 in its default state, with today's high-capacity drives, is more or less unfit to rely on for data protection. RAID 5 needs scrubbing to be considered reliable, and you can do that two ways:
ECC memory in a RAID controller to check for bit rot or other UREs
Read the parity information for every bit of data on all the disks.
You do not need ECC memory to scrub an array, but it makes it easier and less invasive on the array. Linux md will even scrub the data regularly if you set a cron job for it, and if the OP is going to be using the Linux MD RAID system he most definitely should.
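As a rough sketch of what that looks like (assuming the array is /dev/md0 and you're root):

    # kick off a scrub - md reads every stripe and compares data against parity
    echo check > /sys/block/md0/md/sync_action
    # watch progress
    cat /proc/mdstat
    # root's crontab: run the same check at 03:00 on the 1st of every month
    0 3 1 * * echo check > /sys/block/md0/md/sync_action

Some distros already ship something like this - Debian's mdadm package, for example, installs a monthly checkarray cron job - so check before adding your own.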
Another point I want to make is that VM workloads are usually not even remotely sequential. You have multiple VMs hitting the same array; it is, by definition, not sequential. An SSD would make a great addition, but traditional RAID doesn't support storage tiering, which brings me around in a circle again to my original point.
If one is not willing to use hardware controllers (and I do not believe there is much point in them), then ZFS is pretty much the de facto filesystem for doing storage cheaply. ZFS's checksumming removes concerns about UREs, it supports tiered storage with SSDs, it supports fast synchronous writes when using log devices, and it provides the same mobility as md.
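To make that concrete, a minimal sketch with ZFS on Linux might look like this (the pool name and device names are placeholders):

    # raidz pool (roughly the RAID 5 equivalent) across four disks; checksumming is on by default
    zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # SSD partition as a separate intent log for fast synchronous writes
    zpool add tank log /dev/sdf1
    # second SSD partition as an L2ARC read cache (the SSD "tiering" role mentioned above)
    zpool add tank cache /dev/sdf2
    # scrub: read everything and verify it against the checksums
    zpool scrub tank
    # mobility: export on one box, import on another
    zpool export tank
    zpool import tank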
One thing that is quickly becoming apparent is that RAID is obsolete. RAID is an old system that will be supported for many years to come, but it is obsolete. While everything supports RAID, no large system relies on it as the final product; it is the base that the rest of the system is built on. Smart software storage is the future. EMC, NetApp, and Compellent all create software-based storage pools that are built on small RAID groups for redundancy. That way a single RAID group going down doesn't wreck the whole array.
If you want to do Software arrays, there are much better ways than RAID to do it.