But I've got one question: why HW RAID at all? I can understand it for large servers that want to limit CPU overhead, but with all the CPU power around today I couldn't care less for a home server. Also, good controllers aren't cheap, and the cheap ones usually perform worse than SW RAID. Couple that with the fact that you can't get ZFS/RAID-Z but are stuck with RAID 10 or the like, and it just doesn't look especially attractive.
Anything I'm overlooking?
The RAID 5/6 write hole. A hardware RAID engine with a non-volatile (NV) cache effectively protects against the write hole problem: data is only cleared from the NV cache once the parity has been written.
With software RAID, a power failure/kernel crash, etc. could result in the parity not getting written to disk, even though the actual data has been written. The RAID will appear to work correctly, but if a drive dies, the parity will be wrong (with no way to detect this), and the recovery process will 'recover' random garbage instead of your data.
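To make that failure mode concrete, here's a toy Python sketch of a stripe with two data blocks and an XOR parity block (block names and sizes are made up and nothing like a real md on-disk layout): the data write lands, the crash stops the parity update, and a later rebuild hands back garbage without complaint.

```python
# Toy RAID-5 stripe: two data blocks plus an XOR parity block.
# Shows why stale parity silently "recovers" garbage after a drive loss.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Original stripe: parity is consistent.
d0 = b"AAAA"
d1 = b"BBBB"
parity = xor_blocks(d0, d1)

# A crash hits after the new data lands but before the parity update.
d0 = b"CCCC"                      # new data reached the platter
# parity = xor_blocks(d0, d1)     # ...but this write never happened

# Later, the drive holding d1 dies and the array rebuilds it from d0 ^ parity.
rebuilt_d1 = xor_blocks(d0, parity)
print(rebuilt_d1 == b"BBBB")      # False: the "recovered" block is garbage
```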
ZFS is able to work around this by only updating file metadata once the parity has been written. In effect, the data just written to disk is inaccessible until the metadata is updated, which only happens once the parity is complete. If power is lost at an inopportune time, all that happens is that you are left with a perfectly preserved old copy of your file (with parity intact, in case you suffer a subsequent drive failure). You are never left with an unprotected file that could be corrupted at any time.
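A very loose analogy of that ordering, sketched with ordinary files and fsync standing in for ZFS's transactional, copy-on-write updates (all paths and function names here are hypothetical; real ZFS is far more involved):

```python
# Copy-on-write sketch: new data goes to a fresh location, is made durable,
# and only then is the metadata "pointer" switched. A crash before the switch
# leaves the old copy fully intact and still covered by valid parity.

import os

def cow_update(new_data_path: str, pointer_path: str, payload: bytes) -> None:
    # 1. Write the new copy somewhere the old copy is not.
    with open(new_data_path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())          # data (and its parity) durable before publishing

    # 2. Only now publish it by atomically replacing the pointer.
    tmp = pointer_path + ".tmp"
    with open(tmp, "w") as f:
        f.write(new_data_path)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, pointer_path)     # atomic rename: readers see old or new, never half
```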
To some extent, Linux software RAID now offers a similar feature. Ext4 and some other up-to-date file systems do something very similar (though not quite as paranoid as ZFS) by writing new data in such a way that it is not accessible until the drive confirms the data is safe. This has worked fine for ages on plain disks, but software RAID didn't support the necessary 'write barriers' until very recently (kernel 2.6.33). With barriers in place, and ext4 on top, the write hole should in most cases only affect data that hasn't been made visible yet.
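Roughly the ordering a journalling filesystem asks the block layer for, again sketched with fsync standing in for the barrier/flush (hypothetical names; real ext4 journalling is more elaborate). The point is that without the barrier, the RAID layer or drive cache is free to let the commit marker reach disk before the data it describes:

```python
# Journal-style ordering: data first, then a barrier, then the commit record.
# Before 2.6.33, md would drop the barrier, so step 2 could hit disk first.

import os

def journalled_append(log_path: str, record: bytes) -> None:
    with open(log_path, "ab") as f:
        f.write(record)               # step 1: the data itself
        f.flush()
        os.fsync(f.fileno())          # the "barrier": data durable before the commit

        f.write(b"COMMIT\n")          # step 2: commit marker referencing the data
        f.flush()
        os.fsync(f.fileno())
```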
The write hole is only really an issue with RAID 5/6, as it can take quite a while to read in a whole stripe so that parity can be calculated. It could theoretically occur in RAID 1 or 10 too (where one drive in a mirror misses a write just before a crash because it is busy with something else); see the sketch below. Again, an NV cache deals with this problem.
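A toy illustration of the mirror case (purely hypothetical layout): one leg took the write, the other didn't, and nothing on disk records which copy is current, so reads can silently return stale data depending on which leg is picked.

```python
# Two mirror legs after a badly timed crash: sdb missed the last write.
legs = {"sda": b"NEW DATA", "sdb": b"old data"}

def read_block(leg: str) -> bytes:
    # The RAID layer will happily serve either copy.
    return legs[leg]

print(read_block("sda"))   # b'NEW DATA'
print(read_block("sdb"))   # b'old data' -- silently stale
```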
Of course, there's nothing to stop you putting a HW RAID card with NV cache underneath software RAID. You get the very high performance and data integrity of the NV cache plus the flexibility and portability of software RAID; the only downside is that the cache isn't used quite as efficiently, but with the big caches on modern cards (1GB) this shouldn't be much of an issue.
You're right, CPU usage for software RAID on modern CPUs is negligible. In fact, with lots of fast drives and complex RAID types (e.g. RAID 6 or 60), even a top-of-the-range HW RAID engine can be the bottleneck. There are a number of benchmarks around where people have tested 8 or 12 drives on a top-end card, like an HP P812, in HW RAID 6, then re-run the benchmarks with SW RAID 6 and seen a 10-20% increase in performance.