I think we can conduct this conversation without having a bunch of unnecessary fluff.
The fact that some RAID cards are finally getting write and read caches doesn't change the fact that nearly all virtualized storage systems support multiple tiers of storage, not only as a cache but as a data storage area. That allows a full mix of SSDs and 15K, 10K, and 7.2K disks depending on data needs and budget. RAID cards will likely never get that sort of granularity.
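To make the tiering point concrete, here's a minimal sketch of an access-frequency placement policy; the tier names and thresholds are hypothetical, not from any particular product:

```python
# Hypothetical tiers, ordered hot to cold, with an assumed
# accesses-per-day threshold for each.
TIERS = [
    ("ssd", 1000),
    ("15k", 100),
    ("10k", 10),
    ("7.2k", 0),   # cold data lands on slow capacity disks
]

def place(accesses_per_day: int) -> str:
    """Return the tier a block should live on, given its access rate."""
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]

print(place(5000))  # hot block -> "ssd"
print(place(3))     # cold block -> "7.2k"
```

A real system migrates blocks between tiers over time as access patterns change; a RAID card's cache only sees recent I/O, not long-term placement.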
Your talk about backwards compatibility with previous HP SAS RAID cards is entirely untrue. The first SAS hardware didn't even show up until late 2005, which is exactly when ZFS started appearing, and WAY later than Linux MDADM, which has been around since 2001. That still doesn't address the other factor I mentioned: if the card fails, you still have to buy another HP RAID controller to fix the problem.
Additionally, all advanced virtualized storage systems support ADM, mainly for performance reasons. If you have 24 hard drives in an array, splitting them into four 6-disk "RAID 6" groups in an aggregated pool is FAR faster than running a single huge 24-disk array, especially when it's time to rebuild.
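The rebuild advantage is easy to see with back-of-the-envelope math (the 4 TB disk size here is just an assumed number): rebuilding one failed disk requires reading every surviving disk in its array, so smaller groups mean far less data read per rebuild.

```python
disk_tb = 4                  # assumed capacity per disk

# One 24-disk RAID 6: a rebuild reads all 23 surviving disks.
single_array = 23 * disk_tb

# Four 6-disk RAID 6 groups: a rebuild reads only the failed
# disk's 5 surviving group members.
grouped = 5 * disk_tb

print(single_array, grouped)  # 92 TB vs 20 TB read per rebuild
```

The smaller group also shrinks the window during which a second failure in the same group can cost you data.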
Force-trusting parity isn't a good idea in practice and is only useful when you already know the data should be right (like when you move a hard drive around). That "option" is also available in virtualized storage systems. With today's modern processors, parity math isn't a problem; it's storage encryption that is the real CPU hog, and a RAID ASIC won't do a thing to help you there (until vendors decide to release one, though of course the real answer is FIPS end to end, but that's an industry that's just getting started).
As for bitrot checking in the RAID world, it needs ECC memory to be done properly, which tends to limit it to hardware RAID controllers or virtual RAID drivers on systems with ECC memory. As for regarding ZFS as the only file system, I don't believe anyone ever said it was. I'd be curious why you feel that way, since I mentioned ReFS in my previous post. When it comes to bitrot checking, RAID does not check end to end. The only thing RAID checking verifies is that whatever data made it from the card to the hard drive is intact. It doesn't verify that it's the right data, that it's in the right place, or that the card itself didn't introduce an error (the buggy firmware you mentioned). Virtualized storage systems do a better job, but they aren't perfect either.
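To illustrate the end-to-end difference, here's a simplified sketch in the ZFS/ReFS style: the checksum is stored separately from the block (with the pointer, not the data), so corruption or a misdirected write is caught on read. A per-sector CRC on a RAID card would happily pass the same corrupted block, because the wrong data still matches its own sector CRC. The store layout is a toy model, not any real on-disk format.

```python
import hashlib

store = {}  # block address -> (data, checksum kept with the pointer)

def write_block(addr: int, data: bytes) -> None:
    store[addr] = (data, hashlib.sha256(data).hexdigest())

def read_block(addr: int) -> bytes:
    data, expected = store[addr]
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"checksum mismatch at block {addr}")
    return data

write_block(0, b"hello")
# Simulate bitrot: flip the data behind the checksum's back.
store[0] = (b"jello", store[0][1])
try:
    read_block(0)
except IOError as e:
    print(e)  # corruption is detected at read time, end to end
```

Because the checksum travels with the block pointer rather than the block, this scheme also catches phantom and misdirected writes, which sector-level checking cannot.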
There's a pretty interesting blog post about that subject here.
Another thing about virtualized RAID vs. hardware RAID: in large parity arrays (like RAID 6 over 10 disks), parity calculations can get *expensive*. That would matter *if* you had an all-in-one box that was not only holding the data but also running computations against it. Take that workload out, though, in the form of a dedicated virtualized storage appliance, and all the box is doing is managing storage. So what does it matter if the CPU in the storage appliance has something to do?
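A rough illustration of why wide-stripe parity costs CPU: the P parity of a RAID 6 stripe is the XOR of every data chunk, so each full-stripe write on a 10-disk array (8 data + 2 parity) touches all 8 data chunks. (The second, Q, syndrome needs Galois-field math on top of this and is omitted; the chunk sizes here are toy values.)

```python
from functools import reduce

def p_parity(chunks: list[bytes]) -> bytes:
    """XOR equal-length chunks byte-by-byte to get the P parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

chunks = [bytes([i]) * 4 for i in range(8)]  # 8 data chunks of 4 bytes
p = p_parity(chunks)

# Rebuild a lost chunk by XOR-ing the parity with the survivors:
rebuilt = p_parity([p] + chunks[1:])
assert rebuilt == chunks[0]
```

Scale those 4-byte chunks up to a 256 KB stripe unit across terabytes of writes and the cycles add up, which is exactly the load a dedicated appliance CPU can absorb without hurting application servers.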