The point is data consistency. Just because you have never worked with it doesn't mean you should tell everyone "never." This $50 HighPoint card is doing it right now. You can and should also turn on scrubbing in md, and in Windows if you are using it. However, since the author of md has taken the position that he can't be bothered to add proper scrubbing and parity checking on read, I wouldn't use md personally. I'll stick to the $50-$150 cards I use at home, where I can turn this functionality on.
To rephrase: how is it checking the data? The OS wants to see 512 B or 4 KB sectors, and the controller gives it stripesize*drivecount, which should be aligned multiples, containing (stripesize*drivecount)/fsblocksize FS blocks each, and nothing else. If it is using additional space for its own ECC info, how much overhead is it adding, and how is it spread across the drive?
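To make the geometry concrete, here's a back-of-envelope calc; the stripe size, drive count, FS block size, and per-sector ECC figure are all made-up illustration numbers, not anything a particular controller reports:

```python
# Back-of-envelope stripe math. Every number here is a made-up
# illustration, not something any particular controller reports.
STRIPE_SIZE = 64 * 1024       # 64 KiB stripe unit per drive
DRIVE_COUNT = 4               # data drives in the set
FS_BLOCK    = 4 * 1024        # 4 KiB filesystem block

full_stripe = STRIPE_SIZE * DRIVE_COUNT
print(f"full stripe: {full_stripe // 1024} KiB "
      f"= {full_stripe // FS_BLOCK} FS blocks")

# If the controller reserved, hypothetically, 8 bytes of its own ECC
# per 512 B sector, the space overhead would be:
print(f"hypothetical ECC overhead: {8 / 512:.1%}")
```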
The core problem with RAID 1 is that if drive A got bad data to write and drive B got good data, or they were updated out of sync somehow, both drives could report no error on that sector, yet the two copies wouldn't match. At that point, you would need every FS block CRC'ed, and FS awareness of the RAID, or RAID driver awareness of the FS, so that the data could be verified against the FS's CRCs. And that's the easy scenario, and one which probably gets corrected well under Windows, I would imagine (but I'm not sure).
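For illustration, here's a minimal sketch of what that FS/RAID integration buys you on a mirror, assuming a hypothetical read path that checks both copies against a filesystem-stored CRC (the names and layout are mine, not any particular implementation's):

```python
import zlib

def read_with_repair(copy_a: bytes, copy_b: bytes, fs_crc: int) -> bytes:
    """Pick whichever mirror copy matches the filesystem's stored CRC.

    Sketch only: a real FS-integrated RAID (ZFS-style) also has to
    handle metadata, rewrite the bad copy, etc.
    """
    if zlib.crc32(copy_a) == fs_crc:
        return copy_a            # A is good; B can be rewritten from A
    if zlib.crc32(copy_b) == fs_crc:
        return copy_b            # B is good; A can be rewritten from B
    raise IOError("both mirror copies fail the FS checksum")

# Divergent mirrors, neither drive reporting a sector error:
good = b"payload as the FS wrote it"
bad  = b"payload as drive A stored it"   # torn/misdirected write
assert read_with_repair(bad, good, zlib.crc32(good)) == good
```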
It would also be possible, with a bad PSU, a power failure, or a driver-level bug, to have bad writes to both drives, leaving the data in a state that can be read back as incorrect, but where figuring out whether either copy is correct, and which one, would be difficult to impossible without FS integration.
Unless the RAID controller explicitly stores the correct CRCs somewhere, you won't even be able to tell which is which. Scrubbing without an additional software layer of checking and correction only catches the easy errors: one drive reporting a bad CRC and the other a correct one, or both reporting bad CRCs (bad block, data integrity compromised). Anything other than that or bitrot won't be caught, much less prevented or corrected. OTOH, if it does catch more than that, what mechanism is it using that doesn't break RAID 1, yet also doesn't hurt performance by needing several seeks per write?
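As a toy model of that, assuming each drive's own sector ECC verdict is all the scrub has to go on (a simplification of the real read path), the decision table looks like this:

```python
def plain_scrub(copy_a: bytes, copy_b: bytes,
                a_sector_ok: bool, b_sector_ok: bool) -> str:
    """What a checksum-less RAID 1 scrub can decide. a_sector_ok and
    b_sector_ok stand in for each drive's own ECC verdict."""
    if a_sector_ok and not b_sector_ok:
        return "rebuild B from A"                  # the easy case
    if b_sector_ok and not a_sector_ok:
        return "rebuild A from B"                  # also easy
    if not (a_sector_ok or b_sector_ok):
        return "bad block; data integrity compromised"
    # Both drives swear their sector is fine...
    if copy_a == copy_b:
        return "pass"   # ...even if both hold the same wrong data
    return "mismatch; no way to tell which copy is right"
```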
RAID 5 ends up in a similar boat: a parity mismatch on scrub tells you something in the stripe is wrong, but plain parity can't tell you which member is wrong.
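A quick sketch of why, using plain XOR parity and made-up block contents:

```python
from functools import reduce

def xor_parity(blocks):
    """XOR the data blocks column-wise, as single-parity RAID 5 does."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]
parity = xor_parity(data)

# Silently corrupt one data block (the drive reports no error):
data[1] = b"\x2f" * 4
assert xor_parity(data) != parity   # the scrub sees *a* mismatch...
# ...but XOR parity alone can't say whether data[0], data[1], data[2],
# or the parity block itself is the one that's wrong.
```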
The correct solution is to invalidate some "free space" and write the new data there, rather than overwriting the old data in place. This pretty much necessitates either a filesystem-integrated method, or a fully proprietary RAID arrangement, with some additional space per stripe allocated for error-checking[ and correction] data, some dedicated EC[C] stripes, etc.
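Something like this toy copy-on-write store sketches the idea; all names are hypothetical, and real implementations also checksum the blocks and journal the pointer update:

```python
class CowStore:
    """Toy copy-on-write block store: new data lands in free space,
    and the old copy is only invalidated after the pointer flips."""

    def __init__(self):
        self.blocks = {}          # physical block id -> data
        self.ptr = {}             # logical block -> physical block id
        self.next_phys = 0

    def write(self, logical, data):
        phys = self.next_phys     # always land in free space
        self.next_phys += 1
        self.blocks[phys] = data  # step 1: write the new copy
        old = self.ptr.get(logical)
        self.ptr[logical] = phys  # step 2: atomic pointer flip
        if old is not None:
            del self.blocks[old]  # old copy invalidated only after the flip

    def read(self, logical):
        return self.blocks[self.ptr[logical]]

store = CowStore()
store.write("inode7/block0", b"v1")
store.write("inode7/block0", b"v2")   # v1 stays intact until the flip
assert store.read("inode7/block0") == b"v2"
```

The payoff is that a crash mid-write leaves you with either the old block or the new one, never a half-overwritten mix of both.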
I'm not convinced that going from 10^-14 to 10^-15 to 10^-16 is really worth a damn, except in a data center environment (at which point it will still only bring the drives to roughly parity with consumer drives, in errors over time). Commodity hardware has so many sources of light failure that a pure software answer to data corruption will offer many more orders of magnitude of robustness in practice (UER specs assume the drive got good data, has a good RAM chip, and there was no other error in the system--we really want to find those other errors, too, especially the ones we aren't expecting, because expected errors tend to be prevented errors), not unlike what layering has done for decades with networking protocols (where each layer, for protocols that care about integrity, is made not to trust the layer it rides on). Traditional RAID, however, doesn't do that. It fundamentally trusts the drive controller(s) and disks.
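For a sense of scale on those UER numbers, assuming the spec means one error per that many bits read and that errors are independent (exactly the clean-room assumptions I'm complaining about), a rough calc:

```python
# Expected unrecoverable read errors for one full read of a drive,
# at the quoted UER specs (per bit read, the usual datasheet unit).
# The 12 TB drive size is just an example.
drive_bits = 12e12 * 8
for uer in (1e-14, 1e-15, 1e-16):
    print(f"UER {uer:.0e}: ~{drive_bits * uer:.2f} expected errors "
          f"per full-drive read")
# 12 TB at 1e-14 -> ~0.96; at 1e-15 -> ~0.10; at 1e-16 -> ~0.01
```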
They are trickling in. They showed up on Newegg two days ago for $120, only to be sold out an hour later. When they restocked the next day, I bought a set at $120 each. They promptly sold out again.
Listed on Newegg at $160 and on Amazon (3rd party) at $140 this morning.
They are a nice Goldilocks drive series. Right now, cost and availability kind of suck, but that will work itself out over the next few months, for sure.