Well, I pretty much get where you're coming from. I've been asking similar questions while designing my next-generation personal data storage system, and avoiding silent data corruption is high on the list.
Modern CPUs do tend to have some of their internal structures protected by ECC or parity checks, plus some level of fault-tolerance logic to recover the data if there's a problem. That may mean correcting the data from the ECC bits, or it may mean just invalidating the cache line if a parity error is detected on a cached read, or whatever the designers chose.
Modern hard drives almost universally use an ECC-type encoding at the low level of physical sector reads / writes. You can query the full SMART data of a typical PATA/SATA disk to see statistics on things like *corrected* read errors, for instance. This detection / correction / encoding typically happens totally without operating system / filesystem / driver involvement; it is all inside the disk drive's hardware / firmware. Usually the OS isn't even TOLD (and may not even NOTICE) when an error correction is performed, so long as the drive doesn't return an "error" result for the read operation indicating lost data, malfunctioning hardware, etc.
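For instance, on a Linux box with smartmontools installed, something along these lines will show the relevant counters (a minimal sketch; the /dev/sda device path and the attribute names are assumptions that vary by system and drive vendor):

```
# A minimal sketch, assuming smartmontools is installed and /dev/sda is the
# drive of interest (run as root; adjust the device path for your system).
import subprocess

# "smartctl -A" prints the drive's SMART attribute table.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout

# Pick out the error / ECC / CRC related counters; drives that report them use
# names like Raw_Read_Error_Rate, Hardware_ECC_Recovered or UDMA_CRC_Error_Count.
for line in out.splitlines():
    if any(key in line for key in ("ECC", "CRC", "Error")):
        print(line)
```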
It is also typical that over a PATA / SATA / SCSI type link, the TRANSFERS of data packets between the HDD and the PC during a READ or WRITE (this has nothing to do with the data STORED on the disk) are protected by a CRC checksum within the packet: it is calculated at the sender and checked at the receiver, and the received data is not considered valid unless the CRC matches. The operation is generally retried semi-automatically at the driver / firmware level if the CRC of the data *transfer* is incorrect.
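Just to make that pattern concrete, here is a toy sketch of the compute-at-the-sender / check-at-the-receiver idea, using CRC-32 from zlib purely for illustration (it is not the actual checksum or framing the ATA/SATA link layer uses):

```
import zlib

def send(payload: bytes):
    """Sender side: hand over the payload together with its CRC-32."""
    return payload, zlib.crc32(payload)

def receive(payload: bytes, crc: int) -> bytes:
    """Receiver side: reject the transfer (so the caller can retry) on a CRC mismatch."""
    if zlib.crc32(payload) != crc:
        raise IOError("CRC mismatch - request a retransmission")
    return payload

data, checksum = send(b"one sector worth of data")
assert receive(data, checksum) == b"one sector worth of data"
```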
However, it doesn't take too much of a scrape / crash / glitch to cause an UNCORRECTABLE ECC error that prevents a READ from returning valid data, so it is good to have some protection against unrecoverable read (or write) errors by using a redundant mirrored or parity-based (RAID 5, etc.) disk aggregation system.
RAID 5, for instance, will calculate true "parity" checksums for your data and distribute that error detection / correction information among your drives, so that not only is the loss of a whole drive detectable / repairable, but so is corruption of data read from any single drive. This is where the fine details start, though: what does YOUR RAID software / controller actually do about checking parity on reads where the read operation itself was SUCCESSFUL according to the individual drives involved? In theory the drives' hardware-level ECC and the transfer-level CRC should detect corruption of stored / read data with good probability, but it's possible to double-check against the distributed RAID parity data. I wouldn't assume every implementation performs RAID-level parity checks on ordinary reads where no errors are reported by the storage drivers; many only verify parity during an explicit verify / scrub pass, so it is worth finding out what yours does.
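The arithmetic behind single-parity RAID is just a byte-wise XOR across the blocks of a stripe; a toy illustration (real arrays work on fixed-size stripes and rotate which drive holds the parity block, but the math is the same):

```
# A toy illustration of RAID 5 style parity, not a real array implementation.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # one block from each of three data drives
parity = xor_blocks(data)            # the block written to the parity drive

# Lose (or distrust) any single block and it can be rebuilt from the others:
rebuilt = xor_blocks([parity, data[1], data[2]])
assert rebuilt == data[0]

# And even when every read "succeeds", recomputing the parity is a cheap way
# to notice that one of the blocks changed silently:
assert xor_blocks(data) == parity    # holds only if nothing was corrupted
```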
There are some vulnerable points, though. As it turns out, a typical PC case has several thousand cosmic rays passing through it every minute, plus the occasional radiation event from natural radioactive materials in the environment. If one of these hits a spot in non-ECC-protected RAM, or a vulnerable part of your CPU / chipset that isn't protected by ECC / parity, data could be corrupted.
I think the statistic at one point was that several induced errors per month per gigabyte of memory were to be expected from sources like cosmic rays.
Of course, poorly designed hardware that is susceptible to electrostatic discharge, EMI/RFI, power fluctuations, dust / humidity affecting the circuits, high temperatures, vibration, etc. will also introduce errors.
With my 1 TB RAID 5 system it isn't too uncommon for me to see a couple of corruptions per terabyte of data read that nothing below the application level ever flagged. Since they went undetected, I can only assume they were "glitches" in some non-parity / non-ECC-protected portion of the transfer path, or the result of a rare software bug.
To get still better protection you can select / configure your RAID to be more on the "paranoid" side, with more parity per amount of data stored (dual-parity RAID 6, for example) and regular verify / scrub passes, to improve the chances that more severe errors will be detected / corrected.
Beyond that, yes, you can use filesystems that maintain their own parity / checksum / ECC data at the logical filesystem level to detect corruption in blocks of ordinary file data, as another tier of redundancy and protection. Few filesystems do this, and I've thought about advocating for change in this area, or maybe even forking one of the popular filesystems to implement this mechanism as an optional feature. If you look at the
"Checksum / ECC" column on the extreme right-hand side of one of the colored tables on the page below, you'll find a few filesystem choices that do implement this sort of thing, at least as an option. You might give GPFS / ZFS a try; you could do much worse than to set up a ZFS 'RAID' on a cheap OpenSolaris file server or two (a rough sketch of what that looks like follows the link).
http://en.wikipedia.org/wiki/Comparison_of_file_systems
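To give an idea of how little effort the ZFS route is, here is roughly what it looks like scripted (a sketch only; the pool name and Solaris-style device names are invented, and you would normally just type the zpool commands at a root shell):

```
# A rough sketch only: pool name and device names are invented, and you would
# normally run these zpool commands directly at a shell prompt as root.
import subprocess

def zpool(*args):
    subprocess.run(["zpool", *args], check=True)

# Create a single-parity raidz pool; ZFS checksums every block it writes and
# verifies the checksum on every read, repairing from redundancy on a mismatch.
zpool("create", "tank", "raidz", "c1t0d0", "c1t1d0", "c1t2d0")

# Periodically walk the whole pool and verify every block against its checksum.
zpool("scrub", "tank")

# The CKSUM column here shows how many checksum errors were found (and fixed).
zpool("status", "-v", "tank")
```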
Beyond that, you can start storing redundancy / recovery metadata yourself, at the application / end-user file level. Programs like "par2", "parchive" and "dvdisaster" exist to calculate such redundancy data and store it in files that can later be processed to recover from a certain amount of the protected files being corrupted or missing (a rough scripted example follows the links below).
http://parchive.sourceforge.net/
http://dvdisaster.net/en/index.php
et al.
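As a rough idea of the par2 workflow, scripted (the file name is invented and option spellings can differ between par2 implementations, so treat this as illustrative):

```
# A rough sketch of driving par2cmdline from a script; the file name is made up
# and the exact options may differ between par2 versions / implementations.
import subprocess

protected = "photos-2009.tar"

# Create recovery files with roughly 10% redundancy alongside the original.
subprocess.run(["par2", "create", "-r10", protected + ".par2", protected],
               check=True)

# Later: verify the protected file, and attempt a repair if it has gone bad.
if subprocess.run(["par2", "verify", protected + ".par2"]).returncode != 0:
    subprocess.run(["par2", "repair", protected + ".par2"], check=True)
```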
There are also programs like AIDE / Tripwire / etc. that calculate hashes of your files (and you can do this more or less automatically for virtually EVERY file on your disks). They store these hashes either in some kind of list / table file, in separate files, or as metadata in a stream attached to the original file, depending on the program you use. These hashes (of which MD5, SHA-1, et al. are just examples from a wide class of options) can then be automatically verified at any point to detect unexpected alterations / corruptions of the original files' contents.
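A bare-bones version of the same idea is only a few lines of scripting; this sketch writes a simple one-hash-per-line manifest and re-checks it later (the real tools track far more metadata, such as permissions and timestamps, and the file / directory names here are just placeholders):

```
# A bare-bones sketch of an AIDE / Tripwire style check using a plain manifest
# file; paths below are placeholders.
import hashlib, os, sys

def sha256_of(path, bufsize=1 << 20):
    """Stream the file through SHA-256 so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root, manifest="manifest.sha256"):
    """Record a hash for every file under 'root' (run once, when data is known good)."""
    with open(manifest, "w") as out:
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                out.write(f"{sha256_of(path)}  {path}\n")

def verify_manifest(manifest="manifest.sha256"):
    """Re-hash every recorded file and report anything that no longer matches."""
    with open(manifest) as f:
        for line in f:
            digest, path = line.rstrip("\n").split("  ", 1)
            if sha256_of(path) != digest:
                print(f"MISMATCH: {path}", file=sys.stderr)

# build_manifest("/srv/archive")   # once, when the data is known good
# verify_manifest()                # any time later, e.g. from a monthly cron job
```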
This is a fairly "lightweight" and handy approach: the hash of a file is only a few dozen printable characters long, so it is trivial in size compared to the hashed file, it is easily readable / verifiable by a person, and it can be transported / copied / backed up along with the file(s) it relates to easily enough.
I see little reason NOT to use something like this as a matter of course, especially if your mode of operation is often "acquire / create / download once, store unmodified forever" for many of your digital assets. Any hash / ECC data you create for such a file should stay correct forever, so the one-time overhead of calculating it isn't high, and you can verify it either quite infrequently (in an automated, exhaustive way) or at the point of use, when you retrieve something from your digital library.
http://www.cs.tut.fi/~rammer/aide.html