If you're concerned enough about data corruption, then before buying ECC memory you'd best get yourself some extra hard drives, run an appropriate RAID level, and do regular RAID scrubs, because even CERN's data shows that data on platters is more susceptible to "silent data corruption" than data in RAM:
http://www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191
No doubt. The differences are that RAID and filesystems are cheap and easy to use (for relative values of easy), and that disk corruption has been a known, common occurrence for decades. NTFS protects its own metadata, and is usually pretty good about surfacing drive CRC errors, IME
(I'm really more disappointed in MS doing half-assed things like ReFS, instead of making a real next-gen FS to replace NTFS; it will cost them more, stress users more, and do less good over the years). Data I care about also rests on EXT4, which checksums about everything it could without breaking EXT3 compatibility
(I plan to move to BTRFS once the offline FSCK gets farther along), and static data rests in archives with parity to back them up. All of that cost only my time, which I was already free to spend. The added cost of HDDs should never even be questioned: you either need RAID, or automatic backups to a second HDD
(in either case, more HDDs are needed than just enough to fit the data). For most users, that would be a good reason to get a specialized NAS box, and maybe commercial backup software for their desktops (which might do incremental or diff backups with its own CRCs). With CRC checking, backups are generally enough, as long as you can be sufficiently confident of the initial storage effort.
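To make the CRC-checking part concrete, here's a minimal sketch (Python, with hypothetical paths, manifest name, and helper names of my own invention) of recording CRC32s right after a backup and re-verifying the resting copies later; it just stands in for whatever your backup software does internally:

```python
import zlib
import json
from pathlib import Path

CHUNK = 1 << 20  # read files 1 MiB at a time


def crc32_of(path: Path) -> int:
    """Stream a file through zlib's CRC-32 so large files never need to fit in RAM."""
    crc = 0
    with path.open("rb") as f:
        while chunk := f.read(CHUNK):
            crc = zlib.crc32(chunk, crc)
    return crc


def write_manifest(root: Path, manifest: Path) -> None:
    """Record path -> CRC32 for every file under root (run right after a backup)."""
    sums = {str(p.relative_to(root)): crc32_of(p)
            for p in root.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(sums, indent=2))


def verify_manifest(root: Path, manifest: Path) -> list[str]:
    """Return the files whose current CRC32 no longer matches the manifest."""
    sums = json.loads(manifest.read_text())
    return [rel for rel, crc in sums.items()
            if crc32_of(root / rel) != crc]


if __name__ == "__main__":
    # Hypothetical locations; point these at a real backup set to use it.
    backup_root = Path("/mnt/backup/photos")
    manifest_path = Path("/mnt/backup/photos.crc.json")
    if not manifest_path.exists():
        write_manifest(backup_root, manifest_path)
    else:
        bad = verify_manifest(backup_root, manifest_path)
        print("all files verified" if not bad else f"mismatches: {bad}")
```

CRC32 only detects corruption; it can't repair anything, which is why the parity archives (or the second copy) still matter.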
Now, that's mostly taken care of. What's left? ECC RAM is definitely not the first line of defense, but it is the last line of defense that hardware functioning as intended otherwise lacks, and it helps with early detection and diagnosis of hardware that may be bad or failing. Then Google's first study came out, and more recently the follow-up, showing (a) that hard errors were more common than soft errors, (b) that error rates correlated positively with utilization (not unexpected, but it doesn't fit the old picture of alpha particles flipping bits in resting data), (c) that the rate of errors on hardware not otherwise malfunctioning was far greater than previously thought, and (d) that, even so, many systems never encountered any errors at all.
Add to that, without ECC RAM you cannot state with high confidence that you have not experienced an error, even if the chances are less than 1 in 92 per year. Since data CRCs make verifying resting data fairly easy without ECC, I'm of the opinion that ECC on a home server should come after the desktops, not before, unless said server is also running VMs that do a lot of data manipulation (not the common home server case).
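To put a number on "cannot state with high confidence", here's a back-of-the-envelope sketch, assuming (purely for illustration) that the 1-in-92-per-year figure above holds independently per machine per year:

```python
# Toy arithmetic only: assumes an independent 1-in-92 chance of at least one
# memory error per machine per year -- just the figure cited above, not a
# measured rate for any particular hardware.
p_per_year = 1 / 92

for machines in (1, 4):
    for years in (1, 5):
        p_clean = (1 - p_per_year) ** (machines * years)
        print(f"{machines} machine(s), {years} year(s): "
              f"{p_clean:.1%} chance of zero errors, "
              f"{1 - p_clean:.1%} chance of at least one (undetectable without ECC)")
```

The exact numbers aren't the point; the point is that without ECC you have no way to confirm which side of those odds you actually landed on.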
The big problem with ECC today is that the marginal cost is high, due to Intel's market segmentation and AMD's lack of competitive CPUs (if BD hadn't been the flop it was, I probably would have upgraded already). A $400 PC becomes a $500+ PC, and a $1000 PC becomes a $1200+ PC that can't be overclocked much, rather than you just spending a little more on the RAM (assuming data and command parity). That makes implementing it a non-trivial value judgement.