A 100,000-drive server farm is honestly a bad way to test off-the-shelf personal-use hard drives. And please note that they said "excessive cooling over running a little hot." I did not read the entire article, but that's neither a very scientific claim nor one that says much. Most drives are designed to run in the 30-40C range, so one being pushed down to 20C is more likely to fail than one running at 42C. It's all about how far out of spec the drive is being run, not which direction.

Their conclusion that failure rates weren't directly correlated to usage is IMHO poppycock. The drives were in Google's server farm; what drive in Google's server farm has access rates anywhere near as low as my mother's 80GB drive, which runs 24/7 and gets used weekly for 30 minutes at best? These are consumer drives, and while that's an extreme end of the usage spectrum, compare it to an enthusiast who's on their computer daily for a few hours and runs some kind of number crunching the rest of the time: drive access is going to be constant, though the actual data transferred is going to be low while the user isn't present.

I do have to agree with their conclusion that SMART doesn't detect anywhere near enough hard drive issues. Its whole purpose is to detect hard drive errors, yet I've had clicking drives that won't even boot into the OS which SMART insists are fine. Useless, IMHO.
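The "how far out of spec, not which direction" point can be sketched in a few lines. This is a hypothetical illustration, not anything from the article; the 30-40C window is just the ballpark design range quoted above, not a manufacturer spec sheet:

```python
# Sketch: how far a drive's operating temperature sits outside a design range.
# The 30-40C window is the rough figure quoted above, not an official spec.
def spec_deviation(temp_c, spec_low=30.0, spec_high=40.0):
    """Degrees C outside the design range; 0.0 if within spec."""
    if temp_c < spec_low:
        return spec_low - temp_c
    if temp_c > spec_high:
        return temp_c - spec_high
    return 0.0

# An over-cooled drive at 20C is farther out of spec (10C off) than a
# warm drive at 42C (2C off), so by this argument it's the likelier failure.
print(spec_deviation(20.0))  # 10.0
print(spec_deviation(42.0))  # 2.0
```

Both directions count as stress; the over-cooled drive just happens to be the one further from its design window here.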