Originally posted by: zephyrprime
Even by that way of figuring MTBF, the numbers are still BS. I did helpdesk support for a 7000-seat company, and the site I was at probably had 1000+ people and computers in it. I probably did a HD replacement once a week, and so did the other two guys who worked there. If the computers are on for 8 hours a day, 5 days a week, that works out to one failure every 13,333 hours of operation. Nowhere near the rated MTBF.
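Here's the back-of-the-envelope math, as a quick Python sketch (the head counts and duty cycle are my rough estimates from above, not exact figures):

```python
# Rough operational MTBF for our site, from my own estimates:
# ~1000 machines, on 8 hrs/day, 5 days/week, ~3 drive swaps/week
machines = 1000
hours_per_week = 8 * 5            # power-on hours per machine per week
failures_per_week = 3             # one replacement each for 3 techs

fleet_hours_per_week = machines * hours_per_week   # 40,000 hours
observed_mtbf = fleet_hours_per_week / failures_per_week
print(f"Observed MTBF: {observed_mtbf:,.0f} hours")  # ~13,333 hours
```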
Needless to say, MTBF figures are bigger BS than LCD response times and Apple performance claims.
Also, all the manufacturers list similar MTBF figures, even though the reliability survey database at StorageReview shows drastic differences in reliability from model to model.
I bet they use all sorts of tricks to calculate these numbers. For example, they could extrapolate the figure only from new drives, which, being new, are less likely to fail. Also, I noticed that the wording on the Samsung site was "power on hours". It doesn't mention time in actual USE. A drive could last a long time if it were turned on but never had to perform any reads or writes. Then there's the possibility of outright lying.
You're taking the MTBF a bit too literally. It's not that the companies are lying; you have to understand that the quoted values on spec sheets are theoretical MTBF, not operational MTBF.

Why are they theoretical? Well, if you have ever looked at the specs of a newly announced family of drives before it is released, you will likely see an MTBF spec on there. Obviously there is no way an HD manufacturer can know for certain how a drive is going to perform in the field before it is ever released into the field. So they use a number of techniques to guesstimate what the value should be. Things like how previous/similar drives performed, component failure rates, and limited in-house testing over a limited period of time (a few thousand drives for a few months) are used as a model for determining what the MTBF should be. Still, no matter how accurate they make their model, it's still just theoretical.
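To illustrate the component-failure-rate piece, here's a toy parts-count calculation. The parts and rates below are made up purely to show the mechanics, not anyone's real numbers:

```python
# Toy parts-count reliability model (all failure rates here are
# hypothetical). Under the usual constant-failure-rate assumption,
# the drive's failure rate is the sum of its components' rates,
# and MTBF is the reciprocal of that total.
component_failure_rates = {     # failures per hour
    "spindle_motor":  1.0e-7,
    "head_assembly":  2.5e-7,
    "controller_pcb": 1.5e-7,
    "actuator":       1.0e-7,
}

lambda_drive = sum(component_failure_rates.values())  # total failure rate
theoretical_mtbf = 1 / lambda_drive                   # hours
print(f"Theoretical MTBF: {theoretical_mtbf:,.0f} hours")  # ~1,666,667
```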
That said, you can pretty safely assume that, on average, a drive with a 1.2 million hour MTBF will fail less frequently than one with 500,000 hours. But if the difference is only something like 100,000 hours, that's not a statistically large enough gap to declare one drive likely more reliable than the other.
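To put those figures in more intuitive terms, you can convert an MTBF into an approximate annual failure rate. This assumes the standard constant-failure-rate model, and the 24/7 duty cycle is just an assumption for comparison's sake:

```python
# Approximate annual failure rate (AFR) from a rated MTBF,
# assuming a constant failure rate. This linear approximation
# of 1 - exp(-t/MTBF) is fine when the rate is small.
def afr(mtbf_hours, power_on_hours_per_year=8760):  # 8760 = 24/7 duty
    return power_on_hours_per_year / mtbf_hours

for mtbf in (1_200_000, 500_000):
    print(f"{mtbf:>9,} hr MTBF -> {afr(mtbf):.2%} AFR at 24/7 duty")
# 1,200,000 hr MTBF -> 0.73% AFR
#   500,000 hr MTBF -> 1.75% AFR
```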
If you're only getting about 13,000 hours, then either you're not calculating your failure rate accurately, there was a defect in that drive run, there's something in your environment the MTBF model didn't account for, or some combination of those and other variables is contributing to the difference.
The problem is, how come HDs get away with a completely different definition of MTBF?
For example?