
BackBlaze Q4 2015 results are in!

Elixer

Lifer
https://www.backblaze.com/blog/hard-drive-reliability-q4-2015/

Let's start off with the duds...
[image: blog-drives-removed-2015.png]

and now, things get more interesting...
WD reds 😱
[image: blog-drive-model-stats.png]

Let's see the overall brands... (NOTE: look at sample size!)
[image: 2015-drive-failures-barchart.jpg]

How about 4TB drives? (again, pay attention to sample size!)
[image: 2015-4tb-drive-fails-barchart.jpg]

All of the 4TB drives have acceptable failure rates, but we’ve purchased primarily Seagate drives. Why? The HGST 4TB drives, while showing exceptionally low failure rates, are no longer available having been replaced with higher priced, higher performing models. The readily available and highly competitive price of the Seagate 4TB drives, along with their solid performance and respectable failure rates, have made them our drive of choice.
Again, for those who missed it: HGST no longer makes the good drives that are being sampled here...
What about 6TB HDs you say?
[image: blog-6tb-drive-stats.png]


Helium drives?
Well...
We continue to only have 45 of each of the 5TB Toshiba and 8TB HGST Helium drives. One 8TB HGST drive failed during Q4 of 2015. Over their lifetime the 5TB Toshiba drives have a 2.70% annual failure rate with 1 drive failure and the 8TB HGST drives have a 4.90% annual failure rate with 2 drive failures. In either case, there is not enough data to reach any conclusions about failure rates of these drives in our environment.
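Backblaze's "annual failure rate" is failures per drive-year of service, expressed as a percentage, which is why 45 drives can't tell you much. A minimal sketch (the drive-day total below is illustrative, not Backblaze's actual number) shows how one extra failure in a fleet that small swings the rate by whole percentage points:

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """Backblaze-style AFR: failures per drive-year, as a percentage."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Illustrative total: ~45 drives running ~300 days each = 13,500 drive-days.
# One additional failure moves the AFR by almost 3 points.
days = 45 * 300
for failures in (1, 2, 3):
    print(f"{failures} failure(s): AFR = {annualized_failure_rate(failures, days):.2f}%")
```

With 13,500 drive-days, one failure works out to roughly a 2.7% AFR, which is in line with the Toshiba figure quoted above and illustrates why neither number is statistically meaningful yet.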

What does this all translate to?
Well, since many of those HGST drives aren't made anymore, and the WD sample size is so low, the only thing people can really take away from this is: back up your data, no matter what you have!

On the plus side, Seagate seems to have fixed whatever issue they had with their duds, so folks might start sampling those once again.
 
After the duds that were the 1.5TB drives and the ST3000DM001, the new Seagate 4 and 6TB drives seem to be of excellent reliability, when tested in the same environment against their older brethren. Great work Seagate.
 
After the duds that were the 1.5TB drives and the ST3000DM001, the new Seagate 4 and 6TB drives seem to be of excellent reliability, when tested in the same environment against their older brethren. Great work Seagate.
and now a class action suit against the old drives is going to make Seagate go bankrupt :biggrin::biggrin:
 
I find it amusing that the 1.5TB Seagates were "duds". Compare their failure rates vs age to the 2TB WDCs. 10.16% @ 68.1 months versus the WDCs at 9.92% @ 8.7 months. I'll risk that <1% extra failure rate and take an average of 5 years longer lifespan. Even if the WDCs are just a recent addition, that's still pretty bad.

I'm not going to argue that 10% is a remarkably high failure rate, I'm just getting tired of the Seagate bashing, as if every Seagate product is prone to failure, and ONLY Seagate. If the comments from people who read (half of) the Backblaze stats were to be believed, Seagate drives crap out instantly, and should never be trusted for anything. Meanwhile, in the real world, the 1.5TB Seagate drives were lasting an average of 5 years, give or take. I'd expect some failures from a consumer drive being run in their environment for 5 years straight. HGST showed a much better failure rate on their 2TB model (1.55% @ 58.6 months), but Seagate isn't the only brand to have failure rates of epic proportions.
 
I've been buying various flavors of HGST for years and I can't remember the last issue I had with one. Good to see this is corroborated elsewhere.
 
I want to get 8x 3TB disks for a NAS (long term goal, 4 or 6TB, but can't afford yet so I want to fill bay capacity first - can't add disks to ZFS arrays)... what the hell am I supposed to get?

I thought I've heard great things about the WD Reds, and still want to avoid the Seagate 3TB disks.
 
I'd like to see any statistics on large scale usage of SSDs. Would be interesting to see the failure rates of thousands of SSDs.
 
None of those are actually Enterprise drives either by the looks of it. So that makes me feel even better about HGST since they are using "regular" drives and the HGST drives are still holding up better.

What's silly is that HGST is a subsidiary of WD. One would think that they are sharing drive technology amongst each other to make more reliable drives as a whole but that doesn't appear to be happening on the WD side.
 
None of those are actually Enterprise drives either by the looks of it. So that makes me feel even better about HGST since they are using "regular" drives and the HGST drives are still holding up better.

What's silly is that HGST is a subsidiary of WD. One would think that they are sharing drive technology amongst each other to make more reliable drives as a whole but that doesn't appear to be happening on the WD side.

Due to antitrust stipulations, Western Digital was basically forced to run WD and HGST as separate businesses that would compete against each other, for a minimum of two years following the acquisition. I'd be curious whether they're maintaining that separation because it has been working well for both business units, or whether we just haven't seen the slow assimilation yet.

Perhaps, as part of this effort to maintain competitive approaches against each other, certain tech was not shared. Even without that antitrust stipulation, the acquisition was only a few years ago, yesterday in the world of technology R&D. Perhaps they have begun sharing insights but that has yet to produce a product on market. Time between development and market release can be a little while, but I have no idea what kind of time frame we are talking for HDD production. For smart phones, it is typically a 2 year development cycle. For something like rotating platters, I wouldn't doubt if perhaps that gap is even longer.
 
What's silly is that HGST is a subsidiary of WD. One would think that they are sharing drive technology amongst each other to make more reliable drives as a whole but that doesn't appear to be happening on the WD side.

I doubt it's a technology issue.
 
I've been buying various flavors of HGST for years and I can't remember the last issue I had with one. Good to see this is corroborated elsewhere.

I've been buying various flavors of Seagate for years and I can't remember the last issue I had with one.

...including one of the Evil 3TB Seagates.... 😱
 
While these are always interesting to look at, the sample size is still pretty darn small compared to the total drive units Seagate (and everyone else) sold.
(Seagate)
[image: seagate_0_Prime951.gif]

(WD)
[image: wdc_0local1.gif]


I wish Microsoft, Netflix, Google, Facebook, Twitter and all the rest would post their findings as well.
 
I wish Microsoft, Netflix, Google, Facebook, Twitter and all the rest would post their findings as well.

I would want to know Amazon's drive failure data as well. AWS has basically become the de facto cloud hosting center for most new businesses.

Most of the Netflix servers are hosted on AWS now, anyway.
 
One thing I found interesting:

A relevant observation from our Operations team on the Seagate drives is that they generally signal their impending failure via their SMART stats. Since we monitor several SMART stats, we are often warned of trouble before a pending failure and can take appropriate action. Drive failures from the other manufacturers appear to be less predictable via SMART stats.

In some ways, I would take predictability over a small increase in durability. I would be curious which stats they monitored, specifically. I would assume things like reallocated sectors, but what else?
 
One thing I found interesting:

A relevant observation from our Operations team on the Seagate drives is that they generally signal their impending failure via their SMART stats. Since we monitor several SMART stats, we are often warned of trouble before a pending failure and can take appropriate action. Drive failures from the other manufacturers appear to be less predictable via SMART stats.

In some ways, I would take predictability over a small increase in durability. I would be curious which stats they monitored, specifically. I would assume things like reallocated sectors, but what else?

https://www.backblaze.com/blog/hard-drive-smart-stats/
While this hasn't been updated AFAIK, they do show which drives failed with which SMART stats here... https://www.backblaze.com/blog-smart-stats-2014-8.html
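The first link lists the five attributes Backblaze actually watches: SMART 5 (reallocated sectors), 187 (reported uncorrectable errors), 188 (command timeout), 197 (current pending sectors), and 198 (offline uncorrectable). A rough sketch of pulling those out of `smartctl -A` output, for anyone who wants to watch their own drives the same way (the parsing assumes the usual attribute-table layout, and some vendors report multi-field raw values that this naive regex won't handle):

```python
import re

# The five attributes Backblaze monitors, per their SMART-stats post:
# 5 Reallocated Sectors, 187 Reported Uncorrectable, 188 Command Timeout,
# 197 Current Pending Sectors, 198 Offline Uncorrectable.
WATCHED = {5, 187, 188, 197, 198}

def parse_smart_table(text: str) -> dict:
    """Pull raw values for the watched IDs out of `smartctl -A` output."""
    values = {}
    for line in text.splitlines():
        # attribute rows start with the ID and end with the raw value
        m = re.match(r"\s*(\d+)\s+\S+.*\s(\d+)\s*$", line)
        if m and int(m.group(1)) in WATCHED:
            values[int(m.group(1))] = int(m.group(2))
    return values

def failing(values: dict) -> bool:
    """Rule of thumb from the Backblaze post: nonzero raw values are a warning."""
    return any(v > 0 for v in values.values())

# Usage (needs root): feed it the stdout of `smartctl -A /dev/sda`.
```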
 
https://www.backblaze.com/blog/hard-drive-smart-stats/
While this hasn't been updated AFAIK, they do show which drives failed with which SMART stats here... https://www.backblaze.com/blog-smart-stats-2014-8.html

Great Links.

I checked the SMART data on the drives in my server (a mix of Toshiba, WD, Seagate, Hitachi) and found it interesting that Seagate logs so much more SMART info than the other 3 brands. Of the five Metrics that BackBlaze uses, only Seagate uses SMART 187 & SMART 188. In fact, HD Sentinel has scored one of my Seagates down for throwing an error a couple of years ago on SMART 187. I wonder if my other drives might be scored down, too, if they logged the same thing.
 
It sure would have been nice if the SATA spec had also covered SMART data, with each value clearly defined and compliance required for SATA certification.
It is a PITA trying to figure out what the vendor-specific stuff means, and programs that read SMART data must constantly be updated because the manufacturers keep moving stuff around.
 
It sure would have been nice if the SATA spec had also covered SMART data, with each value clearly defined and compliance required for SATA certification.
It is a PITA trying to figure out what the vendor-specific stuff means, and programs that read SMART data must constantly be updated because the manufacturers keep moving stuff around.

I agree. I've got a LiteOn (Dell OEM) SSD, and it won't even tell me power on hours, let alone much of anything else.
 
backblaze is hopeless when it comes to data presentation

they release their data in such a mess of incomparable results that it's worthless as is

they talk about failure rates but they're comparing across different age drives which you just can't do

i mean, saying these drives only have a 1% failure rate but we've only had them 6 months: is that better or worse than a drive that has a 10% failure rate after 5 years? no one knows

what they need to do is very simple:
a chart of cumulative failure rate by age

what % died by 6 months, 1 year, 2 years, etc

some drives will only have 6 months of data as they are new, some will go all the way through 5 years, that's fine, but at least the numbers at those points that they do have in common will be directly comparable
 