Various quotes from it disagree with you.
The article you linked said:
The only definitive conclusion we can reach right now is that you should take any claim of reliability from an SSD vendor with a grain of salt.
It's a matter of reading comprehension.
When it comes to enthusiasts, we really can't make the assumption that an SSD is more reliable than a hard drive.
"Can't make the assumption" != "the opposite is true". It means "we do not know".
Suffice it to say, the researchers at CMRR are adamant that today's SSDs aren't an order of magnitude more reliable than hard drives.
"Not an order of magnitude more reliable" != "not more reliable". SSDs can still be more reliable, just not by a whole order of magnitude; e.g. a 3% AFR versus a 4% AFR is better, but nowhere near 10x better.
Giving credit where it is due, many of the IT managers we interviewed reiterated that Intel's SLC-based SSDs are the shining standard by which others are measured. But according to Dr. Hughes, there's nothing to suggest that its products are significantly more reliable than the best hard drive solutions
Absence of evidence is not evidence of absence. That being said, under common English usage this particular statement DOES read as "the two are identical".
I believe that despite the poor phrasing they do mean what you think they mean here: Dr. Hughes is saying that the best of the best among both SSDs and HDDs are equal.
And if Dr. Hughes says so, then we must accept his belief over that of every other scientist behind the competing studies which this article seeks to discredit.
The whole article's premise is discrediting those who tout SSDs as an order of magnitude more reliable, and replacing that claim with a big "we do not know yet". While calling other studies into question, the authors are careful not to draw definitive conclusions; in fact, they explicitly state that the only conclusion they CAN draw is that other studies should be taken with a grain of salt (no conclusion can be made about actual relative reliability).
can you please provide a link demonstrating that write limits self-regenerate? Thanks.
I would love to, but unfortunately googling for it swamps me with results about data recovery rather than NAND cell write endurance recovery.
And I don't go around with hyperlinks in my head.
And where’s the comparison to SSDs in Google’s document?
The comparison is in the other 3 links I provided. It is perfectly possible to link to an article discussing just HDDs when the two of us are discussing SSDs vs HDDs.
It doesn’t really matter if you disagree; figure 2 in Google’s study proves you wrong. Also they already covered this in the article:
Figure 2 mixes a lot of variables together. It also reports the 3-month, 6-month, and 1-year age groups separately. Together they are about 7% AFR, which is indeed lower than years 2 and 3 but higher than year 4.
Also see fig. 3, where utilization is taken into account. Notice that the year-one failure rate for high-utilization drives is 15-16%, which is about equal to years 2, 3, 4, and 5 combined.
Actually, this is rather odd:
Note that this implies some overlap between the sample sets for the 3-month, 6-month, and 1-year ages, because a drive can reach its 3-month, 6-month and 1-year age all within the observation period. Beyond 1-year there is no more overlap
If there is overlap and the 3-month, 6-month, and 1-year figures are not meant to be added up, then how come the 3-month group exhibits ~3% AFR while the 1-year group shows only ~2%?
The only possible explanation is that they meant overlap in measurement dates, and that to get the total first-year failure rate you need to add up the three. That sums to ~7% AFR, which is in line with the other years.
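Spelling that arithmetic out (the individual percentages are my eyeball readings of figure 2, not values quoted in the study):

```python
# Approximate AFRs as I read them off figure 2 of Google's study
# (eyeball estimates, not quoted values).
afr_3mo = 0.03  # ~3% AFR for the 3-month age group
afr_6mo = 0.02  # ~2% AFR for the 6-month age group
afr_1yr = 0.02  # ~2% AFR for the 1-year age group

# If the overlap is in measurement dates rather than in failures counted,
# the three windows are disjoint slices of the first year and can be summed:
first_year_total = afr_3mo + afr_6mo + afr_1yr
print(f"combined first-year AFR: {first_year_total:.0%}")  # ~7%
```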
From fig. 2 and fig. 3:
0-1 year = ~7% AFR average for all test types (9-11% on high usage, with failures concentrated in the first 3 months)
1-2 years = ~8% AFR average for all test types
2-3 years = ~8.5% AFR average for all test types
3-4 years = ~6% AFR average for all test types
4-5 years = 6-10% AFR average for all test types
BTW, before you bust on me for "AFR average": I know AFR already stands for annualized failure rate, but I am clarifying that this is the average across testing scenarios, since there was variation in testing conditions.
I don't know about you, but the above figures are staggeringly high in my opinion. Furthermore, the above data does not include drives which did not survive the initial burn-in test.
Before being put into production, all disk drives go through a short burn-in process, which consists of a combination of read/write stress tests designed to catch many of the most common assembly, configuration, or component-level problems. The data shown here do not include the fall-out from this phase, but instead begin when the systems are officially commissioned for use
So that first-year AFR is artificially low because such drives are removed. I find that about 10-20% of drives fail the short initial one-day burn-in test, but my sample sizes are too low for this to be more than mere speculation.
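As a back-of-the-envelope illustration of why I call these figures staggering (the per-year AFRs are my averages from fig. 2 and fig. 3 above, and the burn-in fall-out is my own speculative number, not anything from the study):

```python
# Per-year AFRs, approximate averages from my reading of fig. 2 and fig. 3.
# Year 4-5 uses 8%, the midpoint of the 6-10% range.
afr_by_year = [0.07, 0.08, 0.085, 0.06, 0.08]  # years 0-1 through 4-5

# Compound the yearly survival probabilities to estimate the fraction of
# commissioned drives still alive after five years.
survival = 1.0
for afr in afr_by_year:
    survival *= 1.0 - afr
print(f"5-year survival: {survival:.1%}")                 # ~67.7%
print(f"5-year cumulative failures: {1 - survival:.1%}")  # ~32.3%

# The study's data begins after burn-in, so any burn-in fall-out sits on
# top of this. Using the midpoint of my speculative small-sample 10-20%:
burn_in_fallout = 0.15  # NOT from the study; my own anecdotal estimate
overall = (1.0 - burn_in_fallout) * survival
print(f"survival including burn-in fall-out: {overall:.1%}")  # ~57.5%
```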