Anybody here "endurance out" an SSD in normal use?

Hulk

Diamond Member
Oct 9, 1999
4,212
2,001
136
There's a lot of worry and perhaps paranoia about SSD endurance and I'm wondering how many people have experienced it. Of course I understand SSDs are a relatively new technology and eventually there will be a steady stream of used up SSDs and we'll have lots of real world data.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
I have four 80GB X25-M G1s. They're at 97%... I don't think I am going to wear these out anytime soon. My guess is something else will die before the write cycles are used up.
 

GlacierFreeze

Golden Member
May 23, 2005
1,125
1
0
From everything I've read, "normal" (light/moderate) use should take 10+ years to wear out a 128GB drive, and quite a bit longer for larger drives. It's much more likely these days that the controller will fail than that the memory will wear out. Testing shows that many drives last several months of extremely heavy, constant writing (a few TB per day). That is way, way beyond normal usage.
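Those lifetime claims fall out of simple arithmetic. A minimal sketch, where the P/E cycle rating, the daily write volume, and the write-amplification factor are all illustrative assumptions, not figures from this thread:

```python
# Rough SSD endurance estimate: capacity * rated P/E cycles gives the
# total NAND write budget; write amplification shrinks the share of that
# budget available to host writes.
def years_to_wear_out(capacity_gb, pe_cycles, gb_per_day, write_amp=2.0):
    """Years until the rated write budget is exhausted at a steady daily rate."""
    host_write_budget_gb = capacity_gb * pe_cycles / write_amp
    return host_write_budget_gb / gb_per_day / 365

# A 128 GB MLC drive (~3,000 cycles assumed) under a 20 GB/day load:
print(round(years_to_wear_out(128, 3000, 20), 1))
```

With these assumed numbers the estimate comes out to roughly 26 years, which lines up with the "10+ years for 128GB" ballpark, and doubling the capacity doubles the estimate.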
 

franzy

Junior Member
Mar 19, 2013
5
0
0
I've been using my OCZ for almost a year now; the 10K RPM drive I had before it broke down after 6 months... so no worries!
 

Puppies04

Diamond Member
Apr 25, 2011
5,909
17
76
In "normal use" you won't see anyone wearing out a drive just yet. Try posting again in 5 years.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
TLC lasts 10 times less than the Samsung 830. The 840 Pro, of course, is a die shrink, which leads to fewer P/E cycles.

In 5 years SSDs will be dead; phase-change memory or MRAM will have replaced them. SSD cannot scale, sadly.
 

RU482

Lifer
Apr 9, 2000
12,689
3
81
Not from normal use, but at work I've run extended endurance tests on many models of SSD. Just today I killed a Plextor M5M 64GB drive. It had about 250TB of writes on it; then I ran my daily power cycle and it kicked the bucket.
Testing some Intel 525 and Micron M500 drives now as well.
 

ghost03

Senior member
Jul 26, 2004
372
0
76
I've had a very early SSD since back in 2008. I don't think it had wear leveling (the write-shuffling type of technology), as high-traffic areas (such as the drive index) have failed on it. It still works, but I'm not putting it in any system requiring even a modicum of reliability.

My later Intel drives have been bulletproof. I've installed and deleted several games, plus over a year of general use, on my 520, and it still says 100%!
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Not from normal use, but at work I've run extended endurance tests on many models of SSD. Just today I killed a Plextor M5M 64GB drive. It had about 250TB of writes on it; then I ran my daily power cycle and it kicked the bucket.
Testing some Intel 525 and Micron M500 drives now as well.

Would you be able to share the statistics for the drives you have tested? You can also email them to me (kristian@anandtech.com) if you don't want to post them in public. Just interested in seeing what kind of data you have, in addition to XtremeSystems of course.
 

lagokc

Senior member
Mar 27, 2013
808
1
41
I've worn out an SSD, but it was the 4GB SSD that came built into an Asus Eee 701, and since endurance scales up with capacity, I'm not really concerned about it with modern SSDs.
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
I've wondered about this. I was thinking of upgrading to an SSD-based rMBP for work, but as a developer who is constantly deploying new builds to a local app server, I was wondering about the wear from this.
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
Based on everything I've read over the past couple of years, if your SSD fails it will be due to any number of other reasons, not exceeded write endurance. It is widely said that having no moving parts gives SSDs a big leg up over HDDs in reliability, but in practice SSDs are far more susceptible to failure from buggy/bad firmware or from operating in an environment where SSDs aren't fully supported.

Thinking back on all the posts I've read about SSD failures, the majority seem to be spontaneous bricking causing a loss of data. In these cases it is hard to pinpoint the problem, and it is usually blamed on buggy firmware. SSD failure often comes without warning. Fortunately, many of these issues are resolved with a drive wipe and reformat, but the downside is that many drives that have problems aren't reported or turned in for warranty work. I think this skews the stats a bit. As far as I'm concerned these are failures, and they should be considered when judging reliability.

My two Samsung 830 256GB drives are still going strong, so I haven't had to deal with failure personally. I'm certainly not worried about write endurance. My guess is that I'll replace these drives with larger ones long before it becomes an issue.
 

HighEndToys

Junior Member
Sep 9, 2010
7
0
0
I've killed one SSD under normal use, to the point that it would read but not write without deleting data first.

It was a Crucial C300, in my notebook as a secondary drive for a couple of years. I would download Usenet files to it 24/7. That drive had to take the RAR files, run QuickPar and WinRAR, and then the data would get dumped to a NAS. I would clear the data whenever the drive needed space, so it was getting worked pretty hard.
 

smackababy

Lifer
Oct 30, 2008
27,024
79
86
Based on everything I've read over the past couple of years, if your SSD fails it will be due to any number of other reasons, not exceeded write endurance. It is widely said that having no moving parts gives SSDs a big leg up over HDDs in reliability, but in practice SSDs are far more susceptible to failure from buggy/bad firmware or from operating in an environment where SSDs aren't fully supported.

Thinking back on all the posts I've read about SSD failures, the majority seem to be spontaneous bricking causing a loss of data. In these cases it is hard to pinpoint the problem, and it is usually blamed on buggy firmware. SSD failure often comes without warning. Fortunately, many of these issues are resolved with a drive wipe and reformat, but the downside is that many drives that have problems aren't reported or turned in for warranty work. I think this skews the stats a bit. As far as I'm concerned these are failures, and they should be considered when judging reliability.

My two Samsung 830 256GB drives are still going strong, so I haven't had to deal with failure personally. I'm certainly not worried about write endurance. My guess is that I'll replace these drives with larger ones long before it becomes an issue.

I had an original Vertex that one day just bricked. My PC wouldn't start for some reason, and the BIOS was showing it as a completely different drive. I couldn't get it to respond anymore. It showed no signs of problems before just dying one day.
 

hhhd1

Senior member
Apr 8, 2012
667
3
71
TLC lasts 10 times less than the Samsung 830. The 840 Pro, of course, is a die shrink, which leads to fewer P/E cycles.

In 5 years SSDs will be dead; phase-change memory or MRAM will have replaced them. SSD cannot scale, sadly.

TLC lasts 3 times less, not 10 times less.

MLC is rated for 3,000 P/E cycles, TLC for 1,000.

That is still OK considering they are offering capacities starting at 120GB.
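Plugging those cycle ratings into a quick worked example (the 120GB capacity and the 10GB/day workload are assumed for illustration, and write amplification is ignored):

```python
# Raw write budget in TB = capacity in GB * rated P/E cycles / 1000.
CAPACITY_GB = 120
mlc_budget_tb = CAPACITY_GB * 3000 / 1000  # MLC at 3,000 cycles: 360 TB
tlc_budget_tb = CAPACITY_GB * 1000 / 1000  # TLC at 1,000 cycles: 120 TB

# Even the smaller TLC budget outlasts a decade at 10 GB of writes per day:
tlc_years = tlc_budget_tb * 1000 / 10 / 365
print(mlc_budget_tb, tlc_budget_tb, round(tlc_years, 1))
```

Under these assumptions the TLC drive's raw budget is a third of the MLC drive's, yet still works out to about 33 years at that write rate.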
 

daveybrat

Elite Member
Super Moderator
Jan 31, 2000
5,730
949
126
Out of ALL the SSDs I've sold customers at my work, I've yet to have a single one fail. I've had many mechanical drives fail, but not a single SSD.

The only brands that I purchase and sell at my work are Samsung, Crucial, Kingston, and SanDisk.

I'd trust an SSD any day over today's mechanical hard drives.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
With over-provisioning (TRIM doesn't help here) you can extend performance and life greatly. The Intel 710 is just an Intel 320, but they tuck away 40-50% of the storage.
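As a quick sketch of that over-provisioning math (the capacities below are made-up examples, not Intel 710 specs):

```python
# Over-provisioning: raw NAND held back from the user gives the controller
# spare blocks for wear leveling and garbage collection.
def op_percent(raw_gb, user_gb):
    """Over-provisioning as a percentage of the user-visible capacity."""
    return (raw_gb - user_gb) / user_gb * 100

# 320 GB of raw flash exposed as 200 GB of user space:
print(round(op_percent(320, 200)))
```

Holding back that much flash means the controller almost always has pre-erased blocks ready, which is why heavily over-provisioned drives sustain writes longer.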

I've lost 3 Intels, a few Kingstons, and a bunch of first-gen SandForce SF-1222 drives that nobody bothered to patch, so they seize up on wake and die.

Haven't lost any Samsung 830s (256GB) in RAID-10, and had one Samsung 840 Pro (256GB) drop out after a week, but it checked out fine afterwards.

Bringing up 8-16 Samsung 840 Pro 512GB drives with two 9266-8i RAID controllers to spread the bandwidth.

I'm thinking it's about time to do SSD -> SSD tiering.

SLC as a read/write cache in front of MLC, then TLC for read-only caching, then 15K SAS and/or 7200RPM 4TB SAS RE4 for the last tier.

Could probably just do MLC -> TLC -> 4TB RE4 SAS, but with low-priced junk NAND like the 1TB Crucial and 500GB TLC drives, it might turn out well for caching.
 

Sunburn74

Diamond Member
Oct 5, 2009
5,027
2,595
136
I think the testing done at XtremeSystems is proof enough that with normal use it's pretty much impossible to wear out your drive. They were using only 40-64GB drives and wrote over 100 TiB without any issue.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
You'd be surprised at how much your system writes to disk.

If you haven't, I strongly suggest you run a week or four of write counts, tally it up, and see which apps are doing the writing. You'd be surprised how much some applications write to caches; for instance, an antivirus may keep a hash for each file so it can rescan faster when new definitions come in. Some apps literally keep one cache file per file :( Think about that. It wouldn't matter much if they kept a SHA for each file in one big file instead, because they would probably read-write the data anyway, making small changes like a timestamp so they know when to next check the file. (I'm talking to you, SEP/Norton.)
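One way to run those write counts on Linux is to sample /proc/diskstats, whose tenth field is sectors written in the kernel's fixed 512-byte units. A minimal sketch; the device name and the sample line below are placeholders, not real measurements:

```python
# Tally bytes written to a block device from /proc/diskstats text.
# Field layout: major, minor, name, then I/O counters; index 9 is
# sectors written, always in 512-byte units regardless of sector size.
SECTOR_BYTES = 512

def sectors_written(diskstats_text, device="sda"):
    """Return total bytes written for `device` from /proc/diskstats text."""
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[2] == device:
            return int(fields[9]) * SECTOR_BYTES
    raise ValueError(f"device {device!r} not found")

# On a real system, sample twice and subtract:
#   before = sectors_written(open("/proc/diskstats").read())
#   ...wait a day...
#   after = sectors_written(open("/proc/diskstats").read())
sample = "   8       0 sda 123 0 456 0 789 0 2048000 0 0 0 0"
print(sectors_written(sample) / 1e9, "GB written")
```

Logging the delta once a day for a few weeks gives exactly the tally suggested above; per-application attribution needs an OS-specific tool on top of this.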

Some backup software keeps a lot of pointers too. If you think about it, distributing the work across the day rather than doing it all at once makes it feel less invasive, especially with an old 5400RPM drive and a machine short on RAM that's swapping. However, the caching may be trading drive writes (which, for hard drives, don't matter much) for speed.

I think Windows needs to come up with a super-tiering filesystem so we can do the SLC (20GB) -> MLC (256GB) -> TLC (2TB) shuffle, dampening writes to the lowest tier.

It's quite possible someone could do this in the controller, but the application/OS layer can probably do a better job since it knows intent. Throw all that caching at the SLC, arrange the writes so that compression and dedupe intent is passed to the filesystem, and let it do its job best, as a filesystem.
 

fffblackmage

Platinum Member
Dec 28, 2007
2,548
0
76
Haven't killed either of my two OCZ Agility 60GB SSDs yet. Both are 3 years old now.

But in the same time frame, I had a WD20EARS start getting bad sectors and drop out of a RAIDZ array, and a Patriot 8GB USB drive, used as the OS drive for my custom NAS, crap out on me.
 

terente

Junior Member
Apr 2, 2013
18
0
0
I have 2 Vertex 128GB drives that were used in a RAID0 array as the system drive for the past 4 years without a problem! I recently swapped them for Kingston HyperX 120GB drives; DiskInfo shows the Vertex drives at 55% and 43%.
My Corsair Force 3 120GB in my laptop has been used daily for the past year, and it still shows 100%.