Does Anyone Actually Know of a Worn Out SSD Due To Too Many Writes?


Red Squirrel

No Lifer
May 24, 2003
I'd be curious to do some kind of benchmark test to see just how much data it takes to wear one out, but I don't have that kind of money to throw away. Wonder if Intel or OCZ would want to "loan" me one so I could do a review. :p

That said, I have killed a 16GB USB stick by repeatedly putting movies on it and deleting them (I was using it to watch movies on my TV before I got an HTPC). A USB stick can't handle as much as an SSD, but considering it only went through maybe 20 movies, it gives an idea.

Basically, anything that constantly writes I'd take off, especially the page file. Windows uses it even when there's enough RAM. Though you could just go without one altogether if you have the RAM to handle it.
 

Voo

Golden Member
Feb 27, 2009
Red Squirrel said:
That said, I have killed a 16GB USB stick by repeatedly putting movies on it and deleting them (I was using it to watch movies on my TV before I got an HTPC). A USB stick can't handle as much as an SSD, but considering it only went through maybe 20 movies, it gives an idea.
And what idea does it give you? What a completely different, much simpler controller with low-grade flash may do? Yeah, I assume that's true - but how exactly do you compare that with an SSD?

Do the math, look at the stress tests some people have done on their SSDs, and if afterwards you still think a modern SSD can't handle it, then you may propose stuff like putting the pagefile on an HDD and argue accordingly - but before that? It gets annoying having to refute the same stuff over and over again...
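For anyone who wants to actually do that math, here's a minimal sketch; every figure in it (capacity, rated cycles, write amplification, daily writes) is an illustrative assumption rather than a spec for any particular drive:

```python
# Back-of-the-envelope SSD lifetime estimate. Every number here is an
# illustrative assumption - plug in your own drive's specs and workload.
capacity_gb = 80              # usable capacity (assumed)
pe_cycles = 5000              # rated P/E cycles for MLC NAND (assumed)
write_amplification = 1.5     # controller-dependent (assumed)
host_writes_gb_per_day = 20   # generous consumer workload (assumed)

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_host_writes_gb / host_writes_gb_per_day / 365

print(f"~{total_host_writes_gb / 1000:.0f} TB of host writes, "
      f"~{lifetime_years:.0f} years at {host_writes_gb_per_day} GB/day")
```

Swap in real numbers for your own drive and workload and see where it lands.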
 

Red Squirrel

No Lifer
May 24, 2003
I never said it can't handle it, I said it gives an idea that flash *CAN* and *DOES* degrade. A lot of people try to say "don't worry about it, just write the hell out of it", but no, you still have to be smart about it.

I would not use an SSD for a write-heavy SQL server, for example - in fact, for anything mission critical. Even RAID won't save you with SSDs: the odds are very good that all drives in an array fail at once, since they receive equal writes and their failure point is determined by write volume, not chance. Though I suppose, as part of a preventative maintenance schedule, the drives could be swapped out one at a time every year to force a rebuild. I would imagine a RAID rebuild would be quite fast on SSDs too. Would be interesting to see that.
 

Voo

Golden Member
Feb 27, 2009
Interesting how we can switch from consumers to business (or are we actually talking about moving the pagefile of a SQL DB?).

Still doesn't change the summary - do the math, see that using MLC for write-heavy business workloads isn't the best idea, switch to SLC and voila. The "disadvantage" you cite is, after all, not especially bad - flash write cycles are extremely predictable and manufacturers are conservative when specifying them. Also, since a business that values uptime obviously doesn't mind paying a bit more, there's no harm in replacing the drives as soon as they reach some high percentage of their rated cycles (even knowing that the drives would probably be good for much more).
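A minimal sketch of that replacement policy, with a hypothetical should_replace helper; the 80% threshold and the cycle counts are assumed placeholders, not vendor guidance:

```python
# Hypothetical helper for the replacement policy described above: swap a
# drive once its average wear crosses some fraction of the rated P/E
# cycles. Threshold and cycle counts are assumptions, not vendor guidance.
def should_replace(avg_erase_count: int, rated_pe_cycles: int,
                   threshold: float = 0.8) -> bool:
    """True once average block wear reaches the chosen fraction of the rating."""
    return avg_erase_count >= threshold * rated_pe_cycles

# e.g. an SLC drive rated for 100k cycles, currently averaging 82k erases
print(should_replace(avg_erase_count=82_000, rated_pe_cycles=100_000))  # True
```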
 

Voo

Golden Member
Feb 27, 2009
Yup, that's a great experiment, and I hadn't looked at the results in some time.

Just shows how much manufacturers underrate their flash, although the tests there obviously don't include many random 4K writes, so they give an optimistic result. But then, getting 480TB out of a 64GB 32nm flash drive is nice - even if we halve the result to account for the missing random writes...
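As a quick sanity check on that figure, the implied cycle count (assuming write amplification near 1 for a mostly sequential workload):

```python
# Implied average P/E cycles from the reported endurance figure,
# assuming write amplification near 1 for the mostly sequential workload.
written_tb = 480
capacity_gb = 64
implied_cycles = written_tb * 1000 / capacity_gb
print(implied_cycles)  # 7500.0 - comfortably above typical MLC ratings
```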
 

Mark R

Diamond Member
Oct 9, 1999
Voo said:
Yup, that's a great experiment, and I hadn't looked at the results in some time.

Just shows how much manufacturers underrate their flash, although the tests there obviously don't include many random 4K writes, so they give an optimistic result. But then, getting 480TB out of a 64GB 32nm flash drive is nice - even if we halve the result to account for the missing random writes...

What's equally impressive is that the only drive that has so far died in the experiment, the Samsung, uses one of the worst controllers on the market as far as P/E cycle usage is concerned.

It will be interesting to see how the SandForce and Intel drives go, with their much better write amplification factors (approx. 1 for both of those drives, compared to approx. 5 for the Samsung).

The other factor that may be relevant is that the traditional method of measuring P/E cycles, which is to repeatedly write and erase blocks on the NAND until they die, may be inaccurate. It turns out that immediately erasing a programmed block is extremely stressful on the NAND, whereas allowing a cell to 'rest' with programmed data in it for a number of minutes dramatically boosts the cycle life (by a factor of nearly 10). In reality, most cells on a practical SSD will only be updated infrequently, so the cycle life is expected to be much higher than the manufacturer's measurements using the standard methodology.
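To put rough numbers on how much those write amplification factors matter, a quick sketch; the 5000-cycle NAND rating is assumed purely for illustration:

```python
# How the write amplification factor scales effective endurance.
# The 5000-cycle NAND rating is assumed purely for illustration.
capacity_gb, pe_cycles = 64, 5000
for wa in (1, 5):  # roughly SandForce/Intel vs Samsung, per the figures above
    host_tb = capacity_gb * pe_cycles / wa / 1000
    print(f"WA {wa}: ~{host_tb:.0f} TB of host writes before rated wear-out")
```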
 

Mark R

Diamond Member
Oct 9, 1999
Red Squirrel said:
I would not use an SSD for a write-heavy SQL server, for example - in fact, for anything mission critical. Even RAID won't save you with SSDs: the odds are very good that all drives in an array fail at once, since they receive equal writes and their failure point is determined by write volume, not chance. Though I suppose, as part of a preventative maintenance schedule, the drives could be swapped out one at a time every year to force a rebuild. I would imagine a RAID rebuild would be quite fast on SSDs too. Would be interesting to see that.

A number of people have proposed using RAID 4 for SSDs. In RAID 4, one drive in the array (the dedicated parity drive) gets 50% of the total writes, while the remaining 50% is shared among the other drives. This ensures that this one drive gets replaced separately from the others. When that drive is replaced, the array is reorganised so that a different drive takes the heavy-write load, ensuring that no two drives reach end-of-life simultaneously.
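A toy simulation of that rotation scheme; the drive names, stripe count and rotation interval are all made up for illustration:

```python
# Toy model of the RAID 4 scheme above: the dedicated parity drive takes a
# write for every data write (50% of the total), and rotating parity duty
# after each replacement staggers wear-out. All counts are simulated.
from collections import Counter

drives = ["d0", "d1", "d2", "d3"]
wear = Counter()
parity = 0  # index of the current dedicated parity drive

for stripe_write in range(1_000_000):
    data = [d for i, d in enumerate(drives) if i != parity][stripe_write % 3]
    wear[data] += 1               # one data-drive write...
    wear[drives[parity]] += 1     # ...plus a parity update on every write
    if stripe_write % 250_000 == 249_999:
        parity = (parity + 1) % len(drives)  # rotate parity duty

print(wear)  # wear ends up roughly even across all four drives over time
```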
 

Voo

Golden Member
Feb 27, 2009
Mark R said:
What's equally impressive is that the only drive that has so far died in the experiment, the Samsung, uses one of the worst controllers on the market as far as P/E cycle usage is concerned.

It will be interesting to see how the SandForce and Intel drives go, with their much better write amplification factors (approx. 1 for both of those drives, compared to approx. 5 for the Samsung).
I didn't know much about the Samsung controller, but you're right - getting a WA of ~5 for basically sequential writes is anything but great. Good catch, thanks (I hadn't looked at the WA charts they have before). I'm really curious how well the 34nm Intel X25 will fare; even at only 40GB it seems like it would take an awfully long time to kill (rough numbers below), not to mention the SF drives. Seems quite possible that the tested SF drive could claim the petabyte mark.


Also, an interesting fact about the different behavior of flash cells.
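Rough numbers on that "awfully long time", assuming a 40GB drive, 5000-cycle NAND, write amplification of about 1.1 and a sustained 70 MB/s (all assumptions):

```python
# Rough time to wear out a small drive writing flat-out. All numbers are
# assumptions: 40 GB capacity, 5000 P/E cycles, WA ~1.1, 70 MB/s sustained.
capacity_gb, pe_cycles, wa, mb_per_s = 40, 5000, 1.1, 70
total_gb = capacity_gb * pe_cycles / wa
days = total_gb * 1024 / mb_per_s / 86_400
print(f"~{days:.0f} days of continuous writing")  # ~31 days at full tilt
```

At torture-test speeds that's only about a month of nonstop writing; at realistic desktop write rates of a few GB per day, the same write budget stretches into decades.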