Does Anyone Actually Know of a Worn Out SSD Due To Too Many Writes?

RhoXS

Member
Aug 14, 2010
194
11
81
I see a lot of discussion about minimizing the amount of writing so one does not prematurely wear out their SSD. Some people suggest placing the paging file on another drive and some suggest not using benchmark applications.

Realistically, is this a problem that can be expected to occur, even with heavy use, in any reasonable period of time? My perception is that it is really not necessary to worry about this, as the drive will reach the end of its useful life for other reasons much sooner.

Although I now have an 80 GB Intel, I would have to conclude that SSDs are just not ready for prime time if it is necessary to spend any effort at all trying to minimize the amount of writes.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
nah my G1 80gb x18-m used every day for the last 2-3 years is rocking out. under warranty (hp extended carepack). no trim. xp only.

no sweat. why are you paging so much? time to upgrade from 512meg?
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
Paging is not really something to worry about, but benchmarking several times a day *will* significantly reduce the lifetime of a modern SSD. It's rather abusive to these drives.

I highly doubt the average user is going to be writing in even a fraction of the volume that benchmarks use.
 

RhoXS

Member
Aug 14, 2010
194
11
81
That is all true but, in practice, are there any reported cases where someone's SSD actually got to the point where worn-out memory locations degraded the performance of the drive?

My Intel 80 GB G2 has been in use since November 2009. I am as pleased now with its outstanding performance as I was the day I installed it. I have noticed no degradation. I am only asking this question because there seems to be a lot of energy expended in this and other forums on this issue, yet I never once remember reading a post where someone said they had reached end of life. Therefore I have to question whether this is a real issue requiring active management over the life of the drive, or just a theoretical issue that is unlikely to present a problem at any point during the lifetime of the device.
 

sxr7171

Diamond Member
Jun 21, 2002
5,079
40
91
I guess that would depend on your definition of lifetime. It would be nice for some review site to run a benchmark loop and graph the performance decline and space reduction as it dies. It would be killed in the name of science.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
well i've had two x25-m fail catastrophically. you might say they had accelerated death?

nasty - no SMART indicators lit up - you hit the bad sector and the controller froze. had to do the ole skip bad sector disk image - not easy these days.
 

Old Hippie

Diamond Member
Oct 8, 2005
6,361
1
0
Does Anyone Actually Know of a Worn Out SSD Due To Too Many Writes?
Nope!

I've had 3 Intel G2s just die but I've never read one report of an SSD dying from old age/worn-out memory.
 

watzup_ken

Member
Feb 11, 2011
46
0
0
Pardon me, but as a best practice you should turn off the pagefile on your SSD. If it is hitting the pagefile that often, then surely it is time to increase the amount of RAM, since it is cheap now. :)
 

Tsavo

Platinum Member
Sep 29, 2009
2,645
37
91
Even with 4GB of RAM, what the OS writes to the page file is basically = jack diddly squat.

When the OS decides to write to the page file, what do you want it on? One of the slowest components in your system? No. Leave it on the SSD and forget about it.

Offloading writes to a spindle in many ways defeats the purpose of having an SSD in the first place.
 

RhoXS

Member
Aug 14, 2010
194
11
81
I have 4GB of RAM installed but am running 32-bit W7 (64-bit is not compatible with my scanner), so my usable memory is already at the maximum possible. No matter, because my system really works well. Between the lightning speed of the SSD and a dual-core E8400 at 3.8 GHz, this system has a snappy feel that makes it a pleasure to use. Everything just happens instantly. Very satisfying.

15 months ago, when I bought and installed my SSD, I made the decision not to do anything that would compromise speed and negate the cost of the SSD. Therefore, I refuse to even consider moving the page file, or anything else for that matter (except for large data blocks - pictures, music, etc.), off the SSD. The SSD holds the system, all applications, and even some data. I am certain, especially after seeing the responses to this thread, that spending even a nanosecond worrying about wearing this drive out would be a complete waste. I cannot imagine not replacing this SSD long before writes become an issue.
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
Currently, I'm up to 10.96TB of writes on an Intel SSD purchased at release (1.5 years ago). The media wearout indicator reads "96" (it starts at 100, so the drive has used 4 points in 1.5 years). By this measure, the drive should last somewhere around 37.5 years. It's far more likely that the controller breaks or the drive becomes obsolete first.
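A minimal sketch of that extrapolation (assuming the wearout indicator falls linearly, which real drives only approximate):

```python
# Linear extrapolation of the media wearout indicator.
# Values are the ones reported above; drives won't wear perfectly linearly.
initial = 100        # indicator value when the drive is new
current = 96         # SMART reading after 1.5 years of use
years_in_use = 1.5

points_per_year = (initial - current) / years_in_use   # ~2.67 points/year
projected_years = initial / points_per_year            # 37.5 years

print(f"Projected NAND lifetime: {projected_years:.1f} years")
```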
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
Even with 4GB of RAM, what the OS writes to the page file is basically = jack diddly squat.

When the OS decides to write to the page file, what do you want it on? One of the slowest components in your system? No. Leave it on the SSD and forget about it.

Offloading writes to a spindle in many ways defeats the purpose of having an SSD in the first place.

Straight from the horse's mouth ( http://blogs.msdn.com/b/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx ):
Should the pagefile be placed on SSDs?

Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.

In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that

Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1.
Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.
Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.

In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
 

spikespiegal

Golden Member
Oct 10, 2005
1,219
9
76
Tell you what - I'll load up an SSD on my VM'd Citrix server Host and tell you how long it takes to smoke it.

I've seen my Citrix and Terminal Server boxes exceed 1,000 non-sequential random writes per second and sustain that pattern for hours, choking 15K SCSI drives. Users are simply running Internet Explorer and Outlook; combined with an AV program in the background, the minor I/O writes are off the chart. Firefox is an even bigger disk hog.

Load up Perfmon and watch disk writes, or Process Explorer, and note just how much disk I/O actually occurs on a Windows desktop while you are sitting there doing nothing. This is why I don't use SSDs on this type of architecture.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Tell you what - I'll load up an SSD on my VM'd Citrix server Host and tell you how long it takes to smoke it.

I've seen my Citrix and Terminal Server boxes exceed 1,000 non-sequential random writes per second and sustain that pattern for hours, choking 15K SCSI drives. Users are simply running Internet Explorer and Outlook; combined with an AV program in the background, the minor I/O writes are off the chart. Firefox is an even bigger disk hog.
We were talking about consumer, not enterprise, usage patterns though. Sure, I can throw an MLC SSD with less than 8% overprovisioning into a heavy-write DB and watch it smoke, but that's to be expected. If you can show a worn-out SSD under a typical consumer workload, that'd be something different.
 

pitz

Senior member
Feb 11, 2010
461
0
0
I've seen a trend of lower-end users buying SSDs that are really marginal for what they're probably going to use them for (i.e. 60GB SSDs to use with Win7). I think this is where problems are most likely to be seen, especially since there isn't a lot of overprovisioning built into those drives.

10% overprovisioning on a 60GB drive is 6GB; 10% on a 128GB drive is 12.8GB (duh!). The 128GB user and the 60GB user very likely do exactly the same I/O activity on their drives (i.e. loading up Windows, etc.), but the 128GB drive obviously has a lot more slack to deal with it.
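That arithmetic as a quick sketch (the flat 10% figure is an illustration; actual overprovisioning varies by model):

```python
# Fixed-percentage overprovisioning: the same fraction leaves very
# different absolute slack. The 10% figure is an assumption, not a spec.
OP_FRACTION = 0.10

for capacity_gb in (60, 128):
    slack_gb = capacity_gb * OP_FRACTION
    print(f"{capacity_gb} GB drive -> {slack_gb:.1f} GB of slack")
```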
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Paging is not really something to worry about, but benchmarking several times a day *will* significantly reduce the lifetime of a modern SSD. It's rather abusive to these drives.

I highly doubt the average user is going to be writing in even a fraction of the volume that benchmarks use.

This is probably the most informative advice here.

With real-world usage you are unlikely to wear out a drive in 2-3 years, even without changing the page file location.

If you have a NAS with spindle disks for media (music/movies, assuming you use Amazon.com or iTunes) and downloads (driver updates, etc.), you are probably one step ahead on the reliability of that media and have also cut a lot of common writes out of the equation.
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
Tell you what - I'll load up an SSD on my VM'd Citrix server Host and tell you how long it takes to smoke it.

I've seen my Citrix and Terminal Server boxes exceed 1,000 non-sequential random writes per second and sustain that pattern for hours, choking 15K SCSI drives. Users are simply running Internet Explorer and Outlook; combined with an AV program in the background, the minor I/O writes are off the chart. Firefox is an even bigger disk hog.

Load up Perfmon and watch disk writes, or Process Explorer, and note just how much disk I/O actually occurs on a Windows desktop while you are sitting there doing nothing. This is why I don't use SSDs on this type of architecture.

Random writes are not actually the problem as far as NAND lifetime is concerned. For the reasoning, see http://www.storagesearch.com/ssdmyths-endurance.html

Let's say you sustain 200MB/s of writes for 12 hours every single day (I doubt even in your enterprise use case that your usage is this high). For something like a 160GB MLC drive (5000 cycles), it would last somewhere around 92 days. Not so good.

Let's use a more realistic number (as you quoted, 1000 4KB writes/second, 12 hours a day). Assuming a ridiculously horrible write amplification of 5x, that same drive would last just under 3 years. With a write amplification of 1x, it would last almost 13 years.
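Here is that math as a quick sketch (all inputs are the assumed numbers above - 160GB, 5000 P/E cycles, 12 hours/day - not measurements; decimal units for simplicity):

```python
# Back-of-the-envelope NAND endurance estimate:
# lifetime = rated write volume / effective (amplified) daily writes.
def lifetime_days(capacity_gb, pe_cycles, host_mb_per_s, hours_per_day, write_amp):
    rated_write_gb = capacity_gb * pe_cycles                      # total rated writes
    host_gb_per_day = host_mb_per_s * 3600 * hours_per_day / 1000
    return rated_write_gb / (host_gb_per_day * write_amp)

print(lifetime_days(160, 5000, 200, 12, 1))        # ~93 days   (200 MB/s sustained)
print(lifetime_days(160, 5000, 4, 12, 5) / 365)    # ~2.5 years (4 MB/s, 5x WA)
print(lifetime_days(160, 5000, 4, 12, 1) / 365)    # ~12.7 years (4 MB/s, 1x WA)
```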

What I mean is that a) it's easy to fudge the statistics in any way you want to support your particular case, and b) the enterprise suitability of MLC really depends on your particular usage scenario. Long sequential writes are far more stressful than even thousands of random writes, assuming a non-sucky controller implementation. I'd say an SSD is probably quite suitable for your application (but not, say, for a security camera timeloop recording system).

PS: Also note that unlike HDDs, which fail catastrophically (e.g. head crashes), NAND flash fails by becoming essentially ROM. In other words, with a correct controller implementation no data should be lost when the drive exceeds its NAND lifespan (assuming the controller doesn't fail first, which it almost inevitably will).

PPS: I'll assume that you are making backups, so this essentially boils down to a cost comparison: over the expected lifespan of the device, which I/O solution provides the most IOPS per dollar? I think that if you go through the calculations, the SSD will win.

Why, for example, does the data recorder example stress a flash SSD more than say continuously writing to the same sector?

The answer is that the data recorder - by writing to successive sectors - makes the best use of the built-in block erase/write circuits and the external (to the flash memory, but still internal to the SSD) buffer/cache. In fact it's the only way you can get anywhere close to the headline-spec write throughput and write IOPS.

This is because you are statistically more likely to find that writing to different address blocks finds blocks that are ready to write.

If you write a program which keeps rewriting data to exactly the same address sector - all successive sector writes are delayed until the current erase / write cycle for that part of the flash is complete. So it actually runs at the slowest possible write speed.

If you were patient enough to try writing a million or so times to the same logical sector - then at some point the internal wear leveling processor would have transparently assigned it to a different physical address in flash by then. This is invisible to you. You think you're still writing to the same memory - but you're not. It's only the logical address that stays the same. In fact you are stuffing data throughout the whole physical flash disk - while operating at the slowest possible write speed.

It will take orders of magnitude longer to wear out the memory this way than in the rogue data recorder example. That's because writing to flash is not the same as writing to RAM, and also because writing to a flash SSD sector is not the same as writing to a block of dumb flash memory. There are many layers of virtualization between you and the raw memory in an SSD. If you write successively to the same location on a dumb flash memory chip, you can see a bad result quite quickly. But comparing dumb flash storage to intelligent flash SSDs is like comparing the hiss on a 33 RPM vinyl album to that on a CD. They are quite different products - even though they can both play the same music.
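To make the wear-leveling point concrete, here's a toy model - a minimal round-robin remap, not any real controller's algorithm - showing that hammering one logical sector still spreads erases evenly across the physical blocks:

```python
# Toy wear-leveling model: NOT a real controller algorithm, just the idea.
# Every rewrite of a logical sector is steered to the next physical block,
# so erase counts stay even no matter how "hot" one logical address is.
PHYSICAL_BLOCKS = 8

wear = [0] * PHYSICAL_BLOCKS    # erase count per physical block
mapping = {}                    # logical sector -> current physical block
next_block = 0

def write(logical_sector):
    global next_block
    mapping[logical_sector] = next_block   # remap; the old copy is stale
    wear[next_block] += 1
    next_block = (next_block + 1) % PHYSICAL_BLOCKS

for _ in range(1000):
    write(0)                    # keep rewriting the "same" sector

print(wear)                     # [125, 125, 125, 125, 125, 125, 125, 125]
```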
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
Random writes are not actually the problem as far as NAND lifetime is concerned. For the reasoning, see http://www.storagesearch.com/ssdmyths-endurance.html
I read that article, but nowhere does he go into detail about the effect of random writes on write amplification, which depending on the controller will be a good bit worse or absolutely catastrophic (e.g. a WA > 20).

The difference between sequential and random writes is just that the former is almost trivial to compute and far less dependent on the controller, but that alone is no reason to ignore random writes.

Also, while we're at it, some of his assumptions are wrong - for example, "If you write a program which keeps rewriting data to exactly the same address sector - all successive sector writes are delayed". Since every modern controller uses wear leveling, an LBA almost certainly has nothing in common with the physical address the controller actually writes the data to.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Although I now have an 80 GB Intel, I would have to conclude that SSDs are just not ready for prime time if it is necessary to spend any effort at all trying to minimize the amount of writes.

It's probably just paranoia induced by reviews exaggerating it. It's 10x more likely you'll encounter a degraded platter HDD. With SSDs I figure it's binary:

0 - It totally fails within the first month. I don't know, though I bet the same thing happens to a few regular platter HDDs too.
1 - For an enthusiast PC user downloading things every day, it will probably last 5 years, guaranteed.

It's like worrying about getting hit by lightning. Possible, but very unlikely. If you were really doing something with an insane amount of writes, you would have an SLC drive (like the X25-E) anyway.

And let me emphasize INSANE amounts of writes. The poster above me mentioned 200MB/s of writes sustained for 12 hours every day. How likely do you think that is? It's extremely unlikely you won't have at least some dips during that time.
 

hal2kilo

Lifer
Feb 24, 2009
25,373
11,776
136
It's probably just paranoia induced by reviews exaggerating it. It's 10x more likely you'll encounter a degraded platter HDD. With SSDs I figure it's binary:

0 - It totally fails within the first month. I don't know, though I bet the same thing happens to a few regular platter HDDs too.
1 - For an enthusiast PC user downloading things every day, it will probably last 5 years, guaranteed.

It's like worrying about getting hit by lightning. Possible, but very unlikely. If you were really doing something with an insane amount of writes, you would have an SLC drive (like the X25-E) anyway.

And let me emphasize INSANE amounts of writes. The poster above me mentioned 200MB/s of writes sustained for 12 hours every day. How likely do you think that is? It's extremely unlikely you won't have at least some dips during that time.
I suspect what you are saying is true. Still, I've got my page file on my data drive. I just love my fast boots, and since I'm mostly a surfer, my delays are not related to my hard drive. Anal is anal.
 

bmaverick

Member
Feb 20, 2010
79
0
0
If SSDs are like USB thumb/jump drives, then they do have an end-of-life period. I used a bunch of USB drives for folding@home for almost a year, and all three of the Kingston 4GB drives are now totally dead. I'm hesitant about going to an SSD until they are more proven and the cost looks better.
 

jimhsu

Senior member
Mar 22, 2009
705
0
76
I'd suspect that those Kingston 4GB drives don't have wear-leveling, or have a horrible implementation of it. Could be wrong though.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
If SSDs are like USB thumb/jump drives, then they do have an end-of-life period. I used a bunch of USB drives for folding@home for almost a year, and all three of the Kingston 4GB drives are now totally dead. I'm hesitant about going to an SSD until they are more proven and the cost looks better.
OK, so we've shown more than enough proof that MLC SSDs under normal usage won't run out of write cycles... and you stand by your preconception based on USB thumb drives that don't overprovision, use the lowest-grade flash available, have no intelligent wear-leveling controller, and have an atrocious WA? Well, I assume nobody can stop you there :/
 

Krynj

Platinum Member
Jun 21, 2006
2,816
8
81
No problems here. None whatsoever. This drive was the best $230 (in October of '09) I've ever spent on a single component.

[Attached image: ssdstats.png - SMART statistics screenshot]


4 reallocated sectors, but I'm not too worried about it. My understanding is that 4 is within the acceptable range.
 