
SSDs - Limiting writes, is there a point?

jkroeder

Member
So we all know MLC SSDs have a much lower write limit compared to their SLC brethren. However, for as long as these consumer SSDs have been out, I can't say I've seen anyone confirm that normal everyday use will actually kill an SSD quickly.


Now I've had my Vertex for a while running Win7 Pro. The whole time I've been redirecting certain things like the browser cache and the Windows temp folder to a RAMDisk to limit the amount of writes to the SSD. Sure, this helps limit writes to the SSD, but if you set the RAMDisk to save an image of itself so it can be preserved across shutdowns/reboots, it adds about 30 seconds to bootup and shutdown.

I suppose I could just redirect those writes to a regular platter drive... or of course, just leave things the way they are and let them write to the SSD.

Thoughts?
 
Intel claims that their drives can handle 100GB of writes a day for 5 years straight, with 1.2 million hours MTBF. Do you see yourself still using that SSD in 5 years? It could last much longer than 5, since you probably put a fraction of a 100GB/day workload on the drive.
 
Normal everyday use won't wear your SSD down enough to kill it. Plus Intel's SSD comes with a 3-year warranty; by that time you'll probably have trashed it anyway. My SSD is the only disk in my computer: I don't use a RAMDisk, I don't disable any caching, I keep my thumbnail cache, I download regularly, and I've reinstalled my OS 3 times plus done 1 disk reimaging.

So far, SMART has recorded only 640GB of host writes, the bulk of that coming from reinstalling Windows and all my programs and re-imaging the drive. The downloads hardly add up to 50GB; what is there so much to download, anyway? Normal computer usage, the FF cache, streaming, and the pagefile hardly generate any writes.

So after 3 and a half months, I've managed to use only 8 write cycles. I don't see my 1000 write cycles being used up in 3 years. The truth is, most people out there are either too pessimistic or never bothered looking at actual data showing how much normal programs and Windows services actually write before telling you to kill them off. More often than not, they're just tweaking their computers for the psychological sense of having a faster system that will be less likely to fail.
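
For anyone who wants to check the math, here's a quick sketch (the 80GB capacity is implied by 640GB of writes making 8 cycles; the 1000-cycle rating is from the post):

Code:
# Back-of-the-envelope endurance math using the figures from the post above.
CAPACITY_GB = 80          # implied by 640GB / 8 cycles
HOST_WRITES_GB = 640      # from SMART, per the post
CYCLE_LIMIT = 1000        # pessimistic MLC rating, per the post
MONTHS_ELAPSED = 3.5

cycles_used = HOST_WRITES_GB / CAPACITY_GB                   # 8 full-drive cycles
years_to_exhaust = CYCLE_LIMIT / cycles_used * MONTHS_ELAPSED / 12
print(f"{cycles_used:.0f} cycles used, ~{years_to_exhaust:.0f} years at this rate")
# -> 8 cycles used, ~36 years at this rate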
 
Good points, guys. I guess with new tech, though, I'm just trying to be careful with the drive, and while doing that, not quite using the drive to its full potential.

So far, I've done a bit of work on the drive, installing Win7 Pro twice (I forgot to re-enable AHCI after updating the firmware and wanted to reinstall just in case), and I'm already up to 34GB of host writes according to the SSD Toolbox.
 
At the current rate it will take me 400 years to run out of writes...
This issue has been pretty much settled. The drives are not going to run out of writes on you.
 


Your calculation of only 8 write cycles is not likely to be accurate. If you have an 80GB SSD and the OS and apps eat up, say, 60GB, then the area that gets 'cycled' is 20GB or thereabouts, so you are likely to be more than 8 write cycles into that portion of the SSD and fewer than 8 write cycles into the area with the OS and apps.


Brian
 
Thanks to wear leveling, that's exactly what's not happening.

The algorithms are far from perfect, so some cells will still be written more often than others, but it's far better than the scenario you're painting.
 
If you have an 80GB SSD with 60GB occupied by OS and apps that aren't moved around, then that leaves you about 20GB to handle movable/changeable/new data. So if you move/change/add, say, 10GB/day, then you will go through the 20GB every second day. The wear leveling will spread the writes over the 20GB, but not the 80GB. What part of that don't you understand?
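
To put numbers on that scenario (a sketch; the 1000-cycle rating is an assumption, the rest are the figures above):

Code:
# Brian's scenario: wear leveling confined to the free area only.
capacity_gb = 80
static_gb = 60                  # OS + apps, never moved
daily_writes_gb = 10
cycle_limit = 1000              # assumed MLC rating

free_gb = capacity_gb - static_gb                           # 20GB absorbs all the churn
years_free_only = cycle_limit * free_gb / daily_writes_gb / 365
years_whole_drive = cycle_limit * capacity_gb / daily_writes_gb / 365
print(f"free area only: ~{years_free_only:.1f} years")     # ~5.5 years
print(f"whole drive:    ~{years_whole_drive:.1f} years")   # ~21.9 years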


Brian
 
The thing is that intelligent wear leveling algorithms also move static data around.
 
Wear leveling needs to be handled by the OS, or the controller needs to get hints. Of course, filesystems that do this have been around for ages for raw ("no brains") flash.
 
Well that, or there's a table with the mappings in the controller. But I'd think that most manufacturers have implemented static wear leveling in some way.
 

If the static data is moved at the same rate as other data, that would impose a substantial read/write overhead that could easily cause significant performance penalties. While it is certainly possible that all data on an SSD is shifted to and fro as part of wear leveling, and that would reduce the number of write cycles some cells receive, I have not heard that that was how any wear leveling scheme works. Can you point to any documentation that indicates this is being done?


Brian
 

The OCZ forum moderators have mentioned it. It works during the drive's idle time: static data gets moved around and remapped inside the drive so the OS doesn't know the difference. I'll see if I can dig it up....

Edit: Actually, they mentioned that they cannot discuss it because of a Non-Disclosure Agreement... so they aren't talking, but it's rumored to work that way.
 

Sounds like a complicated software problem. It would make the whole drive usable instead of just the free area, and that would limit the maximum number of write cycles the drive would see, but, as I mentioned earlier, it would impose a burden on the drive. Yes, doing this during idle time would limit the perceived performance impact, but it would also require some sophisticated software.

It was my understanding that typical flash memory could be re-written 10,000 or more times, so that even if you re-wrote every cell once a day you would still be looking at about 30 years, but if the drives are more aggressive about using all the memory then 10,000 re-writes is seemingly too high. In fact, it looks like it would be closer to 1000 re-writes. And if the smaller cells of newer memory at 34nm and now 25nm have even fewer write cycles, then it would suggest we are very near a wall that would require entirely new technology.


Brian
 
It's a moot point. After talking with an Intel engineer while deciding whether to get an SSD, I immediately ordered a new 160GB drive. I don't usually buy expensive cutting-edge hardware, but he sold me. His words were, and I quote:

"If you're hammering the drive for >75% of it's capacity every day, it'll start to degrade in almost 5 years. No sane consumer is going to wear out an X-25M during that time, and I don't care if they're using it for paging, cache, or anything else. Just install it, put Win 7 on it, and forget it."

and I asked about certain cells dying early due to wear:

"It'll be on your scrap pile or in your grandma's PC before that happens."

In other words, it's a non-issue.
 

That is why it isn't moved around at the same rate. But it is also not allowed to languish there.
The controller knows exactly how many writes have been made to each sector; if the sector with the fewest writes has X fewer writes than the sector with the most, then the data from the least-written sector gets moved to the most-written one.
It is really quite simple, actually.
There is, of course, more to it than that. But basically there are a lot of people whose full-time job is to design those algorithms. Inefficient and badly designed algorithms result in high write amplification and low performance. But that's what benchmarking is for. We know which drives are quality drives (and thus must have quality algorithms).
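
A toy sketch of that threshold rule, for the curious (purely illustrative; not any vendor's actual firmware):

Code:
# Toy model of threshold-based static wear leveling as described above.
THRESHOLD = 50                      # assumed wear gap that triggers a swap
NUM_BLOCKS = 128

erase_counts = [0] * NUM_BLOCKS
static_blocks = set(range(96))      # blocks pinned by cold (static) data

def write_block():
    """Dynamic leveling: send each new write to the least-worn free block."""
    pool = [b for b in range(NUM_BLOCKS) if b not in static_blocks]
    target = min(pool, key=lambda b: erase_counts[b])
    erase_counts[target] += 1
    level_statically(pool)

def level_statically(pool):
    """Static leveling: park cold data on the hottest block so it can rest."""
    coldest = min(static_blocks, key=lambda b: erase_counts[b])
    hottest = max(pool, key=lambda b: erase_counts[b])
    if erase_counts[hottest] - erase_counts[coldest] > THRESHOLD:
        erase_counts[hottest] += 1      # copying the cold data in costs a write
        static_blocks.add(hottest)      # hot block now holds static data
        static_blocks.discard(coldest)  # lightly-worn block rejoins the pool

for _ in range(20_000):
    write_block()
print(max(erase_counts) - min(erase_counts))  # gap stays bounded near THRESHOLD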
 
Why would it require sophisticated software? If the controller doesn't receive any (or only a few) commands from the OS, it uses the spare time to remap the drive. It makes the controller a bit more complicated (well, nobody said developing these was easy, and I think history has proven that correct ^^), but it's well worth it, I'd say - you yourself made some nice arguments for static wear leveling 😉
Yes, if it gets a command while it's working, the latency is a bit higher than usual, and a lot of tests show that the max latency is indeed a good bit higher than the average latency - that could have a lot of causes, but at least it doesn't exclude the possibility.
And for people who really need more write cycles (e.g. enterprises) there are always SLC drives, with 10-100x the write cycles of MLC drives.


@MagickMan: I think everyone agrees on that, but hey, it's fun discussing this stuff. I never disabled indexing or anything on my G2 drive, and even the really conservative Intel recommendations are way higher than what I'll ever write to the drive before it becomes too small to be bothered with - 160GB in 3-5 years? Maybe for my notebook.
 
There are a lot more writes than you think, especially to your OS drive.

For example, I couldn't figure out why Intel's SSD Toolbox was telling me over 20GB/day was being written to my X25-M until I saw Microsoft Security Essentials running its weekly full scan and it clicked: all archive files (.ZIP, .RAR, .CAB, .ISO, etc.) are extracted into the C:\Windows\Temp directory for virus scanning. I have since relocated this; hopefully you get the idea.
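
If anyone wants to do the same, here's one way on Win7 (a sketch only: it edits the machine-wide TEMP/TMP values in the registry, which is what services like MSE use; it needs an elevated prompt and a reboot, and D:\Temp is just a placeholder for whatever platter drive or RAMDisk you have):

Code:
# Sketch: repoint the system-wide TEMP/TMP variables at another volume.
# Run elevated on Windows; reboot afterwards for services to pick it up.
import winreg

NEW_TEMP = r"D:\Temp"  # placeholder - create this directory first

key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    for name in ("TEMP", "TMP"):
        winreg.SetValueEx(key, name, 0, winreg.REG_EXPAND_SZ, NEW_TEMP)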
 

Doesn't matter. At that rate it would still be running well after 10 years.
 
Another data point.

I got my G2 SSD back in August. After 7 months, it currently has 5.63TB of writes. Granted, during that time I have been abusing it quite heavily (copying 20GB+ game directories back and forth multiple times a day, running dozens of benchmarks in a row). But even with my current heavy usage, assuming a worst case of 1000 writes/cell, I still have 8 years to go. By that time, the thing will be a piece of crap in the days of 10TB SSDs with 2GB/s write speeds. A more realistic estimate is 10,000 writes/cell, giving 80 years. Assuming I'm alive at that point, it'll practically be a museum specimen.
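
The arithmetic checks out (a sketch; the 80GB capacity is an assumption, since the post doesn't say which G2):

Code:
# Sanity check of the estimate above; the 80GB capacity is an assumption.
capacity_tb = 0.08
writes_tb = 5.63
months = 7

tb_per_month = writes_tb / months                 # ~0.80 TB/month
for cycles in (1_000, 10_000):
    years = capacity_tb * cycles / tb_per_month / 12
    print(f"{cycles:>6} writes/cell -> ~{years:.0f} years total")
# ->   1000 writes/cell -> ~8 years
# ->  10000 writes/cell -> ~83 years (the post rounds to 80)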
 
There does seem to be some concern that as feature size is reduced (now at 25nm), the number of writes before failure drops AND the length of time the data remains valid may be as little as a few months before you must re-write it. That was what I meant about a potential wall...


Brian
 
Link? Especially the "few months" part seems completely implausible, as we're now at 10 years.

And I'd really like to see a good explanation of why we should get a noticeable cut in write cycles just because of smaller flash. It'd make sense if they were using more levels (e.g. 8), but just because we use smaller NAND cells? OK, smaller = less voltage, so maybe there's some kind of correlation - that would be interesting.
 


There was some talk on this board a few weeks ago about this, and some hints on other boards, but nothing really concrete. The retention time measured in months wasn't for 34nm or 25nm but a couple of steps down the road. It does make sense that, with a given technology, as you shrink the feature size to double the areal capacity, the charge per cell will shrink even faster.

Also of concern: the surface area through which the charge may drain does not shrink as fast as the charge, suggesting retention times will be lower. Now, it is surely the case that the technology does NOT remain the same, and differences in design and chemistry may offset that effect, but I'm not sure we can count on that. We were able to go a lot smaller with litho before immersion than many thought, but we are now at that point.

Brian
 
Don't forget the spare area. If you use 60GB of an 80GB drive, you actually have more than just 20GB free for wear leveling. Something like 26GB, according to AT.
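
That ~26GB figure follows from the GB-vs-GiB gap (a sketch; it assumes an X25-M-style layout where the physical NAND is 80GiB but only 80 decimal GB is exposed to the user):

Code:
# Where the ~26GB likely comes from.
GIB = 2**30
physical_bytes = 80 * GIB        # actual flash on board (assumed layout)
user_bytes = 80 * 10**9          # what the OS sees
used_bytes = 60 * 10**9          # OS + apps, per the thread's example

spare_gb = (physical_bytes - user_bytes) / 10**9   # ~5.9GB hidden spare area
free_gb = (user_bytes - used_bytes) / 10**9        # 20GB visible free space
print(f"~{free_gb + spare_gb:.0f}GB available to the wear leveler")
# -> ~26GB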

Personally, I envision using my 80GB X18 in the future as a storage drive. It will only be in a high-usage scenario for the next couple of years; after that it will just have music on it or something and will barely be getting any writes.

I'm not worried about this aspect of SSDs at all.
 