But is there a benefit to this? I know the drives already come with spare area provisioned. How much benefit is there to "contributing" more space to this? And if there is enough to be worthwhile, how much should be dedicated to it?
If you find an answer, let us know. Been there and done that on many drives, and you probably won't know whether it works well unless there's been long-term testing against a drive that doesn't have it... you know... like a constant.
Anand's articles cover this in detail.
Intel and SandForce controllers are able to use *all* free space on the drive for wear leveling, not just the over-provisioned amount that wasn't partitioned. There is no performance benefit from over-provisioning beyond what the drives are now configured with.
If you plan on filling up your drive completely, then you can possibly save yourself from the absolute worst case by limiting *how* full you can actually fill it. If you don't plan to do that, then you're wasting your time. If you *do* plan to do that, then you're dumb.
I'm pretty sure it's impossible to manually over-provision most SF-based drives anyway. I have a G.Skill Phoenix Pro 120GB that I was wondering about doing that with, since it only has ~6.8% spare by default, but then I found out that since I will only be using about 60GB total it's fine. The controller doesn't treat that free space exactly the same as spare area, but it can wear-level across it as well.
Its like taking the furniture out of the living room to vacuum the rug.

Great analogy. Very graphic. :thumbsup:

A better analogy would be changing the position of the couch from time to time to prevent the carpet wearing unevenly. I'm sure we've all seen an older room after the furniture has been moved out: you can clearly see where the furniture was, because the carpet under it is still like brand new. The carpet under the furniture is "over-provisioned," so to speak.
Manually over-provisioning is akin to bringing in extra furniture that you aren't actually going to use (like a statue, or extra chairs that will never be sat in) just to save more of the carpet. The SSD manufacturers have figured out, and Anand's research and tests have confirmed, that this is unnecessary. Their stock over-provisioning is sufficient, so just "set it and forget it": partition the whole space and don't worry about it.
Except that isn't an analogy... The best analogy would be setting aside extra space on an SSD so that the controller has more room to provision.
Fixed. IIRC the SandForce firmware has 28% and 13% modes:

120GB SandForce = 7% over-provisioning (6% RAISE data protection)
100GB SandForce = 22% OP (6% RAISE)
60GB SandForce = 7% OP (6% RAISE)
50GB SandForce = 22% OP (6% RAISE)
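Those figures roughly track the gap between raw NAND and user capacity. A back-of-the-envelope sketch (the 128GB/64GB raw-NAND figures are my assumption, not from the thread, and the split between OP proper and RAISE is a firmware detail this ignores):

```python
# Toy calculation: total spare fraction = (raw NAND - user capacity) / user capacity.
# Assumes 128GB of raw NAND behind the 120GB/100GB models and 64GB behind
# the 60GB/50GB models (an assumption for illustration).
def spare_pct(raw_gb: int, user_gb: int) -> int:
    return round(100 * (raw_gb - user_gb) / user_gb)

for raw, user in [(128, 120), (128, 100), (64, 60), (64, 50)]:
    print(f"{user}GB drive: ~{spare_pct(raw, user)}% total spare")
```

The 120GB/60GB models come out around 7% and the 100GB/50GB models around 28%, matching the two firmware modes mentioned above.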
Over-provisioning works very well on drives that don't have TRIM, especially if you do it right. How I do it on my G1 is that I over-provision the maximum amount I can and leave 5GB free for data. When I've used up the 5GB, I add 5 more GB through Disk Management.

What this does is force the OS to "write over" old data, which signals to the SSD that the old copies are obsolete. Then internal garbage collection can get to work. It's sort of like a ghetto TRIM.
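The "write over old data" trick can be sketched with a toy flash-translation-layer model (entirely hypothetical, just to illustrate the idea: without TRIM, only an overwrite of the same logical address tells the drive the old physical copy is garbage):

```python
# Toy FTL: without TRIM, the drive never learns that deleted data is dead --
# only an overwrite of the same LBA invalidates the stale physical page.
class ToyFTL:
    def __init__(self):
        self.mapping = {}    # LBA -> physical page currently holding it
        self.valid = set()   # physical pages the drive believes are live
        self.next_page = 0   # always write to a fresh page (flash can't overwrite in place)

    def write(self, lba):
        old = self.mapping.get(lba)
        if old is not None:
            self.valid.discard(old)  # stale copy is now garbage-collectable
        self.mapping[lba] = self.next_page
        self.valid.add(self.next_page)
        self.next_page += 1

ftl = ToyFTL()
ftl.write(0)
ftl.write(1)
# The OS "deletes" LBA 1 -- but with no TRIM, the drive still thinks both
# pages are live, so garbage collection must copy them around forever.
print(len(ftl.valid))  # 2 live pages
# Re-filling the space forces an overwrite of LBA 1, freeing the stale page:
ftl.write(1)
print(len(ftl.valid))  # still 2 -- the old copy of LBA 1 was released to GC
```

This is why staging the free space in 5GB chunks works: it herds the filesystem into reusing the same logical addresses instead of spreading writes over never-overwritten ones.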
Considering that the disk controller should have no idea about such high-level constructs as partitions or OSes, I doubt that. Why would the controller be interested in distinguishing between free sectors? Assuming we've got TRIM and the unused space is large enough to trigger a TRIM operation (and that threshold is surely in the KBs at most), I don't see any reason why it couldn't/wouldn't use it.

From that post:
"By priority, the following implementations would be best:
1. FW dedicated/mapped OP
2. Unformatted space not available to the OS.
3. Formatted but unused space within a separate Volume/Partition.
4. Formatted but unused space within the same Volume/Partition."
