
Sandforce SSD Manual Overprovisioning.

BBMW

Member
Is anyone with a Sandforce SSD manually overprovisioning the spare area on their drive? Is this worthwhile, or is it better to just format to the max indicated capacity?

Any benefit to manual overprovisioning? If so, how much space to dedicate to it?
 
Been there and done that on many drives and you probably won't know if it works well unless there's been long term testing with a drive that doesn't have it....you know...like a constant.

I saw no immediate benefits from 80GB Intel G1 & G2 and Vertex2 drives running in RAID0 when they were over-provisioned 20%.
 
There is no trick...from SF directly...

Their drives will automatically account for any empty space when OP is needed. As long as the space is available, the drive can take care of everything...
 
But, is there benefit to this? I know the drives come with spare area already provisioned. How much benefit is there to "contributing" more space to this? And if there is enough to be worthwhile, how much should be dedicated to it?

This would be for a "boot" drive (OS and installed software.) There'd be a secondary magnetic drive for bulk and transactional data.
 
But, is there benefit to this? I know the drives come with spare area already provisioned. How much benefit is there to "contributing" more space to this? And if there is enough to be worthwhile, how much should be dedicated to it?
Been there and done that on many drives and you probably won't know if it works well unless there's been long term testing with a drive that doesn't have it....you know...like a constant.
If you find an answer let us know.
 
iirc the sandforce has 28% and 13% mode in firmware.

120gb sandforce = 13%
100gb sandforce = 28%
060gb sandforce = 13%
050gb sandforce = 28%

so obviously you have two pairs of drives above, and within each pair the difference is the amount reserved.

don't forget about its compression system.
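A rough back-of-the-envelope sketch (my own, not from any SandForce spec) of where those percentages come from, assuming the usual 128 GiB / 64 GiB of raw NAND sitting behind the decimal-GB advertised capacities:

```python
# Hypothetical sketch: reserved-space percentage if a drive carries
# 128 GiB (or 64 GiB) of raw NAND behind its advertised decimal-GB
# capacity. The raw-NAND sizes are assumptions, not from a datasheet.
GIB = 2**30   # binary gibibyte, what the flash dies actually hold
GB = 10**9    # decimal gigabyte, what the box advertises

def reserved_percent(raw_gib, user_gb):
    raw_bytes = raw_gib * GIB
    user_bytes = user_gb * GB
    return 100 * (raw_bytes - user_bytes) / raw_bytes

for raw_gib, user_gb in [(128, 120), (128, 100), (64, 60), (64, 50)]:
    pct = reserved_percent(raw_gib, user_gb)
    print(f"{user_gb}GB drive on {raw_gib} GiB NAND -> ~{pct:.0f}% reserved")
```

That lands near the 13% / 28% split quoted above; rounding and exact die counts account for the last point or so.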
 
Anand's articles cover this in detail.

Intel and SandForce controllers are able to use *all* free space on the drive for wear leveling, not just the over-provisioned amount that wasn't partitioned. There is no performance benefit from over-provisioning beyond what the drives are now configured with.

If you plan on filling up your drive completely, then you can possibly save yourself from the absolutely worst case by limiting *how* full you can actually fill up your drive. If you don't plan to do that, then you're wasting your time. If you *do* plan to do that, then you're dumb. 😉
 
the question is -> where does all the extra storage go once the data is compressed - say you make a 300gb file of all 0's, will that fit?

would ten 30gb files of all 0's fit?

would one 55.6gb file? **

** and where does the extra space go?
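For what it's worth, a toy sketch of how I understand it (the numbers are made up): the OS still only sees the fixed logical capacity, so compression saves NAND writes internally but doesn't let you store more than the advertised size:

```python
# Toy model (my assumption, not SandForce internals): the LBA space the
# OS can address is fixed at the advertised capacity, so compression
# never lets a file exceed it -- it only reduces what hits the flash.
LOGICAL_GB = 120         # advertised capacity the filesystem sees
ZERO_RATIO = 0.1         # assumed compression ratio for all-zero data

def fits(file_gb):
    # logical space is the hard limit; compression doesn't enter into it
    return file_gb <= LOGICAL_GB

def nand_written_gb(file_gb):
    # what the controller might actually write to flash
    return file_gb * ZERO_RATIO

print(fits(300))                       # False -- out of logical sectors
print(fits(10 * 30))                   # False, same reason
print(f"{nand_written_gb(55.6):.1f}")  # 5.6 -- the "extra" is unwritten flash
```

So the "extra space" never becomes storable capacity; it shows up as flash the controller didn't have to write, which helps wear and GC.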
 
Anand's articles cover this in detail.

Intel and SandForce controllers are able to use *all* free space on the drive for wear leveling, not just the over-provisioned amount that wasn't partitioned. There is no performance benefit from over-provisioning beyond what the drives are now configured with.

If you plan on filling up your drive completely, then you can possibly save yourself from the absolutely worst case by limiting *how* full you can actually fill up your drive. If you don't plan to do that, then you're wasting your time. If you *do* plan to do that, then you're dumb. 😉

This. There is really no reason to manually overprovision, just stop filling the drive and it will create the same effect.
 
So it will see partitioned, formatted but unwritten to space as usable for spare area?

Anand's articles cover this in detail.

Intel and SandForce controllers are able to use *all* free space on the drive for wear leveling, not just the over-provisioned amount that wasn't partitioned. There is no performance benefit from over-provisioning beyond what the drives are now configured with.

If you plan on filling up your drive completely, then you can possibly save yourself from the absolutely worst case by limiting *how* full you can actually fill up your drive. If you don't plan to do that, then you're wasting your time. If you *do* plan to do that, then you're dumb. 😉
 
I'm pretty sure that it is impossible to manually over provision most of the SF based drives anyways. I have a G.Skill Phoenix Pro 120GB one that I was wondering about doing that with, since it actually only has ~6.8% spare by default, but then I found out that since I will only be using about 60GB total it is fine. It doesn't see that space as exactly the same as spare area, but it can wear level across it as well.
 
I'm pretty sure that it is impossible to manually over provision most of the SF based drives anyways. I have a G.Skill Phoenix Pro 120GB one that I was wondering about doing that with, since it actually only has ~6.8% spare by default, but then I found out that since I will only be using about 60GB total it is fine. It doesn't see that space as exactly the same as spare area, but it can wear level across it as well.

The theory of manual over provisioning is simple: remove all partitions, then create a partition that falls short of full capacity by the amount of space you want to reserve, and format and use that. The unpartitioned area will cover overprovisioning; however, this happens whether you follow this route or not, on a Sandforce anyway.

I might suggest an understanding of overprovisioning, because whether or not you overprovision, your drive will slow once it hits a certain point. Overprovisioning is a... safeguard at most, if you will. It simply reserves a portion of the drive to be used in GC, wear levelling and other f/w activities where information must be temporarily stored while a block is cleared. It's like taking the furniture out of the living room to vacuum the rug.

OP performs two jobs, neither of which is related to performance. Through the f/w, the OP increases the life of the drive, as its methodology reduces the overall writes to the drive. In addition, it is inevitable that at some point cells may die without your knowledge. When that occurs, they are remapped and OP cells replace the dead cells.

Hope this helps a bit, and hope I don't have my head up my @#$ here, but this comes from very lengthy conversations with many OEM PR, marketing and engineering reps.
 
It's like taking the furniture out of the living room to vacuum the rug.
Great analogy. Very graphic. :thumbsup:
A better analogy would be changing the position of the couch at times to prevent the carpet wearing unevenly. I'm sure we've all seen an older room when the furniture has all been moved out of it. You can clearly see where the furniture was because the carpet under the furniture is still like brand new. The carpet under the furniture is "over-provisioned", so to say.

Manually over-provisioning is akin to bringing in extra furniture that you aren't actually going to use (like a statue, or extra chairs that will never be sat in) just to save more of the carpet. The SSD manufacturers have figured out, and Anand's research and tests have confirmed, that this is unnecessary. Their stock over-provisioning is sufficient, so just "set it and forget it" meaning just partition the whole space and don't worry about it.

Now if you're configuring drives for a server or a rather unique workstation load then there's room to talk about alternatives. But I doubt 99% of people have those situations to worry about.
 
A better analogy would be changing the position of the couch at times to prevent the carpet wearing unevenly. I'm sure we've all seen an older room when the furniture has all been moved out of it. You can clearly see where the furniture was because the carpet under the furniture is still like brand new. The carpet under the furniture is "over-provisioned", so to say.

Manually over-provisioning is akin to bringing in extra furniture that you aren't actually going to use (like a statue, or extra chairs that will never be sat in) just to save more of the carpet. The SSD manufacturers have figured out, and Anand's research and tests have confirmed, that this is unnecessary. Their stock over-provisioning is sufficient, so just "set it and forget it" meaning just partition the whole space and don't worry about it.

The best analogy would be setting aside extra space on a SSD so that the controller has more room to provision.

A better alternative to overprovisioning is to stop filling the drive once you decide it has gone below the performance threshold.
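As a rough illustration of why that works (assuming, per Anand's description above, the controller can wear-level across any LBAs you haven't written):

```python
# Rough illustration, with an assumed 128 GiB of NAND behind a 120 GB
# drive: any capacity you never fill acts like extra spare area.
RAW_GB = 128 * 2**30 / 10**9     # ~137.4 GB of physical NAND

def effective_spare_gb(live_data_gb):
    # everything not holding live data is free for GC / wear levelling
    return RAW_GB - live_data_gb

print(f"{effective_spare_gb(108):.0f} GB spare at 90% full")
print(f"{effective_spare_gb(60):.0f} GB spare at 50% full")
```

Same math as a manual OP partition, just without having to repartition anything.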
 
Over provisioning works very well in drives that don't have trim, especially if you do it right. How I do it on my G1 is that I overprovision the max amount I can and leave 5GB free for data. When I'm done using the 5GB, I'll add in 5 more GB through Disk Management.

What this does is force the OS to "write over" old data, which signals to the SSD that the data is obsolete. Then internal garbage collection can get to work. It's sort of like a ghetto trim.
 
Doing research on this very subject when I found this post. I have a new drive on order with 7% OP and was wondering if that was enough.

Here is a posting on this subject I found on the OCZ forum:

http://www.ocztechnologyforum.com/f...RIM-OP-area-use-and-Life-write-throttle/page2 see post 17

From that post:

"By priority, the following implements would be best:

1. FW dedicated/mapped OP
2. Unformatted space not available to the OS.
3. Formatted but unused space within a separate Volume/Partition.
4. Formatted but unused space within the same Volume/Partition."
 
iirc the sandforce has 28% and 13% mode in firmware.

120gb sandforce = 7% overprovisioning (6% RAISE data protection)
100gb sandforce = 22% OP (6% RAISE)
060gb sandforce = 7% / 6%
050gb sandforce = 22% / 6%
Fixed.

The point is that even though there is 13% 'missing space' on the low-end sandforce SSDs, this is not all available for write optimizing. In this mode, the sandforce offers 7% OP which is the minimum acceptable level.

The other issue that people will start to see soon with 25 nm flash is that the bigger flash dies mean more space for RAISE. E.g. the new 25 nm OCZ Vertex 2 drives take 12% for RAISE. Less space is available for use, and this is also NOT available for wear levelling. This truly is LOST space, because the larger flash means that RAISE operates less efficiently.
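To put rough numbers on that (the 34 nm figures are from this thread; the 25 nm user capacity is my guess, so treat the output as illustrative only):

```python
# Illustrative only: RAISE parity is lost outright, so on the same
# 128 GiB of NAND a bigger RAISE cut shrinks both user capacity and
# the spare left for OP/wear levelling. The 25 nm user capacity is
# an assumption, not a spec.
RAW_GB = 128 * 2**30 / 10**9          # ~137.4 GB of NAND

def breakdown(raise_frac, user_gb):
    raise_gb = RAW_GB * raise_frac    # parity: unusable for anything else
    spare_gb = RAW_GB - raise_gb - user_gb  # left for GC/wear levelling
    return raise_gb, spare_gb

for label, rf, user in [("34 nm, 6% RAISE", 0.06, 120),
                        ("25 nm, 12% RAISE", 0.12, 115)]:
    r, s = breakdown(rf, user)
    print(f"{label}: {r:.1f} GB parity, {s:.1f} GB spare left")
```

Either way the parity slice comes straight off the top before OP is even counted, which is the "lost space" point above.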
 
Over provisioning works very well in drives that don't have trim, especially if you do it right. How I do it on my G1 is that I overprovision the max amount I can and leave 5GB free for data. When I'm done using the 5GB, I'll add in 5 more GB through Disk Management.

What this does is force the OS to "write over" old data, which signals to the SSD that the data is obsolete. Then internal garbage collection can get to work. It's sort of like a ghetto trim.

i use the tony trim method on my raid0 intel G1 SSDs
 
From that post:

"By priority, the following implements would be best:

1. FW dedicated/mapped OP
2. Unformatted space not available to the OS.
3. Formatted but unused space within a separate Volume/Partition.
4. Formatted but unused space within the same Volume/Partition."
Considering that the disk controller should have no idea about such high-level constructs as partitions or OSes, I doubt that. Why would the controller be interested in distinguishing between free sectors? Assuming we've got TRIM and the unused space is large enough to trigger a TRIM operation (and that threshold is surely in the KBs at most), I don't see any reason why it couldn't/wouldn't use it.
 
I have a theory: I think manual over-provisioning is useful if you plan on filling up your entire drive; my theory is that more space will be available for trim.

The only case where this may be beneficial is if you're using your drive as a cache, such as the OCZ Synapse 128GB, which is over-provisioned to 50%.

I was reading about over provisioning a sandforce ssd, but then thought it didn't make sense after reading this forum, but then realized that OCZ does just that with their Synapse drive.

So, maybe over-provisioning allows faster recovery of cells?
 