A Question About Overprovisioning in SSDs

zliqdedo

Member
Dec 10, 2010
59
10
81
Hi,

I am familiar with the benefits - little or otherwise - of creating extra overprovisioning on SSDs. This is not a question about whether overprovisioning is worth it or not.

I understand why an SSD has to be manually overprovisioned when benchmarking. Is it necessary to do so when an SSD is going to be used normally, though?

For example, if I want to reserve 25% of my SSD, do I have to set apart a partition when installing Windows, or is it okay to install normally and just not fill it up past 75%? Is there a difference between the two, and if so, which is better?

Thanks.
 

UsandThem

Elite Member
May 4, 2000
16,068
7,382
146
25% for OP is a lot since SSDs already come from the factory with some OP space (most recommend around 10%), but you must want the 25% for a reason.

Probably the easiest way to set an OP is when you create a partition for a Windows install. Just limit the size of the installation drive (usually C:), and leave the remaining space as unpartitioned space. The SSD will automatically use it. You can also shrink the partition after installation; it really doesn't matter much. I just do it when I create the partitions before the install begins.
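If you want to work out the exact figure to type into the installer's size box, here is a minimal Python sketch of the arithmetic (the 10% OP target and the 500 GB drive are just example numbers, not recommendations, and it assumes the Windows setup dialog expects the size in MB):

```python
# Rough helper: how big to make the Windows partition so that a chosen
# fraction of the drive is left unpartitioned as extra over-provisioning.
def installer_size_mb(drive_gb: float, op_fraction: float = 0.10) -> int:
    """Size (in MB) to enter in the Windows setup 'Size' box so that
    roughly op_fraction of the drive stays unallocated."""
    usable_gb = drive_gb * (1.0 - op_fraction)
    return int(usable_gb * 1024)  # the setup dialog takes MB

if __name__ == "__main__":
    # Example: a 500 GB SSD with 10% left as unallocated space
    print(installer_size_mb(500, 0.10))  # 460800 MB
```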
 

zliqdedo

Member
Dec 10, 2010
59
10
81

Thanks for taking the time to respond. Please consider the following quote from my original post:

For example, if I want to reserve 25% of my SSD, do I have to set apart a partition when installing Windows, or is it okay to install normally and just not fill it up past 75%? Is there a difference between the two, and if so, which is better?

Firstly, I never said I wanted to provision 25%. And secondly, as is evident, I do know how to set aside a partition during OS installation. My question is different.
 

UsandThem

Elite Member
May 4, 2000
16,068
7,382
146

My bad.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,122
1,736
126
You don't "set aside" an unallocated portion of the disk before installing windows.

During the Windows install, you get the chance to change the target drive's size (the "odometer" box, as I think of it). Then Windows will create a logical volume on that partition. Or maybe it partitions the whole drive but truncates the logical volume size by the 10%; I just know that I do it. I may change it to something more like 7 to 9%.

When you install Windows, choose to customize the size -- I know there's a size box when you get to the target-disk selection page.

If there were already a partition defined on the target disk at the time of Windows installation, you should delete it and create a new one -- following my suggestion above. That's all you need to do: use 5 to 10% less of the drive than it offers.

If you aren't newly installing Windows and don't intend to install it again from scratch -- that is, the drive already has Windows on it -- then you might look at a utility like Acronis True Image or EaseUS Partition Master, perhaps Parted Magic, or several other possibilities. Even within Windows, you should be able to shrink the logical volume (Disk Management's Shrink Volume) to leave more unallocated space, although the third-party utilities will take you through at least one reboot, with the program doing its work between POST and the desktop before you reach the latter. Parted Magic may only be a self-booting program; I'd have to look again, because I only used it that way. I tend to prefer having a DOS-like bootable utility as well as the version installed in Windows.
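For the shrink-from-within-Windows route, the built-in diskpart tool can also do it without third-party software. A minimal Python sketch of that approach follows; the C: volume and the 25 GB (25600 MB) shrink amount are example assumptions, and it has to be run from an elevated (Administrator) prompt:

```python
# Sketch: shrink C: after installation to leave extra unallocated space,
# by feeding a script to the built-in Windows diskpart utility.
# Run as Administrator; the volume letter and 25600 MB are examples only.
import os
import subprocess
import tempfile

DISKPART_SCRIPT = """\
select volume C
shrink desired=25600
"""

def shrink_volume() -> None:
    # diskpart reads its commands from a script file passed with /s
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(DISKPART_SCRIPT)
        script_path = f.name
    try:
        subprocess.run(["diskpart", "/s", script_path], check=True)
    finally:
        os.remove(script_path)

if __name__ == "__main__":
    shrink_volume()
```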
 
Last edited:

UsandThem

Elite Member
May 4, 2000
16,068
7,382
146
zliqdedo said:
No, that's perfectly alright - I meant it sincerely; I was just trying to make myself clear. I'd greatly appreciate it if you could answer my question now that we've sorted what I meant.

OK, let's try this again. Hopefully I got the gist of what you are asking.

While it is true that the performance of SSDs generally starts dropping around the 75% full mark (it differs with each manufacturer; some are lower than that), over-provisioning is different from just leaving free partitioned space. When the space is unpartitioned, it is not user-reserved space, and the SSD will use it to boost performance. Once the space is partitioned, it is user-reserved space. So typically most SSDs will have better performance with an OP (how much differs between models).

So, if you want to "max out" the performance and life of an SSD, you would set aside roughly 10% as an OP. Obviously, you still don't want to fill the drive all the way up, as that will reduce its performance; the drive needs room to do its background tasks (TRIM, garbage collection, etc.). Some people don't feel modern SSDs really need additional OP, but if you look at reviews like Anandtech's, most drives respond well to additional OP space.

EDIT

Here is a quick and dirty article on the difference between free partitioned space and an unpartitioned OP, and what does what:

http://www.techspot.com/news/52835-...-need-for-trim-overprovisioning-and-more.html
 

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,724
3,004
146

It was my belief that empty space is empty space on an SSD - the internal controller has no notion of the file system or partitions on it (all the information representing them is just data stored on the drive, wherever the OS decided to put it, and read back by the OS to manage the drive).

It was also my belief that manual overprovisioning through partition sizing was just there to prevent the user from filling up the drive, by taking away their ability to write to a certain portion of the SSD.

If I am mistaken, I'd love to read an article or something that can explain it in-depth.
 

UsandThem

Elite Member
May 4, 2000
16,068
7,382
146

Here are a couple that I have read and go off of. Unless something newer has changed things, for the OP to work it has to be unpartitioned free space (or there might be a utility out there I am not aware of that can do it). I just do it the old-fashioned way, by leaving a 10% chunk of unpartitioned space:

http://www.samsung.com/semiconducto...nt/Samsung_SSD_845DC_04_Over-provisioning.pdf

Depending on the SSD product, some are already over-provisioned by the manufacturer and users cannot access and control it. However, users can set additional OP areas using several methods - using utility tools (hdparm, etc.), setting unallocated partitions on the operating system (OS) and using Samsung Magician software (SW).


http://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti/

In contrast, when data is written randomly to the SSD, the data that is marked invalid is scattered throughout the entire SSD creating many small holes in every block. Then when garbage collection acts on a block containing randomly written data, more data must be moved to new blocks before the block can be erased. The red line of the Random Writes graph (above) shows how most SSDs would operate. Note that in this case, as the amount of over-provisioning increases, the gain in performance is quite significant. Just moving from 0% over-provisioning (OP) to 7% OP improves performance by nearly 30%. With flash controllers that use a data reduction technology, the performance gains are not as significant, but the performance is already significantly higher for any given level of OP.
 
Feb 25, 2011
16,983
1,616
126
You can have a partition that takes up the entire disk - as long as it's only 75% full, the drive will behave as though it has 25% overprovisioning.
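If you go that route, it's easy to keep an eye on how full the drive actually is; here's a small Python sketch using the standard library (the C:\ path and the 75% threshold are just examples):

```python
# Quick check of how full a volume is, for the "one big partition,
# just don't fill it past ~75%" approach. Path/threshold are examples.
import shutil

def fullness(path: str) -> float:
    """Fraction of the volume containing `path` that is currently used."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

if __name__ == "__main__":
    frac = fullness("C:\\")
    print(f"{frac:.0%} used")
    if frac > 0.75:
        print("Over 75% full - the controller has less spare area to work with.")
```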
 
  • Like
Reactions: Phynaz

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,724
3,004
146
Samsung said:
"A casual user with a large-capacity SSD may not need to set aside any extra space for OP. The SSD will naturally use any available free space to perform its maintenance algorithms. If you have a small SSD, on the other hand, it is recommended to set aside some OP (between 6.7 and 10% of total drive space) to minimize the risk of accidentally filling the drive to capacity. While filling a drive with data isn’t harmful, it will have a severe impact on performance."

Reference.

Straight from the horse's mouth - OP is a user protection, and only a user protection, against filling up an SSD and degrading performance.
 
  • Like
Reactions: Phynaz

UsandThem

Elite Member
May 4, 2000
16,068
7,382
146
I stand corrected. I hadn't seen this before looking just now. I've just been manually setting my OP and not thinking twice about it:

http://www.anandtech.com/show/6489/playing-with-op

It looks like the newer controllers handle the free space much better than the older ones, and do in fact treat free space as an OP. However, by manually setting the OP in unallocated space like I do, there is a slight improvement in consistency (although not a night-and-day difference anymore).
 

Hail The Brain Slug

Diamond Member
Oct 10, 2005
3,724
3,004
146

I still set 15-20% overprovisioning on any SSD I get. I don't set that much aside for performance reasons; it's more that I hate having an odd-ball-sized partition, so I round my partition size down to something I like (on my 1TB 960 Pro with 953GB, I rounded down to 800GB for ~19% OP). I tend to buy Samsung only, so no extra OP is provided from the factory. I wish the drives that do have factory OP were more up front about that fact.

Edit: Fixed wrong drive size
 

BonzaiDuck

Lifer
Jun 30, 2004
16,122
1,736
126
UsandThem said:
I stand corrected. I hadn't seen this before looking just now. I've just been manually setting my OP and not thinking twice about it:

http://www.anandtech.com/show/6489/playing-with-op

It looks like the newer controllers handle the free space much better than the older ones, and do in fact treat free space as an OP. However, by manually setting the OP in unallocated space like I do, there is a slight improvement in consistency (although not a night-and-day difference anymore).
Good to know. I was going to ask what YOU FOUND in the link, and realized I should simply scan through it to see where my SSDs fall on either side of that line.

. . . later . . . they reference the 840 Pro. I've already overprovisioned it, as with the others.
 
Last edited:

UsandThem

Elite Member
May 4, 2000
16,068
7,382
146
Hail The Brain Slug said:
I still set 15-20% overprovisioning on any SSD I get. ... I do tend to buy Samsung only so none is provided. I wish the drives that did have OP from the factory were more up front about that fact.

Yeah, ever since I started buying SSDs, I would always just set my 10% and not think twice about it. I don't do a lot of writes to my SSDs (except when I do casual photo/video editing), but I haven't really done much of that over the last year.

I have been eyeing the 960 EVO 500 GB myself, but I am kinda leaning towards the MyDigitalSSD BPX 480 GB right now for $187. Although, if I end up getting the Samsung drive, I'll do my standard OP. But if I get the BPX, now that newer controllers can just use the free space as OP, I'll probably just leave it be.

The thing about computers/electronics........there's always something changing.

Edit

It also now makes sense why Samsung removed the option to manually add an OP in Samsung Magician 5.0.
 
Last edited:

BonzaiDuck

Lifer
Jun 30, 2004
16,122
1,736
126
I'm just stunned at how a little 250GB EVO is working as a caching-drive for SATA SSDs and HDDs.

Within each OS's Primo cache, 38GB is allocated to a SATA boot disk, and 65GB to an HDD. The first day of use, there were merely a few gigabytes in each. Now they're full to the brim, with between 30 and 100 MB (M . . . B . . . !) free in each allocation. The TBW for the little 960 EVO, with the logical volumes divided between Win 7 and Win 10 and maybe 30GB unallocated for OP purposes, is still at 900 GB, or 0.9 TB written. That early-after-purchase total was due to about three repartitions and reformats and unnecessary writes and deletions. It hasn't increased since I created the two cache allocations for use under Win 7. Because of this discussion, I could reclaim the 30GB or so extra, but then the caches would fill up. So I'd better keep it.

This was a great experiment, and I'm repeating myself: it seems totally great and like it has real potential. But I'm using 79% (~80%, OK . . . ) of the 16GB (2x8) in the system -- it's doing that right now, with a game sitting at a menu in the background and Piriform Defraggler defragging a cached disk, after I put all 60GB of X-Plane 9 on there just for the hell of it; 150GB of the volume's 930GB total is now consumed.

My regret is that I'm still not sure what the best cache size is. Perhaps it is "any you can use." And I'm at a real crossroads for selecting and purchasing (a) a 1TB EVO versus Pro, and (b) an extra 2x8 kit of 3200 14-14-14 versus a total replacement with 2x16GB of the same model and spec. I'd also wanted to get a 1440p gaming monitor -- a $700 item. It's useful for me to purchase all big items together.

One bundle would be 1TB EVO + 2x8 TridentZ, approximately $480 + $140 or $620. Another bundle would be 1TB EVO + 2x16GB TridentZ or $730. Follow the simple arithmetic and logic about outlays including the Pro, but you'd add about $200+ to either option. The biggest outlay would be $930. Add my monitor, and I almost have the outlay I spent in September for the Skylake system before I bought the 250GB EVO.

I could actually experiment with large cache sizes, since my OS-boot disk has 300GB for Win 7 and 180GB for Win 10. Right now, the Win 7 boot-system volume is only 1/4 full, and the Win 10 one is even more sparse.

So I could give each OS, say, 200GB for its cache and still expand the OS boot volumes a bit. But I'm installing more and more programs on the spinner. What I might want is a 1TB SATA drive to replace the current OS-boot disk, having moved the latter to the NVMe alongside the two caching volumes, for four major volumes total. The data moved from the spinner to the 1TB SATA would fit into a volume half its size, since the Barracuda is really a 2TB drive split between the OSes.
 

mv2devnull

Golden Member
Apr 13, 2010
1,519
154
106
The partition table is a tiny list of entries and each entry merely states that "logical sectors from X to Y are partition Z".
Any given logical sector is thus either within some partition or not.

A filesystem is initialized (aka formatted) within logical sectors of a single partition. (Ignoring RAID-stuff here.)
A filesystem obviously writes the data of the files into the volume (consuming some logical sectors) but also uses space for metadata (such as "directories").
Logical sectors all over the partition might thus have something written to them.
Fragmentation used to be a problem. A filesystem might have preferred to use contiguous blocks as long as possible, thus ending up writing to most sectors (rather than reusing them) as files are created/removed.

The NTFS format utility has a "Quick format" option. What does the "not quick" mode do? Does it initialize (write & check) every logical sector within the partition?
Does the controller know the difference between a sector "written to as a check" and one "written to and in use"?

Rather than having a filesystem directly on a partition, the partition can be assigned to a "RAID volume" and the filesystem is written to that volume.
The R in RAID stands for Redundancy (RAID-0 has none). RAID ensures consistency by initializing every sector within the volume.


In other words, if you have a logical sector that is not within any (used) partition, then you (and the system) know that it is unused.
Sectors within filesystems are less predictable (although you can use TRIM to tell the disk controller about unused sectors).
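To make the "logical sectors from X to Y are partition Z" bookkeeping concrete, here is a minimal Python sketch that reads a legacy MBR partition table from a disk image or raw device (it assumes MBR rather than GPT, 512-byte logical sectors, and it ignores extended partitions):

```python
# Minimal MBR partition-table reader: prints which logical-sector ranges
# belong to a primary partition. GPT disks need a different parser.
import struct
import sys

def mbr_partitions(path: str):
    with open(path, "rb") as f:
        mbr = f.read(512)  # the MBR lives in the first 512-byte sector
    if mbr[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature found")
    parts = []
    for i in range(4):  # four 16-byte primary partition entries at offset 446
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0 and num_sectors:
            parts.append((lba_start, lba_start + num_sectors - 1, ptype))
    return parts

if __name__ == "__main__":
    for start, end, ptype in mbr_partitions(sys.argv[1]):
        print(f"sectors {start}..{end} -> partition type 0x{ptype:02x}")
```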
 

zliqdedo

Member
Dec 10, 2010
59
10
81
Thank you all for contributing. It's been very helpful. So older SSDs need unallocated space for overprovisioning to work correctly, and newer ones don't?
Hail The Brain Slug said:
Straight from the horse's mouth - OP is a user protection, and only a user protection, against filling up an SSD and degrading performance.
Is it safe to assume that any post-840 SSDs (from any manufacturer) would work this way? I have a SanDisk Extreme Pro 480 GB.
UsandThem said:
The thing about computers/electronics........there's always something changing.
Very true. I guess we all learned something new. Just to clarify - when you say you set aside 10%, do you mean 10% of total NAND capacity? For example, my SSD is advertised as 480 GB, but total NAND capacity is 512 GB. Would you set aside 10% of 512 GB, of 480 GB, or even 10% of less than 480 GB after formatting?

P.S. I haven't been using Samsung for about a year now, and didn't know it no longer provides an option to set aside space in Magician. It does make sense, though.
 

UsandThem

Elite Member
May 4, 2000
16,068
7,382
146

The Anandtech article that I linked to was from the end of 2012, so if I remember correctly, that would have been the Samsung 830 series. So any after that should be fine.

I have always based my OP on the size of a formatted drive, not the total capacity before formatting. It used to be (it might have changed as well) that drives showed improvement up to a 25% OP. I always thought that was too much for my SSD use because I do not do a lot of writes to my drives. So between the factory reserved OP and my 10% OP, I have been satisfied with the performance and reliability.
 

zliqdedo

Member
Dec 10, 2010
59
10
81
So in my case that would be 432 GB, or about 16% of total NAND, which is within Anandtech's minimum recommendation of 15%. I might cap it at 435 GB and then try not to fill it past 384 GB, or just allocate the whole drive and keep that 384 - 435 GB threshold in mind. Thanks.
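For anyone double-checking numbers like these, here is a tiny Python sketch of the two common ways an OP percentage gets quoted; the 512 GB NAND and 432 GB partition figures are the ones from this post, and the ~16% above corresponds to the "fraction of total NAND" convention:

```python
# Two conventions for quoting over-provisioning, using the figures above
# (512 GB of NAND, partition capped at 432 GB).
def op_of_total(nand_gb: float, partition_gb: float) -> float:
    """Reserved space as a fraction of total NAND."""
    return (nand_gb - partition_gb) / nand_gb

def op_of_user(nand_gb: float, partition_gb: float) -> float:
    """Reserved space relative to the user-visible partition
    (the convention often used when drives quote an OP figure)."""
    return (nand_gb - partition_gb) / partition_gb

if __name__ == "__main__":
    print(f"{op_of_total(512, 432):.1%} of total NAND")         # ~15.6%
    print(f"{op_of_user(512, 432):.1%} relative to user space")  # ~18.5%
```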
 
  • Like
Reactions: UsandThem

coercitiv

Diamond Member
Jan 24, 2014
7,112
16,453
136
I would still recommend setting the space aside as "old school" OP. The reason is that this technique is more or less failure-proof - it does not rely on OS <-> SSD interaction to keep the SSD in good working condition. It's also a nice habit to have when configuring systems for less tech-savvy users.

Having said that, it's also a good idea to use different OP guidelines depending on intended disk usage:
  • System disk or app disk with high performance requirement? Set that fat 10-20% OP.
  • Storage disk with moderate performance requirement? Set a minimal OP and make good use of disk capacity, even if only as a temporary location.
I have 2x 256GB SSDs in my main system: the OS/app disk is set with ~15% OP, and the storage disk has only 4%. On another system with low-to-moderate performance requirements, I went with only 7.5% OP for the main disk.
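As a toy illustration of those role-based guidelines in Python (the fractions below are taken loosely from the numbers in this post, not universal recommendations):

```python
# Toy lookup of suggested OP ranges by disk role, loosely following the
# guidelines in this post; adjust to taste.
GUIDELINE_OP = {
    "system_or_app": (0.10, 0.20),   # high performance requirement
    "storage":       (0.04, 0.075),  # moderate requirement, minimal OP
}

def suggested_partition_gb(drive_gb: float, role: str) -> tuple:
    """Return (smallest, largest) partition size for the drive, treating
    OP as a fraction of the drive's visible capacity."""
    low, high = GUIDELINE_OP[role]
    return (drive_gb * (1 - high), drive_gb * (1 - low))

if __name__ == "__main__":
    print(suggested_partition_gb(256, "system_or_app"))  # (204.8, 230.4)
    print(suggested_partition_gb(256, "storage"))        # (236.8, 245.76)
```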
 

BonzaiDuck

Lifer
Jun 30, 2004
16,122
1,736
126

I'm moving forward myself with my NVMe-caching experiment before I move on and buy an NVMe drive big enough to use as the OS/system disk, while still caching to a small volume on it. Even for my own "workstation," it is clear to me that my workloads don't stress my SSDs that much. I haven't reached 10TBW on any of them, and some are at least two or three years old. But the caching volume gave me concern, and I'd never checked it. I did just that the other day on the older system, and the SSD cache had completely filled up. TBW over some two years was something like 6.

It depends on what sort of workloads you have, but I always give these drives a perfunctory 10% OP. You only give up that extra 10% of storage, and it shouldn't matter much to you. A 500GB SATA SSD these days can cost as little as $120.