How do I do this SSD provisioning?

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
I have a 256 GB LiteOn LAT-256M3S SSD and a 1 TB Seagate Momentus 5400 RPM HDD.

The 256 GB SSD is split into two partitions:

C: 50 GB
D: 188 GB

I just read about SSD over-provisioning, but I don't know how to do it.

Can someone provide me with a little step-by-step guide?

Will this improve my transfer rates or performance, or is it just a gimmick?

Please guide me.
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
Also, if over-provisioning is as simple as leaving some free space outside the partitions, should that space come before the C: partition or after my D: partition?
 

taq8ojh

Golden Member
Mar 2, 2013
1,296
1
81
I have no idea what provisioning is either, and I don't think we need to know at all.
Just plug it in and profit.
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
Interesting reply from a member on the ASUS forums; I guess it's time for a format:

Over-provisioning is as simple as leaving unallocated free space on an SSD. Normally I just leave a certain amount at the end of the drive. As @cheek suggested, 8% to 20% should be good.
 

kbp

Senior member
Oct 8, 2011
577
0
0
You can over-provision when installing Windows; just don't use the whole drive for the C: partition.
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
You can over-provision when installing Windows; just don't use the whole drive for the C: partition.

But I was under the impression that you couldn't use that space as a D: drive either, and had to leave an empty 25% of the disk doing nothing.
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Why would you have to format to leave space at the end of the drive? Couldn't you just shrink the D: partition and leave the freed space unallocated? I'd think that would be at the end of the drive anyway.
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
Why would you have to format to leave space at the end of the drive? Couldn't you just shrink the D: partition and leave the freed space unallocated? I'd think that would be at the end of the drive anyway.

I read online that that doesn't work for TRIM purposes, and that I had to delete all partitions and redo it from scratch.

So I did the over-provisioning by leaving about 25% of the SSD's space unallocated after my second partition (also on the SSD):

C: 50 GB
D: 128 GB
Unallocated space after D: 59.70 GB

I then ran AS SSD Benchmark, and the performance is exactly the same.

So I wasted the whole day formatting and reinstalling everything in vain :(
 

VirtualLarry

No Lifer
Aug 25, 2001
56,571
10,207
126
OP doesn't speed up a new drive. It does allow for better steady-state performance, by preventing the drive firmware from being "backed into a corner" with the free-block list nearly empty.

So three years down the road, it may run faster than if you had never over-provisioned it. Certainly, it should lower write amplification and increase the overall lifespan of the drive.
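VirtualLarry's point about spare area can be put in rough numbers. Every drive ships with a hidden reserve, and unallocated space adds to it; the sketch below assumes 256 GiB of raw NAND for a nominal 256 GB drive, which is typical for consumer models but not something the drive actually reports:

```python
def op_ratio(spare_gib, user_gib):
    """Over-provisioning ratio: spare capacity relative to user-visible capacity."""
    return spare_gib / user_gib

# A nominal "256 GB" drive typically carries 256 GiB of raw NAND (assumption),
# but exposes only 256 * 10^9 bytes to the OS.
raw_nand_gib = 256.0
user_gib = 256e9 / 2**30              # ~238.4 GiB visible to Windows

factory_spare = raw_nand_gib - user_gib
print(f"factory OP: {op_ratio(factory_spare, user_gib):.1%}")   # ~7.4%

# Leaving 59.70 GiB unallocated (as in the repartition above) adds to the pool:
extra = 59.7
print(f"with extra: {op_ratio(factory_spare + extra, user_gib - extra):.1%}")   # ~43.2%
```

By this rough measure, the 59.70 GB left unallocated pushed the drive far past the 8-20% range suggested earlier in the thread.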
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I just read about SSD over-provisioning, but I don't know how to do it.

Can someone provide me with a little step-by-step guide?
1. Disconnect all drives but the install media, and your new SSD.
1A. If already partitioned, delete all partitions.
2. Install Windows.
3. Connect any other drives, after it's up and running.
4. ...
5. Profit!

Leave some free space, like with a HDD, and quit worrying about it (idle time and TRIM will take care of things).

You're not going to tweak your way into better performance, unless you're using a crappy SSD that desperately needs some tweaks to not suck, or running an OS with no conception of SSDs (such as Win XP). Windows 7 or newer will align the partitions, automatically. It will send TRIM, automatically. If the performance is high enough, it will disable indexing on it, and possibly Superfetch, automatically. It will also, automatically, ignore the SSD for defragging.
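Cerb's alignment point can be checked with simple arithmetic: Windows 7 and newer start the first partition at a 1 MiB offset, which divides evenly by any common NAND page size, while XP's old sector-63 start did not. A minimal sketch (4 KiB is just a representative page size):

```python
def is_aligned(offset_bytes, page_size=4096):
    """True if a partition's starting offset falls on a flash-page boundary."""
    return offset_bytes % page_size == 0

print(is_aligned(1_048_576))   # Windows 7+ default offset (1 MiB): True
print(is_aligned(63 * 512))    # Windows XP default (sector 63): False
```

A misaligned partition can turn a single filesystem write into a read-modify-write across two flash pages, which is why the automatic alignment matters.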
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
1. Disconnect all drives but the install media, and your new SSD.
1A. If already partitioned, delete all partitions.
2. Install Windows.
3. Connect any other drives, after it's up and running.
4. ...
5. Profit!

Leave some free space, like with a HDD, and quit worrying about it (idle time and TRIM will take care of things).

You're not going to tweak your way into better performance, unless you're using a crappy SSD that desperately needs some tweaks to not suck, or running an OS with no conception of SSDs (such as Win XP). Windows 7 or newer will align the partitions, automatically. It will send TRIM, automatically. If the performance is high enough, it will disable indexing on it, and possibly Superfetch, automatically. It will also, automatically, ignore the SSD for defragging.

Thanks for the info.
 

SetiroN

Junior Member
Apr 18, 2012
20
0
66
Over-provisioning works and does make a difference: you don't need to wait three years; filling up the drive can take two weeks. Don't forget that over-provisioning also improves the lifespan of your NAND by a fair amount. It doesn't make sense not to do it on a 256 GB drive. Just don't overdo it: 15-20 GB is enough, 30 GB is ideal, and more than that is pretty much a waste of space for normal usage.

Having two partitions, on the other hand, seems pointless to me; it's not a mechanical hard drive, where the first sectors perform faster than the last ones.
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
Can you (SetiroN) point to a link for why one would want to leave fully 10% of an SSD completely unused and unusable to the OS? It isn't as if the SSD can sense which part of the drive is formatted for C: and which isn't and do something intelligent there, so why are you doing this?
 

Imp

Lifer
Feb 8, 2000
18,828
184
106
Just got my first SSD; didn't bother with anything beyond reinstalling Windows and disabling the defragmenter, Superfetch, and prefetch (the last two Intel's Toolbox did for me).

If you're putting Windows on the C: drive, I wouldn't use only 50 GB. I used 50 GB for my C: drive in my last system, and part of the reason I went with an SSD was that my C: drive filled up, despite me putting everything non-essential onto the D: and E: partitions. Some things just auto-install to C:, or make it difficult not to install on C:, so they build up.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Can you (SetiroN) point to a link for why one would want to leave fully 10% of an SSD completely unused and unusable to the OS? It isn't as if the SSD can sense which part of the drive is formatted for C: and which isn't and do something intelligent there, so why are you doing this?
Some drives can, if formatted MBR, with space left unpartitioned at the end.

However, (a) TRIM makes the whole thing a futile exercise, except for servers (where TRIM may be unavailable, and where random write workloads can actually benefit a lot from compression and high-QD optimizations): use the space if you need it, leave it free and benefit from free space as added OP if you don't; (b) many small-write workloads won't benefit from additional OP at all; and (c) you still need a drive that uses a WA/GC implementation which can benefit significantly (consumer drives with enterprise variants tend to be good bets for that kind of thing, like Micron, non-SF Intel, and Samsung--SF controller drives seem to need TRIM to do their best, regardless of any other trickery, as a counterpoint).
 

taq8ojh

Golden Member
Mar 2, 2013
1,296
1
81
Over-provisioning works and does make a difference: you don't need to wait three years; filling up the drive can take two weeks. Don't forget that over-provisioning also improves the lifespan of your NAND by a fair amount. It doesn't make sense not to do it on a 256 GB drive. Just don't overdo it: 15-20 GB is enough, 30 GB is ideal, and more than that is pretty much a waste of space for normal usage.

Having two partitions, on the other hand, seems pointless to me; it's not a mechanical hard drive, where the first sectors perform faster than the last ones.
Are you one of those guys who claim an SSD could "die" within a few months if you write xyz amount of data daily, by chance?
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
Some drives can, if formatted MBR, with space left unpartitioned at the end.

However, (a) TRIM makes the whole thing a futile exercise, except for servers (where TRIM may be unavailable, and where random write workloads can actually benefit a lot from compression and high-QD optimizations): use the space if you need it, leave it free and benefit from free space as added OP if you don't; (b) many small-write workloads won't benefit from additional OP at all; and (c) you still need a drive that uses a WA/GC implementation which can benefit significantly (consumer drives with enterprise variants tend to be good bets for that kind of thing, like Micron, non-SF Intel, and Samsung--SF controller drives seem to need TRIM to do their best, regardless of any other trickery, as a counterpoint).

Do you have a reference for your statement that if space is left unformatted 'at the end', the drive will intelligently use that space in place of the C: drive's data blocks if they go bad?

I think you're referring to the reserve or extra space that the SSD 'hides' from you: you'll never see it, but if it's required, it can be used. But that's different from what you're talking about, because you said you could see it and should leave it unformatted by your host OS.

One thing common to nearly all SSDs is the use of over provisioning to help with garbage collection. More flash resides within the SSD than is available to the user — a 64GB SSD may actually contain 80GB of internal NAND, but only 64GB is visible to the user. The other 16GB provides an area that can be used for background processes.

"Every SSD has reserved space for various reasons," said Hong. JEDEC, the leading developer of standards for the solid-state industry, recommends having about 7 percent of reserved space, he said. If the ratio is enlarged, it is called over-provisioning.


http://www.enterprisestorageforum.c...6/Solid-State-Drives-Take-Out-the-Garbage.htm

I suggest taking a look at that, particularly the part about the user not seeing the extra space.

Quick gist: If you can see space on the drive, format it and use it. All of it.

And OP: Make one big C: partition unless you really have a specific use-case for having multiple partitions; it only adds complexity.
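For what it's worth, the quoted article's own numbers illustrate the factory reserve dclive is describing (just the arithmetic from the quote, nothing new):

```python
raw_nand_gb = 80   # internal NAND in the quoted 64 GB example
user_gb = 64       # capacity visible to the user

hidden = raw_nand_gb - user_gb
print(f"hidden reserve: {hidden} GB, {hidden / user_gb:.0%} of user capacity")
# hidden reserve: 16 GB, 25% of user capacity
```

That 25% is well above the ~7% JEDEC baseline mentioned in the same quote; enlarging the ratio beyond the factory default is what the thread calls over-provisioning.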
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Why did you even split your SSD into two partitions? There's no fragmentation issue with SSDs anymore; it makes more sense to leave it as a single drive.

As for over-provisioning, there are two ways to do it:
1. Leave some free space on the drive.
2. Don't format the drive to full capacity. E.g., if you have a 250 GB SSD with, say, 238 GB of usable storage, format it as 220 GB and leave the rest as unpartitioned space. That NAND will still be used by the drive during P/E (program/erase) cycles, which effectively over-provisions an extra 18 GB of NAND (in this example).
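Option 2's arithmetic, using the round numbers from the example (a sketch; actual usable capacity varies by model, since drives are sold in decimal gigabytes):

```python
usable_gb = 238      # user-visible capacity in the example
formatted_gb = 220   # size actually given to partitions

extra_op = usable_gb - formatted_gb
print(f"extra OP: {extra_op} GB ({extra_op / formatted_gb:.1%} of the partitioned space)")
# extra OP: 18 GB (8.2% of the partitioned space)
```

On top of the factory reserve, that lands near the low end of the 8-20% range suggested earlier in the thread.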
 

SetiroN

Junior Member
Apr 18, 2012
20
0
66
(a)TRIM makes the whole thing a futile exercise, except for servers (where TRIM may be unavailable, and where random write workloads can actually benefit a lot from compression and high-QD optimizations): use the space if you need it, leave it free and benefit from free space as added OP if you don't; (b) many small-write workloads won't benefit from additional OP at all; and (c) you still need a drive that uses a WA/GC implementation which can benefit significantly (consumer drives with enterprise variants tend to be good bets for that kind of thing, like Micron, non-SF Intel, and Samsung--SF controller drives seem to need TRIM to do their best, regardless of any other trickery, as a counterpoint).
(a) Partly true, except for SF drives (which benefit more from unallocated space), because having some additional space dedicated to over-provisioning means the controller won't have to wait for the OS's TRIM command to execute, which can make a difference on mostly-full drives. Which brings me to the next point: what partitioning to a smaller size does for everyone is force you to make better use of your drive; laziness and/or superficiality are often the cause of less-than-ideal usage. This way you can make sure your drive stays healthy.
SF drives have additional advantages, due to how they work with data compression and how TRIM behaves differently on them. Most of all, filling up an SF drive completely with incompressible data results in a permanent reduction in performance: not particularly likely, but better safe than sorry. (b) Not true; small writes are actually the most affected by provisioned space. (c) True, but not having what makes the most of something doesn't mean we can't gain a bit for ourselves anyway. :)

Are you one of those guys who claim an SSD could "die" within a few months if you write xyz amount of data daily, by chance?
Haha, no, but I see your concern. Without being over-dramatic, drive endurance over multiple years is an issue, especially because SSDs are often reused in laptops after being retired from desktops. I'm speaking from an X25-M G1 failure due to bad blocks after three years or so; yes, controllers have improved a lot since, but newer processes are bringing us NAND capable of fewer and fewer program/erase cycles, not to mention TLC drives.

Reference to your statement that if space is left unformatted 'at the end' the drive will intelligently use that space instead of the used C: drive if C: drive's data blocks on the SSD go bad?

I think you're referring to the reserve or extra space that the SSD 'hides' from you - you'll never see it, but if it's required, it can be used. ...but that's different from what you're talking about, because you said you could see it and should leave it unformatted by your host OS.

One thing common to nearly all SSDs is the use of over provisioning to help with garbage collection. More flash resides within the SSD than is available to the user — a 64GB SSD may actually contain 80GB of internal NAND, but only 64GB is visible to the user. The other 16GB provides an area that can be used for background processes.

"Every SSD has reserved space for various reasons," said Hong. JEDEC, the leading developer of standards for the solid-state industry, recommends having about 7 percent of reserved space, he said. If the ratio is enlarged, it is called over-provisioning.
I know exactly what over-provisioning is; that's why I've been calling it by name from the start. -.-

Reserving some space on the drive (I never said "at the end"; there's practically no beginning or end on an SSD) adds to what's already there by design, backing the controller's background operations.
SSD controllers will "intelligently use" all free and TRIMmed space, regardless of where it is, but keeping it hidden has some practical advantages; see my reply to Cerb above.
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
Just got my first SSD; didn't bother with anything beyond reinstalling Windows and disabling the defragmenter, Superfetch, and prefetch (the last two Intel's Toolbox did for me).

If you're putting Windows on the C: drive, I wouldn't use only 50 GB. I used 50 GB for my C: drive in my last system, and part of the reason I went with an SSD was that my C: drive filled up, despite me putting everything non-essential onto the D: and E: partitions. Some things just auto-install to C:, or make it difficult not to install on C:, so they build up.

I have my C: at 50 GB and have 20 GB free after installing all my apps, including Office 2010, Adobe Photoshop CS6, and Audition CS6.

The reason I have so much free space is that I disabled the page file (I have 16 GB of RAM), disabled hibernation, and, as usual, deleted the SoftwareDistribution\Download folder, which holds the temp files needed for Windows Update installation.

You never need those files after the updates have been installed.

So 50 GB is more than enough for me. I'm even thinking of going down to 40 GB, as I don't play games and there is nothing more to be installed.
 

Berryracer

Platinum Member
Oct 4, 2006
2,779
1
81
Why did you even split your SSD into two partitions? There's no fragmentation issue with SSDs anymore; it makes more sense to leave it as a single drive.

As for over-provisioning, there are two ways to do it:
1. Leave some free space on the drive.
2. Don't format the drive to full capacity. E.g., if you have a 250 GB SSD with, say, 238 GB of usable storage, format it as 220 GB and leave the rest as unpartitioned space. That NAND will still be used by the drive during P/E (program/erase) cycles, which effectively over-provisions an extra 18 GB of NAND (in this example).

The reason I make two partitions is that I store my software installation files on D: for faster reinstallation of apps when I need to reinstall them.

If I put everything on C:, then when I restore my image or format, I lose those files. So this is the best way for me; I don't do it for performance or anything.

** I have 147 GB of free space on my D: partition, which is 188 GB in total.

Maybe this is why I didn't notice a difference in my SSD transfer-rate benchmarks when I left blank unpartitioned space for over-provisioning?

Just to note: with or without over-provisioning, I can still see TRIM enabled on my SSD.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
(a) Partly true, except for SF drives (which benefit more from unallocated space), because having some additional space dedicated to over-provisioning means the controller won't have to wait for the OS's TRIM command to execute, which can make a difference on mostly-full drives. Which brings me to the next point: what partitioning to a smaller size does for everyone is force you to make better use of your drive; laziness and/or superficiality are often the cause of less-than-ideal usage. This way you can make sure your drive stays healthy.
Alternatively, you could get a drive and use it. If it can't stay healthy by itself, it was either not a very good drive (Indilinx or JMicron controllers, for instance) or a poor choice for the workload. It is the manufacturer's responsibility not to make a drive that becomes unhealthy from being used, but many memory companies throwing together drives haven't thought much about that sort of thing (for that matter, even other companies have been bitten by their oversights, like Intel and Crucial, but they improve the next generation, if they can't fix the current one, instead of doing nothing or getting out of the market). The manufacturer should make sure there is enough OP, without any user intervention, to prevent sky-high average WA with any workload that isn't specifically a server workload.

Quite a few Linux RAID users would have been much happier with their SF drives if they could have just erased them and given up a few GBs when repartitioning, or if they had known the drives weren't going to live up to their specs. For whatever reason (maybe most or all just don't treat unallocated space as OP?), they seem to specifically benefit from TRIM, and get slow without it.

And, anyway, the drive doesn't have to wait for TRIM, unless you are constantly filling it all the way up, then freeing a little bit, at which point you should consider using the SSD as an RST cache device, instead. You should leave a good bit of space constantly free, which will benefit both your filesystem and the SSD. If you can't, then either go with RST, or get a big enough SSD that you can leave free space during normal use.
SF drives have additional advantages, due to how they work with data compression and how TRIM behaves differently on them. Most of all, filling up an SF drive completely with incompressible data results in a permanent reduction in performance: not particularly likely, but better safe than sorry.
Better to choose another drive, if worried about it. Those advantages come with major caveats when the drive isn't used in an ideal environment (such as being a single-drive volume with Windows 7 or newer).
(b) Not true; small writes are actually the most affected by provisioned space.
Please do explain. How is going from, say, 8 GB spare to anything higher really going to make much difference? It's not. It needs to write that 16 KB, and write metadata updates for that 16 KB. It almost surely already has blocks ready for such data to be written to. Going from spare space that is 400,000x the write size to 1,000,000x the write size can't do that much.

With many small writes coming in quickly, as a large aggregate write set, such as may be common for some servers, it can (specifically, the drive should be able to detect the write pattern and pre-allocate extents, cleaning up partly-used extents later on). But when the total is still only several GBs per day, and will never be tens or hundreds of MBs in succession as many small writes, it's just not going to make much difference. The overhead of remapping will be a large portion of the write, and general runtime GC efficacy will account for the rest of the differences. How much may vary by drive, and total writes will still be lower on SF drives, but for small writes it's keeping up with the metadata and new locations that dominates, much more than OP, simply because the OP is already huge relative to the write load.
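The magnitudes behind this argument are easy to verify (a rough back-of-envelope, not Cerb's exact figures): even a modest spare area dwarfs any single small write, so multiplying it further changes little.

```python
KIB, GIB = 2**10, 2**30

write_size = 16 * KIB   # a typical small random write
spare = 8 * GIB         # a modest factory spare area

ratio = spare // write_size
print(f"spare area is {ratio:,}x the write size")
# spare area is 524,288x the write size
```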

Their numbers aren't all wrong, but they're cherry-picked advertising material, largely based around server benchmarks with mostly-text data.
(c) True, but not having what makes the most of something doesn't mean we can't gain a bit for ourselves anyway. :)
You need to reboot to change partition sizes on your boot drive, in the best case. With TRIM, you just leave free space, just like with an HDD. Using TRIM only lets you have your cake and eat it too (on desktops, anyway): using all the space if you need it right now, but benefiting from leaving it free.

Haha, no, but I see your concern. Without being over-dramatic, drive endurance over multiple years is an issue, especially because SSDs are often reused in laptops after being retired from desktops. I'm speaking from an X25-M G1 failure due to bad blocks after three years or so; yes, controllers have improved a lot since, but newer processes are bringing us NAND capable of fewer and fewer program/erase cycles, not to mention TLC drives.
And that is why I'm not going to use TLC, at least not until this "DSP" stuff has proven it can really extend write life to ~3K cycles or beyond. The reduction in cost is simply too small for the risk, IMO. If it's working, and not PATA, I can find a use for it.
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
One thing common to nearly all SSDs is the use of over provisioning to help with garbage collection. More flash resides within the SSD than is available to the user — a 64GB SSD may actually contain 80GB of internal NAND, but only 64GB is visible to the user. The other 16GB provides an area that can be used for background processes.

"Every SSD has reserved space for various reasons," said Hong. JEDEC, the leading developer of standards for the solid-state industry, recommends having about 7 percent of reserved space, he said. If the ratio is enlarged, it is called over-provisioning.


http://www.enterprisestorageforum.c...6/Solid-State-Drives-Take-Out-the-Garbage.htm

I guess no one actually read this.

I don't see how keeping 10% free, unused space on the SSD (at a user level, NOT what is described above) could possibly help; it isn't as if the SSD itself can know whether you're using that space for something or not. Can someone PLEASE post a source that contradicts what I quoted above, and that supports this claim that users should leave 10% free (not enclosed in a partition) on their SSD? I've never seen anyone in the industry write that, so I'd really like to know an authoritative source.
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
You need to reboot to change partition sizes on your boot drive, in the best case.

Not in a modern Windows....

http://malwaretips.com/Thread-How-to-Resize-Partition-in-Windows-Server-2008-and-2008-R2-SP1

I just did it, just to confirm no reboot is required for C:, and... no reboot is required. :)

This has been the case for extends since Server 2003, and I believe for both extends and shrinks since 2008 R2, but certainly at least 2012, which was my test subject.
 

dclive

Elite Member
Oct 23, 2003
5,626
2
81
Reserving some space on the drive (I never said "at the end"; there's practically no beginning or end on an SSD) adds to what's already there by design, backing the controller's background operations.
SSD controllers will "intelligently use" all free and TRIMmed space, regardless of where it is, but keeping it hidden has some practical advantages; see my reply to Cerb above.

1. How does the SSD know that your OS has marked an unused partition/empty space as free? Is there some driver somewhere that no one has mentioned?
2. Can you provide an industry source for any of this?

I don't mean to be rude, but it seems very odd to me that an SSD can figure out what's required and what's not, and then make intelligent use of whatever isn't. More logical is that the manufacturer sets aside XXX% of the drive as 'extra' and uses that for wear-leveling and other GC tasks, which is what the link I posted says... Help!

Another article, stating that it's the manufacturer, not the consumer, that sets aside spare space: http://www.anandtech.com/show/3690/...andforce-more-capacity-at-no-performance-loss

Again, I'm not saying anyone's wrong; I just don't see the data yet!
 