Recommended free space on SSD

Keep some space unpartitioned on a 600 GB SSD?

  • No

  • 5 %

  • 10 %

  • 15 %

More



Aleq

Junior Member
Apr 14, 2011
11
1
66
Hello local gurus,
I'm finally jumping ship and preparing myself for an SSD (Intel SSD 320 600 GB, ThinkPad with quad-core i7, Win7 x64).


  1. Does the amount of free space influence the performance of an SSD? (I assume yes.)
  2. How much free space is recommended? And what amount is critical and should be avoided?
  3. Is it a workable solution to have a slightly smaller partition plus some unused space on the drive? (I usually find myself with very little free space, which is why I'm considering this; it would guarantee the space stays unused.)
  4. If 3 is a valid approach, how big should the unpartitioned area be, as a reasonable compromise between performance and usable space?
Thanks a lot for any tips related to this topic, cheers
Aleš
 

bigi

Platinum Member
Aug 8, 2001
2,490
156
106
Hi Aleš.

Don't worry about all of this too much. The controller will handle it for you and your SSD, really.

Welcome to AnandTech.
 

996GT2

Diamond Member
Jun 23, 2005
5,212
0
76
Performance does degrade as you start to really fill up the SSD.

A simple guideline: for small SSDs (40-60 GB), try to keep more than 5 GB free.
For larger SSDs, try to keep more than 10 GB free.
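
In code form, if that helps (a throwaway Python sketch; the 60 GB cutoff is just my reading of the sizes above, not anything official):

def min_free_gb(drive_gb):
    """Rule of thumb above: minimum GB to keep free on an SSD."""
    return 5 if drive_gb <= 60 else 10

print(min_free_gb(600))  # -> 10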
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Think of free space like a coffee maker's pot size. If you run it down to the bottom of the pot... it slows you down while it makes more. And the smaller the pot (capacity)... the smaller the reserve. Simple as that.

I would never go above 75% with stored data (OS/apps), since stamina and recovery can be affected more easily, especially with larger data sets like video, pics, and music.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
Depends on which SSD you go with. Some have a fair amount of reserve space while others leave very little headroom. It also depends on how you use the drives. If you want to load them up and then do mostly reads, reserve space is not as important as it is if you want to continually move files around.
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
Most consumer-grade SSDs have 5-10% spare area.
Assuming you have TRIM, any unused space is as good as spare area.

I saw an Intel study that showed the lifespan of the drive is tripled by having 17% spare area vs. 7%.

So not only will you speed up your writes by keeping a good chunk of the drive free, but you will make the drive last much longer.
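
For what it's worth, the usual ~7% factory figure falls straight out of the GiB/GB mismatch: NAND comes in powers of two, so a drive built from 128 GiB of flash physically holds 128 x 2^30 ≈ 137.4 GB, while the controller exposes 128 x 10^9 bytes to the OS. The difference, about 9.4 GB, is spare area: 9.4 / 128 ≈ 7.4%. (That's the general pattern for consumer drives; whether the 600 GB model here follows it exactly, I can't say.)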
 

snuuggles

Member
Nov 2, 2010
178
0
0
Wow, I was *way* off! Somehow I got it in my head that you should ideally leave 50%(!!) free. I always wondered why that was, and felt really bad when I let my 80 gig get under 20 gig free.

Man, glad I read this. Wotta dummy I am!
 

Aleq

Junior Member
Apr 14, 2011
11
1
66
Hi and thanks to all for your valuable feedback.

It is the Intel® SSD 320 Series, capacity 600 GB; TRIM will be used. Thanks for your tips about free space. The usage of the disk will be like an average power user plus photographer (pictures in, process, pictures out).

One important question remains unanswered, however: is unpartitioned space a working solution? It would be easier for me not to see that space at all; only then can I guarantee it is unused at all times :)
 

glugglug

Diamond Member
Jun 9, 2002
5,340
1
81
For an SSD, unpartitioned area is basically the same as free space, performance-wise. For non-TRIM situations it is the only way to get that benefit.

For a mechanical drive it would be a bad idea, resulting in more fragmentation.
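
If you go that route, sizing the partition is simple arithmetic. A quick Python sketch (the 10% figure is only an example, not a recommendation; pick your own):

ADVERTISED_GB = 600        # decimal GB, as the drive advertises
LEAVE_UNALLOCATED = 0.10   # fraction to keep unpartitioned (example value)

usable = ADVERTISED_GB * 10**9
partition_mib = int(usable * (1 - LEAVE_UNALLOCATED)) // 2**20
print(f"partition: {partition_mib} MiB, "
      f"unallocated: ~{ADVERTISED_GB * LEAVE_UNALLOCATED:.0f} GB")

The MiB figure is what you'd type into the partition-size box in the Windows installer (which, as far as I know, counts in binary megabytes).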
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
That is incorrect. Free partitioned space can cause additional translation losses, associated with an OS volume's overhead, compared to unallocated space from the controller's perspective.

If the space is left unallocated, the controller will make more efficient use of the NAND, and some controllers, such as those in SandForce-based drives, can actually use it to create larger reserve pools.

It would be like comparing a drive with larger factory overprovisioning to one with less. There are measurable results, to be sure.

For a machine that sees power-user workloads and many gigs of data on a consistent basis (such as large folders full of pictures), it would be best to set some of the space aside for the controller to maintain top efficiency over time.

Look at it like this. Are you seriously ever going to fill the drive up to capacity at any one time without first deleting some data and moving to the next data set? Of course not (or you shouldn't be, anyway), and letting the controller make use of the space without the logical babysitting costs from the OS is just plain good all around.

I can't speak for this particular controller (but it surely seems smart enough to mimic the others in this regard)... but in general, increased OP space provides increased stamina, reduced recovery time, and a longer overall lifespan due to more efficient data rotation.

With 600 gigs it may not be much of an issue to even mess with extra OP, but then again, with 600 gigs why would you need it all as formatted space anyway, right?
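
To put rough numbers on "more OP", here's a hedged Python sketch. It assumes, and this is an assumption rather than any vendor's spec, that the controller can pool factory spare area together with never-written unallocated space:

def effective_op(raw_nand_bytes, exposed_bytes, unallocated_bytes):
    """Spare capacity as a fraction of the space the OS can actually fill."""
    data_bytes = exposed_bytes - unallocated_bytes
    spare_bytes = (raw_nand_bytes - exposed_bytes) + unallocated_bytes
    return spare_bytes / data_bytes

# Hypothetical numbers: assume 600 GiB of raw NAND behind a 600 GB drive
# (illustrative only -- not the Intel 320's actual spec).
raw, exposed = 600 * 2**30, 600 * 10**9
print(f"{effective_op(raw, exposed, 0):.1%}")            # factory OP only: ~7.4%
print(f"{effective_op(raw, exposed, 60 * 10**9):.1%}")   # +60 GB unallocated: ~19.3%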
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
All drives are already over-provisioned, which means the controller won't expose the whole space to you for partitioning. The only reason you should ever not just partition the whole space the drive offers is if you are running a heavy write workload (think database servers, or the like) or you anticipate actually filling up the whole drive regularly.

Your situation is presumably the latter: you expect that at times you will need to completely fill up your drive? This *can* be harmful to the performance of an SSD, but the Intel controllers are some of the best at handling this. If you want to avoid this situation but leave yourself the option in case you *need* it, then just partition the full space and try not to go over 80% full. If you are really worried about it, reduce the partition to 80% of what the drive offers when you install Windows, and then never worry about it again (fill up the drive to your heart's content).
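
For the 600 GB drive in question, that's simple arithmetic: 80% of the 600 GB it exposes is 480 GB for the partition, leaving roughly 120 GB unallocated on top of the factory reserve.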
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
That's very true... BUT... the thing most don't realize is that consumption of free blocks is going to eat into available capacity, and TRIM and GC can only do so much as you go along. OP helps in that regard.

Anywho, I was responding to the previous poster, who was making blanket statements that free partitioned space is identical to unallocated space from the controller's perspective.

Obviously I was just generalizing with respect to this particular scenario, of course, since we both know that even the craziest power user would hardly consume anywhere near 600 gigs of space in any logged-on session.

I once wrote over 1 TB in one sitting, but that was specifically to test degradation/recovery with SandForce-controlled drives, and it wouldn't be anything like the heavy video/pic editing which I do consistently on this array.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
I was writing my post while you were writing the long one right before mine (I hit post and yours appeared), so nothing I wrote was in response to you, groberts.
 

mv2devnull

Golden Member
Apr 13, 2010
1,519
154
106
groberts101 said:
"That is incorrect. Free partitioned space can cause additional translation losses, associated with an OS volume's overhead, compared to unallocated space from the controller's perspective.

If the space is left unallocated, the controller will make more efficient use of the NAND, and some controllers, such as those in SandForce-based drives, can actually use it to create larger reserve pools."
Does the controller know the partitioning scheme?
Which partitioning formats does it understand?
Which filesystem formats does it understand?
What if I don't create a partition table at all, but simply create a filesystem (or a single file) on the raw block device?
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Does the controller know the partitioning scheme? Sure it does. When it internally maps logical space, it can recognize any unallocated space and use it more efficiently for its own purposes. Less logical-to-physical mapping is the result.
Which partitioning formats does it understand? I have absolutely no idea on that one, but I would expect the controller to recognize all the mainstream ones used in Windows and Linux.
Which filesystem formats does it understand? Again, not sure, but I would have to guess: same as above.
What if I don't create a partition table at all, but simply create a filesystem (or a single file) on the raw block device?

Now you're getting into Linux/hdparm speak and you'll lose me pretty quickly here. I would (again with the guessing) speculate that MOST controllers would be able to distinguish data in that case. BUT whether or not they would GC it, thinking it's not viable data (trash), I couldn't say. Some of the other guys I've seen do that (GC testing) have said that the data remains viable and the drive will not recycle it, so it must map it accordingly regardless of file/partition structure.

I know I probably failed your test, but that's about all I've got, since I didn't have time to study for it. lol
 

mv2devnull

Golden Member
Apr 13, 2010
1,519
154
106
So let's rephrase what you [groberts101] did say:
"The controller maps logical blocks to physical blocks. It does not have to do this before those logical blocks are accessed. It can perform a logical write by writing to an unmapped physical block and then changing the logical block to point/map to that new location. After that, the previously used physical block can be GC'd."

From that it follows that the only difference between a logical block that is not within any defined partition range and one that is within a range allocated to a filesystem is that the latter is more likely to get mapped, i.e. to "consume" a physical block. The former logical blocks should never see a write, and thus a corresponding number of physical blocks will always be available for reuse.


A partition is defined by stating two logical block addresses, which gives the range of logical blocks that "belongs" to that partition. In the MS-DOS partition table, which is used on most drives (GPT is only needed for 2 TB+ volumes), the primary partitions are described within the MBR, at the very beginning of the drive.

A partition as such has nothing to do with files. The OS does look for filesystem metadata in the logical blocks that, according to the partition table, are the starts of partitions. The OS can look for a filesystem starting at the first logical block of a disk too, if there is no partition table there; that is what was used on floppies and on some USB pendrives.
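
To make that concrete, here is a minimal Python sketch that reads the four primary-partition entries out of an MS-DOS MBR. Point it at a disk image ("disk.img" is just a placeholder; reading a live device needs root/admin rights):

import struct

def read_mbr_partitions(path):
    """Parse the 4 primary partition entries from an MS-DOS MBR."""
    with open(path, "rb") as f:
        mbr = f.read(512)
    if mbr[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature")
    parts = []
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                                  # partition type byte
        start_lba, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:                                    # type 0x00 = unused slot
            parts.append((ptype, start_lba, num_sectors))
    return parts

for ptype, start, count in read_mbr_partitions("disk.img"):
    print(f"type 0x{ptype:02x}: LBA {start}..{start + count - 1}")

Every LBA outside the ranges printed is unpartitioned, and note that nothing in there knows or cares what filesystem lives inside a partition.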
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
Increasing the amount of spare area on an Intel SSD will boost performance and life span of the drive, as per the following:

Intel High Performance SATA Solid-State Drive: Over-provisioning an Intel SSD (Intel)
The SSD Relapse: Understanding and Choosing the Best SSD (anandtech)
Uh... no:
Anand Lal Shimpi in the article you linked said:
More spare area is better for random workloads, but desktop workloads aren’t random enough to justify setting aside more spare area to improve performance;
That article was written when TRIM came out, detailing what TRIM does for SSDs. So the statements you're latching onto only apply to non-TRIM-enabled scenarios.

Consumer drives come with around 7% reserved space that you cannot partition. Anand has done extensive testing, published in his other articles, showing that this is sufficient for almost everyone. Unless you are RAIDing your drives, or you have a crazy IO profile (server or actual workstation loads), the default over-provisioning plus TRIM will deliver top performance from your SSD throughout its life as long as you don't completely fill the drive. (Completely filling a drive has been anathema since long before SSDs showed up.) See my earlier post in this thread for more.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
groberts101 said:
"Does the controller know the partitioning scheme? sure it does. when it internally maps logical space it can recognize any unallocated space and use it more efficiently for its own purposes. Less logical to physical mapping is the result."
That would mean that the controller would understand the way the filesystem partitions the disk. Considering that the controller is completely agnostic to which filesystem is above it, that just can't work.
The controller can use any unused space, but it can't know for sure that the FS won't address any particular LBA.
 

Arp_

Junior Member
Mar 26, 2011
15
0
0
Voo said:
"That would mean that the controller would understand the way the filesystem partitions the disk. Considering that the controller is completely agnostic to which filesystem is above it, that just can't work.
The controller can use any unused space, but it can't know for sure that the FS won't address any particular LBA."

All partitions are defined in the partition table on disk. The controller can just read it and see what space is unpartitioned. It doesn't have to know anything about the filesystems used in the partitioned space.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
Am I the only one who thinks the 15 or 20 new accounts created in March and April 2011 all sound really similar to one another?
 

Arp_

Junior Member
Mar 26, 2011
15
0
0
Well, but it would need to know about BSD disklabels, APMs and whatnot, wouldn't it?
It's not really a problem if it doesn't support some of the weird and rarely used partitioning schemes. If the controller cannot recognize the partition table, it will just assume that all space is used and won't interfere with anything.
Also, most of these schemes contain an MBR partition table for compatibility anyway, with a dummy partition allocated just to tell other OSes that a certain area of the disk is used by something. So the controller can usually know anyway which areas of the disk are used and which are free.
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
I can tell you without doubt that SandForce in particular keeps data maps and actively/redundantly compares them to the logical space used on the SSD.

Let's say you have a RAID array (no TRIM) and the full available space is partitioned on the volume. You fill up the drive with benchmark trash until it throttles (this occurs when ALL physical space has been hit) and idle the machine for 10 hrs straight to let GC do its thing.

Now do the same thing with an extra 10 gigs of unallocated space. What happens?

The drive with the greater amount of unallocated space will have slightly more stamina during the next write session due to larger fresh-block pools (this controller in particular uses unallocated space more efficiently than empty logically assigned space), and overall the recovery is quicker during the next logged-off idle session.

The more this OP is increased, the greater the result. Indilinx is the same, although the benefit is not as great as with SandForce in these regards. Reducing the translation losses down the stack between logical and physical space does in fact help efficiency. This would still be fact whether Anand or anyone else thinks it's necessary or not. Diminishing returns?.. no doubt. But that certainly doesn't mean the benefits don't exist.

I can only assume that other controllers have caught up in this regard as firmware has progressed. Static data rotation and many other advancements have now been implemented in firmware, and I can't see SandForce being the only one implementing these tricks; the others have presumably implemented them as well.

If you don't run your SSD heavily filled, with minimal capacity left for future write sessions?.. then don't use it. But if you're the hardcore power user who never goes beyond 50% filled to maintain top speed and efficiency?.. extra OP may help, if your controller can make sense of it. Out of 280 gigs available to me... I only stripe 80 gigs, and the results can be measured, felt, and seen as longer times until SandForce/DuraWrite throttling is invoked. And as much as I hate DuraWrite throttling algorithms? (I just secure-erase/reimage to bypass them altogether)... they're an excellent built-in measuring stick for the ability of additional OP to change the stamina and recovery speed of my array. It is specifically why a standard 100 GB V2 will have better stamina and lifespan than a 120 GB V2 Extended Edition, due to the larger free-block pool it has to work with.

To simplify it even more for all the non-believers?.. factory OP space is nearly identical to user-implemented unallocated space, in that the controller makes much more efficient use of it internally with little regard to logical partition structure. More OP is good. Whether you need or want it is entirely up to the user, and it shouldn't be knocked unless it's been tested for firsthand experience. Just keep in mind that the world used to be flat too. lol

To simplify it even more for all the non-believers?.. factory OP space is nearly identical to user implemented unallocated space in that the controller makes much more efficient use of it internally with little regard to logical partition structure. More OP is good. Whether you need or want it is entirely up to the user and shouldn't be knocked unless it's tested to give firsthand experience. Just keep in mind that the world used to be flat too. lol