How full does a hard drive need to get before performance suffers?

TechnoPro

Golden Member
Jul 10, 2003
Take XP as an example. It warns you that 15% free space is required to properly defragment. And when free space becomes less than 1GB, the system complains.

I've also read that the hard drives used in file servers are intentionally left with a certain amount of free space (I once read 15-25%).

Are there any guidelines to this practice? Is this line of thinking OS dependent?
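
If you just want to see where a volume stands relative to that 15% rule of thumb, here's a minimal Python sketch using shutil.disk_usage; the drive letter and the 15% threshold are just assumptions taken from the XP warning above, not anything official:

import shutil

def free_space_report(path="C:\\"):
    # Total, used, and free bytes for the volume holding `path`.
    usage = shutil.disk_usage(path)
    pct_free = usage.free / usage.total * 100
    print(f"{path} {usage.free / 2**30:.1f} GiB free ({pct_free:.1f}%)")
    # XP's defragmenter wants roughly 15% free; flag anything below that.
    if pct_free < 15:
        print("Below the ~15% free space the XP defragmenter asks for.")

free_space_report()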
 

ShaneDOTM

Member
Jul 25, 2005
Basically, the reason you need 15% free space to defragment is that Windows needs the extra room to shuffle data around while it is being defragmented. Windows also complains when free space dwindles because it uses some of that space for virtual memory, and if the space available for virtual memory is minimal, performance can suffer (it depends on how much RAM you're running; if you have plenty of RAM, you will most likely be better off turning virtual memory off). The more data there is on a hard drive, the longer seek times are. This is affected much more by fragmentation, though, since the more spread out the data is, the further the drive head has to move to reach it.
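
To make the seek-distance point concrete, here's a toy Python model (made-up block counts, not a benchmark) comparing the head travel needed to read one file laid out contiguously versus the same file scattered across the disk:

import random

DISK_BLOCKS = 1_000_000   # pretend the disk has a million blocks
FILE_BLOCKS = 1_000       # the file we read occupies 1,000 of them

def head_travel(blocks, start=0):
    # Sum of seek distances (in blocks) visiting the file's blocks in logical order.
    travel, pos = 0, start
    for b in blocks:
        travel += abs(b - pos)
        pos = b
    return travel

contiguous = list(range(500_000, 500_000 + FILE_BLOCKS))
fragmented = random.sample(range(DISK_BLOCKS), FILE_BLOCKS)

print("contiguous layout:", head_travel(contiguous))
print("fragmented layout:", head_travel(fragmented))

The fragmented case makes the head cover orders of magnitude more distance, which is the extra seeking described above.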

It's usually better to worry about fragmentation and keep your drive defragmented than to worry about filling up the space.

That said, you can pick up drives over 100GB fairly cheaply nowadays, so adding space is pretty easy too.

 

Nothinman

Elite Member
Sep 14, 2001
It's OS and filesystem dependent.

For instance, ext3 divides the disk into a set number of block groups, and when a file is accessed, added, removed, etc. the block group holding the file's metadata must be accessed. Generally files in one directory tree are kept in the same block group to minimize seeking, but as the filesystem gets full it might not be possible to group files like that, and the same thing happens with data blocks. Normally when a file is written, enough blocks to hold the data plus a few more are allocated so that if you expand the file it won't fragment; but if the filesystem is extremely full, the chance of finding enough contiguous blocks is pretty low, so fragmentation grows.
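
Just to illustrate that last point (this is a toy free-block bitmap, not ext3's real allocator), here's a rough Python sketch of how an 8-block allocation goes from one contiguous extent to several fragments as the disk fills up:

import random

def extents_needed(bitmap, want):
    # First look for one contiguous run of `want` free blocks, which is
    # what a preallocating filesystem would prefer.
    run = 0
    for used in bitmap:
        run = 0 if used else run + 1
        if run == want:
            return 1
    # No such run: grab free blocks wherever they sit and count the pieces.
    extents, got, in_run = 0, 0, False
    for used in bitmap:
        if got == want:
            break
        if not used:
            if not in_run:
                extents += 1
                in_run = True
            got += 1
        else:
            in_run = False
    return extents

for fullness in (0.50, 0.90, 0.99):
    random.seed(0)
    bitmap = [random.random() < fullness for _ in range(10_000)]
    print(f"{fullness:.0%} full -> written in {extents_needed(bitmap, 8)} extent(s)")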