Originally posted by: LeonarD26
Tell you the truth, I didn't know if disk defragmentation is an issue on Unix/Linux servers. We have jobs set up for our Windows servers... is this normally not a problem in the Linux world?
Originally posted by: n0cmonkey
Originally posted by: LeonarD26
Tell you the truth, I didn't know if disk defragmentation is an issue on Unix/Linux servers. We have jobs set up for our Windows servers... is this normally not a problem in the Linux world?
The filesystems used in the *nix world are generally smart enough to not fragment much.
The only time it becomes an issue is when you're running with less than around 5-10 percent free disk space.
Originally posted by: Nothinman
Man, why can I never get an attach code button when I want one?
The only time it becomes an issue is when you're running with less than around 5-10 percent free disk space.
That's a bit of an exaggeration. Even with just regular use and plenty of free disk space you can get fragmentation, but Linux is much better at avoiding it and the effects are much smaller than on Windows.
# xfs_db -r -c frag /dev/sdb2
actual 31793, ideal 23260, fragmentation factor 26.84%
# df -h /home
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2       8.2G  2.2G  6.0G  27% /home
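For what it's worth, the fragmentation factor xfs_db reports appears to be just (actual - ideal) / actual extents, expressed as a percentage; the numbers above check out:

```shell
# Reproduce xfs_db's fragmentation factor from its actual/ideal extent
# counts: (actual - ideal) / actual * 100.
echo "31793 23260" | awk '{printf "%.2f%%\n", ($1 - $2) / $1 * 100}'
# → 26.84%
```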
How'd you manage that?
Originally posted by: Nothinman
How'd you manage that?
Hell if I know, just 'normal' usage over the past 4 or more years. For all I know it could be one really badly fragmented file throwing off the numbers =)
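If you suspect one file is skewing the numbers, xfs_bmap (also in xfsprogs) prints a file's extent map. A rough sketch for counting extents per file; the /home/*.iso glob is just a hypothetical example:

```shell
# xfs_bmap prints the filename on its first line and one extent per line
# after that, so the line count after the header is the extent count.
for f in /home/*.iso; do
    printf '%s: %s extents\n' "$f" "$(xfs_bmap "$f" | tail -n +2 | wc -l)"
done
```

A file with a handful of extents is fine; one with hundreds is the kind of thing that throws off the filesystem-wide factor.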
I've filled up my /home a number of times, cleaned it out, moved stuff around, etc. and I get 0.1% frag. I don't know if I'm jealous or not.
# xfs_db -r -c frag /dev/hda8
actual 20437, ideal 19100, fragmentation factor 6.54%
Originally posted by: Nothinman
I've filled up my /home a number of times, cleaned it out, moved stuff around, etc. and I get 0.1% frag. I don't know if I'm jealous or not.
It might be the cleaning and moving that's doing a pseudo-defrag by creating new files with contiguous extents/blocks.
Here's my notebook drive; I've only had it less than a year, but I transfer a lot of semi-large (300-500M) files between home and work.
# xfs_db -r -c frag /dev/hda8
actual 20437, ideal 19100, fragmentation factor 6.54%
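You can do that pseudo-defrag deliberately: copying a file forces the filesystem to allocate fresh (usually contiguous) extents for the copy, and you then swap it into place. A minimal sketch (the filename is hypothetical):

```shell
# Rewrite a file so the filesystem allocates new, hopefully contiguous,
# extents for it: cp writes a fresh copy, mv renames it over the original.
cp big.iso big.iso.defrag
mv big.iso.defrag big.iso
```

Note this replaces the original inode (so hard links to the old file are left pointing at the old data), and on XFS xfs_fsr does the same job more safely.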
Originally posted by: n0cmonkey
Maybe XFS is just crap, with regards to fragmentation...
If you go to SGI's XFS website, the FAQ says they contemplated making a defragging utility for it, but haven't.
Originally posted by: Nothinman
If you go to SGI's XFS website, the FAQ says they contemplated making a defragging utility for it, but haven't.
Then what is xfs_fsr?
XFS is for working with large files IIRC.
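For the record, xfs_fsr is XFS's file reorganizer — it ships with xfsprogs and runs on mounted filesystems, so either the FAQ is out of date or it predates the tool. A sketch of typical usage, assuming xfsprogs is installed and you're root (the paths reuse the ones from earlier in the thread):

```shell
# Reorganize a whole mounted XFS filesystem; -v reports each file it touches.
xfs_fsr -v /home

# Or target a single badly fragmented file in place:
xfs_fsr -v /home/big.iso

# Then re-check the fragmentation factor on the underlying device:
xfs_db -r -c frag /dev/sdb2
```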