
Linux hard drive defragmentation

LeonarD26

Senior member
We have a bunch of RH servers at work, and I was wondering if there are any tools I can use to set up automatic disk defragmentation.

Thanks
 
Nope. It's not normally necessary. In fact, the only defrag tool that I know of for ext3 (assuming you're using ext3) was more of a proof-of-concept than anything else, and I wouldn't trust it on a production system. If you're curious about individual files you can run 'filefrag <filename>' and it'll tell you how many extents the file is using, but AFAIK there's no way to actually defrag the files.

If you're using XFS there is xfs_fsr, but again it's not normally necessary.
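If you did want to automate it anyway, xfs_fsr can be scheduled like any other maintenance job. A minimal crontab sketch, assuming xfs_fsr lives at /usr/sbin/xfs_fsr (path and schedule are illustrative, not from this thread):

```shell
# Hypothetical crontab entry: reorganize mounted XFS filesystems every
# Sunday at 03:00. With no arguments xfs_fsr walks the XFS mounts listed
# in /etc/mtab; -t caps the run at 7200 seconds (also the default).
0 3 * * 0  /usr/sbin/xfs_fsr -t 7200
```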
 
To tell you the truth, I didn't know whether disk defragmentation is an issue on Unix/Linux servers. We have jobs set up for our Windows servers... is this normally not a problem in the Linux world?
 
Originally posted by: LeonarD26
To tell you the truth, I didn't know whether disk defragmentation is an issue on Unix/Linux servers. We have jobs set up for our Windows servers... is this normally not a problem in the Linux world?

The filesystems used in the *nix world are generally smart enough to not fragment much.
 
Originally posted by: n0cmonkey
Originally posted by: LeonarD26
To tell you the truth, I didn't know whether disk defragmentation is an issue on Unix/Linux servers. We have jobs set up for our Windows servers... is this normally not a problem in the Linux world?

The filesystems used in the *nix world are generally smart enough to not fragment much.

Yep. Disk fragmentation hasn't been a problem for many, many years. The only time it becomes an issue is when you're running with less than around 5-10 percent free disk space.
 
Man, why can I never get an attach code button when I want one?

The only time it becomes an issue is when you're running with less than around 5-10 percent free disk space.

That's a bit of an exaggeration. Even with just regular use and plenty of free disk space, you can get fragmentation. But Linux is much better at avoiding it, and the effects are much smaller than on Windows.

# xfs_db -r -c frag /dev/sdb2
actual 31793, ideal 23260, fragmentation factor 26.84%
# df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 8.2G 2.2G 6.0G 27% /home
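For what it's worth, the "fragmentation factor" xfs_db prints is just (actual - ideal) / actual, where "ideal" is the extent count if every file were a single extent. Reproducing the 26.84% above:

```shell
# xfs_db's fragmentation factor: (actual - ideal) / actual extents.
awk 'BEGIN { actual = 31793; ideal = 23260
             printf "%.2f%%\n", (actual - ideal) / actual * 100 }'
# prints 26.84%
```

Note that the metric saturates fast: by this formula, a filesystem where every file averages just two extents instead of one already reports 50%.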
 
Originally posted by: Nothinman
Man, why can I never get an attach code button when I want one?

The only time it becomes an issue is when you're running with less than around 5-10 percent free disk space.

That's a bit of an exaggeration. Even with just regular use and plenty of free disk space, you can get fragmentation. But Linux is much better at avoiding it, and the effects are much smaller than on Windows.

# xfs_db -r -c frag /dev/sdb2
actual 31793, ideal 23260, fragmentation factor 26.84%
# df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2 8.2G 2.2G 6.0G 27% /home

How'd you manage that?

738630 files, 73964714 used, 6439536 free (86920 frags, 794077 blocks, 0.1% fragmentation)

/dev/wd0l 153G 141G 4.6G 97% /home
 
Originally posted by: Nothinman
How'd you manage that?

Hell if I know, just 'normal' usage over the past 4 or more years. For all I know it could be one really badly fragmented file throwing off the numbers =)

I've filled up my /home a number of times, cleaned it out, moved stuff around, etc. and I get 0.1% frag. I don't know if I'm jealous or not. 😛
 
I had 70% (something like that) fragmentation on an XFS partition once. I was using it as a combination MythTV backend and home directory: lots of small text files, lots of multi-gigabyte files, lots of 700+ MB encodings being made while downloading lots of crap and playing around with scripts, small text files, and image files. All at the same time.

Note that I said it doesn't become a "problem" until you're getting down to around 5-10% free disk space. I know that it still happens. 😉
 
I've filled up my /home a number of times, cleaned it out, moved stuff around, etc. and I get 0.1% frag. I don't know if I'm jealous or not.

It might be the cleaning and moving that's doing a pseudo-defrag by creating new files with contiguous extents/blocks.
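That copy-and-replace effect can also be done deliberately, one file at a time. A sketch with a hypothetical file name (the whole file gets rewritten, so it's only worth doing on files that filefrag or xfs_bmap shows to be badly fragmented):

```shell
# Hypothetical stand-in for a large, fragmented file.
dd if=/dev/zero of=bigfile bs=1024 count=64 2>/dev/null

# Rewriting the file sequentially lets the allocator lay it out afresh,
# usually in far fewer extents -- a crude per-file defrag. -p preserves
# mode and timestamps; preserving ownership needs root.
cp -p bigfile bigfile.defrag.tmp && mv bigfile.defrag.tmp bigfile
```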

Here's my notebook drive. I've had it less than a year, but I transfer a lot of semi-large (300-500M) files between home and work.
# xfs_db -r -c frag /dev/hda8
actual 20437, ideal 19100, fragmentation factor 6.54%
 
Originally posted by: Nothinman
I've filled up my /home a number of times, cleaned it out, moved stuff around, etc. and I get 0.1% frag. I don't know if I'm jealous or not.

It might be the cleaning and moving that's doing a pseudo-defrag by creating new files with contiguous extents/blocks.

Here's my notebook drive. I've had it less than a year, but I transfer a lot of semi-large (300-500M) files between home and work.
# xfs_db -r -c frag /dev/hda8
actual 20437, ideal 19100, fragmentation factor 6.54%

I've never had more than 0.2% frag though. It has to be XFS that's the culprit.
 
Originally posted by: n0cmonkey
Maybe XFS is just crap, with regards to fragmentation...

I don't think it does as good a job as other filesystems, but I wouldn't call it crap. I didn't notice any slowdowns at all...

If you go to the FAQ on SGI's XFS website, they contemplated making a defragging utility for it, but haven't. They're willing to make one if there is demand for it.

I think that XFS is tailored for a specific use (large media files) more so than other filesystems. I don't have any real reason to think that, but it's the impression that I get when I think of SGI and such.
 
XFS is for working with large files IIRC.

My ~75GiB XFS filesystem:
actual 21452, ideal 16581, fragmentation factor 22.71%
Of course, it only has 15MiB free.
 