
Defrag Single File Linux ext3

lambchops511

Senior member
Sorry if this is the wrong section.

I have a relatively large (50 GB) file on ext3, and I need good linear access / bandwidth on it. Is there a way to "defrag" it? Given the way it was copied over to the server, I'm pretty sure it ended up badly fragmented. I don't need a perfect defrag, but is there a good way to make the file less fragmented?

Would something simple like

cp my_file /tmp/garbage
rm my_file
mv /tmp/garbage my_file

do the magic?
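One caveat with the snippet above: if /tmp lives on a different filesystem, the final mv turns into a second full copy. Staying on the same filesystem keeps the mv an atomic rename. A minimal sketch of the same trick, demonstrated on a small throwaway file (paths here are stand-ins, not the actual 50 GB file):

```shell
set -e
dir=$(mktemp -d)                                # scratch directory for the demo
head -c 1048576 /dev/urandom > "$dir/my_file"   # small stand-in for the big file
before=$(md5sum < "$dir/my_file")

cp "$dir/my_file" "$dir/my_file.new"   # fresh copy; the allocator lays it out anew
mv "$dir/my_file.new" "$dir/my_file"   # same-filesystem mv = cheap rename, not a copy

after=$(md5sum < "$dir/my_file")
[ "$before" = "$after" ] && echo "contents intact"
```

Whether the fresh copy actually comes out less fragmented depends on how much contiguous free space the filesystem has, so it is worth re-checking fragmentation afterwards.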
 
WOW! THANKS! I did not know this command. Are 86 extents good or bad for a 30 GB file? I'm guessing it's pretty good? That probably means if I want better I/O performance I need to move to SSDs?
Yes. Generally, anything above around 50 MB per fragment is "good enough," and that's over 300 MB per fragment. Even with some tiny fragments mixed in there, that's good enough to just not worry about it. Any newish HDD (500 GB/platter or denser) ought to be able to read such a file at 100 MB/s, no sweat.
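For the curious, that per-fragment figure is just the file size divided by the reported extent count (numbers taken from the posts above):

```shell
# 30 GB spread across 86 extents -> average bytes per extent
echo $(( 30 * 1000 * 1000 * 1000 / 86 ))   # roughly 349 MB per extent
```

Even if a few extents are tiny, the average being in the hundreds of megabytes means seeks are a negligible fraction of the read time.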
 

Agree.

OP, the -v output from filefrag lists all the file's extents along with their length (5th column). Double-check to make sure that there aren't a bunch of really tiny extents, but you are most likely OK.
 