Can defragging damage an HDD if the defrag takes 6-8 hours?

phpdog

Senior member
Hi,

I have several 500GB/1TB/1.5TB hard drives that are approx 70-80% full, and they've not been defragmented for months.

The drives run 24/7 and some are over 2 years old. I ran a defrag on one of the 1TB drives and it took nearly 8.5 hours.

I don't want to risk over-stressing the drives and having them fail. Is this possible?
 
If they're fragmented, the read/write head is moving all over the place looking for your scattered files anyway. A defrag with a good bit of free space left should use fewer reads/writes than ongoing use of a heavily fragmented drive. A defrag with nearly no free space, though, can take an order of magnitude more reads/writes than one on a drive with plenty of space, and that can mean excessive wear.
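A rough back-of-the-envelope calculation shows why all that seeking matters. The numbers here are illustrative assumptions (~12 ms average seek and rotational latency, ~100 MB/s sustained transfer), not measurements for any particular drive:

```python
# Rough estimate of extra time spent seeking when a file is fragmented.
# AVG_SEEK_S and TRANSFER_MBPS are assumed, typical-ish desktop HDD figures.
AVG_SEEK_S = 0.012        # ~12 ms average seek + rotational latency
TRANSFER_MBPS = 100       # ~100 MB/s sustained sequential transfer

def read_time_s(file_mb, fragments):
    """Transfer time plus one seek per fragment."""
    return file_mb / TRANSFER_MBPS + fragments * AVG_SEEK_S

contiguous = read_time_s(1024, 1)       # a 1 GB file in one piece
shredded = read_time_s(1024, 10000)     # the same file in 10,000 fragments

print(f"contiguous: {contiguous:.1f} s, fragmented: {shredded:.1f} s")
# → contiguous: 10.3 s, fragmented: 130.2 s
```

With those assumed numbers, a badly fragmented 1 GB file takes over ten times longer to read, and nearly all of that extra time is head movement.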
 
Well, the answer should be no, a prolonged defrag won't hurt the HD. BUT if the HD is in a laptop with poor cooling, or the vents are partially blocked (as can happen when it's sitting on cloth), then the laptop could get too hot, and that's not good for any component and particularly bad for HDs. So if you need to defrag a laptop HD, make sure the laptop is running reasonably cool.


Brian
 
Both of the above comments combine to make the total answer.

Basically, defragmentation itself doesn't damage a drive, but if your drive is highly fragmented it has to perform many seek operations (butterfly-like back-and-forth head movement), which is highly stressful on the mechanism. Your drive could be on the verge of failure without you even knowing it (even SMART can read OK), and a defrag can reveal that weakness in a not-so-kind way.

That is why it is important to defrag consistently, so that each run is less stressful on your drive.
 
Hi,

I have several 500GB/1TB/1.5TB hard drives that are approx 70-80% full, and they've not been defragmented for months.

The drives run 24/7 and some are over 2 years old.

With disks that large, that many files, and months without defragmenting, of course it will take a long time. Don't use the FAT file system, which produces a lot of fragmentation. Keep the files that change regularly on one partition; that makes them more convenient to defrag.
 
If they are data drives I would just leave them alone. If you want to know whether constant reading/writing to a drive affects it, just put your hand on it. You'll find it is coolest at idle, just slightly warmer after reading a lot, but warmest when doing long writes or constant read/write activity. Again, unless there is some unusual need to defrag, I wouldn't. Also, on a large drive it is probably easier on the mechanism to just copy/move the data off onto another drive, then copy/move it back. That effectively makes the information contiguous.

OS partitions should be defragged. This is one of the reasons for separating data from programs/OS; the OS partition can also be made smaller to facilitate maintenance (defrag and backup/imaging).
 
Thanks for all the advice. The drives are all desktop drives using NTFS, and about 80% of the content is movie/TV show rips and TV show recordings.

Since, from what you've told me, not defragging them means the drives' internals are under needless stress, I'm going to take the risk and defrag the newest drives (those under 12-18 months old).

For the other drives, I think I'll buy a couple of new 1.5TB drives and consolidate the older drives onto them.

I just hope the older drives can hold out and get through the data transfer without failing.

BTW - I'm definitely setting up an automated weekly defrag schedule.
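A weekly schedule like that can be set up with Windows' stock tools. As a sketch, here is a small helper that builds the `schtasks` command line to register a weekly `defrag.exe` run; the drive letter, task name, day, and time are placeholders to adjust, and flag spellings should be checked against your Windows version:

```python
# Sketch: build a Windows schtasks command for a weekly defrag run.
# schtasks and defrag.exe are the stock Windows tools; the task name,
# day, and start time below are illustrative assumptions.
def weekly_defrag_cmd(drive="D:", day="SUN", time="03:00"):
    return (f'schtasks /Create /TN "WeeklyDefrag_{drive[0]}" '
            f'/TR "defrag.exe {drive}" /SC WEEKLY /D {day} /ST {time}')

# Print the command rather than running it, so it can be reviewed first.
print(weekly_defrag_cmd("E:"))
```

Running the printed command in an elevated prompt (scheduled tasks that touch whole drives generally need admin rights) registers the job.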
 
I'm not an expert on hard drives by any means, but I don't think a two-year-old drive is at that much risk of failure. Also, you stated that the drives run 24x7, but it sounds like while they're powered up they aren't actually being read from or written to 24x7.

Of course this is anecdotal, but I've defragged hard drives that are 4-6 years old without any problems. If you're that concerned about it, perhaps it would take less of a toll on your drive if you ran a defrag analysis, found the largest file or two that are severely fragmented, copied them to another drive, and then deleted them off the original drive. This might do two things: it would access those fragmented files when the drive is nice and cool, and it would create a lot of free space for defrag to work in.
 
Hi,

I have several 500GB/1TB/1.5TB hard drives that are approx 70-80% full, and they've not been defragmented for months.

The drives run 24/7 and some are over 2 years old. I ran a defrag on one of the 1TB drives and it took nearly 8.5 hours.

I don't want to risk over-stressing the drives and having them fail. Is this possible?
If the drive is more than 50% full, defragmentation takes a very long time, and the closer to 100%, the longer. Wear is possible, as defragmentation requires a lot of reading and writing, but no data is lost during a defrag unless the drive had bad sectors to begin with. That means: if your drive is about to die, don't defrag it. However, routinely reading and writing through a fragmented drive causes far more wear than defragmenting it.

6-8 hours is normal if the drive is 80% full. Before the defragment begins, the system always checks for errors first; don't skip that. If the data is important, back it up.
 
Thanks for all the advice. The drives are all desktop drives using NTFS, and about 80% of the content is movie/TV show rips and TV show recordings.

Since, from what you've told me, not defragging them means the drives' internals are under needless stress, I'm going to take the risk and defrag the newest drives (those under 12-18 months old).

For the other drives, I think I'll buy a couple of new 1.5TB drives and consolidate the older drives onto them.

I just hope the older drives can hold out and get through the data transfer without failing.

BTW - I'm definitely setting up an automated weekly defrag schedule.

Your posts have paranoia written all over them. Quit being a pansy and click defrag. Try Auslogics Disk Defrag. It's much faster than Windows Defrag.
 
Defragging is largely pointless, and you shouldn't worry about it regardless. The chances of a defrag working a drive to death are extremely low; the MTBF of modern drives is long enough that if one does die during a defrag, it was probably going to anyway.
 
Defragmenting can cause lower performance due to loss of file coherency. Files that 'belong' to each other may become more scattered, forcing the HDD to seek more. So even without any fragmented files, your performance can suffer after a defrag.

It must be said that NTFS fragments very quickly, so the advantage of defragmenting may outweigh the disadvantage.
 
Defragmenting can cause lower performance due to loss of file coherency. Files that 'belong' to each other may become more scattered, forcing the HDD to seek more. So even without any fragmented files, your performance can suffer after a defrag.

It must be said that NTFS fragments very quickly, so the advantage of defragmenting may outweigh the disadvantage.

NTFS may fragment quicker than other filesystems, but the effects of fragmentation are so small that it's largely irrelevant. Everyone reading this thread has probably already wasted more time thinking about defragging than they'll ever save by defragging.
 
For SSDs that's true, but HDDs have to seek. That's why booting and loading apps from an HDD take so long; seeking is 99% of the bottleneck. As long as you can keep the HDD from seeking, reducing it to a minimum, HDDs can yield very high speeds. That's why many optimizations exist to transform random I/O into (contiguous) sequential I/O.
 
For SSDs that's true, but HDDs have to seek. That's why booting and loading apps from an HDD take so long; seeking is 99% of the bottleneck. As long as you can keep the HDD from seeking, reducing it to a minimum, HDDs can yield very high speeds. That's why many optimizations exist to transform random I/O into (contiguous) sequential I/O.

Yes, hard disks have to seek, but with page sharing, prefetching, readahead, etc., the effect is usually very small. And given the virtual randomness of reads during most people's normal usage, mostly due to demand paging, having files be contiguous is little to no help performance-wise.
 
Well, if you have large files that you have recorded, like TV recordings that you stream, maybe they should be read-only. I'm just thinking out loud. This might keep them from being accidentally deleted or whatnot. Maybe some viruses would even be less likely to damage them. It has been a while since I've heard of those ornery virus types that rewrite the beginning and end of files, making them difficult to recognize or access.

Most large files are probably not very contiguous. It would make sense for NTFS to store them in a way that several read/write heads can read different parts of them; that way they could be read faster. This is probably one instance where an array might make sense. An array purposely splits a file over 2 or more drives to make it faster. I don't much like arrays, from the standpoint of needing to back them up and the problem that one of 5 drives failing is more likely than a single drive failing.
 
Cool, our weekly defragmenting conversation and debate. Now to wait on a new should I have a page file or not with all this ram debate and my week will be complete. 😀
 
It would make sense for NTFS to store them in a way that several read/write heads can read different parts of them; that way they could be read faster. This is probably one instance where an array might make sense. An array purposely splits a file over 2 or more drives to make it faster. I don't much like arrays, from the standpoint of needing to back them up and the problem that one of 5 drives failing is more likely than a single drive failing.


An HDD can only read or write at any given time, and only one head can be in use at a time. The drive switches back and forth between heads rapidly.

Data is typically stored in a "serpentine" pattern on the platter surfaces, changing between heads every so often. Even a few-MB file will likely end up on multiple surfaces (depending on the manufacturer, firmware, etc.).
 
When I'm dumping two or three items at a time, I can go back to an 80GB file and see it has 6,000-10,000 fragments; if I run the three jobs one at a time, it's about 20-30 fragments (because the drive has other data on it). It certainly takes a few more pointers to deal with 10K fragments than 20-30, eh? If I had 3 partitions I suppose this might not be a problem, which is not a bad idea 🙂

Make sure you align your partitions; it helps RAID just as much as it helps SSDs.
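One way to avoid that kind of concurrent-write fragmentation is to preallocate each output file to its final size before writing, so the filesystem can reserve one run up front instead of growing the file in small extents while other jobs write at the same time. A minimal sketch, with an illustrative size; note that whether `truncate` actually reserves contiguous blocks depends on the filesystem (on Linux, `os.posix_fallocate` is the stronger guarantee):

```python
# Sketch: preallocate a file to its final size before filling it in, so
# concurrent writers are less likely to interleave their extents.
# The filename and 64 MB size are placeholders for illustration.
import os

def preallocate(path, size_bytes):
    with open(path, "wb") as f:
        f.truncate(size_bytes)   # extend to full size up front

preallocate("capture.tmp", 64 * 2**20)   # reserve 64 MB
print(os.path.getsize("capture.tmp"))    # prints 67108864
```

After preallocating, the writer seeks to the right offset and overwrites the reserved region instead of appending.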
 
When I'm dumping two or three items at a time, I can go back to an 80GB file and see it has 6,000-10,000 fragments; if I run the three jobs one at a time, it's about 20-30 fragments (because the drive has other data on it). It certainly takes a few more pointers to deal with 10K fragments than 20-30, eh? If I had 3 partitions I suppose this might not be a problem, which is not a bad idea 🙂

Make sure you align your partitions; it helps RAID just as much as it helps SSDs.

Why are you worried about how many pointers a file requires? That's minutiae that no one but filesystem developers should be thinking about. And more partitions are usually a bad thing; you trade away flexibility for virtually no gain.
 
NTFS does get sketchy with memory leaks at very large sizes, mostly on 32-bit systems. If you play around with 100-500GB files you will see what I mean. It's a very well-known issue affecting XP through Server 2008.
 