Why does defragging reduce drive life?

wjgollatz

Senior member
Oct 1, 2004
372
0
0
I have read a lot of references saying that repeated defragging of drives can reduce their life expectancy. How is that different from scanning your hard drives for viruses and such? I know that defragging involves writing, but isn't one also scanning much more often than defragging?
 

Elixer

Lifer
May 7, 2002
10,371
762
126
Originally posted by: wjgollatz
I have read a lot of references saying that repeated defragging of drives can reduce their life expectancy. How is that different from scanning your hard drives for viruses and such? I know that defragging involves writing, but isn't one also scanning much more often than defragging?

Turning on the HD lowers the life expectancy.

As for your Q, I guess it depends. Some people say the more stress the heads are put under, the shorter their life.
Scanning is read-only, while defragging reads a file and then writes it to a new location, so it takes multiple trips to defrag a file. Then again, if a file is fragmented, scanning also makes multiple trips to read all of it.

However, if you believe the manufacturers, then it shouldn't be that much of an issue in either case; just look at the MTBF 'rating'...

Clear as mud right? ;)
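
Here's a rough back-of-the-envelope sketch of the difference; every number below is a made-up assumption, just to show the shape of it:

# Rough I/O-volume comparison (every number here is an assumption).
used_gb = 300           # assumed amount of data on the drive
moved_share = 0.4       # assumed fraction a defrag pass relocates

scan_io_gb = used_gb                              # scan: read everything once
defrag_io_gb = used_gb + used_gb * moved_share    # defrag: read everything,
                                                  # then rewrite the moved part

print(f"virus scan : ~{scan_io_gb:.0f} GB read")
print(f"defrag pass: ~{defrag_io_gb:.0f} GB read + written")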
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Technically, ANY use decreases the life of ANY component... even antivirus scanning... it's just that the read-only activity of antivirus scanning is insignificant, especially compared to the read-erase-write activity of a defrag. (Some consider that an insignificant loss as well... it depends on how often you defrag, too.)
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
... but if your drive is defragmented, less stress will be put on it when accessing files

You can't just say that; you need to know the usage patterns of the filesystem before making any claims.
 

ElBurro

Member
Feb 27, 2009
56
0
0
While any use of a drive will decrease its life, what you may actually be thinking of is defragging an SSD, which decreases its life because each cell can only take a finite number of writes. They are kind of like rechargeable batteries: once you've recharged them so many times, they can no longer hold a charge. Also, since seek time is practically a non-factor on SSDs, defragmenting them is completely useless.
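
The battery analogy can be put in crude numbers; everything below is an assumption, and real drives also do wear leveling and over-provisioning, which changes the picture:

# Crude SSD write-endurance model (all numbers are assumptions).
pe_cycles = 10_000        # assumed program/erase cycles per cell
capacity_gb = 64          # drive size
defrag_writes_gb = 20     # assumed data a defrag pass rewrites

write_budget_gb = pe_cycles * capacity_gb   # total writes the cells can take
share = 100 * defrag_writes_gb / write_budget_gb
print(f"each pass spends {share:.4f}% of the write budget for zero seek benefit")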
 

hanspeter

Member
Nov 5, 2008
157
0
76
Originally posted by: Nothinman
You can't just say that; you need to know the usage patterns of the filesystem before making any claims.
So it was wrong to take for granted that the talk was about normal hard drives? And of course, any defragmenting done should be optimized for whatever system you run it on.

Originally posted by: ElBurro
Also, since seek time is practically a non-factor on SSDs, defragmenting them is completely useless.
The uselessness also comes from the fact that the OS doesn't know the physical mapping of sectors; the SSD's controller remaps them internally.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
So it was wrong to take for granted that the talk was about normal hard drives? And of course, any defragmenting done should be optimized for whatever system you run it on.

No, it was wrong to take for granted the usage patterns of the filesystem and drive. There's no guarantee that having all of the files contiguous is the optimal layout for every usage pattern. Hell, it's not even optimal for normal usage because of the way data is paged in from disk.
 

Krynj

Platinum Member
Jun 21, 2006
2,816
8
81
I used to have an 80GB drive that I'd defrag every single day. I'd had it for a few years, and a few weeks after I started the intense defragging, it was dead.

Ever since, I hardly ever defrag.
 

hanspeter

Member
Nov 5, 2008
157
0
76
Originally posted by: Nothinman
No, it was wrong to take for granted the usage patterns of the filesystem and drive. There's no guarantee that having all of the files contiguous is the optimal layout for every usage pattern. Hell, it's not even optimal for normal usage because of the way data is paged in from disk.

If you are talking about the page file, then normal defragmentation won't affect that much anyway, since the file is fragmented internally.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
Originally posted by: Krynj
I used to have an 80GB drive that I'd defrag every single day. I'd had it for a few years, and a few weeks after I started the intense defragging, it was dead.

Ever since, I hardly ever defrag.
I don't see how you can correlate the two unless you know what the HD died from. It could have been wear & tear on the motor, heads, spindle, circuitry, or something else. I have seen HDs die in under 1 week of use, and others still going strong after 7 years of heavy use.

It is fine to defrag when you start to notice longer & longer access times. I don't think I would do it every day though, unless I am writing lots of files. But if that were the case, I would use a better file system, if that were an option.


 

Krynj

Platinum Member
Jun 21, 2006
2,816
8
81
Well, the drive worked for years; then I decided I was going to start defragging it daily. A few weeks after that, it was dead.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
If you are talking about the page file, then normal defragmentation won't affect that much anyway, since the file is fragmented internally.

I'm not; the pagefile is only one small part of the total paging going on while a system is being used.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Defragmentation shouldn't reduce the life of a drive in any significant way that I can see.
I leave my HDs spinning 24/7, and they last a good long time.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: hanspeter
... but if your drive is defragmented, less stress will be put on it when accessing files

Writing puts significantly more stress on the drive than reading. And defragging saves you from a little reading by reading the entire drive, then erasing and writing all the data back to the drive at different locations.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Elixer
Originally posted by: Krynj
I used to have an 80GB drive that I'd defrag every single day. I'd had it for a few years, and a few weeks after I started the intense defragging, it was dead.

Ever since, I hardly ever defrag.
I don't see how you can correlate the two unless you know what the HD died from. It could have been wear & tear on the motor, heads, spindle, circuitry, or something else. I have seen HDs die in under 1 week of use, and others still going strong after 7 years of heavy use.

It is fine to defrag when you start to notice longer & longer access times. I don't think I would do it every day though, unless I am writing lots of files. But if that were the case, I would use a better file system, if that were an option.

The whole point is that there is significant wear and tear going on during defragging. You read every single byte and rewrite a major portion of it back to the drive. You do weeks' worth of "work" in a few hours. And you do it above and beyond the regular work that you are already doing.
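
To put rough numbers on "weeks' worth of work" (these are assumptions for illustration, not measurements):

# Illustrative arithmetic only; substitute your own numbers.
daily_io_gb = 5           # assumed normal daily read+write volume
defrag_read_gb = 300      # a full pass reads everything on the drive
defrag_write_gb = 120     # assumed portion rewritten to new locations

days = (defrag_read_gb + defrag_write_gb) / daily_io_gb
print(f"one defrag pass ~ {days:.0f} days of normal I/O")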
 

hanspeter

Member
Nov 5, 2008
157
0
76
Originally posted by: taltamir
Writing puts significantly more stress on the drive than reading.

I would very much like to see a source on that.

Originally posted by: Nothinman
I'm not; the pagefile is only one small part of the total paging going on while a system is being used.

I am aware of that, thus my "if". Paging from whatever file has been mapped is no different from just reading it normally. If all the sectors lie contiguously, less seeking will be needed.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I am aware of that, thus my "if". Paging from whatever file has been mapped is no different from just reading it normally. If all the sectors lie contiguously, less seeking will be needed.

And pretty much no files are paged into memory in their entirety in one fell swoop, so it's a moot point.
 

hanspeter

Member
Nov 5, 2008
157
0
76
It is not as random as you make it out to be.

Think about it: the real speed-killer in such hard drives is the seek time. The OS designers know this. So jumping back and forth, reading a cluster here and there, is not very efficient.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
It is not as random as you make it out to be.

Sure it is. When a file is paged in, only the amount that was asked for plus a small amount (up to 64K IIRC) of readahead is actually read from disk. And since binaries have lots of dependencies in shared libraries, data files, registry access, etc., you typically end up jumping around between dozens of files.

Think about it: the real speed-killer in such harddrives is the seek time. The OS designers knows this. So jumping back and forth, reading a cluster here and there is not very efficient.

And neither is wasting memory by reading every file fully into memory just because you wanted 5 pages from it. It's a tradeoff they have to make, and demand paging only does a little bit of readahead. When lots of memory is available it's less of an issue, and if you're running Vista, SuperFetch will make it even less of an issue, but it's still an issue that isn't easily fixed.
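
A toy model of what this looks like on disk; the page size and the 64K readahead window are the assumptions carried over from above:

# Toy demand-paging model: only the faulting page plus a small
# readahead window is read, so loads from several files interleave.
PAGE = 4096
READAHEAD = 64 * 1024   # the "up to 64K" readahead mentioned above

def page_in(name, offset):
    # Simulate a page fault: read the touched page plus readahead.
    start = (offset // PAGE) * PAGE
    end = start + PAGE + READAHEAD
    print(f"{name}: read bytes {start}..{end} from disk")

# A process touching code spread across its binary and two DLLs
# (hypothetical file names, just for the demo):
for name, offset in [("app.exe", 0), ("dep1.dll", 8192),
                     ("app.exe", 500_000), ("dep2.dll", 0)]:
    page_in(name, offset)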
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Defragmenting, if done right, does not reduce the life of the hard drive. If I have a file that never changes, like the EXE of a program, and that file has 120 fragments, then defragmenting that file to make it contiguous is going to increase HD life. The heads can move to the start sectors of the file, move across to the end sectors, and it is done. Without defragmenting, the heads would have to move around the platter 120 times.
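
A quick bit of head-movement accounting for that example (the read frequency is an assumption, just to make the numbers concrete):

# Seek accounting for one never-changing, frequently read file.
fragments = 120          # the 120-fragment EXE from above
reads_per_year = 365     # assume the file is read once a day

seeks_fragmented = fragments * reads_per_year   # one seek per fragment, per read
seeks_contiguous = 1 * reads_per_year           # one seek, then a sequential sweep

print(f"fragmented: {seeks_fragmented:,} seeks/year")
print(f"contiguous: {seeks_contiguous:,} seeks/year")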

It is the people who think that they need to defragment every day and every file that are adding wear to the HD.


I recommend WinContig to people, as it lets you defragment just the folders that you use often and not every temp and junk file on the PC.
http://wincontig.mdtzone.it/en/
 

hanspeter

Member
Nov 5, 2008
157
0
76
Sure it is. When a file is paged in, only the amount that was asked for plus a small amount (up to 64K IIRC) of readahead is actually read from disk. And since binaries have lots of dependencies in shared libraries, data files, registry access, etc., you typically end up jumping around between dozens of files.

Have you actually checked which sections of an EXE, and how much of each, are actually loaded during initialization? And do you think that when loading the EXE and x number of DLLs, it goes like this?

loading cluster 1 from exe
loading cluster 1 from dll1
loading cluster 1 from dll2
loading cluster 2 from exe
loading cluster 2 from dll1
loading cluster 2 from dll2
etc...

And neither is wasting memory by reading every file fully into memory just because you wanted 5 pages from it. It's a tradeoff they have to make, and demand paging only does a little bit of readahead. When lots of memory is available it's less of an issue, and if you're running Vista, SuperFetch will make it even less of an issue, but it's still an issue that isn't easily fixed.

So you don't believe in concepts like "disk cache" and "SuperFetch"?
 

bryanl

Golden Member
Oct 15, 2006
1,157
8
81
Don't defrag your 20MB Seagate ST-225 every day, or the H driver chip for the head positioning system will burn out in less than a year. Promise?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
If I have a file that never changes, like the EXE of a program, and that file has 120 fragments, then defragmenting that file to make it contiguous is going to increase HD life. The heads can move to the start sectors of the file, move across to the end sectors, and it is done. Without defragmenting, the heads would have to move around the platter 120 times.

Except that's not how it works. When you load that binary, all of its dependencies are also mapped into the new process's address space and demand-paged into memory as different parts are executed. None of them are fully loaded into memory in one read.

Have you actually checked which sections of an EXE, and how much of each, are actually loaded during initialization? And do you think that when loading the EXE and x number of DLLs, it goes like this?

loading cluster 1 from exe
loading cluster 1 from dll1
loading cluster 1 from dll2
loading cluster 2 from exe
loading cluster 2 from dll1
loading cluster 2 from dll2
etc...

Kinda, but it's not that simple. The initial binary and its dependencies are mapped into the new process's address space, then the portions needed to set up initial data are loaded, then the main function is paged in and executed. As other functions and data from the binary or shared libraries are needed, they'll be paged into memory as necessary.
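
A minimal way to see this for yourself with a memory-mapped file (demo.bin is just a hypothetical scratch file created for the demo):

# Mapping a file does not read it all in; only the pages you
# actually touch get paged in from disk (plus OS readahead).
import mmap, os

with open("demo.bin", "wb") as f:        # scratch file for the demo
    f.write(os.urandom(1024 * 1024))     # 1 MB of data

with open("demo.bin", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_page = mm[:4096]   # faults in just this page (plus readahead)
    # the remaining ~250 pages stay on disk until they're touched
    mm.close()

os.remove("demo.bin")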

So you don't believe in concepts like "disk cache" and "SuperFetch"?

That's not what I said at all; I even said that they help.