How often do you defrag?

nweaver

Diamond Member
Jan 21, 2001
6,813
1
0
Originally posted by: spyordie007
Originally posted by: Nothinman
Norton's defrag is cool. It puts the swapfile at the head of the disk as one contiguous file so it doesn't get fragmented, and other files don't fragment around it.

That's counter-productive. The biggest performance killer with regard to disks is seek time; if you move the pagefile to the front of the disk while all of the files are in the middle, you cause extra seeks every time the pagefile must be accessed. Ideally you want the pagefile in the middle of your filesystem, because you want to keep the read heads in the same general area so that seek time is minimized.
And of course it would be even better to just buy more RAM so that you didn't need to make frequent trips to the paging file.... :roll:
I'm sorry, but *stability* improvements from defragging?

Pardon me for asking, but that is a pretty hefty claim! Do you have anything to back it up? Performance problems stemming from a heavily fragmented pagefile on a memory-starved system -- I can buy that, but *stability* problems?
Defragging shouldn't affect stability at all; anyone who states this is just spreading FUD.

I have seen this. Older box (Dell 6350? 4U server) running 2k3 Server and Exchange 2k3. The mail store dismounted and would not remount. My co-worker knows Exchange very well, and bet me lunch it was a fragmented PF. 1200-ish frags on the page file. A quick defrag and a trip to the Chinese place for lunch and it was back online.
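(Side note on the seek-time argument quoted above: here's a crude, purely illustrative toy model of the head-travel difference between a pagefile parked at the very front of the disk and one sitting in the middle of the data. All the numbers are made up.)

# Crude Monte Carlo toy model: average head travel to the pagefile when it
# sits at the very start of the disk vs. in the middle of the user data.
# Positions are fractions of the platter; the 0.2-0.8 "data band" is invented.
import random

def avg_seek(pagefile_pos, trials=100_000):
    total = 0.0
    for _ in range(trials):
        file_pos = random.uniform(0.2, 0.8)    # pretend the user's files live in this band
        total += abs(file_pos - pagefile_pos)  # head travel for one pagefile access
    return total / trials

print("pagefile at front :", round(avg_seek(0.0), 3))   # roughly 0.50
print("pagefile in middle:", round(avg_seek(0.5), 3))   # roughly 0.15

Roughly a 3x difference in average travel in this toy setup, which is the gist of the argument.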
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
I have seen this. Older box (Dell 6350? 4U server) running 2k3 Server and Exchange 2k3. The mail store dismounted and would not remount. My co-worker knows Exchange very well, and bet me lunch it was a fragmented PF. 1200-ish frags on the page file. A quick defrag and a trip to the Chinese place for lunch and it was back online.
This is more of an application issue than an OS issue. Besides, if you're getting a highly fragmented page file it means you should be starting with it larger so it isn't constantly expanding into additional drive area (or better yet, adding RAM).
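If you want to sanity-check whether a box is actually leaning on its page file before blaming fragmentation, a quick sketch along these lines works (it needs the third-party psutil package, and the 50% threshold is just my own rule of thumb):

# Quick look at RAM and page file pressure before blaming fragmentation.
# Requires the third-party "psutil" package (pip install psutil).
import psutil

mem = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM used : {mem.percent}% of {mem.total // 2**20} MB")
print(f"Swap used: {swap.percent}% of {swap.total // 2**20} MB")

if swap.percent > 50:
    print("Heavy page file use -- more RAM (or a larger fixed-size page file) would help.")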
 

eatmyshorts

Junior Member
Jul 13, 2005
4
0
0
Originally posted by: Nothinman
Linux doesn't really get fragmented, so yes, there is not really a point in defragmenting it

Sure it does; it's just that Linux is smart and the fragmentation has very little effect on performance in day-to-day use.
Well, very little actually. As far as I understand it, NTFS shoves any new programs, etc. right at the front of the drive, and if there is not enough space for the whole program, NTFS tells it to just take the next available space after that, splitting -fragmenting- the file. Whereas with Linux, the filesystem tells you to put that program wherever you can fit the entire thing, and if there isn't enough space, to take the next biggest free area. So if you have an 80GB (or whatever) HDD and 60GB is taken, not very much is likely to be fragmented. Or if you have a relatively new one, you won't be likely to have to defragment within the first year and a half, I would guess.

I could be wrong, but that's how I understand it. That's why Windows needs regular defrag.
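Here's a toy simulation of the difference I'm describing, hugely simplified (real filesystems are smarter than either of these strategies, and the gap layout is invented):

# Toy model of two allocation strategies on a disk with scattered free gaps.
# "grab_next_free" mimics "just take the next available space";
# "fit_whole_file" looks for a single gap the whole file fits in first.
# Hugely simplified -- real filesystems are far smarter than either of these.

# Free gaps on the fake disk: (start_block, length_in_blocks).
gaps = [(0, 4), (10, 2), (20, 8), (40, 16), (70, 3)]

def grab_next_free(size, free):
    """Fill gaps front-to-back, splitting the file across them."""
    pieces = []
    for start, length in sorted(free):
        if size == 0:
            break
        take = min(size, length)
        pieces.append((start, take))
        size -= take
    return pieces                      # each piece is one fragment

def fit_whole_file(size, free):
    """Prefer the smallest single gap the whole file fits in."""
    candidates = [g for g in free if g[1] >= size]
    if candidates:
        start, _ = min(candidates, key=lambda g: g[1])
        return [(start, size)]
    return grab_next_free(size, free)  # no gap is big enough, fall back to splitting

for file_size in (3, 7, 12):
    a = grab_next_free(file_size, gaps)
    b = fit_whole_file(file_size, gaps)
    print(f"{file_size}-block file: next-free -> {len(a)} fragment(s), "
          f"whole-fit -> {len(b)} fragment(s)")

In this toy run the "next available space" strategy splits the 7- and 12-block files across several gaps, while the "fit the whole thing" strategy keeps each of them in one piece because a big enough gap still exists.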
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
fragmenting- the file. Whereas with Linux, the filesystem tells you to put that program wherever you can fit the entire thing, and if there isn't enough space, to take the next

That sounds very impressive, especially considering that the OS has no knowledge of just how big that file is going to get. I.e., if your app starts writing a stream of data, how can the OS guess whether you're going to write 100K or 1GB? If it were a bad OS, it would cache the entire write operation until the application stops writing, but only a really bad OS would do that. (What if there's a power failure? What if the user has less than several GBs' worth of memory?)

Another approach would be to always start writing to the biggest block available. But that too leads to fragmentation. It could defrag on the fly, but then it would need to defrag even temporary files... Sounds like a magic bullet to me.

NTFS does come with some overhead, though. There's the journaling thing, but all filesystems these days have journaling, so it isn't a unique feature. (I trust the Linux filesystems you guys are using support journaling too.) Then there's Windows' 8.3 mechanism (which can be disabled, btw). By default every filename has an associated 8.3 filename. That can hamper performance quite a bit in directories containing lots of long filenames.
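If you're curious whether 8.3 name generation is turned on for your box, the switch lives in the registry; a quick Windows-only Python check, assuming the stock key location:

# Read the NTFS 8.3 short-name setting from the registry (Windows only).
# Assumes the stock key location -- adjust if your setup differs.
import winreg

key_path = r"SYSTEM\CurrentControlSet\Control\FileSystem"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    value, _ = winreg.QueryValueEx(key, "NtfsDisable8dot3NameCreation")

print("NtfsDisable8dot3NameCreation =", value)
print("8.3 name generation is", "enabled" if value == 0 else "disabled (or per-volume on newer Windows)")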

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: OptyMyStix
No doubt regular defrag is recommended, but you shouldn't neglect the importance of these Tips & Tricks of Computer Maintenance.

OptyMyStix

That is horrible.

5) Delete cache files to increase download speed

Regularly check Internet Explorer's history and cache files. Delete the cache files and clear the history. Deleting the cache files helps to speed up downloads, as there is more space available to store the temporary files.

:confused: The cache is there to speed up some internet actions.

7) Delete old .zip files

After unzipping files, users leave the zipped file on their computer. If you don't need it, delete it. You can save it on a CD-R or CD-RW disc.

Hard drive space is cheap.
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
Originally posted by: BikeDude
fragmenting- the file. Whereas with Linux, the filesystem tells you to put that program wherever you can fit the entire thing, and if there isn't enough space, to take the next

That sounds very impressive, especially considering that the OS has no knowledge of just how big that file is going to get. I.e., if your app starts writing a stream of data, how can the OS guess whether you're going to write 100K or 1GB? If it were a bad OS, it would cache the entire write operation until the application stops writing, but only a really bad OS would do that. (What if there's a power failure? What if the user has less than several GBs' worth of memory?)

Another approach would be to always start writing to the biggest block available. But that too leads to fragmentation. It could defrag on the fly, but then it would need to defrag even temporary files... Sounds like a magic bullet to me.

NTFS does come with some overhead, though. There's the journaling thing, but all filesystems these days have journaling, so it isn't a unique feature. (I trust the Linux filesystems you guys are using support journaling too.) Then there's Windows' 8.3 mechanism (which can be disabled, btw). By default every filename has an associated 8.3 filename. That can hamper performance quite a bit in directories containing lots of long filenames.



Well, in Linux it's a little more complicated than that. The old FAT file systems just put the file in the next spot they can find, and that causes fragmentation. Modern file systems (ReiserFS, ext2, ext3, XFS, NTFS to some extent) use algorithms to find the best spot to fit the file and allow it some room to grow. I've done some reading on how ReiserFS works, and I can tell you that it is a lot more complicated than just "stick the file here" like the old DOS/Win98 days. NTFS is light years better than FAT32, but even it still has some of the old faults. ReiserFS, XFS, ext2, etc. all seem to make better choices about where to put the files.

Although I suspect that is not really the main cause; I think some of the fragmentation problems Windows users have are due to their constant habit of defragging. I noticed this with my wife's PC. I had never defragged her box for the 2 years she had it. Then I sat down to do some work and found it slightly (18%) fragmented. (Mind you, she doesn't install and uninstall apps all the time, but she does copy DVDs, download music, burn CDs, play tons of games, etc.) So I ran Windows defrag. Problem solved. A few months later I checked it again, and it was 25% fragmented. Ever since my first defrag her system would grow to 18-25% rather quickly over a few months and then slow down again. I think this may be because NTFS allows room for files to grow and defrag jams them all tight to the front of the drive, causing the same old problems we had in the FAT32 days.

By comparison, my Linux server has never been defragged, and I'm at about 3% fragmentation. This box is nearing 3 years without a wipe (I just upgraded to the new Debian stable). It gets constantly used as temp storage for moving things between my friends and me, and for some time was even a mail server.

Another thing that may help keep Linux file systems from becoming majorly fragmented is our use of partitions. Most of my file system changes are in /home, but /home is its own partition, and the core of the OS sits on its own partitions as well. This means my constant moving of data around in my home directory has no effect on the system applications.

Just a few thoughts.
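If anyone wants to check their own numbers, filefrag from e2fsprogs reports how many extents each file sits in; a rough survey script along these lines (the /home default and the "more than one extent" cutoff are just my choices):

# Rough fragmentation survey for a directory tree on Linux, using the
# "filefrag" tool from e2fsprogs (its output looks like
# "/path/to/file: 3 extents found"). Run with enough privileges to read
# the files you care about.
import os
import re
import subprocess
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "/home"
total_files = 0
fragmented = 0

for dirpath, _dirs, files in os.walk(root):
    for name in files:
        path = os.path.join(dirpath, name)
        try:
            out = subprocess.run(["filefrag", path],
                                 capture_output=True, text=True, check=True).stdout
        except (subprocess.CalledProcessError, OSError):
            continue                      # unreadable file or filefrag missing
        match = re.search(r"(\d+) extents? found", out)
        if not match:
            continue
        total_files += 1
        if int(match.group(1)) > 1:
            fragmented += 1

if total_files:
    print(f"{fragmented}/{total_files} files are in more than one extent "
          f"({100.0 * fragmented / total_files:.1f}%)")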
 

RadSoftware

Junior Member
Jul 18, 2005
4
0
0
Once a month, unless I do a LOT of installing/uninstalling for whatever reason.

As a programmer, I sometimes create dozens of test installations in a day or so. When I'm finished, I defrag the mess.
 

OptyMyStix

Member
Jul 18, 2005
29
0
0
My Dear n0cmonkey,

The cache is also known as the Temporary Internet Files. The Temporary Internet Files (or cache) folder contains Web page content that is stored on your hard disk for quick viewing. This cache permits Internet Explorer or MSN Explorer to download only the content that has changed since you last viewed a Web page, instead of downloading all the content for a page every time it is displayed.

The Temporary Internet Files (cache) folder can use more disk space than specified, hence you must delete the cache files regularly to speed up downloads. If you don't believe me, my dear, you may refer to the following sites for more information that justifies my statement.

1) http://support.microsoft.com/default.aspx?scid=kb;en-us;260897

AND

2) http://support.microsoft.com/kb/301057/EN-US/

N.B. Both of the above links are to Microsoft's official website.

Now, regarding deleting .zip files: as you said, "Hard drive space is cheap." But it's not as cheap as you think, my dear, because many people still use old PCs with hardly 2 GB or 4 GB of disk space. And remember, nothing is cheap in this world; everything has its own value. You must have the ability to evaluate it.
Keep writing....

Thank you.

OptyMyStix.

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: OptyMyStix
My Dear n0cmonkey,

The cache is also known as the Temporary Internet Files. The Temporary Internet Files (or cache) folder contains Web page content that is stored on your hard disk for quick viewing. This cache permits Internet Explorer or MSN Explorer to download only the content that has changed since you last viewed a Web page, instead of downloading all the content for a page every time it is displayed.

The Temporary Internet Files (cache) folder can use more disk space than specified, hence you must delete the cache files regularly to speed up downloads. If you don't believe me, my dear, you may refer to the following sites for more information that justifies my statement.

1) http://support.microsoft.com/default.aspx?scid=kb;en-us;260897

AND

2) http://support.microsoft.com/kb/301057/EN-US/

N.B. Both of the above links are to Microsoft's official website.

Now, regarding deleting .zip files: as you said, "Hard drive space is cheap." But it's not as cheap as you think, my dear, because many people still use old PCs with hardly 2 GB or 4 GB of disk space. And remember, nothing is cheap in this world; everything has its own value. You must have the ability to evaluate it.
Keep writing....

Thank you.

OptyMyStix.

My dearest OptyMyStix,
Hard drives are cheap.
Love,
n0cmonkey

EDIT: Neither of those articles mentions how clearing out the cache can increase download speeds.
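For what it's worth, the mechanism those KB articles describe is the conditional request: the browser keeps a cached copy and asks the server whether the page has changed, and a "304 Not Modified" answer means nothing gets re-downloaded. A rough sketch (example.com is just a placeholder; any server that sends a Last-Modified header will do):

# Sketch of the conditional request a browser cache enables: with a cached
# copy and If-Modified-Since, the server can answer "304 Not Modified"
# instead of resending the body. example.com is only a placeholder URL.
import urllib.error
import urllib.request

url = "http://example.com/"

# First request: full download, note the Last-Modified header.
with urllib.request.urlopen(url) as resp:
    body = resp.read()
    last_modified = resp.headers.get("Last-Modified")
print(f"first fetch: {len(body)} bytes, Last-Modified = {last_modified}")
if last_modified is None:
    raise SystemExit("server sent no Last-Modified header; pick another URL")

# Second request: only ask for changes since then.
req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
try:
    with urllib.request.urlopen(req) as resp:
        print("content changed, re-downloaded", len(resp.read()), "bytes")
except urllib.error.HTTPError as err:
    if err.code == 304:
        print("304 Not Modified -- the cached copy is still good, nothing re-downloaded")
    else:
        raise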
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
Originally posted by: OptyMyStix
My Dear n0cmonkey,

If you are still not enlightened, please refer to yet another of Microsoft's official weblinks to clear your doubt. I'm ready to tour you through the entire Microsoft website to satiate your doubt.
http://support.microsoft.com/kb/263070/EN-US/

Regards,

OptyMyStix
This article does not state that clearing your cache will speed up your downloads, nor do any of the others you have posted. All they indicate is that pages will generally load faster out of the cache than by downloading them again.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: spyordie007
Originally posted by: OptyMyStix
My Dear n0cmonkey,

If you are still not enlightened, please refer to yet another of Microsoft's official weblinks to clear your doubt. I'm ready to tour you through the entire Microsoft website to satiate your doubt.
http://support.microsoft.com/kb/263070/EN-US/

Regards,

OptyMyStix
This article does not state that clearing your cache will speed up your downloads, nor do any of the others you have posted. All they indicate is that pages will generally load faster out of the cache than by downloading them again.

Exactly. :beer:
 

TechnoPro

Golden Member
Jul 10, 2003
1,727
0
76
A little anecdote that seems pertinent:

I was working with a home user a few weeks ago. The owner was complaining of sluggishness. I did my usual poking and prodding - no spyware, no viruses, few unnecessary startup entries. The computer was as old as dirt, but the client told me it had been okay at one point.

I discovered that spread between the 4 user accounts were more than half a million (500,000+) Temporary Internet Files. They had simply never been removed since the OS was reloaded, whenever that was.

You'd better believe that once they were gone and the HDD was defragged, the system proved more responsive. Yes, this was a smaller hard drive relative to the current generation. And I could not say with certainty whether purging the cache or the defrag process was the ultimate enhancer of speed, but the combination proved very effective for this old system.
 

Phoenix86

Lifer
May 21, 2003
14,644
10
81
Originally posted by: BikeDude
Originally posted by: Harvey
But over 90% of the world's desktops are running Windows, and the performance and stability improvements aren't just cosmetic.

I'm sorry, but *stability* improvements from defragging?

Pardon me for asking, but that is a pretty hefty claim! Do you have anything to back it up? Performance problems stemming from a heavily fragmented pagefile on a memory-starved system -- I can buy that, but *stability* problems?
I'll chime in here and give my experience: I have seen a simple defrag *prevent* errors, so it increased stability. This was due to a specific cause; I'm not saying running defrag will increase stability in general.

The systems I was supporting a few years back (NT4) had horribly low memory, and thus used the page file quite a bit. Over time the PF would become very fragmented and the systems would generate errors. Defragging the HDD would stop these errors. Eventually we worked out that removing the PF and re-adding it on a clean partition would solve it as well.

The end conclusion was that it was PF fragmentation that caused the instability.

The situation isn't common, especially these days with the standard amount of RAM being what it is (512MB-1GB).
 

Valkerie

Banned
May 28, 2005
1,148
0
0
Originally posted by: corkyg
Originally posted by: Harvey
I defrag my drive daily. More often if I make serious changes such as installing a new program.

Defragging often can cut the time it takes from hours to minutes, and it definitely cuts the chances of system errors. :cool:

Same here - it just takes a few minutes each night before shutting down, and it keeps things in good shape. I have done this for the past 19 years, and it has become a routine. Maybe that's why I only have to do a clean install when I get a new system or mobo? :)

The best tool I have found to do the job quickly, with a good, understandable GUI/metaphor, is PerfectDisk by Raxco (Canadian).

19 years.... Even when processors were less than 100 MHz!? I couldn't stand waiting that long for HDs to finish.

 

WildHorse

Diamond Member
Jun 29, 2003
5,006
0
0
I have 4 volumes & defrag the two with the OS & the data at least daily, usually more often.
The other 2 volumes are mirror backups & I rarely defrag those 2.
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
Originally posted by: Phoenix86
The systems I was supporting a few years back (NT4) had horribly low memory, and thus used the page file quite a bit. Over time the PF would become very fragmented and the systems would generate errors. Defragging the HDD would stop these errors. Eventually we worked out that removing the PF and re-adding it on a clean partition would solve it as well.

One look at Driver Verifier (verifier.exe) will tell you that a lack of system resources (memory) is one of the main culprits when it comes to crashes in device drivers (-> BSODs). NT4 especially was plagued by this, which is why Windows 2000 was touted as a much more stable platform in the first place; MS had extended their QA work to third parties and put lots of drivers through rigorous testing.

So sure, defragging the pagefile probably helped, but I very much doubt that the fragmentation was the root cause; I think you guys only managed to mask the problem (a faulty device driver).

A colleague of mine had stability problems until he pulled one of his DIMMs, thereby disabling dual-channel mode for his P4 (i865 motherboard). He is happy now, but do I think dual-channel is really to blame? No, he probably just masked the real culprit, whatever it was. :/ (I'm not going to conclude anything either way.)
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
Originally posted by: TechnoPro
I discovered that spread between the 4 user accounts were more than half a million (500,000+) Temporary Internet Files. They had simply never been removed since the OS was reloaded, whenever that was.

Windows' default (10% of available hard disk space used as cache or something?) is ridiculous.

AFAICT IE doesn't clean temp files continuously, but once every day or so. In any case, mine would sometimes stall for a good 5-10 seconds, presumably because it was cleaning those files (1GB+ worth). I reduced the cache size to 20MB and the problem went away.

So, to a certain degree, I'll buy into the "clean temp files" idea, but I never (repeat: never) manually delete that folder. There has never been a reason to, because after reducing the size it simply works. (on my three machines)