Mule,
<<You guys sure have alot of time to waste on the holidays!>>
Isn't that the definition of a holiday?
Remnant2,
<<Having to boot to safe mode and switch off virtual memory to defrag a single partition is a serious pain in the ass.>>
Well, call it what you like, but if you want to see the most benefit from defragmentation, not to mention optimize your permanent swap file, you'll have to do it anyway, partitioned or not. So it's a moot point.
<<With multiple partitions, I can defrag without any special worries, since I know windows won't try to write to the drive I'm defragging>>
Not if the one you're defragging is C: (and since partitioners seem to be obsessed with defragging, that's what they'll want to defrag the most.)
<<A defrag of extended partitions has no problem with this; a single partition often does, as you cannot get windows to completely quiet down its virtual mem swapping.>>
I don't know what's wrong with your system (perhaps too many background apps running) but the vast majority of Windows machines can and do get through a defrag of their OS partition in reasonable time, regardless of its size. If you can't even complete a defrag, you need to take a serious look at your system and figure out what's constantly thrashing the disk when it shouldn't be, because that's not normal.
Pariah,
<<Your formula can potentially be correct if by some miracle you find a drive that has an average of half of every cluster filled, which you will never find in the real world.>>
OK, listen up, I'm going to try to explain this from the ground up. Please, please, try to wrap your mind around it as I don't feel like explaining it again. First, you have to understand how any random access file system (including FAT32 and NTFS) stores a file using clusters. The procedure is pretty simple. Take this analogy:
Imagine pouring water from a jug into many cups. No cup can hold all the water, but if you have enough cups, you can do it. So you pour the water into the first cup. Now, if the jug doesn't have much water in it, you won't even fill up the first cup, leaving some wasted space in that cup. Fine. But usually, your water will require dozens of cups. So you fill up each cup to the brim, until the jug is emptied. More often than not, the water will finish without filling up a cup perfectly, leaving some empty, wasted space, but only in the last cup. All the other cups are perfectly full; it's just the last cup that has some empty space.
Now, how much space will the average jug of water waste? Half a cup. Why? Probability. Probability dictates that the last cup will sometimes be almost empty and sometimes be almost full and sometimes be half full (half empty?), but on average, the water from the jug will wind up filling the last cup halfway. It's a basic mathematical principle, the same one that underlies everything from coin tosses to quantum theory.
Now, to apply this to cluster slack. Imagine the water in the jug is your file data, and the cups are clusters. Windows fills up the first cluster completely, fills up the second completely, and so on, until it runs out of file data. At that point, the last cluster is filled with the remaining data. Sometimes it will be mostly full and sometimes it will be hardly full. But, again, probability dictates that the last cluster, on average, will be half full and half wasted. Therefore, the average file will waste half a cluster.
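If you don't want to take the probability argument on faith, you can check it yourself with a quick simulation. Here's a rough sketch (the even spread of file sizes and the 4 KB cluster size are illustrative assumptions, not a claim about any particular drive):

import random

CLUSTER = 4096           # bytes per cluster -- e.g. a small FAT32 partition
FILES = 100000           # number of simulated files

total_slack = 0
for _ in range(FILES):
    size = random.randint(1, 2000000)                        # file size, spread evenly up to ~2 MB
    allocated = ((size + CLUSTER - 1) // CLUSTER) * CLUSTER  # clusters get allocated whole
    total_slack += allocated - size                          # the waste lives only in the last cluster

print(total_slack / FILES)  # comes out very close to CLUSTER / 2, i.e. about 2048 bytes

Run it a few times and the average slack per file lands right around half a cluster, exactly as the cup analogy says.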
So my formula is nothing but common sense. It simply says that if you know your cluster size and how many files you have, you can give a very reasonable estimate of how much space you waste:
Space Wasted = (Cluster Size / 2) * Number of Files
Or we can assume that the Number of Files is equal to the Data Size divided by the Average File Size, yielding
Space Wasted = (Cluster Size / 2) * (Data Size / Average File Size)
Ask anyone who knows anything about hard disk structures and file systems and they'll confirm that this formula is statistically sound.
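To put some numbers on it (purely as an illustration, not anyone's actual drive): a partition with 32 KB clusters -- typical for a big FAT32 volume -- holding 30,000 files would waste roughly (32 KB / 2) * 30,000 = 480,000 KB, or about 470 MB. On a 60 GB drive that's well under 1% of the space.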
<<The formula I posted calculates the exact amount of slack under any condition without error. . . The "real" equation to calculate wasted space is: (cluster size - average file size per cluster) * number of files>>
Oh really? Then show us. Plug some numbers into that formula and let's see it calculate "the exact amount of slack under any condition without error."
Look, no formula can do that. It's literally impossible. You can't calculate *exact* slack waste without knowing the size of every single file on the drive. You can only estimate, and your formula is not even a good estimate. In fact, it doesn't make sense at all. I don't care where you got it (though, typical for this kind of "accepted wisdom", the hole-in-the-wall source claims the email address of the formula's author is me@myself.com, credible indeed); its result is completely useless in every way. Let me explain why:
"Average file size per cluster" is meaningless. There is no such thing. Now, Average File Size is a significant figure. Cluster Size is a useful figure. But "average file size per cluster" doesn't refer to anything at all. That immediately destroys your forumla.
So listen to BoberFett. You haven't the foggiest idea what you're talking about.
Radboy,
<<it's futile to convince another person what works best for *them* if they weighed the pro's and con's for themselves .. & have plenty of pwersonal experience w/ hard drives. Anybody can say what works best for them, and they can explain their reasoning to the newbie, but no more.>>
Yes of course, but the problem is, the "reasoning" of the partitioning crowd doesn't add up. They cannot show any tangible benefit from partitioning. Every one of their pro-partitioning reasons has been shot down:
- Defrag and Scandisk times are irrelevant because an entire drive still takes just as long to process as the same drive split into many pieces, and more importantly because Defrag is a walk-away job done only twice a month during normal system downtime.
- Imaging is no easier with multiple partitions because a restored image of an initial clean OS partition will necessitate reinstallation of all applications, drivers, and settings added since the image was made, meaning that the only time the image actually saves is the time needed to copy in the OS itself (which is really no slower when done by its own SETUP program) and the person's data files (which must always be backed up externally for proper precaution, making a data backup partition redundant).
- Cluster slack is no longer an issue. With FAT32 and the relatively large digital media files that fill up today's large hard drives, it typically hovers below 2% for normal mass storage -- insignificant (see the quick arithmetic after this list). This has been proven both theoretically and by an informal survey in this thread.
- File organization is not assisted by multiple volumes and a slew of drive letters. In fact, it is hindered, because the best solution is a simple nested folder tree with descriptive names, which never needs to be FDISK'ed or Partition Magic'ed at all. Just drag and drop.
- Performance gains from putting your OS and swap file near the beginning of the drive while exiling your data and applications to the end are dubious at best and have never been proven by any sort of real world benchmark. In fact, they could actually decrease performance by forcing the drive to seek back and forth much farther across the disk surface during normal multitasking than what would be necessary with a typical single-partition configuration.
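To back that 2% figure up with some purely illustrative numbers (a made-up but typical drive, not anyone's actual setup): take a FAT32 volume with 32 KB clusters holding 40 GB of data at an average file size of 1 MB. The formula above gives (32 KB / 2) * (40,960 MB / 1 MB) = 40,960 * 16 KB = about 640 MB of slack, roughly 1.6% of the data. And since the average file on a drive full of MP3s and video is a lot bigger than 1 MB, the real figure usually comes out even lower.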
<<If I'm doing it wrong, I really wanna know>>
The problem is not that you're doing it wrong, it's that you're doing it at all. It's like asking whether you're using the right technique to scrub the floor with a toothbrush -- you might be great with a toothbrush, but why not grab a mop?
Modus