Large Cluster Size

isaacmacdonald

Platinum Member
Jun 7, 2002
2,820
0
0
As a result of heavy P2P traffic (about 10 GB a day) my storage drive is ridiculously fragmented. I usually run Diskeeper a couple of times a week, but can rarely stay on top of it (trying to cut the average number of file fragments down below 1,000). Anyway, in my quest for less fragmentation, I came across an article about cluster size that said the rule of thumb was to pick a cluster size slightly less than your average file size. Sadly, XP seems to limit cluster size to 64 KB.
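In case it helps, here's the rough sketch I'd use to estimate the average file size on the drive before picking a cluster size (Python; the D:\downloads path is just a placeholder for wherever the P2P files land):

```python
import os

# Walk a directory tree and report the average file size, so the
# "cluster size vs. average file size" rule of thumb can be applied.
ROOT = r"D:\downloads"  # placeholder path, not my actual layout

total_bytes = 0
file_count = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            total_bytes += os.path.getsize(os.path.join(dirpath, name))
            file_count += 1
        except OSError:
            pass  # skip files that vanish or can't be read mid-walk

if file_count:
    print(f"{file_count} files, average size {total_bytes / file_count / 1024:.1f} KB")
```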

Can anyone explain exactly what the performance gains/limitations are of cluster size variations?
 

Buddha Bart

Diamond Member
Oct 11, 1999
3,064
0
0
If your file fits in one (64K) cluster, then only one cluster has to be read to load it.
If your file fits in two (32K) clusters, then both have to be read.

If the two 32K clusters are back-to-back, then theoretically the performance will be identical.

However, the very fact that you've broken it up means the clusters won't necessarily be back-to-back, which hurts performance (fragmentation).
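To put numbers on it, here's a quick sketch (Python; the 40 KB file size is just an example) of how many clusters have to be read for one file:

```python
import math

FILE_SIZE = 40 * 1024  # example: a 40 KB file

for cluster in (32 * 1024, 64 * 1024):
    needed = math.ceil(FILE_SIZE / cluster)
    print(f"{cluster // 1024} KB clusters: {needed} cluster(s) to read")

# 32 KB clusters -> 2 clusters (two chances to end up in different places)
# 64 KB clusters -> 1 cluster (a single cluster is contiguous by definition)
```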

The disadvantage of larger cluster sizes is that you waste hard drive space. Theoretically memory and disk I/O too, but with paging/caching/etc. that's not really going to be noticeable unless you had a gigantic cluster size (which it seems your OS is blocking you from doing).
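And a rough sketch of the slack-space tradeoff (the file sizes below are made-up examples, not measured from anyone's drive):

```python
import math

# Made-up example file sizes: one big download plus a few small files.
file_sizes = [700 * 1024 * 1024, 4 * 1024, 150 * 1024, 64 * 1024 + 1]

for cluster in (4 * 1024, 32 * 1024, 64 * 1024):
    # Each file occupies a whole number of clusters; the leftover is slack.
    slack = sum(math.ceil(size / cluster) * cluster - size for size in file_sizes)
    print(f"{cluster // 1024:>2} KB clusters: {slack / 1024:.0f} KB wasted as slack")
```

With mostly big files the slack is tiny relative to the files themselves, which is why a larger cluster only really hurts when the drive is full of small files.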

bart
 

isaacmacdonald

Platinum Member
Jun 7, 2002
2,820
0
0
Ohh, I thought the HD had to read that MFT table thing every time it wanted to know where a cluster was... so if there were two 32K clusters it would read one, then check where the other was and read that one. Perhaps the info in the table is cached.