ElenaP
Member
Nothinman,
In practice, things get damn slow when a file has 10,000 fragments and you happen to need a full read. It doesn't hurt overall performance much, but certain operations (like a full-text search through a large email database) make you wonder "what is it doing now?". If you need a copy of such a file, read speed can drop to about one megabyte per second, sometimes even less. In general I'd agree that fragmentation is overblown, but this specific scenario is really bad.
Consider that the average rotational latency is half a revolution, so reading 10,000 fragments on a 5400 RPM drive costs about (1/5400 min)/2 × 10,000 ≈ 0.93 min, roughly a minute, in rotational delay alone.
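Quick back-of-the-envelope check of that number (just a sketch: it counts only rotational latency and ignores seek and transfer time):

```python
# Rough estimate of rotational delay when reading a heavily fragmented file.
# Assumes each fragment costs one average rotational latency (half a revolution);
# seek time and transfer time are deliberately ignored.

rpm = 5400
fragments = 10_000

rev_time_s = 60.0 / rpm             # one full revolution: ~11.1 ms
avg_rot_latency_s = rev_time_s / 2  # average wait: half a revolution, ~5.6 ms

total_s = fragments * avg_rot_latency_s
print(f"~{total_s:.0f} s of rotational delay")  # ~56 s, i.e. roughly a minute
```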
taltamir,
NTFS compresses data in compression units of 16 clusters (64 KB with the default 4 KB cluster size). If compressing a unit can't save at least one cluster, that unit is stored uncompressed. A compressed file can therefore have compressed and plain units alternating on disk.
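Roughly, the per-unit decision looks like this (an illustrative sketch only, not NTFS internals: zlib stands in for LZNT1, and all the names are my own):

```python
# Illustrative sketch of an NTFS-style per-unit compression decision.
# Not real NTFS code: zlib stands in for LZNT1, names are made up.
import zlib

CLUSTER_SIZE = 4 * 1024               # default NTFS cluster size
COMPRESSION_UNIT = 16 * CLUSTER_SIZE  # 16 clusters = 64 KB

def store_unit(unit: bytes) -> tuple[bytes, bool]:
    """Return (data_to_write, is_compressed) for one 64 KB compression unit."""
    compressed = zlib.compress(unit)
    # Round both sizes up to whole clusters.
    clusters_compressed = -(-len(compressed) // CLUSTER_SIZE)
    clusters_plain = -(-len(unit) // CLUSTER_SIZE)
    # Store compressed only if it saves at least one full cluster.
    if clusters_compressed <= clusters_plain - 1:
        return compressed, True
    return unit, False
```

Feed a file through that 64 KB at a time and some units come back compressed while others stay plain, which is exactly the mixed layout described above.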
