So I've been thinking: we all know that defragmenting an SSD is so write-heavy that it should be avoided, because the extra writes reduce the SSD's lifetime. But I keep wondering what the impact of a fragmented NTFS file system is on:

1) the persistent remapping data structures (the controller's logical-to-physical map)
2) the clean-up algorithms (garbage collection)

My guess is that, for any controller out there, it is always better to have large, sequentially allocated files (sequential in logical LBAs, i.e. file-system placement, not physical NAND placement). This should help because the remapping metadata for a contiguous LBA range could be coalesced by the garbage-collection routines into a few large extents, which is impossible if the file is fragmented (see the toy sketch at the end of this post).

So, if a perfectly defragmented file system really does shrink the remapping metadata, that would potentially bring at least three other benefits:

1) faster look-ups when accessing data blocks, and easier read-ahead for reads and buffering for writes
2) more efficient TRIM handling, since most unallocated areas would also be contiguous and could be trimmed as a few large ranges
3) faster wear-leveling/garbage collection, thanks to there being less metadata to walk and rewrite

To conclude, I'm arguing that a perfectly defragmented file system on an SSD would be consistently faster for all operations and might even improve NAND longevity in the long run. So, maybe it is worth erasing the SSD once in a while and copying all the files back from a backup, to get perfect file placement?
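To make the metadata argument concrete, here is a minimal Python sketch of the two kinds of coalescing I have in mind. Everything in it is an assumption for the sake of illustration (a hypothetical extent-based logical-to-physical map with made-up addresses, page granularity, and a greedy merge rule); real FTL internals are proprietary and surely more complex. The point is only to show how sequential LBA allocation shrinks the number of mapping entries and TRIM ranges:

```python
# Toy model, not any real controller's design: page-granular mapping,
# made-up addresses, and a simple greedy coalescing rule.

def build_extents(lba_to_pba):
    """Coalesce a per-page LBA -> PBA map into (lba_start, pba_start, length)
    extents. Two pages merge only when BOTH their logical and physical
    addresses are contiguous, which is why sequential LBA allocation gives
    the coalescing a chance in the first place."""
    extents = []
    for lba in sorted(lba_to_pba):
        pba = lba_to_pba[lba]
        if extents:
            lba0, pba0, n = extents[-1]
            if lba0 + n == lba and pba0 + n == pba:
                extents[-1] = (lba0, pba0, n + 1)
                continue
        extents.append((lba, pba, 1))
    return extents

def coalesce_free_ranges(free_lbas):
    """Merge free LBAs into (start, length) ranges, roughly what a TRIM
    command carries. Contiguous free space -> fewer, larger ranges."""
    ranges = []
    for lba in sorted(free_lbas):
        if ranges and ranges[-1][0] + ranges[-1][1] == lba:
            ranges[-1] = (ranges[-1][0], ranges[-1][1] + 1)
        else:
            ranges.append((lba, 1))
    return ranges

# An 8-page file written sequentially: one extent covers the whole file.
sequential = {100 + i: 500 + i for i in range(8)}
print(build_extents(sequential))   # [(100, 500, 8)]

# Same 8 pages split into four 2-page fragments: four extents, 4x the metadata.
fragmented = {100: 500, 101: 501, 300: 502, 301: 503,
              550: 504, 551: 505, 900: 506, 901: 507}
print(build_extents(fragmented))   # [(100, 500, 2), (300, 502, 2), ...]

# Free space after defrag vs. scattered free space: one TRIM range vs. four.
print(coalesce_free_ranges(range(1000, 1032)))             # [(1000, 32)]
print(coalesce_free_ranges([7, 8, 42, 43, 99, 100, 251]))  # four ranges
```

Running it, the sequential file collapses to a single mapping extent and the contiguous free area to a single TRIM range, while the fragmented versions need one entry per fragment; that factor is exactly the metadata reduction I'm speculating about.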