In most (all?) modern OSes, memory pages newly assigned to a process are zero-filled, both for performance (the kernel can map copy-on-write views of a single reserved zero page) and for security (you don't want to hand one process a physical page still holding data another process freed).
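You can see this directly on a POSIX system - just a quick sketch, assuming Linux/BSD-style mmap with MAP_ANONYMOUS:

[code]
/* Sketch: a freshly mapped anonymous page reads back as all zeros,
 * because the kernel backs it with the shared zero page until the
 * process actually writes to it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;  /* one typical page */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* Every byte of the new page is zero even though we never wrote to it. */
    for (size_t i = 0; i < len; i++) {
        if (p[i] != 0) {
            printf("non-zero byte at offset %zu\n", i);
            return EXIT_FAILURE;
        }
    }
    printf("entire page reads as zero\n");
    munmap(p, len);
    return EXIT_SUCCESS;
}
[/code]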
Newly created files (if created with a non-zero size) likewise read back as zeroes by default, just like newly allocated memory.
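Same idea for files - a small POSIX sketch (ftruncate on whatever filesystem is handy): grow an empty file and the new range reads back as zeros even though nothing was ever written; on most filesystems the zeros aren't even physically stored.

[code]
/* Sketch: extending a file exposes zero bytes; the filesystem must
 * return zeros for the newly allocated range (often via sparse
 * allocation, without writing anything to disk). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("zeroes.bin", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Grow the empty file to 1 MiB without writing any data ourselves. */
    if (ftruncate(fd, 1 << 20) != 0) { perror("ftruncate"); return 1; }

    unsigned char buf[4096];
    ssize_t n = pread(fd, buf, sizeof buf, 0);
    for (ssize_t i = 0; i < n; i++) {
        if (buf[i] != 0) { printf("unexpected non-zero byte\n"); return 1; }
    }
    printf("newly extended region reads back as zeros\n");
    close(fd);
    return 0;
}
[/code]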
A full format writes zeros to the bulk of the drive.
IMHO a good SSD controller should check the data in each cluster before writing it to see whether it is all zeros. If it is, the write should be treated like a TRIM: just remove the logical cluster from the logical -> physical mapping and skip the actual write. To keep compatibility, any read of a cluster trimmed this way should simply return all zeros instead of an error.
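Conceptually, something like this - just a sketch of the check I mean; the cluster size and the unmap/program helpers are made up for illustration, not taken from any real firmware:

[code]
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CLUSTER_SIZE 4096   /* assumed cluster size, purely for illustration */

/* Scan the incoming cluster, bailing out at the first non-zero byte,
 * so the common case (real data) only costs a few byte compares. */
static bool cluster_is_all_zero(const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (data[i] != 0)
            return false;
    return true;
}

/* Stand-ins for real firmware routines. */
static void unmap_cluster(uint64_t lba)
{ printf("LBA %llu: unmapped (implicit TRIM)\n", (unsigned long long)lba); }
static void program_flash(uint64_t lba)
{ printf("LBA %llu: written to flash\n", (unsigned long long)lba); }

/* Hypothetical write path: all-zero clusters are dropped from the
 * logical -> physical map (same effect as a TRIM) instead of written. */
static void handle_write(uint64_t lba, const uint8_t *data, size_t len)
{
    if (cluster_is_all_zero(data, len))
        unmap_cluster(lba);
    else
        program_flash(lba);
}

int main(void)
{
    uint8_t zeros[CLUSTER_SIZE] = {0};
    uint8_t data[CLUSTER_SIZE]  = {0};
    data[0] = 0xAB;   /* non-zero in the first byte: the check exits immediately */

    handle_write(100, zeros, sizeof zeros);
    handle_write(101, data,  sizeof data);
    return 0;
}
[/code]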
I realize this would use more SATA bandwidth than an explicit TRIM command, and it adds a little extra overhead for the controller to do the check - though the overhead really is tiny, since a cluster that isn't all zeroes will usually be caught within the first few bytes. But there would be several advantages:
1. Clusters that don't need to be written would be detected naturally in situations where the OS would not normally issue a TRIM command, e.g. when a new memory-mapped file is created, or when file pre-allocation is turned on in Azureus.
2. It becomes trivial to write an app that TRIMs all available space on an OS without explicit TRIM support, just by filling the drive with a file full of zeros. On NT-based OSes you don't even need to fill the whole drive at once: just use the NtFsControlFile API with a control code of FSCTL_MOVE_FILE to move a medium-sized file full of zeros through each range of free clusters (rough sketch below).
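Roughly like this - a sketch of a single move using the documented DeviceIoControl wrapper for FSCTL_MOVE_FILE rather than calling NtFsControlFile directly. The drive letter, file name, starting LCN and cluster count are placeholders; a real tool would walk the free ranges reported by FSCTL_GET_VOLUME_BITMAP and needs administrator rights.

[code]
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Volume handle for the FSCTL call (placeholder drive letter). */
    HANDLE hVolume = CreateFileW(L"\\\\.\\C:",
                                 GENERIC_READ | GENERIC_WRITE,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 NULL, OPEN_EXISTING, 0, NULL);
    if (hVolume == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open volume failed: %lu\n", GetLastError());
        return 1;
    }

    /* The zero-filled file whose clusters get shuffled across the free space.
     * (Access rights here are a guess; the defrag docs spell out the exact
     * requirements.) */
    HANDLE hFile = CreateFileW(L"C:\\zerofill.bin",
                               GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open file failed: %lu\n", GetLastError());
        CloseHandle(hVolume);
        return 1;
    }

    /* Move the file's clusters starting at virtual cluster 0 onto the free
     * range starting at logical cluster 123456 (placeholder values that a
     * real tool would take from the volume bitmap). */
    MOVE_FILE_DATA mfd;
    ZeroMemory(&mfd, sizeof mfd);
    mfd.FileHandle           = hFile;
    mfd.StartingVcn.QuadPart = 0;
    mfd.StartingLcn.QuadPart = 123456;
    mfd.ClusterCount         = 256;

    DWORD bytes = 0;
    if (!DeviceIoControl(hVolume, FSCTL_MOVE_FILE,
                         &mfd, sizeof mfd, NULL, 0, &bytes, NULL))
        fprintf(stderr, "FSCTL_MOVE_FILE failed: %lu\n", GetLastError());
    else
        printf("moved %lu clusters of the zero file\n", mfd.ClusterCount);

    CloseHandle(hFile);
    CloseHandle(hVolume);
    return 0;
}
[/code]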
Moved to appropriate forum - Moderator Rubycon