Hello guys. Nice discussion going on here! May I chime in, just because I have some spare time?
First things first: how do you check whether your storage setup supports TRIM? You CANNOT! The 'fsutil disabledeletenotify' stuff is a bunch of crap. That setting exists for debugging purposes only, for when you want to DISABLE TRIM, which nobody ever wants to do. The Windows NTFS driver generates TRIM requests whether you run a hard drive or an SSD, unless you manually set that debugging value to 1. That can be useful for testing the impact of missing TRIM in careful benchmarks and other special circumstances. Otherwise: NO reason to touch it, so don't.
You cannot know whether a TRIM request generated by the NTFS driver actually reaches the SSD, and even when it does, the host knows nothing about what the SSD will do with it. It delivered the TRIM command, have fun with it! Bye! No confirmation. What this means is that you cannot simply 'see' whether TRIM is enabled or not. Utilities like SSDlife that suggest otherwise are confusing you!
TRIM cannot sensibly be used on RAID with redundancy, because it defeats the parity calculation. The RAID driver would need to keep track of TRIMmed sectors and require additional storage to keep track of this, breaking backwards compatibility and making a simple system very complex.
This should not be the case. RAID drivers don't care about what space is in use; they act on the LBA level. If the host tells the RAID driver to TRIM LBA 40 through 60, then the RAID driver can do so.
I think what you mean is that a partial TRIM request inside a stripe block, or a TRIM request spanning multiple stripe boundaries, requires the RAID driver to handle this somehow. But that is the primary function of a RAID engine or disk multiplexer: translating logical LBAs into physical LBAs. If a stripe boundary is crossed, the RAID driver has to issue multiple requests, while the host just sends one request and only sees its own virtual storage. This is known as I/O segmentation and is a normal part of any RAID engine. TRIM doesn't complicate this and should follow the same path as read requests.
The additional parity doesn't complicate this much either, unless the specific implementation updates whole stripe blocks instead of snippets within stripe blocks, which of course is the more elegant approach.
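To show how trivial the segmentation path is, here is a toy sketch (not any real driver's code) of translating one logical TRIM range into per-member requests on a RAID 0 array. The parameter names (`stripe_sectors`, `ndisks`) are my own invention for illustration; the same split would be applied to a read request.

```python
# Toy sketch: splitting one logical TRIM request across RAID 0 members,
# exactly the way a read request would be segmented at stripe boundaries.

def segment_request(start_lba, count, stripe_sectors, ndisks):
    """Translate a logical LBA range into per-disk (disk, physical_lba, count) requests."""
    segments = []
    lba = start_lba
    remaining = count
    while remaining > 0:
        stripe = lba // stripe_sectors          # stripe unit index across the array
        offset = lba % stripe_sectors           # offset inside that stripe unit
        disk = stripe % ndisks                  # which member holds this unit
        phys = (stripe // ndisks) * stripe_sectors + offset
        chunk = min(stripe_sectors - offset, remaining)
        segments.append((disk, phys, chunk))
        lba += chunk
        remaining -= chunk
    return segments

# One TRIM of LBA 40..59 on a 2-disk array with 16-sector stripe units
# becomes two requests, one per member:
print(segment_request(40, 20, 16, 2))   # -> [(0, 24, 8), (1, 16, 12)]
```

The host issued a single request against its virtual disk; the engine fanned it out. That fan-out is the whole "algorithm" for TRIM on RAID 0.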
Under the FreeBSD server operating system, geom_raid5 as well as ZFS RAID-Z (raid5), RAID-Z2 (raid6) and, in modern installations, RAID-Z3 should support TRIM on SSDs on AHCI controllers. Wikipedia, I believe, claims this is a world's first, but I am not so sure about that; many proprietary software implementations exist.
It is potentially possible to use RAID 1 on drives that guarantee to zero-out a TRIMmed LBA immediately (not all drives do this [some don't zero the sectors, some have a delay, some might ignore TRIM under certain circumstances...])
I'm afraid I didn't understand this part. Please tell me if I misunderstand, but TRIM is not the same as a zero write! The SSD can do with a TRIMmed LBA whatever it wants; it can return random data or zeroes on a later read, and that would not matter for the correct operation of the RAID. However, SSDs do NOT write to the physical NAND locations that are TRIMmed. Instead, the 'mapping table' that is part of every modern SSD is updated to reflect the change, and the TRIMmed physical NAND cells may be recycled by garbage collection or be subject to a full erase-block rewrite in the near future. If you TRIM the entire SSD LBA range, not much is written to the physical NAND at all! The mapping tables are updated, just like an index.
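The mapping-table point can be illustrated with a deliberately simplified model. This is NOT real firmware, just a sketch of the idea that TRIM only drops an index entry and causes no NAND write; all names here are made up for the example.

```python
# Toy model of an SSD's LBA-to-NAND mapping table. TRIM merely forgets the
# mapping entry; nothing is written to the physical NAND for it.

class ToyFTL:
    def __init__(self):
        self.mapping = {}        # lba -> physical NAND page
        self.nand_writes = 0     # count of actual NAND program operations

    def write(self, lba, page):
        self.mapping[lba] = page
        self.nand_writes += 1

    def trim(self, lba):
        # No NAND write here: the entry is simply dropped. The orphaned
        # page may be recycled by garbage collection later.
        self.mapping.pop(lba, None)

    def read(self, lba):
        # An unmapped LBA returns whatever the firmware chooses -- zeroes
        # in this toy, but nothing obliges a drive to do that unless it
        # advertises deterministic read-after-TRIM behaviour.
        return self.mapping.get(lba, "zeroes")

ftl = ToyFTL()
ftl.write(7, "page-42")
ftl.trim(7)
print(ftl.read(7), ftl.nand_writes)   # -> zeroes 1  (the TRIM added no NAND write)
```

TRIM the whole LBA range of this toy drive and `nand_writes` never moves: only the index changes, which is the point.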
RAID 1 support for TRIM cannot be guaranteed to work correctly
Why?
😉
The algorithm for using TRIM in RAID 0 is utterly trivial, so I remain constantly surprised that this isn't universally supported, and that even where it is supported, it took so long.
That is simple. On Windows, a design limitation exists where all RAID volumes are presented as SCSI hard drives. This also means a SCSI protocol interface sits between the storage driver and the Windows API. You can check this with AS SSD, which reports either an 'ATA storage device' or a 'SCSI storage device'; the latter means it follows the SCSI protocol in the software path.
Why is this important? TRIM is an ATA command: it works on ATA (also incorrectly known as IDE) and AHCI controllers following the ATA8-ACS2 protocol specification. SCSI does not support this feature. However, there is a SCSI 'UNMAP' command which acts as the equivalent of ATA TRIM. Windows 8 is said to use the SCSI command path and translate SCSI UNMAP to ATA TRIM before the command is sent to the SSD. I cannot verify whether this is correct, but under Windows 7 you should only be able to have TRIM support if you have a suitable driver and an ATA/AHCI disk interface.
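For the curious, the translation step itself is small. A sketch, assuming my reading of ATA8-ACS is right: a SCSI UNMAP block descriptor (LBA plus sector count) becomes one or more 64-bit DATA SET MANAGEMENT/TRIM range entries, each carrying a 48-bit LBA in the low bits and a 16-bit count in the high bits, so a single entry covers at most 65535 sectors. The function name is mine, not from any real translation layer.

```python
# Sketch of SCSI UNMAP -> ATA TRIM translation: pack an (LBA, count) range
# into little-endian 64-bit TRIM range entries (48-bit LBA, 16-bit count).
import struct

def unmap_to_trim_entries(lba, count):
    """Yield 8-byte TRIM range entries covering [lba, lba + count)."""
    entries = []
    while count > 0:
        chunk = min(count, 0xFFFF)               # one entry maxes out at 65535 sectors
        entries.append(struct.pack("<Q", (chunk << 48) | (lba & 0xFFFFFFFFFFFF)))
        lba += chunk
        count -= chunk
    return entries

# One UNMAP descriptor for 100000 sectors needs two TRIM range entries:
print(len(unmap_to_trim_entries(2048, 100000)))   # -> 2
```

The 16-bit count field is also why a single huge UNMAP can fan out into many TRIM entries on the ATA side.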
Setting your Intel onboard 'RAID' controller to 'RAID' mode in the BIOS will still let separate SSDs that are not part of an array interface via AHCI, including TRIM support; however, this depends on the RAID drivers in question. Both AMD and Intel have supported this for some time. Other RAID drivers like nVidia, Silicon Image, Marvell and the likes do NOT support TRIM, for the simple reason that they interface as SCSI, not as ATA.
This is a Windows design limitation. UNIX has implemented this much more elegantly; it offers superior software RAID engines and superior filesystems, and supports TRIM on those just fine.