
The skinny on TRIM

How does one know any feature in your computer is working? Did you test to verify that turbo boost for your CPU is working? Did you test to ensure your ram is running in dual channel mode?

One learns about the features, their intended workings, and ways to go about verifying that they are working as intended. Yes, I did test turbo boost and dual channel, indirectly of course through benchmarks and reporting utilities.

taltamir said:
If you are using windows 7+, have an SSD that supports it and a controller that supports it then its working.

I'm not sure how this response applies to any of my questions. If it works it works? Great! 😛

It's not like I am paranoid about TRIM and need to check up on it every week. I just want to learn about the ins and outs of how and when it works, or doesn't.
 
4. How does one really know if TRIM is working, can it only be tested by doing a benchmark with a fresh drive and then comparing it months later?
There is a very quick and easy TRIM test available without stressing your storage drives. All you need is a Hex Editor.
You can find the description of this method >here<.
 
TRIM doesn't just affect speed, it also affects write amplification. And it still matters even with modern more aggressive GC algorithms.

Technically true, but a typical person's usage will not have a significant effect here.

You really have to have a pretty insane amount of write + delete in a short period of time to have a significant effect here. The algorithms used by Anand and other reviewers to demonstrate GC issues and TRIM benefits are other-worldly compared to most usage profiles.

Yes, there are usage scenarios that will experience significant write amplification increase in the absence of TRIM, but based on the OP's questions, I'm willing to bet money he does not have one of those scenarios.

The reality is that if you're not someone who is looking at P/E cycle specs because you know you have a usage scenario that could run out of P/E cycles before you'd otherwise replace the drive, then you don't need to worry about TRIM, garbage collection, and related SSD issues. For the vast majority of people buying consumer-level drives, it's simply a non-issue altogether.
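For anyone who wants to see why TRIM matters less than the torture benchmarks suggest, here is a deliberately simplified write-amplification model (my own toy sketch, not any real controller's GC algorithm): garbage collection must copy every page it still believes is valid, and without TRIM, deleted-but-unreported pages count as valid.

```python
# Toy write-amplification model (illustrative only, not any real controller's GC).
# An erase block holds PAGES pages. At GC time, valid pages must be copied out
# before the block is erased. Without TRIM, deleted-but-not-trimmed pages still
# look "valid" to the controller and get copied too.

PAGES = 128  # pages per erase block (assumed figure)

def write_amplification(live_frac, deleted_frac, trim_enabled):
    """Return WA for reclaiming one block.

    live_frac:    fraction of pages holding data the filesystem still uses
    deleted_frac: fraction of pages whose files were deleted by the OS
    The remainder is space the controller already knows is stale.
    """
    copied = live_frac * PAGES
    if not trim_enabled:
        # Deleted pages were never reported stale, so GC dutifully copies them.
        copied += deleted_frac * PAGES
    freed = PAGES - copied  # pages of new host data the reclaimed block absorbs
    return (copied + freed) / freed  # NAND writes per host write

# Half the block is live data, a quarter was deleted by the OS:
wa_with_trim = write_amplification(0.5, 0.25, trim_enabled=True)   # -> 2.0
wa_without = write_amplification(0.5, 0.25, trim_enabled=False)    # -> 4.0
```

The gap only gets dramatic when a large fraction of the block is deleted-but-untrimmed data, which matches the point above about extreme write-plus-delete workloads.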
 
@Concillian:
I have yet to see a single review, from anandtech or otherwise, that showed GC without TRIM getting back to the same performance level of GC with TRIM. Even after giving it hours and hours of off time to recover. IIRC its typical to get 99.X% of new drive speed with TRIM, and ~90% of new drive speed without TRIM but with competent GC and several hours of downtime.
 
Those drives were junk before you put them into RAID, so that is not a fair test to compare against current-generation drives. I had 2x Samsung 830s in RAID 0 for a while, and although I didn't think it was worth the effort (I thought the second drive was better used elsewhere), I did not experience any slowdown over time. You could say my system idles a lot, as I don't really task it these days, so enough GC time would have eliminated any problems a lack of TRIM would have caused.

Agreed.

Those 30GB first-gen SSDs are crap.

As others have posted, even without TRIM you only take a hit on write speed.

So unless you are using a RAID 0 setup for a storage disk, TRIM is just a bonus.

Also agreed with the other poster: on an OS drive, depending on your workload, daily writing from the OS itself is so small that you won't feel the difference, only see it in benchmarks.

I'm going to RAID 0 my G2s in December (which won't have TRIM, since it's not supported on the X58 chipset) and let you know how it goes.
 
These do:
http://www.anandtech.com/show/6371/micron-p320h-pcie-ssd-700gb-review-first-nvme-ssd

It is more a function of spare area, how aggressive the GC is, and copy-on-write techniques. Anything in "spare" can be deleted with impunity. TRIM may help grow the spare area, which means incoming writes can stay fast longer, but how many people typically push, say, 56GB at full tilt (looking at a 200GB Intel drive with 256GB of NAND)? Even at that point there is 56GB of blocks waiting to be deleted via GC. The controller can purge that space as fast as the NAND will allow erase cycles, because there is nothing worth saving in those blocks. Basically, if you are looking at pushing that much data on a regular basis, you start looking at those Micron cards @ 700GB, when they have 1.5TB of NAND (I think that was the amount, from memory).
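A quick back-of-envelope check of the spare-area figures above (plain arithmetic, ignoring the decimal-vs-binary gigabyte distinction):

```python
# Spare-area arithmetic for the drives mentioned above (illustrative only).

def spare_area(raw_nand_gb, user_gb):
    """Return (spare GB, spare as a fraction of raw NAND)."""
    spare = raw_nand_gb - user_gb
    return spare, spare / raw_nand_gb

# The 200GB Intel drive with 256GB of NAND:
spare_gb, frac = spare_area(256, 200)   # 56GB spare, ~22% of the NAND
```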

TRIM of course still helps drives like the Samsung drives that expose all the NAND (256GB of user space on 256GB of NAND, etc.), even with copy-on-write.
 
Hello guys. Nice discussion going on here! May I chime in, just because I have some spare time?

First things first: how do you check whether your storage setup supports TRIM? You CANNOT! The 'fsutil disabledeletenotify' stuff is a bunch of crap. This is for debugging purposes only, for when you want to DISABLE TRIM, which nobody ever wants to do. The Windows NTFS driver generates TRIM requests whether you run a hard drive or an SSD, unless you manually set that debugging setting to 1. This can be useful to test the impact of missing TRIM in careful benchmarks and other special circumstances. Otherwise: NO REASON to touch it, so don't.

You cannot know whether the TRIM request generated by the NTFS driver reaches the SSD, and even when that happens, the host doesn't know anything about what the SSD will do with it. It delivered the TRIM command; have fun with it! Bye! No confirmation. What this means is that you cannot simply 'see' whether TRIM is enabled or not. Utilities like SSDlife that suggest otherwise are confusing you!

TRIM cannot sensibly be used on RAID with redundancy, because it defeats the parity calculation. The RAID driver would need to keep track of TRIMmed sectors and require additional storage to keep track of this, breaking backwards compatibility and making a simple system very complex.
This should not be the case. RAID drivers don't care about what space is in use, they act on an LBA level. If the host tells the RAID driver to TRIM LBA 40 through 60, then the RAID driver can do so.

I think what you mean is that a partial TRIM request inside a stripe block, or a TRIM request spanning multiple stripe boundaries, requires the RAID driver to handle this somehow. But this is the primary function of a RAID engine or disk multiplexer: translate logical LBAs into physical LBAs. If a stripe boundary exists, the RAID driver has to issue multiple requests, while the host just sends one request and honours its own virtual storage. This is known as I/O segmentation and is a normal part of any RAID engine. TRIM doesn't complicate this and should follow the same path as read requests.
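A minimal sketch of the I/O segmentation just described, assuming a plain RAID 0 layout where stripes rotate round-robin across members (my own illustrative code, not any particular driver):

```python
# Sketch of I/O segmentation: one host TRIM range is split into one
# sub-request per RAID-0 member, exactly as a read would be.
# (Illustrative; real engines work in sectors and batch ranges per disk.)

def segment_trim(start_lba, count, stripe_size, n_disks):
    """Map a host TRIM of [start_lba, start_lba+count) onto member disks.

    Returns a list of (disk_index, disk_lba, length) tuples.
    """
    requests = []
    lba = start_lba
    remaining = count
    while remaining > 0:
        stripe_no, offset = divmod(lba, stripe_size)
        disk = stripe_no % n_disks                     # member holding this stripe
        disk_lba = (stripe_no // n_disks) * stripe_size + offset
        length = min(stripe_size - offset, remaining)  # stop at the stripe boundary
        requests.append((disk, disk_lba, length))
        lba += length
        remaining -= length
    return requests

# TRIM LBA 40 through 59 on a 2-disk RAID 0 with a 16-LBA stripe:
reqs = segment_trim(40, 20, 16, 2)
# One sub-request lands on each member, split at the stripe boundary.
```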

The additional parity also doesn't complicate this too much. Unless the specific implementation updates in whole stripe blocks instead of snippets within stripe blocks which of course is much more elegant.

Under the FreeBSD server operating system, geom_raid5, ZFS RAID-Z (RAID 5), RAID-Z2 (RAID 6), and RAID-Z3 in modern installations should all support TRIM on SSDs on AHCI controllers. As claimed on, I believe, Wikipedia, this is a world's first, but I am not so sure about that; many proprietary software implementations exist.

It is potentially possible to use RAID 1 on drives that guarantee to zero out a TRIMmed LBA immediately (not all drives do this: some don't zero the sectors, some have a delay, some might ignore TRIM under certain circumstances...)
I'm afraid I didn't understand this part. Please tell me if I misunderstand, but TRIM is not the same as zero write! The SSD can do with the TRIMed LBA whatever it wants, it can write random data, it can write zeroes, that would not matter for the correct operation of the RAID. However, SSDs do NOT write to the physical NAND locations that are TRIMed. Instead, the 'mapping table' part of every modern SSD is updated to reflect the change and the TRIMed physical NAND cells may be recycled in garbage collection or subject to full erase block rewrite in the near future. If you TRIM the entire SSD LBA, not much is written to the physical NAND at all! The mapping tables will be updated, just like an index.

RAID 1 support for TRIM cannot be guaranteed to work correctly
Why? 😉

The algorithm for using TRIM in RAID 0 is utterly trivial, so I remain constantly surprised that this isn't universally supported, and that even where it is supported, it took so long.
That is simple. On Windows, a design limitation exists where all RAID volumes are considered to be SCSI harddrives. This also means a SCSI protocol interface exists between the storage driver and the Windows API. You can check this with AS SSD saying it is an 'ATA storage device' or 'SCSI storage device'. The latter means it follows SCSI protocol in software path.

Why is this important? TRIM is an ATA command; it works on ATA (also incorrectly known as IDE) and AHCI controllers following the ATA8-ACS2 protocol specification. SCSI does not support this feature. However, there is a SCSI 'UNMAP' command which acts as the equivalent of ATA TRIM. Windows 8 is said to use a SCSI command path, translating SCSI UNMAP to ATA TRIM before sending it to the SSD. I cannot verify whether this is correct, but under Windows 7 you should only be able to have TRIM support if you have a suitable driver and an ATA/AHCI disk interface.

Setting your Intel onboard 'RAID' controller to 'RAID' mode in the BIOS will still mean separate SSDs not part of an array will interface with AHCI including TRIM support, however this depends on the RAID drivers in question. Both AMD and Intel should support this for some time. Other RAID drivers like nVidia, Silicon Image, Marvell and the likes do NOT support TRIM for the simple reason they interface as SCSI, not as ATA.

This is a Windows design limitation. UNIX has implemented this much more elegantly, and offers superior software RAID engines and superior filesystems and supports TRIM on those just fine.
 
Typo (maybe I should stop using the numpad...that's twice this week, on AT forums!). RAID 1 will have the same deterministic needs as other RAID levels, even w/o parity.


Mirrors could work just fine with TRIM, though it would require SSDs that would behave exactly the same.

Parity though, could only work with a new parity calculation each time a TRIM function occurred on a block. While not impossible, it's unlikely.
 
Hello guys. Nice discussion going on here! May I chime in, just because I have some spare time?

First things first: how do you check whether your storage setup supports TRIM? You CANNOT! The 'fsutil disabledeletenotify' stuff is a bunch of crap. This is for debugging purposes only, for when you want to DISABLE TRIM, which nobody ever wants to do. The Windows NTFS driver generates TRIM requests whether you run a hard drive or an SSD, unless you manually set that debugging setting to 1. This can be useful to test the impact of missing TRIM in careful benchmarks and other special circumstances. Otherwise: NO REASON to touch it, so don't.

Source please?

You cannot know whether the TRIM request generated by the NTFS driver reaches the SSD, and even when that happens, the host doesn't know anything about what the SSD will do with it. It delivered the TRIM command; have fun with it! Bye! No confirmation. What this means is that you cannot simply 'see' whether TRIM is enabled or not. Utilities like SSDlife that suggest otherwise are confusing you!

Sure it does, the device is required to respond with an ATA error frame, abort bit set, and "ATA command not supported."


This should not be the case. RAID drivers don't care about what space is in use, they act on an LBA level. If the host tells the RAID driver to TRIM LBA 40 through 60, then the RAID driver can do so.

I think what you mean is that a partial TRIM request inside a stripe block, or a TRIM request spanning multiple stripe boundaries, requires the RAID driver to handle this somehow. But this is the primary function of a RAID engine or disk multiplexer: translate logical LBAs into physical LBAs. If a stripe boundary exists, the RAID driver has to issue multiple requests, while the host just sends one request and honours its own virtual storage. This is known as I/O segmentation and is a normal part of any RAID engine. TRIM doesn't complicate this and should follow the same path as read requests.

The additional parity also doesn't complicate this too much. Unless the specific implementation updates in whole stripe blocks instead of snippets within stripe blocks which of course is much more elegant.

Not exactly. TRIM is a filesystem request that needs to be translated into LBA blocks. The problem is that a RAID stripe rarely lands on the logical NAND block. For example, take a 64k-stripe RAID and an 8k NAND block, with a 4k NTFS cluster being TRIMed. The array controller at that point needs to read the 64k stripe, modify it, XOR, and write it back. What happens next depends on how the SSD operates. It either writes back to the same LBA (rare nowadays), which erases the entire set of 8 x 8k blocks; copy-on-writes the entire set of blocks (SSDs in the last few years that don't use spare areas do this) and marks the old blocks as empty for garbage collection to deal with; or it simply does a copy-on-write into the spare area and lets GC handle the now-vacant blocks. Copy-on-write drives gain much less from TRIM than older drives.
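The parity side of that read-modify-write can be sketched with plain XOR arithmetic. This is an illustrative model only; a real driver with deterministic member drives could pass TRIM down instead of writing zeroes:

```python
# Read-modify-write parity update, as when a RAID-5 driver zeroes a trimmed
# region inside one stripe unit. (Illustrative toy model, not a real driver.)

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_update(old_data, old_parity, new_data):
    """new_parity = old_parity XOR old_data XOR new_data."""
    return xor_bytes(old_parity, xor_bytes(old_data, new_data))

old_data = bytes([0xAA] * 8)                    # stripe unit being modified
other_member = bytes([0x0F] * 8)                # the untouched data member
old_parity = xor_bytes(old_data, other_member)  # parity before the update
new_data = bytes(8)                             # trimmed region written as zeroes
new_parity = rmw_update(old_data, old_parity, new_data)
# Parity now equals the XOR of the zeroed region with the untouched member,
# i.e. new_parity == other_member, so the stripe stays consistent.
```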

Random example: the Samsung 830 has 7% spare, so you need to write at least 17GB continuously to see a performance drop in writes. Since the consumer market rarely has any way to actually push 17GB to the disk without any break in the flow (mostly it would be SSD-to-SSD copies, benchmarks, cp /dev/random /dev/sda, etc.), you can expect that copy capacity to be higher. Move into enterprise and things like 50% spare areas are more common, so the times are far longer. There is also the extra I/O added...


Under the FreeBSD server operating system, geom_raid5, ZFS RAID-Z (RAID 5), RAID-Z2 (RAID 6), and RAID-Z3 in modern installations should all support TRIM on SSDs on AHCI controllers. As claimed on, I believe, Wikipedia, this is a world's first, but I am not so sure about that; many proprietary software implementations exist.

This is misleading. ZFS "RAID-Z", "RAID-Z2", and "RAID-Z3" are RAID-like techniques, but not "RAID" as the standard that defines things like RAID 1 describes it. ZFS and its RAID-like components have the filesystem address the disks as separate entities. "Real RAID" (if you can call it that) isolates the disk subsystem from the filesystem layers. Since TRIM is a filesystem call and ZFS is acutely aware of the underlying disks, it can use TRIM easily. RAID subsystems mask all this. That doesn't mean a RAID system couldn't do the translation, however.

I'm afraid I didn't understand this part. Please tell me if I misunderstand, but TRIM is not the same as zero write! The SSD can do with the TRIMed LBA whatever it wants, it can write random data, it can write zeroes, that would not matter for the correct operation of the RAID. However, SSDs do NOT write to the physical NAND locations that are TRIMed. Instead, the 'mapping table' part of every modern SSD is updated to reflect the change and the TRIMed physical NAND cells may be recycled in garbage collection or subject to full erase block rewrite in the near future. If you TRIM the entire SSD LBA, not much is written to the physical NAND at all! The mapping tables will be updated, just like an index.
Your response here doesn't make a whole lot of sense. TRIM is only indirectly related to the mapping tables. TRIM is a method for the filesystem to tell the disk which blocks are not in use. From there it is up to the disk to do what it wants with that information. Typically this involves marking the block as "erasable" in the block table and then letting garbage collection erase the entire erase block. The SSD is going to remap (at least most current drives will) whether it gets a TRIM request or not; TRIM just makes GC more effective. Also, GC resets the block to all 1s in current NAND tech, by the way. To clarify, a TRIM request writes nothing to the NAND (some drives do place mapping tables in the NAND); garbage collection handles that. The drive wouldn't waste NAND gate life writing random trash to the cells...
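A toy model of the mapping-table behaviour being argued about here (hypothetical names, page-granularity detail omitted): trimming only drops the logical-to-physical entry, so nothing is written to the NAND at trim time.

```python
# Toy flash translation layer (FTL): TRIM only removes the mapping entry;
# the stale physical page is left for garbage collection.
# (Illustrative sketch, far simpler than any real controller.)

class ToyFTL:
    def __init__(self):
        self.mapping = {}     # logical LBA -> physical page
        self.nand = {}        # physical page -> stored data
        self.nand_writes = 0  # count of physical NAND programs
        self.next_page = 0

    def write(self, lba, data):
        page = self.next_page  # copy-on-write: always program a fresh page
        self.next_page += 1
        self.nand[page] = data
        self.mapping[lba] = page
        self.nand_writes += 1

    def trim(self, lba):
        # Just forget the mapping; the stale page awaits garbage collection.
        self.mapping.pop(lba, None)

    def read(self, lba):
        page = self.mapping.get(lba)
        # What an unmapped LBA returns is up to the drive; this model zeroes.
        return self.nand[page] if page is not None else b"\x00"

ftl = ToyFTL()
ftl.write(7, b"x")
ftl.trim(7)
# ftl.nand_writes is still 1: the trim itself wrote nothing to the NAND.
```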
Why? 😉

That is simple. On Windows, a design limitation exists where all RAID volumes are considered to be SCSI harddrives. This also means a SCSI protocol interface exists between the storage driver and the Windows API. You can check this with AS SSD saying it is an 'ATA storage device' or 'SCSI storage device'. The latter means it follows SCSI protocol in software path.

Why is this important? TRIM is an ATA command; it works on ATA (also incorrectly known as IDE) and AHCI controllers following the ATA8-ACS2 protocol specification. SCSI does not support this feature. However, there is a SCSI 'UNMAP' command which acts as the equivalent of ATA TRIM. Windows 8 is said to use a SCSI command path, translating SCSI UNMAP to ATA TRIM before sending it to the SSD. I cannot verify whether this is correct, but under Windows 7 you should only be able to have TRIM support if you have a suitable driver and an ATA/AHCI disk interface.

Setting your Intel onboard 'RAID' controller to 'RAID' mode in the BIOS will still mean separate SSDs not part of an array will interface with AHCI including TRIM support, however this depends on the RAID drivers in question. Both AMD and Intel should support this for some time. Other RAID drivers like nVidia, Silicon Image, Marvell and the likes do NOT support TRIM for the simple reason they interface as SCSI, not as ATA.

This is a Windows design limitation. UNIX has implemented this much more elegantly, and offers superior software RAID engines and superior filesystems and supports TRIM on those just fine.

This is also misleading. Windows will happily use TRIM in software RAID, such as the built-in RAID 1 from Disk Management. RAID devices showing up as SCSI in Windows is a driver design decision by the RAID subsystem designer (as you mentioned, but seem to imply is "Windows' fault", which is false). Actually, any "RAID" device that shows up as SCSI in Windows is going to appear as SCSI to Linux, just as an AHCI/ATA device Linux sees will appear as an AHCI/ATA device to Windows.
 
Why? 😉
TRIM makes no guarantee of when it will clear the block. To do it well would also require metadata being written to note the TRIM action, so that each member could verify its contents as correct without being identical to the others, which is a change from the usual assumptions.

This is a Windows design limitation. UNIX has implemented this much more elegantly, and offers superior software RAID engines and superior filesystems and supports TRIM on those just fine.
In a stable kernel? Last I knew, they were still considered experimental, and kept getting rejected from mainline.
 
TRIM makes no guarantee of when it will clear the block.

It doesn't need to clear the block; it just needs to return 0s (which it can do even if the block is not cleared).
The issue is only with parity-keeping RAID. TRIM does not explicitly specify what a drive must do when an OS utility/driver attempts to read a specific block that has been trimmed but has not had new data written to it since.

The SENSIBLE thing to do would be for the controller to notice that it is a junk area and just return all 0s.
However, it was never specified by anyone, due to a colossal muckup by the spec writers. So a drive could hypothetically return all 0s, all 1s, or read off the last block that this trimmed area pointed to (which could be either all 1s if it has already been cleared, or actual data if not).

Rather than quickly ratifying an amendment requiring all future drives to return all 0s (which could have been done YEARS ago), allowing them to carry a "RAID-safe TRIM" label (with potentially firmware updates to many existing drives ensuring that, or an official proclamation if they already did), the fools went and started ratifying a new TRIM-derivative standard that will use an entirely new command, basically meaning "trim, but make sure to return all 0s if asked", to ensure the drive returns all 0s when something tries to read a trimmed area.

As for "what if someone tries to RAID an ancient SSD that does not do it correctly"? Well... they should have done their research. But if you really want to idiot-proof it, you could add a new SMART entry for "RAID-safe TRIM" and warn the user if he tries to use a drive that doesn't have it set to 1, rather than making a new ATA command to replace TRIM.
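To make the RAID hazard concrete, here is a toy simulation (my own sketch) of two mirrored SSDs that both honour TRIM but return different junk afterwards, which is exactly the non-determinism the spec failed to rule out:

```python
# Two simulated RAID-1 members trim the same LBA but return different data on a
# later read, so a naive mirror-verify pass sees a mismatch.
# (Illustrative model of non-deterministic read-after-TRIM, not real firmware.)

class MemberSSD:
    def __init__(self, junk):
        self.data = {}
        self.junk = junk  # what this particular drive returns for trimmed LBAs

    def write(self, lba, value):
        self.data[lba] = value

    def trim(self, lba):
        self.data.pop(lba, None)

    def read(self, lba):
        # Unwritten/trimmed LBA: the standard didn't pin this down.
        return self.data.get(lba, self.junk)

a = MemberSSD(junk=b"\x00")  # this drive happens to return zeroes after TRIM
b = MemberSSD(junk=b"\xff")  # this one returns all ones
for drive in (a, b):
    drive.write(5, b"data")
    drive.trim(5)

mirrors_match = a.read(5) == b.read(5)  # False: the members have diverged
```

With a guaranteed return value (all 0s, as argued above), the two reads would be identical and the mirror would verify cleanly.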
 
Re point 4 - I'm a bit surprised that Microsoft didn't have some logging in the Application Log just like there is with the auto-defrag system. How difficult would that have been? "TRIM operation completed"?

I would have thought considering that Win7 was the first MS OS to support TRIM that the event log entries for the TRIM operation would have been quite verbose so that if it isn't going quite right, more information can be gleaned without having to load in diagnostic systems after a problem has occurred.
 
Re point 4 - I'm a bit surprised that Microsoft didn't have some logging in the Application Log just like there is with the auto-defrag system. How difficult would that have been? "TRIM operation completed"?

I would have thought considering that Win7 was the first MS OS to support TRIM that the event log entries for the TRIM operation would have been quite verbose so that if it isn't going quite right, more information can be gleaned without having to load in diagnostic systems after a problem has occurred.

I suspect that it has to do with the fact that a TRIM event isn't just one command. In theory, every 4k NTFS cluster (the default size, at least) could generate 8 TRIM commands. Delete some rather large files and the event log might have a few hundred thousand new events. Also, MS might have abstracted it so the same "command" from NTFS.sys could be used on future tech.
 
I suspect that it has to do with the fact that a TRIM event isn't just one command. In theory, every 4k NTFS cluster (the default size, at least) could generate 8 TRIM commands. Delete some rather large files and the event log might have a few hundred thousand new events. Also, MS might have abstracted it so the same "command" from NTFS.sys could be used on future tech.

Windows Update doesn't involve just one command either and yet the System Log on Vista/7 is probably 50% full of its messages.
 
It doesn't need to clear the block, it just needs to return 0s (which it can do even if the block is not cleared).

Rather than quickly ratify an amendment requiring all future drives to return all 0s (which could have been done YEARS ago) which allows them to carry a "RAID safe TRIM" label (and potentially firmware updates to many existing drives ensuring that; or an official proclamation if they already did that), the fools went and started ratifying a new trim derivative standard

Which is what is happening. The upcoming deterministic trim standard is NOT a new command. It will just be a flag in the drive capabilities record stating "drive guarantees deterministic trim". If the RAID driver sees that flag, then it should be able safely to use a trim-safe RAID algorithm.
 
Windows Update doesn't involve just one command either and yet the System Log on Vista/7 is probably 50% full of its messages.

I am not quite sure I follow your point here. That, or you don't see the scope of your comment. The, say, 126 Update messages x 3 for each update barely compare to the 100,000+ messages a day TRIM can generate on a system. TRIM would build (on most default NTFS implementations using a 4k cluster) at least 256 TRIM requests per 1MB of deleted data, assuming the 4k deletes stack the TRIM commands. If not, it is 2048 requests. That is a lot of event logging.
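Spelling out that arithmetic (assuming a 512-byte LBA and the default 4k NTFS cluster):

```python
# TRIM request counts per MB of deleted data, per the figures above.

CLUSTER = 4 * 1024        # default NTFS cluster size
LBA = 512                 # assumed sector size
deleted = 1024 * 1024     # 1 MB of deleted data

clusters = deleted // CLUSTER              # 256 requests if coalesced per cluster
lba_ranges = clusters * (CLUSTER // LBA)   # 2048 if every LBA is trimmed separately
```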
 
Which is what is happening. The upcoming deterministic trim standard is NOT a new command. It will just be a flag in the drive capabilities record stating "drive guarantees deterministic trim". If the RAID driver sees that flag, then it should be able safely to use a trim-safe RAID algorithm.

Then I was misinformed. It is good to hear this will be done correctly.

Re point 4 - I'm a bit surprised that Microsoft didn't have some logging in the Application Log just like there is with the auto-defrag system. How difficult would that have been? "TRIM operation completed"?

I would have thought considering that Win7 was the first MS OS to support TRIM that the event log entries for the TRIM operation would have been quite verbose so that if it isn't going quite right, more information can be gleaned without having to load in diagnostic systems after a problem has occurred.

What is the benefit of that?
You cannot track them beyond the fact that they were generated. The controller's driver does not reply; neither does the physical controller, nor the drive it goes to.
So all you would know is that it was generated and sent to the driver, and that is something MS has already figured out is happening correctly.
 
Then I was misinformed. It is good to hear this will be done correctly.



What is the benefit of that?
You cannot track them beyond the fact that they were generated. The controller's driver does not reply; neither does the physical controller, nor the drive it goes to.
So all you would know is that it was generated and sent to the driver, and that is something MS has already figured out is happening correctly.

You can track them at least as far as "failed" and "accepted"; TRIM is part of DATA SET MANAGEMENT, opcode 06h (only TRIM is defined at the moment). It is expected to return a "Write Log Ext Error" frame on error, specifically with bit 7 = CRC error, bit 4 = "ID Not Found", and bit 2 = "Abort" defined. Bit 0 is also defined as "Obsolete", but I don't think a drive would use that yet.

TRIM is also somewhat identified in the device identify data: it used to be expected that after a write, any read would return the same data until another write was accepted. TRIM devices set this value to indicate that the drive may make changes without a write command.

Whole ton of stuff here:

http://www.t13.org/documents/UploadedDocuments/docs2009/d2015r1a-ATAATAPI_Command_Set_-_2_ACS-2.pdf
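For what it's worth, the capability bits live in the IDENTIFY DEVICE data. The sketch below decodes them from a synthetic 256-word identify buffer; the word/bit positions are my reading of the draft linked above (word 169 bit 0 for TRIM support, word 69 bits 14 and 5 for deterministic and zeroed reads), so verify against the document before relying on them:

```python
# Decode TRIM-related capability bits from a 256-word IDENTIFY DEVICE buffer.
# Word/bit positions per my reading of the ACS-2 draft; double-check the spec.

def trim_caps(identify_words):
    return {
        "trim_supported": bool(identify_words[169] & 0x0001),          # word 169, bit 0
        "deterministic_read_after_trim": bool(identify_words[69] & (1 << 14)),
        "read_zero_after_trim": bool(identify_words[69] & (1 << 5)),
    }

# Synthetic example: a drive advertising TRIM with deterministic zeroed reads.
words = [0] * 256
words[169] = 0x0001
words[69] = (1 << 14) | (1 << 5)
caps = trim_caps(words)
```

A RAID driver could key its "TRIM-safe" decision off exactly these flags, as discussed earlier in the thread.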
 
Accepted by what? The driver? The controller? The SSD itself?


That article is titled "Working Draft Project"

As you can see by reading the document, the response is expected to come from the drive. t13.org is the standards committee responsible for the ATA standards. It is not an article; it is the specification document that all ATA devices are required to follow to be called "ATA." Feel free to buy the completed doc from them: it is doc 452-2008 / D1699 / INCITS 397-2004, which happens to be the standard Windows 7/8 ATA drivers are written to. ATA has been a "working draft" since 1994 and "ATA-1."
 
I am not quite sure I follow your point here. That, or you don't see the scope of your comment. The, say, 126 Update messages x 3 for each update barely compare to the 100,000+ messages a day TRIM can generate on a system. TRIM would build (on most default NTFS implementations using a 4k cluster) at least 256 TRIM requests per 1MB of deleted data, assuming the 4k deletes stack the TRIM commands. If not, it is 2048 requests. That is a lot of event logging.

If Windows Update can be effectively summarised as what one sees in the Windows Update log (in Control Panel > Windows Update), and dumbed down even further to say "Updates were installed successfully", then I am sure that *something* could be done for TRIM to give some sort of feedback as to whether it is being run on a storage device or not.

There was something else I wanted to say here but I left this post half-cooked to cook a meal.
 
It is not an article

A spec is an article. Just one that specifies something.

As you can see by reading the document, the response is expected to come from the drive.
It is a 496-page document; do you mind referring to a specific page number where this is stated?

ATA has been a "working draft" since 1994 and "ATA1."
That sounds highly unusual... can someone confirm or refute this?
 
It doesn't need to clear the block, it just needs to return 0s (which it can do even if the block is not cleared).
The issue is only with parity keeping raid. TRIM does not explicitly specify a requirement for what to do when an OS utility/driver specifically attempts to read a specific block that has been trimmed but had not had new data written to it since.
If two drives in a RAID 1 are both told that a given set of blocks is unused, then unless it is noted in array metadata somewhere so that those blocks are ignored, they could decide to return zeroes at different times, or even not do so at all until the block is re-used for something. Between those two points in time, the drives aren't mirrors.

In that case, even RAID 1 needs an implementation that is either specifically aware of TRIM or specifically resistant to its unpredictability. That could, of course, occur by happy accident, but the drives need to be able to mirror each other with no knowledge of the file system involved (software RAID should be trivial to add TRIM support to, FI), even though there's no parity.

Now yes, what should have been done is pretty clear, but the standard doesn't make any such guarantees, right now, AFAIK.
 