
News: RAID0 TRIM official with latest RST driver

garikfox

Senior member
When the 11.5.0.1109 RST driver first came out, they indicated that TRIM was not enabled with this driver but would be enabled in the next 11.5.x driver.

The next 11.5.x driver was just released: version 11.5.0.1149.

A lot of people are wondering if this driver does in fact pass the TRIM command to a RAID 0 array, as hinted at in the previous release.

Well, it does! But it seems it's only for Windows 8, since this driver presents the array as a SCSI device, and Win7 cannot pass the TRIM command to a SCSI device while Win8 can.

I was looking through the Help pages in the 11.5.0.1149 RST GUI and found this below 🙂

[Screenshot from the RST Help pages showing RAID 0 TRIM support]


FYI: It is recommended to use RAID OROM 11.5.0.1347 with this driver
 
Does it pass TRIM when using any other RAID levels, or just the non-RAID that people call RAID0?

There are very serious theoretical problems with passing TRIM in any other RAID level because it breaks the assumption of deterministic data storage, which is an inherent assumption of redundancy.

Passing TRIM in RAID0 is a trivial issue - I'm amazed it has taken this long.
 
I'm not in a hurry; as long as I can give the GC enough time to clean up, I can go indefinitely without TRIM anyway. I think I'll wait for the official final release instead of an alpha/beta.
 
There are very serious theoretical problems with passing TRIM in any other RAID level because it breaks the assumption of deterministic data storage, which is an inherent assumption of redundancy.
First, you are forgetting about RAID1, 10, and 01.
Second, for RAID5, 6, etc. the controller is already taking an active role, generating parity and deciding what goes where rather than performing a simple split or duplication like in RAID0 or 1. It is a little more work to account for TRIM, but it's still not that difficult; the crux is that you don't just "pass" TRIM. Rather, you have the controller generate TRIM commands to the member drives as appropriate.

Passing TRIM in RAID0 is a trivial issue - I'm amazed it has taken this long.
Yep, it is quite odd. Also for RAID1, 10, and 01.
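
To illustrate why the RAID0 case is considered trivial, here's a minimal sketch of how a controller could translate one array-level TRIM range into per-member TRIM commands. The chunk size, member count, and function names are all hypothetical; real controllers do this in firmware.

```python
# Hypothetical parameters: a 2-drive RAID0 with 64KiB chunks (128 sectors).
CHUNK_SECTORS = 128
NUM_MEMBERS = 2

def raid0_trim(array_lba, length):
    """Split one array-level TRIM request into per-member (lba, count) ranges."""
    per_drive = {}
    lba, remaining = array_lba, length
    while remaining > 0:
        chunk = lba // CHUNK_SECTORS            # which chunk of the array
        offset = lba % CHUNK_SECTORS            # offset within that chunk
        member = chunk % NUM_MEMBERS            # drive holding this chunk
        member_lba = (chunk // NUM_MEMBERS) * CHUNK_SECTORS + offset
        count = min(CHUNK_SECTORS - offset, remaining)
        per_drive.setdefault(member, []).append((member_lba, count))
        lba += count
        remaining -= count
    return per_drive

# Example: TRIM 300 sectors starting at array LBA 100.
for member, ranges in sorted(raid0_trim(100, 300).items()):
    for member_lba, count in ranges:
        print(f"drive {member}: TRIM {count} sectors at LBA {member_lba}")
```

Since chunk boundaries line up exactly with member LBAs, the controller never has to compute anything beyond this address arithmetic, which is why RAID0 is the easy case.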
 
RAID0 is for performance nuts, but SSDs are so fast I don't see how RAID0 is going to make things noticeably faster. After a certain amount of "fast" you see diminishing returns. After all, how much faster than "almost instantly" can you get?

The important part is going to be to get TRIM going in the actual RAID levels, the ones that provide redundancy. That's why people generally go RAID, for redundancy. Especially the ones worried about some ghostly "higher failure rate" of SSDs. RAID0, as was mentioned, was the "easy one."

It probably would be better for the controller to generate the TRIM commands, but no one's working on that either.
 
First, you are forgetting about RAID1, 10, and 01.

No. I'm not.

There's no guarantee that a drive, when it receives a TRIM command, will zero out the data, either immediately or later.

If one drive in a RAID1 fails to honor a TRIM request (as is permitted) or it is of a design where TRIM does not guarantee an LBA to be zeroed, then the array will be in an inconsistent state - and a consistency check will fail.

The only way around this is for a drive to have firmware which guarantees that a correctly issued TRIM command will zero the relevant LBA - it is then up to the OS and RAID driver to ensure that the TRIM commands are correctly formed so that they are guaranteed to be honored. Additionally, the RAID chunks have to be aligned with the drive's TRIM chunks.

There is a recent specification known as "deterministic TRIM" which modern drives should comply with. A drive which supports this will provide the above guarantees.

Even then, there is a difficulty, because if you TRIM an entire stripe in a RAID5 or RAID6 - you can't issue a TRIM for the parity drive, because the parity won't be all zeros.
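
As a rough illustration of the consistency problem, here's a toy model (not any real controller's behavior) of a RAID1 scrub after a TRIM that only one member honors:

```python
# Toy RAID1 member: a dict mapping LBA -> sector contents.
STALE = b"\xde\xad" * 256          # leftover flash contents (assumed)

def trim(drive, lba, deterministic):
    if deterministic:
        drive[lba] = b"\x00" * 512  # reads are guaranteed to return zeros
    else:
        drive.pop(lba, None)        # contents after TRIM are undefined

def read(drive, lba):
    return drive.get(lba, STALE)    # undefined reads may return old data

mirror_a = {0: b"DATA" * 128}
mirror_b = {0: b"DATA" * 128}

# The OS trims LBA 0. Drive A deallocates it; drive B ignores the request,
# which the spec permits for non-deterministic drives.
trim(mirror_a, 0, deterministic=False)

# A later consistency check compares the two members sector by sector.
if read(mirror_a, 0) != read(mirror_b, 0):
    print("scrub reports a mismatch on an LBA that holds no user data")
```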
 
RAID0 is for performance nuts, but SSDs are so fast I don't see how RAID0 is going to make things noticeably faster. After a certain amount of "fast" you see diminishing returns. After all, how much faster than "almost instantly" can you get?

The important part is going to be to get TRIM going in the actual RAID levels, the ones that provide redundancy. That's why people generally go RAID, for redundancy. Especially the ones worried about some ghostly "higher failure rate" of SSDs. RAID0, as was mentioned, was the "easy one."

It probably would be better for the controller to generate the TRIM commands, but no one's working on that either.

You should try it. I just went from an 80GB X25-M G2 to 2x256GB m4's in RAID 0. I ran for a couple of days with a single m4, and that was quite a bit faster than the X25-M, but it wasn't really worth the upgrade expense IMHO. However, in RAID 0 there is a very noticeable difference; everything I do is just a LOT faster. It's enough faster that I'm happy I spent the $430 for the SSDs, but if it had just been a single m4 I would have been disappointed.
 
That depends on your laptop. My wife's dq7t has dual hard drive bays, and many others have a CD drive bay that can be removed to add a second HDD or SSD. Regardless, if you get the chance, you'll be glad you did it.
 
No. I'm not.

There's no guarantee that a drive, when it receives a TRIM command, will zero out the data, either immediately or later.

If one drive in a RAID1 fails to honor a TRIM request (as is permitted) or it is of a design where TRIM does not guarantee an LBA to be zeroed, then the array will be in an inconsistent state - and a consistency check will fail.

The only way around this is for a drive to have firmware which guarantees that a correctly issued TRIM command will zero the relevant LBA - it is then up to the OS and RAID driver to ensure that the TRIM commands are correctly formed so that they are guaranteed to be honored. Additionally, the RAID chunks have to be aligned with the drive's TRIM chunks.

I disagree. This is relevant on a RAID5 or 6 where there is PARITY to calculate.

In RAID1 and its derivatives, the data is simply duplicated across both drives.
Having different data in DELETED sectors is irrelevant. If you are rebuilding the array, only one drive is read, so it doesn't matter that the two drives had different data in deleted space (either nothing or leftovers)...

The only time this could possibly come into play is if you are rebuilding a degraded double (or more) redundancy RAID1 (3 drives in RAID1) with 2 or more good drives.
In such a case it is still DOABLE; you just have an SSD toggle, which means the controller assumes mismatches are due to free-space TRIM occurring at different rates and ignores them (after perhaps reading each sector twice to ensure it is consistent, and making sure the drive's internal CRC check passes).
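
A minimal sketch of that "SSD toggle" policy (every name here is made up) could look like this: re-read a mismatching sector on both members and only escalate if the mismatch is stable:

```python
# Hypothetical scrub policy: in SSD mode, a mismatch is re-read and, if it
# is stable on both members, assumed to be trimmed free space rather than
# corruption, so it is logged instead of degrading the array.

def scrub_sector(read_a, read_b, ssd_mode=False):
    a1, b1 = read_a(), read_b()
    if a1 == b1:
        return "ok"
    if ssd_mode:
        a2, b2 = read_a(), read_b()     # second pass, per the idea above
        if a1 == a2 and b1 == b2:
            return "stable mismatch: log it, assume free-space TRIM skew"
    return "mismatch: flag the array for repair"

# Example: members disagree but each reads consistently -> only a log entry.
print(scrub_sector(lambda: b"\x00", lambda: b"OLD", ssd_mode=True))
```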

There is a recent specification known as "deterministic TRIM" which modern drives should comply with. A drive which supports this will provide the above guarantees.

Even then, there is a difficulty, because if you TRIM an entire stripe in a RAID5 or RAID6 - you can't issue a TRIM for the parity drive, because the parity won't be all zeros.
Well, that is stupid. So rather than having the controller track free space (by keeping track of the TRIM commands it receives) and then issue its own TRIM commands as appropriate, they are going to basically force immediate erasure of sectors in a way that nullifies all gains from wear leveling and GC/TRIM and gives you 128x write amplification? That is absurd!

PS, a deleted SSD is all 1s, not all 0s.
 
RAID0 is for performance nuts, but SSDs are so fast I don't see how RAID0 is going to make things noticeably faster. After a certain amount of "fast" you see diminishing returns. After all, how much faster than "almost instantly" can you get?

RAID0 has the benefit of not only adding performance, but also adding capacity, a pretty big deal considering the restrictive sizes of SSDs. Heck, my own RAID0 SSD setup is done just as much (if not more so) for creating ~512GB of space for my OS/apps as it was for improving performance.
 
RAID0 has the benefit of not only adding performance, but also adding capacity, a pretty big deal considering the restrictive sizes of SSDs. Heck, my own RAID0 SSD setup is done just as much (if not more so) for creating ~512GB of space for my OS/apps as it was for improving performance.

Except that 512 GB SSDs are now reaching per-gig price parity with their 256 gig counterparts. Sure, the cheaper models still aren't available in 512, but people caring about performance shouldn't be buying those in the first place.

 
Except that 512 GB SSDs are now reaching per-gig price parity with their 256 gig counterparts. Sure, the cheaper models still aren't available in 512, but people caring about performance shouldn't be buying those in the first place.


Disagree. When you can get 256GB Crucial M4's for $189 (or less), I doubt you can find a 512GB for anywhere near the $380 price tag. Even at $235 (or less) for the Samsung 256GB's, you wouldn't find a 512GB Samsung at the $470 price point. The 128 and 256GB drives, even the higher-end drives, are much cheaper per GB than the 512GB and larger drives.

Edit: A search for the Samsung 512GB on Froogle came in at $670. The Crucial came in at $571.
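
For reference, the per-GB arithmetic behind those numbers (prices as quoted above):

```python
# Per-GB cost using the prices quoted in this post.
prices = {
    "Crucial M4 256GB": (189, 256),
    "Crucial M4 512GB": (571, 512),
    "Samsung 256GB":    (235, 256),
    "Samsung 512GB":    (670, 512),
}
for name, (usd, gb) in prices.items():
    print(f"{name}: ${usd / gb:.2f}/GB")
# The 256GB drives land around $0.74-0.92/GB, the 512GB ones at $1.12-1.31/GB.
```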
 
In RAID1 and its derivatives, the data is simply duplicated across both drives.
Having different data in DELETED sectors is irrelevant. If you are rebuilding the array, only one drive is read, so it doesn't matter that the two drives had different data in deleted space (either nothing or leftovers)...

But that's the point. RAID doesn't know what space is free. Current controllers don't track unused space. If future controllers do, then how do they track it - where do they store it - and how do they do so in a stable and transferable way? If it's stored on flash in the controller - then how does the controller deal with the situation where the drives are temporarily moved to a 2nd controller, and then moved back?

This is not a simple problem. Almost any practical implementation will have corner cases where data loss is likely - or else will have additional hardware cost or performance penalties (e.g. by using a dedicated drive for free space mapping, or recording the free space map on the RAID drives).

Because the RAID layer doesn't know which LBAs contain data, all it can do is trigger an alarm condition if the data on 2 drives is discrepant. This is a significant advantage of integrated FS/multi-device software, such as ZFS.

Well, that is stupid. So rather than having the controller track free space (by keeping track of the TRIM commands it receives) and then issue its own TRIM commands as appropriate, they are going to basically force immediate erasure of sectors in a way that nullifies all gains from wear leveling and GC/TRIM and gives you 128x write amplification? That is absurd!

PS, a deleted SSD is all 1s, not all 0s.

It's not stupid. It's very sensible - the specification doesn't prescribe what the drive does to the flash. It only prescribes that following a TRIM command, that LBA return all zeros. The flash does not have to be erased immediately - just that the firmware recognises that LBA as unmapped.

The data presented by a deleted SSD is implementation dependent, but I've never seen one that doesn't present such a sector as all 0s.
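
To make that concrete, here's a toy FTL model (purely illustrative) of the deterministic read-zero behavior: TRIM only unmaps the LBA, the read path synthesizes zeros, and the physical erase can happen whenever garbage collection gets around to it.

```python
# Toy FTL sketch: TRIM unmaps the LBA; reads of an unmapped LBA return
# zeros, and the flash itself is erased later by GC, not by the TRIM.

class ToyFTL:
    def __init__(self):
        self.map = {}                    # LBA -> flash page contents

    def write(self, lba, data):
        self.map[lba] = data

    def trim(self, lba):
        self.map.pop(lba, None)          # unmap only; no flash erase here

    def read(self, lba):
        return self.map.get(lba, b"\x00" * 512)   # unmapped reads as zeros

ftl = ToyFTL()
ftl.write(7, b"x" * 512)
ftl.trim(7)
assert ftl.read(7) == b"\x00" * 512      # deterministic read-zero after TRIM
```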
 
This is not a simple problem. Almost any practical implementation will have corner cases where data loss is likely - or else will have additional hardware cost or performance penalties (e.g. by using a dedicated drive for free space mapping, or recording the free space map on the RAID drives).

That's not that far-fetched; it could be stored in the first stripe on all drives. You would have to store it on the drives, since you need a way to maintain the array config when swapping out a bad RAID controller. Since it's on more than one drive, it'll survive the loss of any drive. Given the massive transfer speeds SSDs are capable of, I doubt any performance penalty will be significant.
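
As a sketch of that layout (every name and signature here is hypothetical), the controller could keep a TRIM bitmap and mirror it into a reserved region at the start of every member, so the map travels with the drives rather than the controller:

```python
# Hypothetical on-member free-space map: one bit per array sector, written
# identically to a reserved region at the start of each member drive.

class Member:
    """Stand-in for one drive; real code would write to reserved LBAs."""
    def __init__(self):
        self.metadata = b""
    def write(self, lba, data):
        self.metadata = data

class FreeSpaceMap:
    def __init__(self, array_sectors):
        self.bitmap = bytearray((array_sectors + 7) // 8)

    def mark_trimmed(self, lba, count):
        for s in range(lba, lba + count):
            self.bitmap[s // 8] |= 1 << (s % 8)

    def is_free(self, lba):
        return bool(self.bitmap[lba // 8] & (1 << (lba % 8)))

    def flush(self, members):
        # Same copy on every member: survives any single-drive failure and
        # a controller swap, since the map lives with the array itself.
        for drive in members:
            drive.write(lba=0, data=bytes(self.bitmap))

fsm = FreeSpaceMap(array_sectors=1_000_000)
fsm.mark_trimmed(4096, 256)
fsm.flush([Member(), Member()])
assert fsm.is_free(4096)
```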
 
Update: TRIM with driver 11.5.0.1149 works only on Windows 8, since this driver presents the array as a SCSI device, and Win7 cannot pass the TRIM command to a SCSI device while Win8 can.
 
But that's the point. RAID doesn't know what space is free. Current controllers don't track unused space. If future controllers do, then how do they track it - where do they store it - and how do they do so in a stable and transferable way? If it's stored on flash in the controller - then how does the controller deal with the situation where the drives are temporarily moved to a 2nd controller, and then moved back?
Obviously new controllers would need to be made. And I still disagree about this being an issue for mirroring. There is no reason for the controller to degrade the array over a mismatch; an intelligent RAID implementation like ZFS or unRAID will not. It's just that traditional RAID is really, really stupid and makes unnecessary assumptions. This is why rebuilding a RAID5 array is so difficult with large drives: the rebuild process will abort if any of the drives has a single corrupt sector. But this is not an issue with more intelligent implementations of RAID5 derivatives (unRAID, ZFS...)

Yes, a mismatch between two drives in a RAID1 of HDDs means that a file has been corrupted (on SSDs it means that it MIGHT have been corrupted, but most likely it's a GC mismatch)... but if the RAID array does more than throw a warning, then it is doing something stupid and unnecessary... and if SSDs just get false warnings, then so what? Just ignore the warnings and know that the array cannot internally track corruption and that your FS must handle it. It's not like it can actually RECOVER data in the case of corruption anyway. Only full implementations like ZFS can.

The bottom line is that this is not fundamentally difficult, wrong, or problematic to implement; it's that traditional RAID controllers are inherently incompatible with it unless their firmware is modified to make different assumptions.

It's not stupid. It's very sensible - the specification doesn't prescribe what the drive does to the flash. It only prescribes that following a TRIM command, that LBA return all zeros. The flash does not have to be erased immediately - just that the firmware recognises that LBA as unmapped.

The data presented by a deleted SSD is implementation dependent, but I've never seen one that doesn't present such a sector as all 0s.
That does not in any way require a new TRIM command, merely a change in how an SSD handles all existing TRIM commands (there is absolutely no reason for it not to always respond in that manner; the fact that it doesn't already is stupid). And as a bonus, this solves the issues you raised (aka the compatibility issue with the stupid assumptions traditional RAID controllers make). The only "downside" is that you cannot recover deleted files, as they will immediately return all 0s when read due to being marked as trimmed. And that is not a bug, it's a feature (data security... I don't WANT people to undelete my stuff). SSDs that do this could be denoted as "RAID compliant TRIM" or some such marketing BS, and all existing drives could be modified to do TRIM that way (an extremely simple change; honestly, I will not be surprised if some drives ALREADY behave that way).

That's not that far-fetched; it could be stored in the first stripe on all drives. You would have to store it on the drives, since you need a way to maintain the array config when swapping out a bad RAID controller. Since it's on more than one drive, it'll survive the loss of any drive. Given the massive transfer speeds SSDs are capable of, I doubt any performance penalty will be significant.

That... is a good idea, much simpler than on the controller itself.
AFAIK, arrays ALREADY store RAID metadata on the drives themselves, now that I think about it, so keeping track of it at the controller level is no issue.
 
RAID0 has the benefit of not only adding performance, but also adding capacity, a pretty big deal considering the restrictive sizes of SSDs. Heck, my own RAID0 SSD setup is done just as much (if not more so) for creating ~512GB of space for my OS/apps as it was for improving performance.

Agreed, this is the reason I bought a second 160GB G2. Just waiting for the official driver to set it all up. For me it is more about capacity than the speed increase, which I will gladly accept 🙂

I do regular Acronis backups onto an eSATA drive.
 