
Raiding two Intel X25-M 80GB - good idea?

Trajan

Member
It looks from a quick peek at NewEgg that I can get two X25-M 80GB drives for just about the same price as one 160 GB. I read the earlier article about RAIDing the 40 GB drives and getting outstanding performance. Is this a good idea for the 80s?

I'm currently using a G1 Intel 80GB for my boot drive and games, but I'm running seriously low on space. I don't want to have this problem again by getting a 120GB SandForce drive, so the 160GB Intel looks really attractive. So the question for me (I think?) is whether to get the 160GB Intel or two 80s in RAID.

I know TRIM won't work. Any other downsides? How long before the drives get clogged up from no TRIM? (My own G1 drive, I believe, is slowing down noticeably.)

Thanks for any advice at all!
 
Any performance benefit from 2x80GB might be overshadowed by not having TRIM support in the long run (just a guess on my part). Also, how much would the improved performance be noticed?

You have close to 80GB in OS & game data?
 
I have two 80GB G1s that worked just fine with 32bit Vista but just couldn't stay in a RAID0 array with W7.

Either drive would error out. I thought one was going bad, so I ordered an 80GB G2 to replace it, but it seems my Gen1 and Gen2 drives just don't get along.

To make a long story short, just buy the 160GB unit and be done with it. 🙂
 
The drives will be fine with the Intel driver release. There are countless examples littered all over the web of 'garbage collection' results which restore the performance of the RAID volume almost immediately after deletions. It's doing exactly what TRIM is supposed to do. Personally, I believe TRIM is effectively working; Intel just won't concede this for some reason.

"Looks like a duck, smells like a duck"
 
Write performance will degrade slightly due to no TRIM, but it does seem the newer drivers do wonders to make it a smaller decrease.

There's always secure erase if you crave the highest possible benchmark numbers.
 
I cannot afford any more SSD purchases right now due to budget, but if I could, I would get a 160GB drive and migrate the 80GB to a laptop that desperately needs a faster HD.
 
Write performance will degrade slightly due to no TRIM, but it does seem the newer drivers do wonders to make it a smaller decrease.

There's always secure erase if you crave the highest possible benchmark numbers.

Thanks (to everyone) for the advice above. It sounds as if RAIDing actually won't result in a significant performance loss due to TRIM not being active. I'm not really after crazy benchmark numbers, I just want a speedy system for actual usage.

A single 160GB has the speed I want, but since two 80GB drives in RAID and one 160GB seem to be about price-equivalent right now, I wonder if maybe RAID is the smart way to go if TRIM is not a concern. Would I end up with genuinely noticeably faster read times, or is that an exaggeration?


I cannot afford any more SSD purchases right now due to budget, but if I could, I would get a 160GB drive and migrate the 80GB to a laptop that desperately needs a faster HD.

Well, since my current drive is a G1, I was thinking of getting two new X25-M 80GBs and raiding those, and then as you suggested, either selling my current G1 or (more likely) putting it in my laptop.
 
This is sort of a bump, but I thought it was worth it before I pull the trigger in a couple of days. It really sounds as if I don't need to worry too much about performance degradation due to not (technically) having TRIM, assuming I RAID two 80GBs.

On the flip side of things.. Does anyone know if there's any reason that I wouldn't get a performance boost by RAIDing? I saw Anand's article on RAIDing the smaller Intel drives, and I don't want to just assume that I'll get the same thing with two 80GB drives. Although.. it should?

I'm after real world performance really, not just benchmark results, so maybe this is overthinking the whole thing and a 160 GB is the more reasonable option 🙂
 
I think the biggest benefit will be the minuscule access times along with the already higher transfer rates. The extra transfer rate from RAID0 would just be gravy.

I RAIDed three of the 40GB drives (Kingston, flashed to Intel firmware). They test out to 555 MB/s average. In real-world performance they don't "feel" that much faster than a single 60GB OCZ Agility. This was a "seat of the pants" comparison playing WoW on two systems side by side, LOL. Both were definitely faster than running off a VelociRaptor 300GB.
 
To make a long story short, just buy the 160GB unit and be done with it.

I've played with the single units and units in RAID.

The performance increase that RAID brings is minuscule and not worth the hassle.

Buy the 160GB and let TRIM do its thing.
 
I RAIDed 2x X25-M G1s and am thrilled. 540 MB/s reads and equivalent random performance to a single drive. The loss of TRIM is not a concern because:

1) I have G1s
2) It's only something like a 15 percent drop in random write performance. It's undetectable in normal use (vs. a secure-erased drive). Remember, OSes are built to function with conventional hard drives' random performance, and SSDs outperform them in that regard by what, 1000x? I know you claim your G1 is slowing down, but honestly I've never noticed any sort of slowdown. I mean, read speeds don't really drop off even if the drive is 100 percent full... but then again, I've never used more than 40 percent of my max storage space.

I'd go 2x in RAID0 every time. Also, in case you have a drive fail, you'll still have one working SSD while you RMA the other. But really it's all about price. If equivalent, I'd go 2x SSDs.
 
Having no TRIM is no issue as long as you increase the default 7% spare area to something like 20% or even higher. This would sacrifice usable space, but be faster in the long run.

SSDs with enough spare space do not need TRIM at all.

[Graph: spare factor vs. write amplification]


This graph plots the spare factor (default = 7%, or 0.07) against write amplification, which you can see increases as less spare area is left. The resulting erase block fragmentation causes much higher write amplification.

Ultimately, performance and lifespan will suffer. Reserving more space by not partitioning it and never writing to it is therefore close to mandatory for Intel SSDs in RAID.
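As a rough illustration of the relationship the graph describes, here is a small sketch using one commonly cited analytic approximation for write amplification under greedy garbage collection and uniform random writes. The formula and the spare-factor values are illustrative assumptions, not measurements from these particular drives:

```python
# Hypothetical model: write amplification (WA) vs. spare factor s,
# using the common greedy-GC approximation WA ~= (1 + s) / (2 * s),
# where s = spare capacity / user-visible capacity.

def write_amplification(spare_factor: float) -> float:
    """Approximate write amplification for a given spare factor."""
    return (1 + spare_factor) / (2 * spare_factor)

# The default 7% spare vs. the 20-25% recommended in this thread:
for s in (0.07, 0.13, 0.20, 0.25):
    print(f"spare {s:.0%}: WA ~ {write_amplification(s):.1f}")
```

With only the default 7% spare this model predicts write amplification several times higher than with 25% spare, which matches the shape of the curve described above.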

Also, RAID0 on Windows tends to scale much less well than on Linux/BSD systems, where RAID0 scales almost linearly with a proper setup.

I myself run 5 Intel X25-V 40GB SSDs in RAID0 on FreeBSD, resulting in ~1250MB/s of random I/O read performance; 250MB/s per drive. So it's certainly possible. 🙂
 
Hey, thanks for the tip! Just to clarify one thing.. the way that I would increase the spare area on the disk would be to simply set the volume size to smaller than the maximum? And the controller will automatically use the unformatted area as spare?

I'm happy to give up an extra 13% or so if it means I don't have to worry so much about TRIM or wear.
 
Hey, thanks for the tip! Just to clarify one thing.. the way that I would increase the spare area on the disk would be to simply set the volume size to smaller than the maximum? And the controller will automatically use the unformatted area as spare?
As far as I understand the whole thing the controller has no notion about filesystems or partitions at all, so the only thing you've got to do is to leave some space free, how you do that is unimportant. So I see no advantage of using a smaller partition - if you need the space you'll be happy to have it and if not you get the same advantages without the hassle.
 
As far as I understand the whole thing the controller has no notion about filesystems or partitions at all, so the only thing you've got to do is to leave some space free, how you do that is unimportant. So I see no advantage of using a smaller partition - if you need the space you'll be happy to have it and if not you get the same advantages without the hassle.

You could be right, I'm certainly no expert. The explanations I have read elsewhere are that the controller does, in fact, use all free space as spare space. However, in the absence of TRIM working, the controller might confuse any space that you have ever used (and not secure erased) as being "used" and will therefore stop using that space as spare. Therefore, in a RAID situation (where TRIM may not be fully working) or on older drives, if you simply never format some of the space, you can ensure it will never get written on, and will therefore be available forever as spare space.

Again, this is me rehashing what I've read elsewhere, though, I don't pretend to understand it myself.
 
You guys are on the right track.

Intel SSDs will use all space as spare until you write to it. From that point on, the SSD knows it cannot use that area. Over time, if you have a C: partition that spans 100% of the capacity, then after all your installs and reinstalls, all of these blocks will have been written to at least once. It doesn't matter that your filesystem was never more than 50% full; you will have touched 100% of the visible capacity.

Now, if you do not have TRIM, the only spare area the SSD knows about is its internal 'guaranteed' 7% spare space: the difference between GiB ('real' gigabytes) and GB (what hard drive makers use). This is illustrated in this graph:

[Graph: built-in spare area from the GB/GiB difference]
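The GB/GiB gap works out numerically like this (a quick sketch; the 80GB figure is simply the drive size discussed in this thread):

```python
# The ~7% "built-in" spare area comes from the GB (10^9 bytes) vs.
# GiB (2^30 bytes) difference: an "80 GB" drive exposes 80 * 10^9
# bytes to the OS but contains roughly 80 GiB of physical NAND.

advertised = 80 * 10**9   # bytes visible to the OS
flash      = 80 * 2**30   # bytes of physical flash (approximate)

spare_fraction = (flash - advertised) / advertised
print(f"built-in spare area: {spare_fraction:.1%}")  # ~7.4%
```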


The 7% is not enough to stop erase block fragmentation after actual/realistic use over time; you need more! How can we do this? Two ways:

1) With TRIM: use TRIM and keep a good portion of your filesystem empty (i.e. never more than 60% full)
2) Without TRIM: reserve some dedicated space that you will NEVER write to so the SSD still has more than its default 7% spare space.

Even with TRIM, you may want to reserve some dedicated space, due to erase block fragmentation. Even with TRIM working, if you regularly go over 60% capacity, you may be affected by erase block fragmentation, as TRIM will only give the SSD small fragmented spots and not whole erase blocks that are free. To cope with this, use TRIM and never fill the filesystem beyond 60%; or reserve dedicated space just as you would without TRIM.

So the general advice is:

TRIM capable: two options:
1) create a C-partition spanning 100% of the visible capacity; never let the filesystem become more than 60% full
2) create a C-partition spanning ~90% of the visible capacity; leave the other 10% unallocated and never use it. In addition, do not let your filesystem become more than 80% full. As you can see, this might actually yield more usable storage space while still keeping performance degradation very low.

Without TRIM: dedicate a good 20-25% spare area to the SSD. Some existing and future Enterprise SSDs will have between 25% and 50% of spare capacity by default; invisible from the operating system. These SSDs do not need TRIM at all and may even opt to ignore TRIM as the dedicated space works better due to having full empty erase blocks without any user data to consider.
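As a hypothetical worked example of the no-TRIM advice above, assuming the 2x80GB RAID0 setup discussed in this thread and a 20% spare target (the figures are illustrative, not prescriptive):

```python
# Sizing a partition on a 2 x 80 GB RAID0 array so that 20% of the
# visible capacity is left unallocated as extra spare area.

visible_gb   = 2 * 80    # RAID0 array capacity as the OS sees it
spare_target = 0.20      # fraction to leave unpartitioned, per the advice above

partition_gb = visible_gb * (1 - spare_target)
spare_gb     = visible_gb - partition_gb
print(f"create a {partition_gb:.0f} GB partition, leave {spare_gb:.0f} GB unallocated")
```

So on a 160GB array you would create a 128GB partition and simply never touch the remaining 32GB.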
 
Thanks again for the expert info, Mesa! I expect I am regularly going to exceed 60% usage of my Intel RAID array, so maybe I will bump the spare area up to 25% and hope that's enough to prevent noticeable slowdowns.

One (last?) question for you if you are feeling generous. In the absence of TRIM (or even with TRIM) what are effective ways of manually resetting all the spare space? People talk about secure erasing, and I assume they are talking about a particular utility.

From my perspective... if I image my two-drive RAID array, back that up on another disk, reformat the RAID drives, and then reimage from my backup, is that sufficient to reset everything as far as the SSDs are concerned? Or do I need to do something special (perhaps at the reformatting step)? Thanks again!
 
As modern SSD controllers use 'remapping', they actually deviate from how the Operating System thinks data is stored. Where Windows may think the first sector is located at the beginning of the SSD, it may well be located elsewhere if a small write happened to that location and the SSD decided to write to a free block instead because that is much faster and saves on write cycles.

All very well, but it has to remember this somewhere. This is what we call the remapping table, and it is stored in the Host Protected Area (HPA) that exists in all modern HDDs and even SSDs. By erasing the HPA, we destroy any reference to where data was located. Even though all the data in the physical flash cells stays the same, it would be like a brand-new disk, since the SSD sees a totally empty table; as far as it knows, the OS has not written a single byte to it yet. What is actually stored in the flash cells is irrelevant; it will be overwritten, and the OS will not read portions that are not in use.
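The remapping idea can be sketched with a toy model. This is purely an illustrative assumption of how such a table behaves, not Intel's actual firmware; `ToyFTL`, `write`, and `secure_erase` are hypothetical names:

```python
# Toy sketch of SSD remapping: the controller keeps a logical-to-physical
# table, and a secure erase simply discards that table, making every
# block "free" again regardless of what the flash cells still contain.

class ToyFTL:
    def __init__(self, num_blocks: int):
        self.mapping = {}                    # logical block -> physical block
        self.free = list(range(num_blocks))  # physical blocks with no mapping

    def write(self, logical: int) -> int:
        """Redirect a logical write to any free physical block."""
        physical = self.free.pop(0)
        self.mapping[logical] = physical
        return physical

    def secure_erase(self, num_blocks: int) -> None:
        """Drop the table: the drive now looks factory-fresh."""
        self.mapping.clear()
        self.free = list(range(num_blocks))

ftl = ToyFTL(num_blocks=8)
ftl.write(0)
ftl.write(0)          # a rewrite of the same logical block lands elsewhere
print(len(ftl.free))  # 6: the stale copy still occupies a physical block
ftl.secure_erase(8)
print(len(ftl.free))  # 8: all blocks free, all data references gone
```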

To reset the HPA remapping table, you can use the SecureErase or HDDErase utilities; Google them. You have to set the controller mode to IDE to make this work.
 
Oh, for those interested, I posted some basic benchmark results of five Intel X25-V 40GB in RAID0 over here:
http://hardforum.com/showthread.php?t=1515243

Got up to ~1250MB/s random read performance, which is pretty slick. Still, I feel it's a shame the big user base on Windows does not have any advanced software RAID that really scales well in terms of random IOps. It all seems to be about sequential speeds, as that is the only thing the casual consumer knows about. They don't know what random IOps means; not that I blame them, this stuff is just rather difficult.
 
Thanks again (and again) Mesa. I'm sorry I keep having new questions 🙂 I hope I can impose on you one more time!

If the HPA is cleared out, I get that that will effectively wipe the drive perfectly. Will this also clear out the SSD's tracking of wear and tear? I am groping a little here in my ignorance, but I do remember reading that the controller tracks usage of each data segment to spread out the wear and extend longevity. I don't know if the occasional reset of that data would really have a serious impact or not, but I'm curious whether that information is stored in the same place that I might be wiping out now and then.

Thanks again!
 
Hello there,
I've got some extreme noob questions on the subject.
Does the TRIM issue remain unresolved
if I'm going to RAID0 two 80GB X25s (given we're now at 08/23/2010)?

Do Intel mobos have RAID controllers that TRIM the SSDs in RAID0?
Or can I do it on an AM3 board by just using Intel's new SSD drivers?

(I'm going to buy a new rig;
I was flirting with OCing an X6,
but given this thread and Intel chirping about RST,
http://www.intel.com//support/chipsets/imsm/sb/CS-022304.htm,
I am wondering if I should consider going all Intel (mobo/chip)...

Sorry for the rambling;
I'd appreciate any input or suggestions on what to get.

P.S.
Following is the exchange I had with Intel:

me:
Since TRIM won't work, what should I do in order to avoid the performance decay?

Intel Rep :
In this case, what you would do is back up your data whenever you notice a slowdown in performance (depending on use, after 2 to 6 months), then run a low-level format, then restore your data.

me:
What about making a smaller partition? If so, how much smaller? Any advice on how to partition it? Will it work on Windows XP64?

Intel Rep :
The partition does not influence what happens with the SSD. Every time data is moved or erased, the data is erased from that part, but the space where the data was is still held. This 'space' is what you need to get rid of; TRIM detects these spaces and flags them as available again.

me:
So, there's no maintenance-free form of RAID0 on the X25-Ms?

Intel Rep :
Not at the moment. This is because no RAID controllers can work with TRIM yet.
 