SSD for LSI Cachecade?

bradcollins

Member
Nov 19, 2011
http://www.lsi.com/products/storagecomponents/Pages/MegaRAIDSAS9271-8iCC.aspx

At work we purchase a number of these as IBM M5110s and usually put the 1GB flash and RAID 5 module on them when building servers for 10-25 people. The current X3500 and X3650 ship with 8 HDD bays, and these controllers have 8 ports, so they are a perfect match. Normally we use 6-8 drives in them, in different types of RAID arrays to suit the use of the system.

I've been thinking for a while about using CacheCade, but the IBM prices on SSDs are very high, and their drives appear to be Micron P400e based, which simply isn't very quick any more at only 7,000 write IOPS. CacheCade allows both reads and writes to be cached.

Now the question comes down to which normal consumer SSD makes sense. Considering they would be in RAID 1, I don't believe absolute reliability is required: if a drive fails, it wouldn't be hard to turn off CacheCade until another drive is installed and the array is rebuilt. I would only consider a RAID 1 of two drives, allowing six normal spinning drives to go onto the same RAID card without expanders or buying additional drive bays for the server.

Most of these servers are SBS 2011 based, so they would have an Exchange database on them, probably a line-of-business app or two as well, plus the usual Windows install, page file and data.

The Samsung 840 Pro is overall the fastest SSD out there at the moment, but it does appear to let performance drop to quite low levels, and as this environment wouldn't allow TRIM, I'm guessing that a SandForce drive is possibly a good option? The Plextor M5 Pro may also be a good option, but I'm not considering the Vertex 4: even though it may run on similar silicon, it has that huge slowdown once data is actually on the drive.

Assuming SandForce vs the 840 Pro, I think the Intel 520 is probably the best option due to its reliability compared to the others, but I can't help but wonder about SF-2282 based drives (Force GT) or Toggle NAND drives (Force GS and others). I did note that I couldn't find an SF-2282 and Toggle combination; they don't seem to exist. But the gains of any of those other configurations over a standard Intel 520 are low, and I doubt they are worth the reliability trade-off.

Size-wise, the 480GB SandForce drives are quite a bit slower than the 240GB models, but I wonder if 480GB is a better option, as it would simply be caching twice as much data. Or it could be dropped down to a 400GB cache for longer life and more spare area, which would allow for greater performance?

Or does anyone have another SSD in mind that would offer great performance, not just for the first week, but over the course of a year or so, where it is likely to be absolutely hammered with random data and without TRIM support?
 

tweakboy

Diamond Member
Jan 3, 2010
Ouch, that is not good. Watch your money running whatever SSD you get, because according to you it's not going to have TRIM, so who knows what will happen. TRIM is there for a reason and plays a big role, from my experience.
 

Cerb

Elite Member
Aug 26, 2000
bradcollins said:
Or does anyone have another SSD in mind that would offer great performance, not just for the first week, but over the course of a year or so, where it is likely to be absolutely hammered with random data and without TRIM support?
Intel's 520 and Crucial's M4 come to mind, but I really do wonder if the best option might just be to use a faster non-consumer SSD. It seems just about everybody else's is way faster than the P400e. Consumer SSDs are not optimized for heavy random loads, and none of the major makers are going to guarantee crap for their consumer SSDs when they can point you to their enterprise line, which has been made with those cases in mind.

IIRC, Rubycon has used several such SSDs in RAIDs, so might be able to offer some useful insight.

tweakboy said:
Ouch, that is not good. Watch your money running whatever SSD you get, because according to you it's not going to have TRIM, so who knows what will happen. TRIM is there for a reason and plays a big role, from my experience.
If it made that much difference, they wouldn't have put off supporting it for so long. In fact, the SATA standards people probably would have bothered to make it work right in the first place, if it were so important (making it non-deterministic in the first place was stupid, a common case of hardware people not thinking about how their hardware will actually be used).

TRIM is fundamentally extra. If you don't give the drive some idle time, or use all SF-based drives, or write a lot and then benchmark it, TRIM will be great. For the rest of the world, it's a crutch for drives with poor real-time GC performance. Good real-time GC, without performance drops or WA spikes, is going to be hard to do without more over-provisioning than consumer SSDs have, though.
 

bradcollins

Member
Nov 19, 2011
Ignoring how good or bad TRIM is, it isn't relevant here, so...

Cerb, why do you think the M4 would be a good option?

I've been looking into some of the SLC SandForce drives out there that Comay and Superspeed offer; they don't drop in performance when filled up as much as other SandForce drives do, as per this site: http://www.rwlabs.com/article.php?cat=&id=701&pagenumber=12

They only go up to 128GB, which would be OK, but for many of these servers Exchange and the page file are going to be above 128GB, so I was thinking that another SLC-based drive might be a better bet. They are hard to find, though, and end up being near the cost of the IBM ones anyway.
 

Cerb

Elite Member
Aug 26, 2000
bradcollins said:
Cerb, why do you think the M4 would be a good option?
It's been not uncommonly used, with success, as a cheapskate's server SSD. A random write cache scenario might still be on the tough side for it, though. Even if it wouldn't kill it quickly, most consumer drives expect to have periods of little to no writing, to perform GC, and not giving it that could hurt performance significantly.

Even MLC enterprise drives are not cheap, commonly 4x or more the price of desktop/mobile drives ($800 for a 200GB Intel 710, for instance).
 

mrpiggy

Member
Apr 19, 2012
Look at it this way: pretty much regardless of which typical SSDs you use, you are going to bump up server application/database performance a LOT at the server level. That said, a LOT is a relative term. Users will not be able to quantify the performance difference beyond "much better than it was" from their perspective, as there are still other things (network bandwidth, etc.) that affect how their computers respond. Since "much better" is all the users will notice, it is in your best interest to shoot for more endurance/reliability rather than more raw drive I/O speed in a server app/database environment. Users won't be able to tell the difference between a 70% and a 95% increase in storage response benchmarks (just made-up numbers), but they will immediately notice "back to much slower" if the SSDs fail. Even at degraded speeds, the SSDs ought to feel "much better" to the end users compared to pure HDD response.

That being said, AFAIK CacheCade is currently limited to 512GB of SSD caching, whether you use one big SSD or multiple smaller ones. If you are going to use consumer drives, then get bigger ones and overprovision the heck out of them, like two 512GB drives partitioned at 50%. This should extend the life of the SSDs greatly given the current 512GB max caching size of CacheCade.
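
To put rough numbers on that idea, here is a minimal sketch. The raw-NAND figures (512GiB behind a "512GB" drive) are assumptions about typical consumer drives, and the function and its numbers are purely illustrative; they are not anything CacheCade itself reports.

```python
# Back-of-envelope CacheCade sizing: how much cache capacity and spare area
# you end up with when only part of each SSD is given to the cache volume.
# All figures here are illustrative assumptions, not vendor specs.

GIB = 2**30   # binary gigabyte (raw NAND comes in these)
GB = 10**9    # decimal gigabyte (advertised/partitioned capacity)

def cachecade_plan(raw_nand_gib, allocated_gb, drives, raid1=False):
    """Return (cache_size_gb, effective_overprovisioning_percent)."""
    raw_bytes = drives * raw_nand_gib * GIB        # total flash across the drives
    used_bytes = drives * allocated_gb * GB        # space handed to CacheCade
    cache_gb = allocated_gb * (drives // 2 if raid1 else drives)
    op_percent = 100.0 * (raw_bytes - used_bytes) / used_bytes
    return cache_gb, op_percent

# Two "512GB" drives (assumed 512GiB of NAND each) with only 256GB allocated:
# stays at the 512GB CacheCade cap in RAID 0 and leaves roughly 115% spare area.
print(cachecade_plan(512, 256, drives=2))
# The OP's RAID 1 idea: two 480GB-class drives with ~300GB allocated each,
# giving a 300GB mirrored cache with roughly 83% spare area per drive.
print(cachecade_plan(512, 300, drives=2, raid1=True))
```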

With this in mind, for "consumer class" SSDs I would simply stick to a good name brand known for reliability (which anything SandForce is NOT), as the ultimate performance differences between, say, a Samsung, Crucial, or Intel are going to be invisible to the end user. Pure advertised I/O numbers for the SSDs are not really meaningful, since everything is still dependent on the RAID controller, and whether the storage ends up 70% or 95% faster (percentages pulled out of the air for example's sake) means little to the end user beyond the "it feels much faster" response they will give you.

Basically, the end users will think "damn, it's faster" when driving a Corvette instead of their Camry. They will say the same thing if they go from a Camry to a Porsche. However, they generally won't notice any difference if they go from the Porsche to a Corvette, even if the Corvette has 20MPH more top speed than the Porsche. Once they get used to the faster speeds, they WILL notice if they have to drive the slow Camry again... It behooves you to not worry about top MPH, but to get the one that stays reliable for the longest time, to put off any complaining about returning to Camry speeds, even if that means giving up a potential 20MPH off the top end...
 

tweakboy

Diamond Member
Jan 3, 2010

Well, it needs to be a quality SSD. For example, my dad's cheap A-DATA 128GB SATA 3 drive only did about 375MB/s; now, after 3,000 hours of use (he's had it a year), Crystal shows 315MB/s, and my system is so much faster than his, although he has a Sandy Bridge 2600K @ 4.2GHz. Crystal shows his drive at 97 percent now, and the warning is for wear and lifetime. His boot-up is pretty slow, and my Photoshop CS6 launches faster than his: mine in 2 seconds, his in 4. Overall I enjoy sitting behind my system any day. Plus he's got a cheapo ViewSonic 22" 1680-res monitor. :wub:
 

tweakboy

Diamond Member
Jan 3, 2010
If you're going to do a lot of file copying etc., then with no TRIM I think even a quality SSD will slow down, IMO. Maybe they keep their speed without TRIM; hmm, we need a TRIM expert here.

Also, if you want stability and quality, IMO there are two choices:

Crucial or Samsung
 

bradcollins

Member
Nov 19, 2011
Cerb, there would be periods of little activity; most would get ~8 hours a day of idle time between normal use and the backup running. Good GC would be important though, which is why I'm leaning towards SandForce-based drives.

mrpiggy, you are correct of course: any current reasonably quick SSD will be fine for the role. But in a situation where TRIM won't be working, I'm thinking that some SSDs would completely fall over. For instance, the Samsung 840 lets write speeds drop all the way down to ~25MB/s in Anand's test, while the Corsair Neutron only goes down to about 60MB/s with a much higher average. The Crucial M4 drops down to about 15MB/s and has a very low average.

http://www.anandtech.com/show/4712/the-crucial-m4-ssd-update-faster-with-fw0009/6

http://www.anandtech.com/show/6058/corsair-neutron-gtx-ssd-256gb-review/7

http://www.anandtech.com/show/6328/samsung-ssd-840-pro-256gb-review/6

The M5 Pro also seems to do quite a good job after 20 minutes.

http://www.anandtech.com/show/6153/plextor-m5-pro-256gb-review/8

Strangely, I couldn't find any of those graphs for SandForce-based drives; instead I found pages like this:

http://www.anandtech.com/show/5817/the-intel-ssd-330-review-60gb-120gb-180gb/7

All of the SF-based drives have a page like that (Vertex 3, Intel 520, Kingston HyperX, etc.). I don't understand why?
 

mrpiggy

Member
Apr 19, 2012

If you go by the standard 20/80 write/read ratio and 8 hours of idle time per day, with "50%" overprovisioning on two 512GB drives in a RAID 0 512GB CacheCade cache, the large overprovisioning buffer and GC will never let your drives reach, under normal usage, the state that reviewers like Anand intentionally put drives into. Reviewed drives are not overprovisioned beyond factory standard levels (like 7%). There is a HUGE difference in endurance and performance over time when you push overprovisioning past the 30% level. Also figure that the algorithms LSI builds into CacheCade are smart enough not to "easily" get into this over-abused state (not counting the fact that the RAID card has its own cache buffers independent of the drives used).
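
As a rough illustration of the endurance side of that argument, here is a purely hypothetical back-of-envelope estimate; the P/E cycle count, daily write volume, and write-amplification figures are all assumed, not measured from any of these drives.

```python
# Hypothetical endurance estimate: total NAND program/erase budget divided by
# what the host writes multiplied by write amplification (WA). Heavy
# overprovisioning helps mainly by keeping WA low under sustained random writes.

def drive_life_years(nand_gb, pe_cycles, host_writes_gb_per_day, write_amp):
    total_nand_writes_gb = nand_gb * pe_cycles            # lifetime write budget
    consumed_per_day_gb = host_writes_gb_per_day * write_amp
    return total_nand_writes_gb / consumed_per_day_gb / 365.0

# Assumed: 512GB of NAND, 3,000 P/E cycles (a typical figure quoted for 2x nm
# MLC), 100GB of cache writes per day. Only the WA figure differs below.
print(drive_life_years(512, 3000, 100, write_amp=10))   # minimal spare area: ~4 years
print(drive_life_years(512, 3000, 100, write_amp=2))    # heavily overprovisioned: ~21 years
```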

You are more likely to have the consumer-level SSDs just drop out of being recognized by the controller for no apparent reason.
 

bradcollins

Member
Nov 19, 2011
Thanks for that. I'd prefer the drives to be in RAID 1 in case either of them drops off, in which case allocating around 300GB out of 480/512GB is probably OK? Any further ideas on which model of SSD?
 

mrpiggy

Member
Apr 19, 2012
bradcollins said:
Thanks for that. I'd prefer the drives to be in RAID 1 in case either of them drops off, in which case allocating around 300GB out of 480/512GB is probably OK? Any further ideas on which model of SSD?

I'd recommend the biggest non-SandForce Intel, M4, or Samsung as far as a commercial-level SSD for this usage. The more overprovisioning beyond the 512GB max CacheCade size, the better.

If you are still at the experimental stage rather than deployment, you might want to try a couple of cheap 128GB drives just to test end-user performance over a few weeks or months. You don't need to go all-in at once, as the CacheCade caching can be enabled/disabled on the fly after it is set up (double-check; I think it can, but I haven't played with it lately), so if you like what two cheap small SSDs do, you can simply upgrade later without bringing down the system. The smaller, cheap SSDs should at least give you a good indication of the end-user performance benefits and/or of bottlenecks other than storage response. The large overprovisioned SSDs are more about longer-term reliability as a caching device.
 

tweakboy

Diamond Member
Jan 3, 2010
Getting SandForce is a death wish (the old ones, anyway).

Although I don't know about the new SandForce-gen controllers... they're more stable, but would you trust them fully? SandForce on that Gigabyte motherboard is a recipe for trouble.
 

mrpiggy

Member
Apr 19, 2012
Simply put, SSDs based on SandForce controllers have a track record of poor reliability. While others may have had some weird growing-pain firmware issues (like the Crucial 5,000-hour bug), those issues have been more about inconvenience (i.e. having to update firmware, but the data stayed intact) as opposed to the complete data loss and dead drives that are often the case with SandForce-based drives.

I'm not saying that SandForce drives wouldn't work perfectly fine in a normal PC environment 90% of the time without issue, but once you start adding RAIDs with multiple drives and higher levels of required access, any base bugs get exponentially worse and more likely to occur. You don't need the highest-benchmarking SSDs for your caching purpose. You want a track record of reliability over time within a particular lower-end consumer price range.

You are using consumer-level drives for a purpose they are not guaranteed to work well in. Modifying the drives to different parameters than stock (i.e. lots of overprovisioning) will make up some of the difference versus SSDs purpose-built for this usage (read: expensive). Chances are you won't have any issues at all, but since you're playing in the low-end consumer price range, stick with drives that have the fewest anecdotal reports of problems. SandForce-based drives simply have a deserved reputation for being less reliable than other name-brand SSDs that don't use the controller.
 

Cerb

Elite Member
Aug 26, 2000
mrpiggy said:
Simply put, SSDs based on SandForce controllers have a track record of poor reliability.
They can also exhibit fragile performance over time, especially with a lack of TRIM (<1 write amplification takes trade-offs).
 

bradcollins

Member
Nov 19, 2011
Funny you say that. While I'm aware that SandForce drives can be backed into a hole they just can't get out of, I thought that with a reasonable amount of overprovisioning they would be OK, although I understand the other points about their reliability and so on.
 

johny12

Member
Sep 18, 2012
tweakboy said:
Getting SandForce is a death wish (the old ones, anyway).

Although I don't know about the new SandForce-gen controllers... they're more stable, but would you trust them fully? SandForce on that Gigabyte motherboard is a recipe for trouble.

I have been using an Intel 520 at home for some time now; after the firmware update, I have had no issues at all. My workplace has an IBM DS5000 storage system. It provides up to 700,000 IOPS and 6,400MB/s. It uses SandForce as well. I am comfortable with SandForce and am OK with its performance.
 

mrpiggy

Member
Apr 19, 2012
johny12 said:
I have been using an Intel 520 at home for some time now; after the firmware update, I have had no issues at all. My workplace has an IBM DS5000 storage system. It provides up to 700,000 IOPS and 6,400MB/s. It uses SandForce as well. I am comfortable with SandForce and am OK with its performance.

So you're using "consumer-level" SandForce SSDs in your network storage system? The OP is specifically asking about cheap consumer-type SSDs. There's a large difference between using (and paying the difference for) fully vendor-qualified/tested SSDs on an enterprise platform and consumer-level SSDs you can get at Best Buy.
 

Emulex

Diamond Member
Jan 28, 2001
Samsung 830 for CacheCade @ 20% OP if you want cheap (no capacitor),

or Intel 320s at 40-60% OP if you want damn reliable.

CacheCade 2.0 Pro is write-back.

You might consider doing read-only CacheCade if you want to stress the SSDs a lot less. Besides, sync writes are faster to 15K spindles.

Those new PCIe 3.0 CacheCade controllers can do over 700K IOPS and 6,400MB/s, but don't expect much out of the $130 eBay M5014 cheapies with a CacheCade key.

The 9265 is dual-core with 1GB of DDR3, and the PCIe 3.0 cards use the same CPU but at a slightly higher clock and on a much faster bus.

Someone sold me a CacheCade 1 key for $50 and it enabled everything (FastPath, encryption, etc.) on my LSI card except write-back CacheCade.

If you look at the LSI Nytro, it basically tacks a few SSDs onto the card.

What I want to find out is: who makes a full-length x8 board that I can use just to draw power for 4 SSDs and pipe them directly into my caching controller? It can't be that hard to pull 20 watts off the bus, make a spot to mount 2x2 SSDs lengthwise, and let me handle the I/O with my own controller.
 

johny12

Member
Sep 18, 2012
mrpiggy said:
I'd recommend the biggest non-SandForce Intel, M4, or Samsung as far as a commercial-level SSD for this usage. The more overprovisioning beyond the 512GB max CacheCade size, the better.

SandForce has an interesting DuraWrite feature that allows the user to increase the overprovisioning but not decrease it. When it is used along with TRIM, it further expands the overprovisioning free space. I bet even the Samsung 840 doesn't have this feature.
 

mrpiggy

Member
Apr 19, 2012
johny12 said:
SandForce has an interesting DuraWrite feature that allows the user to increase the overprovisioning but not decrease it. When it is used along with TRIM, it further expands the overprovisioning free space. I bet even the Samsung 840 doesn't have this feature.

Huh? And how is this different from manually overprovisioning other SSDs? You don't need special tools to overprovision an SSD (unless you want to change the factory-provided internal amount). Simply not partitioning all the available space on an SSD gives the controller the same pool of unused blocks.

The only difference between factory built-in overprovisioning and less-than-full partitioning (user overprovisioning) is that you generally cannot change the factory-provided amount. The SSD controller will use user-created unpartitioned space (at least on any decent SSD) the same way it uses the built-in overprovisioning, increasing the SSD's long-term durability and performance while reducing write amplification.

Simply put, if you have an SSD with 250GB of available space (not counting any built-in factory overprovisioning), and you partition only half of it, say 125GB, leaving the rest as unused/unpartitioned space, that unpartitioned space will be used by the SSD's controller in the same manner as the built-in overprovisioning (generally around 7%-10% on consumer SSDs). And when you get to these large amounts of overprovisioned space (40+%) relative to the actual used size of the SSD, it is MUCH (orders of magnitude) more difficult to back the SSD into the bad-state, poor-performance corner the OP is concerned about.

Think of it this way (although it's obviously an imperfect analogy): you have two (insert your favorite team sport) teams, and one has twice as many players as the other. If there are no restrictions on swapping players on the field, the larger team can swap out more players more often, so the players actually on the field will always be less tired than those of the smaller team, which has fewer members to swap and needs to keep the same players playing longer.
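
To put numbers on the 250GB example above, here is a quick sketch; the 256GiB raw-NAND figure is an assumption about a typical consumer drive of that class, since vendors don't always publish it.

```python
# Where the "roughly 7%-10% built-in" figure comes from, and what
# half-partitioning adds on top. The raw NAND size is assumed.

raw_nand = 256 * 2**30            # assume 256GiB of flash, about 274.9 billion bytes
full_partition = 250 * 10**9      # what the OS sees with the whole drive partitioned
half_partition = 125 * 10**9      # mrpiggy's example: partition only 125GB

factory_op = (raw_nand - full_partition) / full_partition * 100
user_op = (raw_nand - half_partition) / half_partition * 100
print("factory OP ~ %.0f%%, half-partitioned OP ~ %.0f%%" % (factory_op, user_op))
# prints: factory OP ~ 10%, half-partitioned OP ~ 120%
```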
 

ViviTheMage

Lifer
Dec 12, 2002
Without reading the other comments in the thread: I have a bunch of these running in our CacheCade 2.0 setup:

Intel 320 Series 120GB

I tried OCZ Vertex 3s and they did NOT work; they kept dropping down to 1.5Gbps and performance was trash, so we went with the Intel 320s.
 

johny12

Member
Sep 18, 2012
I disagree with your statement that SandForce has a poor reliability record. I define reliability as predictability. If SandForce SSDs could all die at any time, then you would have a reliability issue, but I have not seen that. The problems I saw for them in the past were BSODs for some small percentage of users, and those did not result in lost data. There were problems from some manufacturers with lower quality standards that might have been using SandForce, but that is not a fault of SandForce. Like any technology product, you need to review the manufacturer's reputation. There are plenty of high-quality manufacturers who use SandForce and have no problems at all.

As for the RAID comment, I have never seen any reports of problems at all. Do you have any links to evidence that would support your claim?
 

mrpiggy

Member
Apr 19, 2012
The problem is that OCZ is/was the biggest pusher of SandForce, and OCZ sucks. Unfortunately, deserved or not, the implicit association is that SandForce is unreliable because OCZ is unreliable. That's obviously not likely true across all brands/models, but most people don't have time to separate the wheat from the chaff among all the brands using the controller. Hence, whether it's too broad a brush stroke or not, it's much easier to simply recommend staying away from SandForce controllers.

Feel free to buy a bunch of different brands with SandForce controllers and report on the long term; however, IMHO there is not enough of a performance, endurance, or price benefit to a SandForce-controller SSD over the many other excellent SSDs using controllers that don't have a history (deserved or not) of failures.

As to SandForce failures in higher-end RAIDs, this is not a forum where much high-end RAIDing with cheaper SSDs is discussed, nor am I going to go search for and link reports for you. I can tell you that the most common "cheap consumer" SandForce drives attempted for this purpose appear to be OCZ's Vertex 3s (mainly due to price and advertised speeds, and, unfortunately, guilt by association with OCZ), and they have exhibited off-the-wall failures like only working at SATA 2 speeds, not being recognized by high-end controllers, dropping off, and/or randomly bricking over the longer term with total data loss, at a higher rate than comparable consumer-level SSDs that do not use the SandForce controller.

As I mentioned, things are different on the single-drive workstation side. However, when you have to fork out money for multiple SSDs for a RAID, where the supposed benefits of the SandForce controller are no longer significant (i.e. when used with an expensive and smart caching RAID controller) and data loss is catastrophic, people tend to be much more conservative in giving and taking recommendations, even those based on light or weak anecdotal evidence.
 