Best RAID SSD setup for gaming, workstation duty

eric.kjellen

Member
Oct 4, 2010
30
0
0
What kind of SSD RAID array would be best for minimizing boot, application startup, and level load times, and for workstation applications such as CAD, 3D modeling, rendering, and physics computing? The ones I've been looking at (4x X25-E in RAID 0) and products such as the OCZ Z-Drive, RevoDrive, etc. have all been let-downs in precisely the areas related to random reads; instead they have excelled in sequential performance, which is something I don't care much about.

I don't need the level of performance offered by the FusionIO products, but I want the same kind of fast 4K reads. I thought that was exactly what you would get if you put X25-Es in RAID 0, but the TechReport review apparently showed otherwise.
 

HendrixFan

Diamond Member
Oct 18, 2001
4,648
0
71
SSD RAID wouldn't really help gaming too much, at least not any better than HDD RAID. Sequential read speeds are quite good with SSDs, but not amazingly better than HDDs. It is in random/4K speeds and access times that SSDs shine over HDDs.

For the price of a 4x SSD RAID setup you will saturate the onboard RAID of the ICH10R, and at tremendous cost. A 4x HDD array will be considerably cheaper and just as fast in sequential reads for game load times. The ICH10R maxes out around 650 MB/s.
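The arithmetic behind that ceiling is simple enough to sketch. A minimal model, where the per-drive speeds are illustrative assumptions rather than benchmarks:

```python
def array_seq_throughput(n_drives, per_drive_mb_s, controller_cap_mb_s):
    """Sequential throughput of a RAID 0 array: drives add up linearly
    until the controller's own ceiling caps the total."""
    return min(n_drives * per_drive_mb_s, controller_cap_mb_s)

ICH10R_CAP = 650  # MB/s, the onboard ceiling mentioned above

# Four ~120 MB/s HDDs land just under the cap...
hdd_array = array_seq_throughput(4, 120, ICH10R_CAP)  # 480 MB/s
# ...while four ~250 MB/s SSDs blow straight through it and get clipped.
ssd_array = array_seq_throughput(4, 250, ICH10R_CAP)  # 650 MB/s, capped
```

With assumed numbers like these, the extra money spent on the SSDs past the saturation point buys nothing in sequential terms.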

Pair that with a single smaller SSD drive for OS and other apps and you will save money and get just as good performance.
 

Baasha

Golden Member
Jan 4, 2010
1,997
20
81
OP, I'm in a similar situation to yours in terms of finding a great SSD RAID array for my system for those exact tasks: 3D modeling, video rendering, photo editing, and of course, gaming.

I currently have two SLC (server-class) SSDs in RAID 0 and they serve as my OS drive. I have two 15k RPM SAS drives, also in RAID 0, for my programs and games, and they function beautifully and are very, very quick. As the person above stated, the current choices for great SSD RAID arrays are few unless you're looking to spend thousands of dollars on server-class storage (Z-Drive etc.).

I am thinking of getting a solid 4-8 port internal PCI-E RAID controller (3ware etc.) that does SATA 3.0/6Gbps and putting 4 of the new-generation MLC SSDs in RAID 0. This should push throughput close to 1 GB/s. The question is, which SSDs to get? I'm not sure myself, with so many companies now joining the SSD fracas and making choices for the consumer ever more ambiguous. Intel Gen. 3 SSDs should be top notch, the new SandForce SF-1200-class drives are multiplying, and Crucial should be coming out with their new generation of SSDs as well.

Long story short, my advice is to wait till summer. Once several of these companies release their new-generation SSDs and they are thoroughly tested/reviewed, jump in and go for it. At least, that is what I am planning to do.
 

eric.kjellen

Member
Oct 4, 2010
30
0
0
My build is still far in the future, so we can ignore real-world considerations like cost and time frame, but if it ends up being more expensive than the FusionIO PCI-E cards then I guess that defeats the purpose of finding a reasonable alternative to those. =)

I looked at both the Z-Drive and other PCI-E SSD cards (basically just integrated SSD RAID products anyway), but in the reviews I could find they did not offer stellar performance at all and were wildly inferior to the FusionIO when it comes to random reads. They were very fast in large-file copy throughput, though.

I'm curious about why SSD RAID arrays aren't working out as a solution to the I/O bottleneck, at least as far as the TechReport review goes, because I had the impression that putting drives (including HDDs) in RAID 0 would increase IOPS as well as throughput. Is there some kind of latency issue with the RAID controller that will still act as a bottleneck in the system?

Edit: Would putting more than 4 X25-E's in RAID 0 have given a different result?
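One common way to picture why striping helps throughput more than random reads can be put in a toy model (the latency figure below is an assumption for illustration, not data from the review): a 4K random read only touches one stripe member, so at low queue depth the extra drives simply sit idle.

```python
def random_read_iops(n_drives, per_drive_latency_ms, queue_depth):
    """Toy model of RAID 0 random reads: each 4K request lands on a
    single stripe member, so at most min(queue_depth, n_drives)
    drives can be working at any moment."""
    drives_busy = min(queue_depth, n_drives)
    per_drive_iops = 1000.0 / per_drive_latency_ms
    return drives_busy * per_drive_iops

# At queue depth 1 (typical of boot, app startup, and level loads),
# four drives deliver no more IOPS than one:
qd1_single = random_read_iops(1, 0.1, 1)  # 10,000 IOPS
qd1_array  = random_read_iops(4, 0.1, 1)  # still 10,000 IOPS

# Only a deep queue lets the stripe members work in parallel:
qd8_array  = random_read_iops(4, 0.1, 8)  # 40,000 IOPS
```

In this model, desktop workloads rarely queue more than a couple of outstanding requests, which would explain why striping shows up in benchmarks far more than in load times.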
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
What's your budget?

IMO you're not going to be happy with the results on the motherboard controller. Get a real host and play.

Don't put anything with these letters in your system - EVER! OCZ.
 

eric.kjellen

Member
Oct 4, 2010
30
0
0
What's your budget?

IMO you're not going to be happy with the results on the motherboard controller. Get a real host and play.

Don't put anything with these letters in your system - EVER! OCZ.
The TechReport write-up used an Adaptec 5405 card. I would never put SSDs on a motherboard RAID controller. =P

For the moment I'm mostly interested in the performance scaling with different SSDs, different numbers of SSDs and different RAID controllers. Any budget number I could give you would be sketchy at best but let's say it would be in the high-end enthusiast/workstation range.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Areca ARC-1880ix-16 with 4-8 Intel or Crucial 120/128GB SSDs is probably a good place to start. Get the 4GB DIMM, BBU, and 5.25" LCD to round out the package. This host will handle 1.8+ GB/s read/write on the array and around 3.5 GB/s from the cache. The Adaptec cannot touch it. :)
 

eric.kjellen

Member
Oct 4, 2010
30
0
0
Areca ARC-1880ix-16 with 4-8 Intel or Crucial 120/128GB SSDs is probably a good place to start. Get the 4GB DIMM, BBU, and 5.25" LCD to round out the package. This host will handle 1.8+ GB/s read/write on the array and around 3.5 GB/s from the cache. The Adaptec cannot touch it. :)
That sounds good, but how can I be sure it will be faster when it comes to booting Windows, application startup, level load times, etc. than the 4x X25-E's were on the Adaptec card? I'm still in the dark about how performance scales and where the bottleneck is when it comes to storage, in a way that I'm not in the CPU/memory area, for example. It all seems very unpredictable: in the review, the 4 X25-E's were often slower than a single X25-E, and overall they were average or below average except in sequential reads and writes. How can four drives in a RAID 0 array (which, as I understand it, should give 4x the IOPS, at least theoretically) at any point be slower than a single drive? It just doesn't seem right.

But if it is the Adaptec card that doesn't cut it and it was the bottleneck in the X25-E RAID setup, then I can of course see how changing the RAID controller would result in much better performance.
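As for how an array can come out slower than a bare drive: if the controller adds even a little latency per request, queue-depth-1 reads pay for it on every single I/O. A back-of-the-envelope sketch, with made-up latency figures for illustration:

```python
def qd1_iops(drive_latency_ms, controller_overhead_ms=0.0):
    """At queue depth 1 the next read cannot be issued until the
    previous one completes, so IOPS is simply 1 / (latency per request)."""
    return 1000.0 / (drive_latency_ms + controller_overhead_ms)

bare_drive  = qd1_iops(0.1)        # ~10,000 IOPS on its own
behind_card = qd1_iops(0.1, 0.05)  # ~6,667 IOPS behind a sluggish controller

# Striping adds nothing at QD1, but the controller's latency is always
# paid, so a four-drive array can genuinely benchmark below one drive.
```

Under these assumptions the fix is exactly what's being suggested: a controller with lower per-request overhead, not more drives.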
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
I couldn't care less about Windows boot times. Really, I don't understand the silly fetish with Windows load times. I have an Asus G73 notebook with 16GB RAM and an Intel X25-M 160GB SSD, and it boots faster than my workstation. Of course, when it comes to actual work it's much slower, and annoyingly slow at other things. Application load times are in a similar area.

The Adaptecs are slower than the Areca cards. If all you're doing is gaming and general PC work honestly save your money and buy a 250GB Crucial or wait for the larger Intels to come out.

The Areca will add no less than 20-25 seconds to boot due to initialization of its own BIOS and safety checks. Just think of it as comparing a motorcycle to a rocket. On a bike you hop on, start up and just go. With a rocket there's a bunch of pre launch procedures that seem to take forever. Of course when the rocket does blast off and passes 10,000 mph that motorcycle will appear to be in the other lane going the other way - when you pass it! :biggrin:
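The motorcycle/rocket trade-off can be put in rough numbers. The throughputs and the ~22 s card init below are assumptions for illustration only:

```python
def total_time_s(controller_init_s, data_mb, throughput_mb_s):
    """Time to get the data off the array, counting controller POST time."""
    return controller_init_s + data_mb / throughput_mb_s

def onboard_ssd(data_mb):
    # single SATA SSD on the motherboard: no card BIOS, modest speed
    return total_time_s(0, data_mb, 250)

def areca_array(data_mb):
    # fast SSD array behind the card: ~22 s of initialization up front
    return total_time_s(22, data_mb, 1800)

# Booting or loading a little data: the motorcycle wins.
small = (onboard_ssd(1000), areca_array(1000))   # (4.0 s, ~22.6 s)
# Churning through many gigabytes: the rocket pulls ahead.
big   = (onboard_ssd(10000), areca_array(10000)) # (40.0 s, ~27.6 s)
```

With these particular numbers the crossover lands somewhere around 6 GB of data moved per session; below that, the card's init time never pays for itself.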
 

eric.kjellen

Member
Oct 4, 2010
30
0
0
Yeah, we can ignore Windows boot times, since I realize they will be impacted by the initialization of the RAID controller card if nothing else. What I'm mostly interested in is which route you take to maximize I/O performance, in the same way you would go to more and faster cores, and from there to more physical CPUs, for better CPU performance. Apparently putting X25-E's in RAID did not yield the expected results, and I/O performance remained as much of a bottleneck as before. What do you have to do to minimize it in a workstation? How do they do it in the enterprise world with servers, etc., if not with lots of SSDs in RAID?
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
You are always going to have that bottleneck. Even with a system using battery-backed DDR3 RAM as storage it will be present (but quite low as well!). A quarter of a million IOPS at 512 bytes is not out of the question. Price up 600GB of that, however, and that makes a few SSDs - even SLC ones - a bargain indeed! :)
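Small-block IOPS and raw bandwidth are two views of the same workload, which is why a RAM-backed device can post an enormous IOPS number at surprisingly modest throughput. A quick conversion:

```python
def iops_to_mb_s(iops, block_bytes):
    """Convert an IOPS figure at a given block size into MB/s."""
    return iops * block_bytes / 1e6

# A quarter-million IOPS at 512-byte blocks is "only" 128 MB/s of data...
tiny_blocks = iops_to_mb_s(250_000, 512)    # 128.0 MB/s
# ...while the same request rate at 4K blocks would be over a gigabyte/s.
four_k      = iops_to_mb_s(250_000, 4096)   # 1024.0 MB/s
```

That is the sense in which the bottleneck never disappears: the figure of merit is requests serviced per second at small sizes, not the MB/s number on the box.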
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
Rubycon, have you used the Adaptec 6xxx series? Not sure how the PMC-Sierra SAS2 controller compares to the Areca.

Another option for the OP is a LSI 9280-8i w/ a FastPath key.
 

Dadofamunky

Platinum Member
Jan 4, 2005
2,184
0
0
Dang, I like my new motorcycle just fine, thank you ma'am, to carry the metaphor forward. 64-bit Photoshop loads in less than a second. If you've got a need for 200K IOPS of throughput, that's great. But as noted above, the marginal cost is enormous.

IIRC, using current consumer-level SSDs (C300s, X25s, etc.) in RAID disables TRIM, which, so far as I know, is NOT a good idea.

200K IOPS is generally reserved for enterprise-class storage systems with mass quantities of RAM cache, huge arrays of SSDs and mechanical disks, and highly specialized SSDs priced far out of the consumer market. And even those performance levels are really, really hard to reach. On a workstation? I'd sure like to see it. But how do you objectively measure something like that, accurately, to be sure you're actually getting the benefit of all this esoteric hardware?

Boy, have times changed. I remember booting a 14 MHz Amiga off of a 512KB RAM disk in a second or two, with OS and C compiler ready to go. Nothing I've seen since can equal that, although what I have now actually comes pretty close!
 

Dark Shroud

Golden Member
Mar 26, 2010
1,576
1
0
Just to add to this about how bad the onboard controllers are compared to cards: two Corsair SATA 6Gb/s SSDs in RAID 0 saturate the two SATA 6Gb/s ports on the new Sandy Bridge mobos.
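For reference, the per-port ceiling those drives bump into follows from SATA's 8b/10b encoding, which spends 10 bits on the wire for every byte of data:

```python
def sata_effective_mb_s(line_rate_gbit_s):
    """Usable SATA bandwidth per port: the link signals at line_rate
    gigabits/s, but 8b/10b encoding transmits 10 line bits per data byte."""
    return line_rate_gbit_s * 1e9 / 10 / 1e6

sata2 = sata_effective_mb_s(3)  # 300.0 MB/s per port
sata3 = sata_effective_mb_s(6)  # 600.0 MB/s per port

# A current ~500 MB/s drive on a 6 Gb/s port is already most of the way
# to that ceiling, so two of them leave no headroom on two ports.
```

Protocol overhead shaves a bit more off in practice, so real drives top out somewhat below these figures.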
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
Rubycon, have you used the Adaptec 6xxx series? Not sure how the PMC-Sierra SAS2 controller compares to the Areca.

Another option for the OP is a LSI 9280-8i w/ a FastPath key.


I have not, and I'm not sure what my next step is. Performance is all right for now, and I suppose I am just spoiled. ;)

As far as having TRIM disabled - with larger arrays of 8 to 48 disks or more, the "per disk" load is considerably lower. With a large cache there are not many actual disk hits, as seen on the individual disk activity indicators, etc.

After months of use I've pulled the disks and run the Intel Toolbox optimizer (kind of a PIA, as it requires the disk to be formatted to work); it never takes long, and the restored array never really changes in performance.

One mistake I did make was doing an on-the-fly stripe-size change from 4KB to 64KB. The only way to restore performance was to run the optimizer on all disk members. :eek:
 

eric.kjellen

Member
Oct 4, 2010
30
0
0
Dang, I like my new motorcycle just fine, thank you ma'am, to carry the metaphor forward. 64-bit Photoshop loads in less than a second. If you've got a need for 200K IOPS of throughput, that's great. But as noted above, the marginal cost is enormous.

IIRC, using current consumer-level SSDs (C300s, X25s, etc.) in RAID disables TRIM, which, so far as I know, is NOT a good idea.

200K IOPS is generally reserved for enterprise-class storage systems with mass quantities of RAM cache, huge arrays of SSDs and mechanical disks, and highly specialized SSDs priced far out of the consumer market. And even those performance levels are really, really hard to reach. On a workstation? I'd sure like to see it. But how do you objectively measure something like that, accurately, to be sure you're actually getting the benefit of all this esoteric hardware?

Boy, have times changed. I remember booting a 14 MHz Amiga off of a 512KB RAM disk in a second or two, with OS and C compiler ready to go. Nothing I've seen since can equal that, although what I have now actually comes pretty close!
I am not looking for the kind of performance you see in servers (far from it). What I'm really wondering is which variables you have to look at when it comes to I/O performance. If the system is slow because there's not enough RAM, you add more; if the CPU is the bottleneck, you get more and faster-clocked cores; and so on. What's the equivalent for storage, when putting SSDs in RAID apparently doesn't increase random IOPS much at all?
 

Zap

Elite Member
Oct 13, 1999
22,377
2
81
What kind of SSD RAID array would be best for minimizing boot, application startup, and level load times, and for workstation applications such as CAD, 3D modeling, rendering, and physics computing? The ones I've been looking at (4x X25-E in RAID 0) and products such as the OCZ Z-Drive, RevoDrive, etc. have all been let-downs in precisely the areas related to random reads; instead they have excelled in sequential performance, which is something I don't care much about.

For minimizing boot times, you will NOT want any kind of RAID setup, because the RAID BIOS adds to boot times.

If you care about random reads, then any good SSD will give them to you. RAIDing SSDs mostly just increases sequential numbers. What you want is probably an "enterprise" level drive that has higher IOPS.

Oh yeah, and larger drives usually have better performance up to a point (probably peaks at 120GB).
 

Dadofamunky

Platinum Member
Jan 4, 2005
2,184
0
0
"RAIDing SSDs mostly just increases sequential numbers." Good point. It buys you nothing in terms of random operations.

My guess, just to answer the question from my relatively basic level, is to get a $200-$400 PCI-E disk controller to bypass the on-board controller (which really is limited beyond a certain point) and call it a day. But heck, with what I have, I'm having a party in my case these days. Someday I might do that, but for now I have no need. It's really nice to be close to the leading edge for once.

I was thinking about measuring transaction rates through SPECPerf or something like that, rather than an artificial measurement like IOPS... then again, all benchmarks are artificial to some degree.
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
"RAIDing SSDs mostly just increases sequential numbers." Good point. It buys you nothing in terms of random operations.

For me it was necessary to achieve the 2 GB/s sustained transfers that the SAS 15K arrays had. The boost in IOPS was indeed very sweet icing on the cake! :awe:

Also, gaming and workstation duty don't mix too well. Sure, you can game on a workstation card, but it's far from optimal.