
8 to 16 SSDs RAID 0

stenrulz

Junior Member
Hello,

I am currently looking into setting up a decent-sized RAID 0 array with 8 to 16 SSDs, most likely Samsung 840 Pros or OCZ Vectors, on an Adaptec RAID 71605E or a MegaRAID SAS 9286CV-8eCC. I have looked at HBA cards as well, but I would like the option to boot Windows off the array at some stage. From my understanding, no software RAID allows booting; LSI FastPath is similar to software RAID in that it uses your CPU for extra performance, but without that limitation. What RAID card and SSDs do you recommend? From my understanding, most cards will be able to provide the MB/s but not the IOPS.

Thank you.
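
That last point about cards delivering the MB/s but not the IOPS can be sketched with rough numbers. Both figures below are assumptions for illustration only; real ceilings vary widely by controller and drive:

```python
# Rough illustration: per-drive IOPS scale linearly, but the controller's
# RAID-on-chip (ROC) processor caps what the array can actually deliver.
ssd_4k_read_iops = 90_000     # assumed per-drive 4K random read IOPS
controller_ceiling = 450_000  # assumed ROC IOPS limit (varies by card)

for n in (1, 4, 8, 16):
    raw = n * ssd_4k_read_iops
    delivered = min(raw, controller_ceiling)
    print(f"{n:2d} SSDs: raw {raw:,} IOPS, delivered ~{delivered:,}")
```

With these assumed numbers the card stops scaling somewhere between 4 and 8 drives, even though sequential MB/s may keep climbing.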
 
What are you aiming for: bandwidth, IOPS, or storage space?

8-16 SSDs are likely to saturate a single RAID controller's bus speed. You may be looking at multiple controllers for optimum speed.

For example, the HP Smart Array P822 is around $1,250 from a quick Google search:
http://h18004.www1.hp.com/products/quickspecs/14341_na/14341_na.pdf
It claims it can give each link 600 MB/s, so 600 MB/s * 24 drives = 14.4 GB/s. That's the limitation of this card:

PCIe: 8 GB/s in each direction (PCIe 3.0, 8 lanes at 8 GT/s)
SAS/SATA: 14.4 GB/s in each direction (SAS-2, 24 physical links at 6 Gb/s)
RAID cache: 2 GB DDR3-1600 SDRAM (64-bit data, 8-bit ECC)
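
That bottleneck arithmetic can be written out quickly. The quoted link and lane speeds come from the figures above; the per-SSD sequential speed is an assumption:

```python
# Bottleneck estimate using the P822 figures quoted above.
pcie_3_0_x8 = 8 * 8e9 / 8 * (128 / 130)  # 8 lanes * 8 GT/s, 128b/130b encoding -> ~7.9 GB/s
sas_links = 24 * 600e6                   # 24 SAS-2 links * 600 MB/s usable     -> 14.4 GB/s
per_ssd = 500e6                          # ~500 MB/s sequential per SATA SSD (assumed)

for n in (8, 16, 24):
    drives = n * per_ssd                 # aggregate drive throughput
    bottleneck = min(pcie_3_0_x8, sas_links, drives)
    print(f"{n:2d} SSDs: drives {drives/1e9:4.1f} GB/s -> delivered ~{bottleneck/1e9:.2f} GB/s")
```

Note that even though the SAS side can carry 14.4 GB/s, the PCIe 3.0 x8 host link tops out just under 8 GB/s, so it becomes the limit at around 16 drives.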
 
Mainly IOPS, but I would like to max out the PCIe 3.0 x8 link. That HP Smart Array seems to be LSI-based; not sure what the IOPS limitation on the card is. The cache normally gets disabled when there are only SSDs.
 
Depending on how much storage you need, main memory might be a more cost-effective way to get those IOPS.
A dual-socket Opteron comes in at around $1,000 (mainboard + CPUs), and then it's either $10 per GB up to 256 GB or $20 per GB up to 512 GB of RAM. So while 16 32 GB RDIMMs are probably not going to be cost-effective compared to 16 SSDs, you will get an order of magnitude better performance.

Also, if you're IOPS-limited, I would avoid putting Windows on the same disks as whatever your main load is. That's just wasting array IOPS on something that is best dealt with on a separate array or single disk.

A quick amendment: it's cheaper to go with a 4-socket system to get to 0.5 TB than to use 32 GB RDIMMs, unless you have requirements that increase per-socket cost.
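
A rough version of that cost comparison, using the prices quoted above (the SSD street price is an added assumption, era-appropriate):

```python
# Back-of-envelope RAM-vs-SSD cost comparison from the figures in the post above.
ram_platform = 1000          # dual-socket Opteron mainboard + CPUs, USD
ram_per_gb_small = 10        # $/GB with 16 GB RDIMMs, up to 256 GB total
ram_per_gb_large = 20        # $/GB with 32 GB RDIMMs, up to 512 GB total
ssd_256gb_price = 250        # assumed price for one 256 GB SATA SSD

def ram_cost(gb):
    """Platform plus memory cost; larger DIMMs kick in past 256 GB."""
    per_gb = ram_per_gb_small if gb <= 256 else ram_per_gb_large
    return ram_platform + gb * per_gb

def ssd_cost(n_drives):
    return n_drives * ssd_256gb_price

print(f"256 GB RAM:              ${ram_cost(256):,}")
print(f"512 GB RAM:              ${ram_cost(512):,}")
print(f"16 x 256 GB SSD (4 TB):  ${ssd_cost(16):,}")
```

So RAM costs roughly 3x as much per usable gigabyte at the 512 GB point, which is the trade-off against its far higher IOPS.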
 
Samsung SSDs currently have an incompatibility issue with LSI controllers, unfortunately.

I don't think this is still an issue, as I am seeing a few posts with large SSD RAID 0 arrays on LSI controllers, anything from 8 to 40+ SSDs.

 
Thanks for the information; I will check the compatibility with an AMD-based system. In the meantime I think I would prefer to keep going down the SSD RAID 0 path. Any recommendations? An LSI HBA in IR mode with hardware RAID 0, or LSI MegaRAID with FastPath?
 