Stripe Size For SSD RAID0 Array

lmccrary

Member
May 6, 2003
71
0
0
I've got 2 Intel X25-V SSDs. The default stripe size on my Gigabyte GA-890GPA-UD3H is 64KB. Should I run with that, or does somebody have a better idea as to stripe size?
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
lmccrary said:
> I've got 2 Intel X25-V SSDs. The default stripe size on my Gigabyte GA-890GPA-UD3H is 64KB. Should I run with that, or does somebody have a better idea as to stripe size?

512K would be ideal, if your controller supports it. But really as long as it's a multiple of 4K, you should be fine. The folks over in the storage section might have a better idea though.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
I'd aim for a smaller stripe size. Smaller stripe sizes provide a more efficient bandwidth boost, at the cost of more drive seeks. Larger stripe sizes are less efficient at boosting bandwidth, but more efficient at dividing up drive seeks. If you are working with very large files, it makes no real difference - the file will span so many stripes that the stripe size is irrelevant. The only issue is that smaller stripes add extra controller overhead - ATA is able to transfer 512 kB in a single command.

Essentially, the higher the cost of a drive seek compared to the drive's transfer rate, the larger your stripe size should be.

With SSDs, seeks are virtually free, so you can afford to use a smaller stripe size.

A simple rule of thumb for stripe size is:
stripe size = STR x (average seek time + 1/2 rotation time)

where STR is the drive's sustained transfer rate.

E.g. 170 MB/s x (0.1 ms + 0) = 17 kB, so round to 16 kB.

In practice, you may have to benchmark this - as your controller may be too inefficient at low stripe sizes. Motherboard RAID controllers are notorious for being of piss-poor design, so things may not always work out the way you expect.
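To make the arithmetic concrete, here's a quick back-of-the-envelope script for that rule of thumb. The function name and the drive numbers are just illustrative, not from any datasheet:

```python
# Rule of thumb: stripe size ~ STR x (avg seek time + 1/2 rotation time),
# where access time is ~0 for SSDs. Rounds down to a power of two,
# since controllers only offer power-of-two stripe sizes.

def suggested_stripe_kb(str_mb_s, seek_ms, half_rot_ms=0.0):
    """Suggested stripe size in KB, floored to a power of two (min 4 KB)."""
    raw_kb = str_mb_s * 1000 * (seek_ms + half_rot_ms) / 1000  # MB/s * ms -> KB
    size = 4
    while size * 2 <= raw_kb:
        size *= 2
    return size

# 7200 RPM HDD: ~100 MB/s STR, ~8.5 ms seek, ~4.17 ms half rotation
print(suggested_stripe_kb(100, 8.5, 4.17))  # -> 1024, i.e. 1 MB stripes
# SSD: ~170 MB/s STR, ~0.1 ms access time
print(suggested_stripe_kb(170, 0.1))        # -> 16
```

Same conclusion as above: a spinning disk wants big stripes to amortize the seek, while an SSD's near-zero access time pushes the number way down.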
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Mark R said:
> I'd aim for a smaller stripe size. Smaller stripe sizes provide a more efficient bandwidth boost, at the cost of more drive seeks. Larger stripe sizes are less efficient at boosting bandwidth, but more efficient at dividing up drive seeks. If you are working with very large files, it makes no real difference - the file will span so many stripes that the stripe size is irrelevant. The only issue is that smaller stripes add extra controller overhead - ATA is able to transfer 512 kB in a single command.
>
> Essentially, the higher the cost of a drive seek compared to the drive's transfer rate, the larger your stripe size should be.
>
> With SSDs, seeks are virtually free, so you can afford to use a smaller stripe size.
>
> A simple rule of thumb for stripe size is:
> stripe size = STR x (average seek time + 1/2 rotation time)
>
> E.g. 170 MB/s x (0.1 ms + 0) = 17 kB, so round to 16 kB.
>
> In practice, you may have to benchmark this - as your controller may be too inefficient at low stripe sizes. Motherboard RAID controllers are notorious for being of piss-poor design, so things may not always work out the way you expect.

Actually, this is wrong. Seek time is essentially nil, but a small stripe size will give poor performance on writes.

Remember, to overwrite data on flash, the drive must first erase an entire 512KB block, then write new data in 4KB pages. So to change just 16KB in place, the controller may have to read 512KB and write 512KB elsewhere. Thus, a larger stripe size is more efficient, because the data will be much less likely to be spread out among multiple erase blocks.
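A rough way to see the effect, assuming a 512KB erase block and a naive controller that does a full read-modify-write for every block it touches (real SSD firmware remaps pages to avoid this, so treat it as the worst case):

```python
import math

ERASE_BLOCK_KB = 512  # typical NAND erase block size

def worst_case_amplification(write_kb, stripe_kb):
    """Bytes physically rewritten per byte written, worst case:
    each stripe-sized chunk lands in a different erase block, and
    the controller does a full read-modify-write per block."""
    chunks = math.ceil(write_kb / stripe_kb)
    return chunks * ERASE_BLOCK_KB / write_kb

# Worst-case write amplification for a 64 KB random write:
for stripe in (4, 16, 64, 128):
    print(f"{stripe:4d} KB stripe -> {worst_case_amplification(64, stripe):.0f}x")
# 4 KB stripes touch 16 erase blocks (128x); 64 KB stripes touch one (8x)
```

Under this model, shrinking the stripe from 64KB to 4KB multiplies the worst-case rewrite traffic by 16, which is exactly the argument for keeping stripes large on flash.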
 

corkyg

Elite Member | Peripherals
Super Moderator
Mar 4, 2000
27,370
240
106
Personally, I don't think RAID 0 will buy you much except high risk. If I had two SSDs in my system, I would put the OS and programs on one and my data on the other, and then back that up with an external.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
corkyg said:
> Personally, I don't think RAID 0 will buy you much except high risk. If I had two SSDs in my system, I would put the OS and programs on one and my data on the other, and then back that up with an external.

Check out Anand's recent article about RAIDing X25-Vs. You can get some pretty impressive performance out of $250 worth of SSDs.