Best stripe size for X25-M raid-0 ?

geofelt

Member
Nov 10, 2007
34
0
66
I recently installed two Intel X25-M SSDs in raid-0 as my OS and application drive. I originally had a single drive, and it performed well.
I wanted the logical C drive to be larger, so I added a second X25-M and put the pair in raid-0.
Using the onboard raid on an Asus P6T Deluxe, I arbitrarily picked a stripe size of 64K. This seems to have worked out reasonably well, but I wonder whether the best size would be much larger or much smaller.
Does anyone have some insight into this issue?
 

imported_wired247

Golden Member
Jan 18, 2008
1,184
0
0
In my experience, the largest stripe sizes typically give the best throughput in benchmarks, but 64K is absolutely fine for most purposes.

 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
I would imagine that the nearly non-existent seek times of SSDs would allow smaller stripe sizes to work well.

Can anyone dig up a review that states one way or another? Might prove interesting.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Zap
I would imagine that the nearly non-existent seek times of SSDs would allow smaller stripe sizes to work well.

Can anyone dig up a review that states one way or another? Might prove interesting.

IIRC the size of the stripe and its impact on performance comes down to a trade-off: the time penalty incurred for stitching a file back together (a smaller stripe means the file is broken into more discrete units that need to be stitched back together on a read) versus the bandwidth and space consumed by writing files smaller than the stripe size.

A 4KB file will be busted into 2x2KB pieces and then stored on, say, 2x64KB of the array (consuming 128KB of disc space to store 4KB of data), taking 32x longer to write than it would have with 2KB stripes in this example (but even 32 times a really, really small number means you don't notice that it took 32x longer).

On the other hand, writing a 1GB file with a 64KB stripe means the file is broken up into 16,384 fragments that have to be created by the controller and sent to the drives, and likewise when the 1GB file is read the raid controller has to execute over sixteen thousand stitches to piece that file back together again. (This is why not all controllers give their best performance with the same stripe size even when the same drives are involved; the processing power of the controller dictates where the trade-off lies.)

For a 1GB file the ideal thing would be to have 512MB stripe for a 2-disk raid-0 array, but that would really suck for those 4KB writes!
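
(For anyone who wants to play with the arithmetic, here's a quick back-of-the-envelope sketch in Python covering just the large-file, read-stitching side of the trade-off. The file and stripe sizes are example numbers only, not anything measured.)

# Back-of-the-envelope only: how many stripe-sized chunks a large
# sequential file turns into (the read-stitching side of the trade-off).
# File and stripe sizes below are illustrative examples.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB

for fsize, flabel in [(100 * MB, "100MB"), (1 * GB, "1GB")]:
    for stripe in (4 * KB, 64 * KB, 256 * KB):
        chunks = -(-fsize // stripe)   # ceiling division
        print(f"{flabel:>5} file @ {stripe // KB:>3}KB stripe -> {chunks:>7} chunks to stitch")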
 

Zap

Elite Member
Oct 13, 1999
22,377
7
81
Originally posted by: Idontcare
For a 1GB file the ideal thing would be to have 512MB stripe for a 2-disk raid-0 array, but that would really suck for those 4KB writes!

Right! Now, with the recent Anand article on SSDs, wasn't the sticking point with the "crappy" SSDs the poor small-file writes, which caused stutter? Something about it being a better tradeoff to have better small-file writes than better throughput for an OS/app drive?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Zap
Originally posted by: Idontcare
For a 1GB file the ideal thing would be to have 512MB stripe for a 2-disk raid-0 array, but that would really suck for those 4KB writes!

Right! Now, with the recent Anand article on SSDs, wasn't the sticking point with the "crappy" SSDs the poor small-file writes, which caused stutter? Something about it being a better tradeoff to have better small-file writes than better throughput for an OS/app drive?

I think we are talking past each other here.

From what I gather you are talking about the performance tradeoffs of small file writes with regard to the SSD controller. I am talking about the performance tradeoffs of small stripe sizes with respect to the raid controller.

Both the SSD controller and the Raid controller need to have the wind blowing at their respective backs to make a small(er) stripe size perform better than an otherwise large(r) stripe size.

Yes, swapping out a spindle drive for an SSD while keeping the same raid controller will result in higher performance for smaller stripes with the SSD versus the spindle drive, but it will also result in higher performance for larger stripes, and the larger stripes will still deliver superior overall performance because the raid controller simply won't be fast enough to handle the blizzard of 4KB write/read requests that a 4KB stripe size would create.

Check out how hamstrung these $1000 raid cards are when Johan stressed them with SSDs at "normal" stripe sizes (bold emphasis is Johan's, not mine):

Should we blame Adaptec? No. The bandwidth and crunching power available on our Adaptec card far outstrips the demands of any array of eight magnetic disks (the maximum of drives supported on this controller). You might remember from our older storage articles that even the much more complex RAID 6 calculations were no problem for the modern storage CPUs. However, the superior performance of the Intel X25-E drives makes long forgotten bottlenecks rear their ugly head once again.

However, be aware that these ultra fast storage devices cause bottlenecks higher in the storage hierarchy. The current storage processors seem to have trouble scaling well from four to eight drives. We have witnessed negative scaling only in some extreme cases, 100% random writes in RAID 5 for example. It is unlikely that you will witness this kind of behavior in the real world. Still, the trend is clear: scaling will be poor if you attach 16 or more SLC SSDs on products like the Adaptec 51645, 51645, and especially the 52445. Those RAID controllers allow you to attach up to 24 drives, but the available storage processor is the same as our Adaptec 5805 (IOP348 at 1.2GHz). We think it is best to attach no more than eight SLC drives per IOP348, especially if you are planning to use the more processor intensive RAID levels like RAID 5 and 6. Intel and others had better come up with faster storage processors soon, because these fast SLC drives make the limits of the current generation of storage processors painfully clear.

http://it.anandtech.com/IT/showdoc.aspx?i=3532&p=5
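
(To put rough numbers on that "blizzard of requests" point, here's a trivial Python sketch of the request rate a controller would have to service at a given sustained throughput; the 500MB/s figure is an assumed example, not a benchmark result.)

# Rough illustration only: requests per second a raid controller must
# service at a given sustained throughput, for different stripe sizes.
# The 500MB/s throughput figure is an assumed example.
KB = 1024
MB = 1024 * KB

throughput = 500 * MB
for stripe in (4 * KB, 64 * KB, 128 * KB):
    reqs = throughput // stripe
    print(f"{stripe // KB:>3}KB stripe -> {reqs:,} requests/s at 500 MB/s")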
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Idontcare
A 4KB file will be busted into 2x2KB pieces and then stored on, say, 2x64KB of the array (consuming 128KB of disc space to store 4KB of data), taking 32x longer to write than it would have with 2KB stripes in this example (but even 32 times a really, really small number means you don't notice that it took 32x longer).

This is incorrect. A file will not be broken up unnecessarily across stripe boundaries, and the stripe size has no such impact on space used.

OP, it probably doesn't matter much, and the default stripe size is a good choice for a random guess. If you really want to know, you'll have to measure it -- there's no substitute for measurement when optimizing for performance (knowledge helps, but you always have to double-check that the implementation follows your logic and vice versa). Performance measurements for SSDs are tricky because of the complex write behavior, so sticking with the default stripe size is my actual suggestion.
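
(If you do want to measure it, something along these lines is enough for a rough comparison -- rerun it after rebuilding the array with each candidate stripe size. The file name, sizes and counts are placeholders, and the OS cache will flatter the numbers; a dedicated tool such as IOMeter that can bypass the cache is more trustworthy, but this shows the idea.)

# Rough-and-ready timing sketch: sequential read vs. random 4KB reads on a
# scratch file sitting on the array.  PATH, sizes and counts are placeholders.
import os, random, time

PATH = "scratch.bin"                 # place this on the raid-0 volume under test
FILE_SIZE = 256 * 1024 * 1024        # 256MB scratch file
CHUNK = 1024 * 1024                  # 1MB chunks for the sequential pass
BLOCK = 4 * 1024                     # 4KB blocks for the random pass
RANDOM_READS = 5000

if not os.path.exists(PATH):         # create the scratch file once
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(os.urandom(CHUNK))

start = time.time()                  # sequential read pass
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
print(f"sequential read : {FILE_SIZE / (time.time() - start) / 2**20:.1f} MB/s")

offsets = [random.randrange(0, FILE_SIZE - BLOCK) for _ in range(RANDOM_READS)]
start = time.time()                  # random 4KB read pass
with open(PATH, "rb") as f:
    for off in offsets:
        f.seek(off)
        f.read(BLOCK)
print(f"random 4KB reads: {RANDOM_READS / (time.time() - start):.0f} reads/s")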

 

geofelt

Member
Nov 10, 2007
34
0
66
Thanks for the replies so far. My primary use is for normal desktop operations such as e-mail and web browsing, plus some game loading and saving. The recent Anand SSD article makes me wonder if the 64K stripe is aligned with the 64K block size that the SSD uses. If so, then 64K would seem to be particularly fortuitous.

I was really hoping for some sort of reference that tests the options out, but I was unable to find one. Still, the current performance seems good, and I would have to have a compelling reason to change.
 

Denithor

Diamond Member
Apr 11, 2004
6,298
23
81
That would actually make an interesting article for them to do: the effect of stripe size on disk performance, comparing SSD/SAS/SATA drives on two controllers, the onboard solution found on most Intel desktop boards and one of the $1000+ professional models.
 

Elfear

Diamond Member
May 30, 2004
7,167
824
126
The JMicron-based drives in RAID 0 seemed to perform best with a 128KB or 256KB stripe. Not sure if that will help you since your Intels have a different controller.