
LSI 9266-8i with Four Vertex 4 128GB SSD in RAID 0 performance issue...

Qianglong

Senior member
I've just upgraded my workstation's storage sub-system with the following hardware:

RAID controller - LSI 9266-8i (no FastPath or CacheCade, just the bare card)
SSD drives - 4 × Vertex 4 128GB, firmware 1.4, in RAID 0

Stripe size: 64KB
Other controller settings:
- write-through caching
- direct I/O
- no read-ahead

I ran some benchmarks using AIDA64's disk benchmark tool; the results are below.
Do the results look a bit low for this controller and these SSDs? What else can I do to increase random read performance?

4KB read test suite: bench1.jpg

2MB read test suite: bench2x.jpg

AS SSD results: asssdio.png, asssdmb.png
 
Random read performance does not benefit from RAID 0 unless you're dealing with higher queue depths. That's because at QD of 1, only one 4KB read request is sent at a time, which can only be processed by one drive. If you increase QD, the drives can process the requests in parallel since more than one request is sent at a time.
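To make that concrete, here's a toy model of the effect described above. The per-drive IOPS figure is an assumption for illustration, not a measured Vertex 4 number; the point is only that at QD1 a single outstanding 4KB request can keep just one drive busy, no matter how many drives are in the array.

```python
# Toy model: why RAID 0 random reads only scale at higher queue depths.
# The 10K IOPS per drive at QD1 is a hypothetical figure, not measured data.

def effective_iops(queue_depth, n_drives, per_drive_iops):
    """Each outstanding request lands on one stripe, hence one drive.
    At most min(queue_depth, n_drives) drives can work in parallel."""
    busy_drives = min(queue_depth, n_drives)
    return busy_drives * per_drive_iops

print(effective_iops(1, 4, 10_000))   # QD1: one drive busy  -> 10000
print(effective_iops(4, 4, 10_000))   # QD4: all four busy   -> 40000
print(effective_iops(32, 4, 10_000))  # QD32: still capped by the 4 drives -> 40000
```

Real drives don't scale perfectly linearly, but this is the basic reason QD1 results look identical whether you have one drive or four.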
 
Thank you for the reply - so what can I do to improve random read performance? I'm guessing it's purely a drive limitation?
 

Yes, it's a drive limitation. You would need an enterprise-level PCIe/SAS SSD for a noticeable improvement. E.g. Fusion-io offers PCIe SSDs with over 140K random read IOPS (the Vertex 4 offers up to 95K IOPS).

Random read performance has a lot to do with latency, and most of that comes from NAND latency, which cannot really be improved. SLC NAND has lower latency, which is why enterprise SSDs often offer better random read performance, but it comes at a price.
 
Your results seem pretty reasonable to me. You're never going to get a huge number of IOPS at a queue depth of 1 because you can't take advantage of pipelining: you're basically limited by the time it takes one request to make the round trip between the CPU and the NAND. High-queue-depth results are much higher because many requests are in flight simultaneously, and the bottleneck shifts to the RAID and SSD controllers' available compute resources.
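The arithmetic behind that round-trip limit is simple. With one request in flight at a time, QD1 IOPS is just the reciprocal of the per-request latency; the 100 µs figure below is an assumed round-trip time for illustration, not a measured value for this setup.

```python
# At QD1, IOPS is bounded by round-trip latency: one request at a time,
# so IOPS = 1 / latency. 100 us is an assumed illustrative latency.

def qd1_iops(round_trip_latency_us):
    # Convert microseconds per request into requests per second.
    return 1_000_000 / round_trip_latency_us

print(qd1_iops(100))  # -> 10000.0, no matter how many drives are striped
```

That's why adding drives to the array moves the high-QD numbers but leaves QD1 essentially flat.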
 