
Need to settle a RAID debate

PliotronX

Diamond Member
Hey gang, got into it with my boss and coworkers because they are RAID-5 purists, so I have been scouting for benchmarks (5 vs. 10), but I am left more confused than when I went in. I thought it was canonical that 10 was faster in both IOPS and throughput, but the numbers from various pages conflict with each other. Some show 5 with higher IOPS but worse throughput; others show the diametric opposite. Let's say we are purely dealing with LSI/Avago controllers capable of offloading the parity calcs, so I'm just trying to find pure H/W numbers for a small array (<5 disks). Numbers for larger arrays would be cool too, because my boss is regretting going 10 in a NAS (>12 disks).

:colbert:

(Thx!)
 
Our SAN uses RAID10 as a write tier because it's generally faster (no parity calculation), but I'm not sure how many disks the stripes span (I think it's 12).

If you're ignoring parity calculation penalties, then it comes down to how big your stripes are. A 4-disk RAID-5 will read or write a three-disk stripe all the time. A 4-disk RAID-10 will write to a two-disk stripe (slower), but can theoretically read half of each request from each mirror, if the RAID controller is smart enough to do that (effectively a four-disk stripe).

Since not all RAID controllers are created equal, and there IS a parity calculation penalty, even if it's being offloaded to the RAID controller, it's not surprising that you're seeing them trade off wins/losses.
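The stripe-width reasoning above can be put into a rough back-of-envelope model. This is a minimal sketch, not a benchmark: the 150 MB/s per-disk figure and the `smart_controller` flag are assumptions for illustration, and it deliberately ignores parity-calculation and controller overhead, as the post does.

```python
# Back-of-envelope sequential throughput model for a 4-disk array.
# All numbers are illustrative assumptions, not measured results.

def raid5_stripe_width(n_disks: int) -> int:
    """RAID-5 stripes data across n-1 disks; one disk's worth of capacity holds parity."""
    return n_disks - 1

def raid10_write_width(n_disks: int) -> int:
    """RAID-10 writes each block to a mirror pair, so only n/2 disks carry unique data."""
    return n_disks // 2

def raid10_read_width(n_disks: int, smart_controller: bool = True) -> int:
    """A controller that load-balances reads across both mirror halves can use all n disks."""
    return n_disks if smart_controller else n_disks // 2

n = 4
per_disk_mb_s = 150  # assumed sequential speed of a single spindle

print("RAID-5 read/write:", raid5_stripe_width(n) * per_disk_mb_s, "MB/s")   # 450
print("RAID-10 write:    ", raid10_write_width(n) * per_disk_mb_s, "MB/s")   # 300
print("RAID-10 read:     ", raid10_read_width(n) * per_disk_mb_s, "MB/s")    # 600
```

On this idealized model, RAID-5 wins sequential writes at small disk counts while RAID-10 wins reads, which matches the mixed benchmark results the thread describes.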
 

Your calculation is correct for sequential traffic. If you look at IOPS, the problem with RAID-5 is that every read or write requires all heads to be positioned, which means a RAID-5 delivers the IOPS of only a single disk, while with RAID-10 the read IOPS scale with the number of disks and the write IOPS with the number of mirrors (or vdevs, in the case of ZFS).

I would avoid RAID-5 or RAID-1 on arrays with many disks or large capacity, as it is not reliable enough. Use RAID-6 / ZFS RAID-Z2 (any two disks are allowed to fail) or ZFS RAID-Z3 (any three disks are allowed to fail).

Besides that, you must take care of the write-hole problem. To reduce it, you can use hardware RAID with a BBU and cache. To avoid it entirely, you can use a copy-on-write filesystem like ZFS.
 
Thanks guys! I was able to convince the others to go with 10 on a new deployment. For this particular setup it was probably a matter of semantics: the office is still on FE, so I think IOPS are a better priority than throughput for responsiveness on a terminal-server VM, but as always, plan for the future 🙂
 