This is an incredibly stupid statement.
In my 10+ years of managing storage systems with anywhere from 2 to 200+ disks, I have never once encountered a situation where a RAID 5 array failed to rebuild due to parity corruption. If you're using battery-backed or flash-backed write cache, I don't see how such a situation is even possible.
I've got multiple DAS shelves, each packed with twelve 1TB drives, that have successfully rebuilt their RAID 5 arrays more than once (during production, no less).
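For anyone wondering why a rebuild is normally routine: RAID 5 parity is just an XOR across the data chunks in a stripe, so any single missing disk can be recomputed from the survivors. A minimal sketch of the idea in Python (toy byte strings, not a real on-disk layout with rotating parity):

```python
from functools import reduce

# Toy "stripe" from a 4-disk RAID 5 set: three data chunks plus parity.
# Real controllers rotate parity across disks; this keeps it simple.
data = [b"AAAA", b"BBBB", b"CCCC"]

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

parity = xor_blocks(data)          # written alongside the data chunks

# Simulate losing disk 1: rebuild its chunk from the survivors + parity.
surviving = [data[0], data[2], parity]
rebuilt = xor_blocks(surviving)

assert rebuilt == data[1]
print("rebuilt chunk:", rebuilt)
```

The "parity corruption" failure mode being argued about upthread is essentially parity and data getting out of sync after an interrupted write, which is exactly what battery/flash-backed write cache is there to prevent.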
That being said, as the number of drives in an array increases, I would prefer to use RAID 6 or one of its nested derivatives.
Every RAID controller I have ever used has allowed you to configure whether it prioritizes production I/O or the RAID rebuild. You would have to go out of your way to configure the RAID controller to behave as you describe.
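The exact setting name varies by vendor on hardware controllers (usually a "rebuild rate" or "rebuild priority" knob in the controller BIOS or CLI), but Linux software RAID exposes the same trade-off through two real sysctls, which makes for an easy illustration. A sketch, assuming Linux md and root access (values are KB/s per device and purely illustrative):

```python
# Illustration only: Linux md bounds resync/rebuild speed with two sysctls.
# Lowering the max lets production I/O win; raising the min lets the rebuild win.
from pathlib import Path

SPEED_MIN = Path("/proc/sys/dev/raid/speed_limit_min")
SPEED_MAX = Path("/proc/sys/dev/raid/speed_limit_max")

def show_limits():
    print("min:", SPEED_MIN.read_text().strip(), "KB/s")
    print("max:", SPEED_MAX.read_text().strip(), "KB/s")

def favor_production():
    # Keep the rebuild to a trickle so latency-sensitive I/O is unaffected.
    SPEED_MAX.write_text("10000\n")

def favor_rebuild():
    # Let the resync run as fast as the disks allow.
    SPEED_MIN.write_text("200000\n")

if __name__ == "__main__":
    show_limits()
```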
With RAID 5, in the event of a disk failure, you might (for sufficiently infinitesimal values of might) lose the array due to parity corruption.
With RAID 0, in the event of a disk failure, you WILL lose the array. Period.
There is no way that RAID 0 will ever have higher uptime than RAID 5.
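The arithmetic behind that, under a deliberately simplified model (independent disks, a fixed per-disk failure probability over some window, ignoring the rebuild window and unrecoverable read errors; the numbers below are made up for illustration):

```python
# Simplified survival comparison: n independent disks, each failing with
# probability p over the window. Ignores rebuild time and URE risk.
from math import comb

def raid0_survival(n, p):
    # RAID 0 dies on ANY disk failure: all n disks must survive.
    return (1 - p) ** n

def raid5_survival(n, p):
    # RAID 5 tolerates at most one failure: zero or exactly one disk lost.
    return (1 - p) ** n + comb(n, 1) * p * (1 - p) ** (n - 1)

n, p = 12, 0.03  # e.g. 12 drives, 3% chance each fails in the window
print(f"RAID 0: {raid0_survival(n, p):.3%}")
print(f"RAID 5: {raid5_survival(n, p):.3%}")
```

Even this toy model puts RAID 5 comfortably ahead; accounting for the rebuild window narrows the gap but doesn't flip it in RAID 0's favor.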
Obviously not. Stop posting.
I basically want to confirm this. The stuff up there about failed rebuilds, etc. really must be happening on poor hardware. I have had arrays send me an alert that a sector failed to be rebuilt, but they went right on and finished rebuilding (specifically with consumer SATA drives in this case). I have never had a RAID 5 rebuild simply fail to complete on me yet.
Well, that would be lying; we did have some fun with the SANs in test before they got moved to prod. I yanked a disk from a 5-disk test group, then yanked another disk during the rebuild. What happened next was interesting: the group shut down, but once the disks were reinstalled it actually restarted and completed the rebuild, marking it as dirty. The tech docs basically stated that the SAN shut down the rebuild, and when I reinserted all the disks and tried to bring them back online, it grabbed the UUIDs and rebuilt based on the order the disks had been popped out. Amazingly enough, the test data survived intact. So the rebuild failed due to an apparently lost disk, but once all the disks were back the array managed to recover. Stuff like NetApps is fun! Also, benching the disks during the rebuild came in at about 1500 IOPS when it would normally be around 1800.
