
How to reconfigure a Level 5 RAID drive ??


imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
This is an incredibly stupid statement.



In my 10+ years of managing storage systems with anywhere from 2-200+ disks, I have never once encountered a situation where a RAID 5 array failed to rebuild due to parity corruption. If you're using battery-backed or flash-backed write cache, I don't see how such a situation is even possible.
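
The mechanics behind that are simple enough to sketch: RAID 5 parity is just the XOR of the data chunks in each stripe, so any single lost member can be recomputed by XOR-ing the survivors. A toy illustration in Python (not any controller's actual firmware):

```python
# Toy RAID 5 stripe: parity is the XOR of the data chunks, and any single
# missing chunk (data or parity) can be rebuilt by XOR-ing the survivors.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Four "drives" worth of data for one stripe, plus the parity chunk.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(data)

# Simulate losing drive 2 and rebuilding its chunk from the survivors + parity.
survivors = [chunk for i, chunk in enumerate(data) if i != 2] + [parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[2]
print("rebuilt chunk:", rebuilt)
```

The hard part in practice isn't the arithmetic, it's keeping data and parity consistent across a power loss mid-write, which is exactly what the battery-backed or flash-backed write cache protects against.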



I've got multiple DAS shelves packed with 12 1TB drives that have successfully rebuilt their RAID 5 arrays on multiple occasions (during production, no less).

That being said, as the number of drives in an array increases, I would prefer to use RAID 6 or one of its nested derivatives.



Every RAID controller I have ever used has allowed you to configure whether it prioritizes production I/O or the RAID rebuild. You would have to go out of your way to configure the RAID controller to behave as you describe.
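
As a concrete point of reference, Linux md software RAID exposes the same knob as two sysctls that bound rebuild bandwidth per device; hardware controllers bury the equivalent setting in their BIOS or CLI. A rough sketch of reading and lowering those limits (the specific numbers are just examples, and it needs root on a Linux box):

```python
# Sketch: throttle a Linux md software-RAID rebuild so production I/O wins.
# Hardware controllers expose the same idea as a "rebuild rate" or
# "rebuild priority" setting in their own BIOS/CLI.
MIN_PATH = "/proc/sys/dev/raid/speed_limit_min"  # guaranteed rebuild KB/s per device
MAX_PATH = "/proc/sys/dev/raid/speed_limit_max"  # rebuild ceiling in KB/s per device

def read_kbps(path):
    with open(path) as f:
        return int(f.read().strip())

def write_kbps(path, kbps):
    with open(path, "w") as f:   # needs root
        f.write(str(kbps))

print("current limits:", read_kbps(MIN_PATH), "-", read_kbps(MAX_PATH), "KB/s")

# Favor production I/O: keep the guaranteed floor low and cap the rebuild rate.
# (Example values only; the right numbers depend on the disks and the workload.)
write_kbps(MIN_PATH, 1000)
write_kbps(MAX_PATH, 20000)
```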



With RAID 5, in the event of a disk failure, you might (for sufficiently infinitesimal values of might) lose the array due to parity corruption.

With RAID 0, in the event of a disk failure, you WILL lose the array. Period.

There is no way that RAID 0 will ever have higher uptime than RAID 5.
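
Rough numbers back that up: RAID 0 dies on the first failure, RAID 5 needs a second one on top of it. A back-of-the-envelope comparison, with an assumed 5% per-drive annual failure rate and independent failures:

```python
# Back-of-the-envelope: chance of losing the whole array within a year.
# n drives, per-drive annual failure rate p, failures assumed independent.
n = 8       # drives in the array (illustrative)
p = 0.05    # per-drive annual failure rate (illustrative)

# RAID 0: any single failure kills the array.
raid0_loss = 1 - (1 - p) ** n

# RAID 5: needs at least two failures. Counting "two in the same year" is
# actually pessimistic, since real loss requires the second failure to land
# inside the rebuild window after the first, but it still comes out far ahead.
survive_none = (1 - p) ** n
survive_one = n * p * (1 - p) ** (n - 1)
raid5_loss = 1 - (survive_none + survive_one)

print(f"RAID 0 loss probability: {raid0_loss:.1%}")   # roughly 34% with these inputs
print(f"RAID 5 loss probability: {raid5_loss:.1%}")   # roughly 6% with these inputs
```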



Obviously not. Stop posting.

I basically want to confirm this. The stuff up there about failing rebuilds etc. really must be happening on poor hardware. I have had arrays send me an alert that a sector failed to be rebuilt, but the rebuild went right on and finished (specifically with consumer SATA drives in this case). I have never had a RAID 5 rebuild simply fail to complete on me yet.

Well, that would be lying; we did have some fun with the SANs in test before they got moved to prod. I yanked a disk from a 5-disk test group, then yanked another disk during the rebuild. What happened after was interesting: the group shut down, but once the disks were reinstalled it actually restarted and completed the rebuild, marking it as dirty. The tech docs basically stated that the SAN shut down the rebuild; then, when I inserted all the disks and tried to online them, it grabbed the UUIDs and rebuilt based on the order the disks had been popped out. Amazingly enough, the test data survived intact. So the rebuild failed due to an apparently lost disk, but once all the disks were back the array managed to recover. Stuff like NetApps are fun! Also, benchmarking the disks during the rebuild showed about 1,500 IOPS when they would normally do around 1,800.
 

FishAk

Senior member
Jun 13, 2010
987
0
0
There are plenty of extremely knowledgeable people on here, like Nothinman, who I trust will jump in to correct me if I am wrong, however.

And thank goodness for that too. Apparently, the countless articles on the internet concerning the liabilities of RAID 5 are incorrect. With a proper RAID controller, RAID 5 is perfectly fine.

In my 10+ years of managing storage systems with anywhere from 2-200+ disks, I have never once encountered a situation where a RAID 5 array failed to rebuild due to parity corruption.

So despite what one might find on the internet about 3 disks being the minimum number for RAID 5, you can actually do it with just 2. And it's good to know that more than 200 disks in RAID 5 is no problem. Hmmm... I wonder just how many disks can fail at the same time in a 200+ disk array, and still have the data survive.
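
For what it's worth, the arithmetic on that is easy to sketch: a single RAID 5 set survives exactly one failure regardless of width, and the chance of a second drive dropping while the rebuild is still running scales with member count. Illustrative numbers only (assumed 5% annual failure rate, 24-hour rebuild window):

```python
# How likely is a second failure while a wide RAID 5 set is still rebuilding?
# Illustrative assumptions: 5% annual failure rate, 24-hour rebuild window,
# independent failures.
afr = 0.05
rebuild_hours = 24
p_one_drive = afr * rebuild_hours / (365 * 24)   # crude per-drive chance in the window

for members in (5, 12, 50, 200):
    survivors = members - 1
    p_second = 1 - (1 - p_one_drive) ** survivors
    print(f"{members:>3}-drive RAID 5: {p_second:.2%} chance of a second "
          f"failure during the rebuild")
```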
 

theevilsharpie

Platinum Member
Nov 2, 2009
2,322
14
81
And thank goodness for that too. Apparently, the countless articles on the internet concerning the liabilities of RAID 5 are incorrect. With a proper RAID controller, RAID 5 is perfectly fine.

Because everything you read on the Internet is the absolute truth, right?

So despite what one might find on the internet about 3 disks being the minimum number for RAID 5, you can actually do it with just 2.

You can actually run a RAID 5 array with two drives :sneaky:

And it's good to know that more than 200 disks in RAID 5 is no problem. Hmmm... I wonder just how many disks can fail at the same time in a 200+ disk array, and still have the data survive.

Nobody that values their data would use RAID 5 on such an array. Arrays that large would use some type of nested RAID, such as RAID 60.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
I've had two dual-disk failures in my life, which is enough for me to say RAID 5 blows ass.
Call me unlucky, but it's RAID 10 or bust.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
I wouldn't go over 6 drives in RAID 5; there is a reason they stripe RAID 5/6 into RAID 50/60.
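
The arithmetic usually cited for that rule of thumb: a rebuild has to read every sector on every surviving drive, and the spec-sheet unrecoverable-read-error rate (commonly quoted as 1 in 10^14 bits for consumer SATA, 1 in 10^15 for enterprise/RE-class drives) caps how much you can read before an error becomes likely. A rough sketch, with 2TB drives as the example:

```python
import math

# Chance of finishing a RAID 5 rebuild without hitting an unrecoverable read
# error (URE), using spec-sheet error rates and a Poisson approximation.
BITS_PER_TB = 8 * 10**12

def p_clean_rebuild(drives, tb_per_drive, ure_rate_bits):
    # A rebuild has to read every bit on every surviving member.
    bits_read = (drives - 1) * tb_per_drive * BITS_PER_TB
    return math.exp(-bits_read / ure_rate_bits)

for drives in (4, 6, 12):
    consumer = p_clean_rebuild(drives, 2, 10**14)    # ~1 URE per 1e14 bits (consumer SATA)
    enterprise = p_clean_rebuild(drives, 2, 10**15)  # ~1 URE per 1e15 bits (RE/NS class)
    print(f"{drives:>2}x 2TB RAID 5: clean-rebuild odds "
          f"{consumer:.0%} consumer / {enterprise:.0%} enterprise")
```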

I have a 12-drive RAID 5 on an HP DL320s Windows Storage Server, and man, it is balls slow. The battery failed after a year and took out the cache board, the battery (duh), and one drive. Other than that small bug in the P400 firmware, only one of the Seagate NS drives failed during the 3-year warranty. I'd give it up when the 4TB drives come out in the RE4/Exxxx/Constellation lines, but not on consumer drives.

Remember that a good RAID controller requires TLER (ERC, etc.); something like a nice P410i with 1GB flash-backed write cache (no battery to replace) will not tolerate non-TLER drives.

Apparently ZFS is the way to go. I'd love to see a walkthrough on how to make a nice tiered storage setup: 2 Intel 710 SSDs -> 4 Intel 510 SSDs (they work well in RAID) -> 8 2TB WD RE4s, or something like that.

Quite honestly, now that the 900GB 2.5" Savvio 10K is out, you could just rock those.

2 Intel 710 SSDs
4 Intel 510 SSDs
10 900GB Savvio 10K in RAID 50/60
P410i in a DL380 G7
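
For whatever it's worth, mapping that parts list onto a ZFS pool is conceptually simple: the big spindles become a raidz2 vdev, the write-oriented 710s a mirrored separate intent log (slog), and the 510s L2ARC cache devices. A sketch that just drives the stock zpool commands from Python; the device names are made-up placeholders (on OpenSolaris real ones look like c0t0d0 and friends):

```python
import subprocess

# Hypothetical device names; on OpenSolaris real ones look like c0t0d0, etc.
RE4_DISKS = [f"c0t{i}d0" for i in range(2, 10)]            # 8x 2TB WD RE4: bulk storage
INTEL_710 = ["c0t10d0", "c0t11d0"]                         # 2x Intel 710: mirrored slog
INTEL_510 = ["c0t12d0", "c0t13d0", "c0t14d0", "c0t15d0"]   # 4x Intel 510: L2ARC cache

def zpool(*args):
    """Run a zpool command and fail loudly if it errors."""
    subprocess.run(["zpool", *args], check=True)

# Bulk tier: double-parity raidz2 across the eight RE4 spindles.
zpool("create", "tank", "raidz2", *RE4_DISKS)

# Write tier: mirrored separate intent log on the write-optimized 710s.
zpool("add", "tank", "log", "mirror", *INTEL_710)

# Read tier: L2ARC cache on the 510s (striped automatically, no redundancy needed).
zpool("add", "tank", "cache", *INTEL_510)
```

ZFS then promotes hot reads into the L2ARC and lands synchronous writes on the slog on its own; there is no manual tiering policy to configure.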

It would be interesting if you could boot the server to ESXi from SDHC/USB, then boot an OpenSolaris VSA which would export storage (iSCSI/NFS) back to itself and to other servers using that spiffy tiered caching setup. You could probably run some light-duty apps (AD server, qmail servers) on the same box.

If you passed direct access to the P410i through to the OpenSolaris VM, that might work really well.

Not sure how OpenSolaris compares to ESXi 5's VSA appliance. The main point of the ESX VSA and LeftHand VSA is quorum, so you can have storage clustering to increase reliability and performance. It would be interesting to compare the results of both solutions.