RAID Options


Red Squirrel

No Lifer
May 24, 2003
I'm a big fan of Linux md RAID. It's simple, solid, and versatile. You can grow arrays live, and if your kernel is new enough you can even change RAID levels (go from 5 to 6, for example).
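Roughly, the live grow and level change look like this with mdadm (just a sketch; /dev/md0 and the partition names are placeholders for whatever your array and new disks actually are):

```
# add a new disk to a running 4-disk array and grow it to 5 members
mdadm --add /dev/md0 /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=5

# on a new enough kernel/mdadm, reshape RAID 5 into RAID 6
# (needs one more member for the extra parity, plus a backup file for the reshape)
mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/root/md0-reshape.bak
```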

I actually prefer software RAID to hardware RAID now because I'm not dependent on very specific hardware. I can take all my mdraid drives, throw them into a completely different system with a completely different controller, and still assemble the array easily.
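The "throw them in another box" part usually amounts to this (again a sketch; device names are placeholders):

```
# scan all disks for md metadata and reassemble whatever arrays are found
mdadm --assemble --scan

# or name the members explicitly if you know which disks came along
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```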

And while RAID lowers the odds of data loss, it won't eliminate it; if you accidentally delete something, RAID won't save you, so you still need backups to a separate storage system (another RAID on another machine, or individual drives).
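Even a nightly rsync to a second box counts as "a separate storage system" (the hostname and paths here are made up; note that without snapshots or dated copies, a deleted file only survives until the next run):

```
# copy the array's contents to a separate backup machine
rsync -aH /mnt/raid/ backupbox:/backup/raid/
```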
 

Emulex

Diamond Member
Jan 28, 2001
The $249 HP P420/1GB FBWC is probably the best bang for the buck; throw it in that new MicroServer based off the ML310e :)

HP SmartCache with the SAAP 2.0 key ;) Read caching works with any SSD: up to 750GB of cache with the 1GB FBWC model, 1.5TB with the 2GB FBWC model. But for $249 we're not going to complain.

No lame batteries (ahem, Dell's 24-hour battery), all supercap here!

Advanced VOD tweaks (the SAAP 2.0 key is licensed per server, pretty much how iLO 1-4 keys work!)

Compatible with pretty much every SAS RAID card HP ever made; that goes back further than Linux and ZFS ;)

Smart Error or standard TLER mode (HP calls it pre-failure, LSI SmartEr).

Doesn't have crappy firmware like buggy LSI cards, which need firmware upgrades every two weeks and leave the drivers all out of whack.

No lame SafeID feature-key transfer junk. One key, honor system, per server, and if you need to pinch hit, that's on your morals; you aren't at the mercy of LSI's website to issue a transfer or new key in a DR scenario.

RAID ADM from the Integrity line. Basically they know how much RE4 4TB drives suck; the chance of a bit-error-rate failure is so high that you can turn 3 drives into 1 mirrored "drive," then nest that in another RAID level.

3 drives = 1 ADM mirror set, then stripe them as ADM+0 (like 1+0), so 30 drives become 10, but you can lose 20 out of 30 and keep rocking; plus it's faster.
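For anyone who'd rather see that triple-mirror-then-stripe shape in familiar terms, here's a rough mdadm equivalent with six placeholder disks (not HP's implementation, just the same layout):

```
# three-way mirrors: the "3 drives = 1 raid disk" part
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg

# stripe across the mirror sets: the "+0" part
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```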

Good luck rebuilding a RAID-5 or RAID-6 of 7-10 4TB RE4 SATA drives. lol. You'll be stressed out, biting your nails, waiting for the dreaded failure.

PI is new: it lets newer SSDs and drives carry end-to-end ECC. FDE requires end-to-end IOEDC/IOECC on the drive, but PI extends it to the controller, which can rule out backplane and cabling issues.

You can force-trust the parity or the data; it can even read the parity and verify like ZFS does, but in hardware, on a 1GHz PMC/Adaptec SRvx6G CPU with 1GB of DDR3 RAM (backed by 1GB NVRAM + supercap). ZFS isn't the only FS with checksumming (ReFS), and controllers can now do the same bitrot check as ZFS, so that's that.
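For comparison, the software-side analogue of that read-and-verify pass is an md "check" scrub (md0 is a placeholder; this is not the P420 feature itself, just the same kind of verification):

```
# read every stripe and verify parity against the data
echo check > /sys/block/md0/md/sync_action

# number of mismatched sectors found by the last check
cat /sys/block/md0/md/mismatch_cnt
```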

Anyhoo. If you can't afford that, remember HP sells Intel RAID with HP Virtual Smart Array drivers for Linux, ESXi 5.1, and Windows, and also sells an HP Virtual Smart Array driver for the LSI 2308; both can use the smaller 512MB BBWC. The LSI 2308 comes on the riser card in HP servers and can be flashed back to IT mode if you can't run the HP VSA driver or want to rock ZFS with Solaris (Solaris has no VSA driver).

Man, I wonder what it would take to enable the Intel SATA ports, the LSI ports, and a P420 at the same time. The Intel and LSI would rock for SSDs and the P420 for spinning drives. You could do a tiering filesystem with both CPU and hardware assist. Yum!
 

thecoolnessrune

Diamond Member
Jun 8, 2005
I think we can conduct this conversation without having a bunch of unnecessary fluff.

The fact that some RAID cards are finally getting write and read caches doesn't change the fact that nearly all virtualized storage systems support multiple tiers of storage, not only as a cache but as a data storage area. This allows a full mix of SSDs, 15K, 10K, and 7.2K disks depending on data needs and budget. RAID cards will likely never get this sort of granularity.
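As a hedged sketch of just the caching half of that (device names invented for the example), ZFS can put an SSD in front of a pool of 7.2K disks as a read cache; the data-tier half, where SSDs hold data outright, is what the virtualized systems add on top:

```
# data lives on big 7.2K disks, an SSD sits in front as an L2ARC read cache
zpool create tank raidz2 sdb sdc sdd sde sdf sdg
zpool add tank cache sdh
```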

Your talk about backwards compatibility with previous HP SAS RAID cards is entirely untrue. The first SAS hardware didn't even show up until late 2005, which is exactly when ZFS started showing up, and WAY later than Linux mdadm has been around (2001). This still doesn't take into account the other factor I mentioned: if the card fails, you still have to buy another HP RAID controller to fix the problem.

Additionally, all advanced virtualized storage systems support ADM-style layouts, mainly for performance reasons (if you have 24 hard drives in an array, making it into four 6-disk "RAID 6" arrays in an aggregated pool is FAR faster than having a single huge 24-disk array, especially when it's time to rebuild).
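In ZFS terms, that aggregated-pool layout is just several small RAID-Z2 vdevs striped into one pool; a rough sketch with 24 placeholder disks:

```
# four 6-disk RAID-Z2 groups in one pool; a rebuild only resilvers
# the 6-disk vdev that lost a drive, not all 24 disks
zpool create tank \
  raidz2 sdb sdc sdd sde sdf sdg \
  raidz2 sdh sdi sdj sdk sdl sdm \
  raidz2 sdn sdo sdp sdq sdr sds \
  raidz2 sdt sdu sdv sdw sdx sdy
```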

Force trusting parity isn't a good idea at all in practice and is only useful when you know the data should be right (like when you move a hard drive around). This "option" is also available in virtualized storage systems. With today's processors, doing that verification work in software isn't a problem. It's storage encryption that's the real CPU sucker, and a RAID ASIC won't do a thing to help you there (unless they decide to release one, but of course the real answer is FIPS end-to-end, and that's an industry that's just getting started).

As for bitrot checking in the RAID world, it needs ECC memory to be done properly (which tends to limit it to hardware RAID controllers or virtual RAID drivers on systems with ECC memory). As for regarding ZFS as the only file system, I don't believe anyone ever said it was; I'd be curious why you feel that way, since I believe I mentioned ReFS in my previous post. When it comes to bitrot checking, RAID does not check end-to-end. The only thing RAID checking does is verify that whatever data made it from the card to the hard drive is intact. It doesn't verify that it's the right data, that it's in the right place, or that the card itself didn't introduce an error (like the buggy firmware you mentioned). Virtualized storage systems do a better job, but they aren't perfect either.

There's a pretty interesting blog post about that subject here

Another thing about virtualized RAID vs. hardware RAID: in large parity arrays (like RAID 6 over 10 disks), parity calculations can get *expensive*. And that would matter *if* you had an all-in-one storage box that was not only holding data but also doing computation on that data. Take that out, though, in the form of a dedicated virtualized storage appliance, and all it's doing is managing storage. So what does it matter if the CPU in the storage appliance has something to do?
 

smitbret

Diamond Member
Jul 27, 2006
The catch there is cost. Like I said, you can get that Smart Array for $100 on Amazon right now. unRAID is $70 for up to 7 drives (which I realize is enough for most people) or $120 for more than 7. Given the $30 difference between the basic unRAID and an actual hardware setup, I'd rather just spend the extra for the controller. To each his own I guess.

I just like the unRAID option because it gives the OP the opportunity to easily expand the size of the array, versus using a hardware RAID card.