Best RAID for home NAS?

Discussion in 'Memory and Storage' started by BusyDoingNothing, Feb 10, 2010.

  1. BusyDoingNothing

    Joined:
    Nov 10, 2005
    Messages:
    28
    Likes Received:
    0
    I'm planning on building a home server using Ubuntu (most likely) which will be mainly used for backup, storage, file serving, and possibly video encoding. In anticipation, I got an Adaptec 2610SA 6-port hardware RAID adapter off eBay. I'm trying to figure out which RAID solution is best for me, and I've narrowed it down to RAID-5 and RAID-10.

    Performance isn't necessarily my biggest concern. I think as far as writing goes, I'll be limited mostly by my network speed. I shoot film with an HD cam, so I'll be writing a lot of large video files (likely around 10GB a pop) to the array. I'd also like to use the box to encode the videos into different formats, but I think I'll be more limited by CPU than anything when it comes to that.

    I guess my biggest concerns are space, reliability, and cost. I don't want to break the bank. Regardless of which route I go, I'm gonna buy 3 or 4 Hitachi 1TB Deskstars for the array, so it'll end up costing me about $300 tops. I'd like to get the most space possible. I'll be backing up my PCs and storing my music and film projects, so I want to feel secure with my data. I can always do a backup to a different drive outside the array (internal or external) for my most important stuff.

    What do you guys suggest? It seems to me that RAID 10 is most highly regarded. It's gonna cost more per storage space, but it seems to be more reliable, as I may be able to lose 2 drives but still be able to recover. Any input is welcome. Thanks!
     
  2. pjkenned

    pjkenned Senior member

    Joined:
    Jan 14, 2008
    Messages:
    629
    Likes Received:
    0
    With 4x 1TB drives:
    Raid 10 = 2TB (one or two drives can fail, depending on which drives fail; low overhead)
    Raid 5 = 3TB (single drive can fail)
    Raid 6 = 2TB (any two drives can fail)

    I've been using Raid 1 for 2x1TB OS drive(s) and Raid 6 for the 12+ 1.5TB storage drives for over a year.

    The new build (new motherboard today) I think will stay RAID 6, but I've been contemplating just using the WHS built-in duplication. Then again, WHS only supports 32 drives, so that's an issue with not going RAID 6.
     
  3. tuteja1986

    tuteja1986 Diamond Member

    Joined:
    Jun 1, 2005
    Messages:
    3,676
    Likes Received:
    0
    Well, in my file server I have a different setup:
    1x 200GB IDE = OS drive (don't care if it dies)
    2x 500GB WD Enterprise RAID 1 = personal data
    4x 1TB WD Green RAID 5 = data that's not that important
    SeaSonic S12II 430B PSU (good solid 80+ Bronze PSU; don't get a 600W+ PSU)
    1GB RAM (never need more than that, 2GB max)
    Intel Celeron 430 (LGA 775) with Tuniq Tower 120 (don't need a power-hungry CPU)
    Gigabyte P35-DS3P
    Thermaltake A2309 (VERY IMPORTANT, keeps the hard drives cool; fits three HDDs)
    Case that will take two Thermaltake A2309 cages


    Also make sure to get a UPS with a 700W load capacity ($150+) that can last 20+ minutes when the power goes off. Also make sure your file server can shut down in less than 1 minute 30 seconds. I use a highly cut-down version of Windows 2003 which can shut down in less than 30 seconds. Linux and Unix would be better, but I didn't have compatible drivers for my motherboard's RAID.

    Make sure to get an intelligent UPS that does a full recharge before starting the server back up when power is restored.

    Make sure to have two spare hard drives in the drawer, ready to take over if a drive starts to show signs of failure. Use a good HDD monitoring tool.
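
    If the box ends up on Ubuntu as planned, the shutdown side is usually handled by apcupsd; a rough sketch, assuming an APC unit (the package and directives are real, the threshold values below are only examples):

    # apt-get install apcupsd
    then in /etc/apcupsd/apcupsd.conf, for example:
    BATTERYLEVEL 50    (begin shutdown once battery charge falls to 50%)
    MINUTES 10         (or once estimated runtime falls to 10 minutes)
    TIMEOUT 0          (0 = rely on the two limits above instead of a fixed timer)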
     
    #3 tuteja1986, Feb 11, 2010
    Last edited: Feb 11, 2010
  4. taltamir

    taltamir Lifer

    Joined:
    Mar 21, 2004
    Messages:
    13,574
    Likes Received:
    0
    You want OS-based (pure software; do NOT use a mobo controller under any circumstance) RAID1 arrays (plural), or maybe RAID10.
    Avoid RAID5 and RAID6, and never use mobo-controller-based RAID... quality hardware RAID is costly and locks you in, but it is very fast and reliable if you are willing to pay. OS-based RAID works best with Solaris from genunix.org, which allows you to use ZFS.
    ZFS is by far the best file system available right now, being at least 2 or 3 generations ahead of any other file system currently available in terms of what it can do.
    http://en.wikipedia.org/wiki/Zfs
    Although there is one equivalent file system, known as btrfs (aka "better FS"), currently in development, it will be some time before it is usable (the current version is 0.19, an unstable alpha).
    http://en.wikipedia.org/wiki/Btrfs
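
    To make the mirror/RAID10 suggestion concrete, a minimal sketch of an OS-based setup under Solaris/OpenSolaris (the device names are made up; substitute your own):

    # zpool create tank mirror c0t1d0 c0t2d0 mirror c0t3d0 c0t4d0    (two ZFS mirrors striped together, i.e. a RAID10-style layout)
    # zpool status tank    (confirm both mirror vdevs are online)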
     
    #4 taltamir, Feb 11, 2010
    Last edited: Feb 11, 2010
  5. Knavish

    Knavish Senior member

    Joined:
    May 17, 2002
    Messages:
    901
    Likes Received:
    0
    FreeBSD supports ZFS / RAID-Z as well.


     
  6. tuteja1986

    tuteja1986 Diamond Member

    Joined:
    Jun 1, 2005
    Messages:
    3,676
    Likes Received:
    0
    If you do go OS RAID, make sure you get a UPS!
     
  7. Emulex

    Emulex Diamond Member

    Joined:
    Jan 28, 2001
    Messages:
    9,759
    Likes Received:
    0
    who doesn't rock a solid ups these days?
     
  8. taltamir

    taltamir Lifer

    Joined:
    Mar 21, 2004
    Messages:
    13,574
    Likes Received:
    0
    Correct; in fact I tested the ability to import a ZFS raidz2 array between Solaris, Nexenta, and FreeBSD (it was successful every time) before putting a single file on it.
    just type:
    # zpool import -f tank
    and wait 1 to 5 seconds and you have your array in the new / different OS.

    Not something that can be said for most other storage or RAID solutions.
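
    The full move between boxes is just as short; a rough sketch, assuming the pool is named tank as above:

    # zpool export tank    (run on the old system; cleanly releases the pool)
    # zpool import         (run on the new system; lists pools available for import)
    # zpool import -f tank (-f forces the import if the pool was not cleanly exported)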
     
  9. BusyDoingNothing

    Joined:
    Nov 10, 2005
    Messages:
    28
    Likes Received:
    0
    Thanks for all the input so far, guys. Keep in mind, I have a hardware RAID controller that I will be using to construct the RAID. Apparently it doesn't support RAID 6, so that's out of the question (view specs here: http://support.dell.com/support/edocs/storage/RAID/CS6CH/en/index.htm).

    The OS will not be on this RAID; I'll most likely be running the OS off a Compact Flash card or a much smaller hard drive.

    Is ZFS the best file system option? I thought I read that it's still experimental. Does Ubuntu support it?

    It looks like RAID 10 or dual RAID 1 might be my best bet?
     
  10. Emulex

    Emulex Diamond Member

    Joined:
    Jan 28, 2001
    Messages:
    9,759
    Likes Received:
    0
    iirc opensolaris has the best zfs and raid-z(2) support. but weaker iscsi (persistent reservations ala scsi-3)?
     
  11. bigi

    bigi Golden Member

    Joined:
    Aug 8, 2001
    Messages:
    1,684
    Likes Received:
    1
    I use RAID 6 myself because I am afraid that a 2nd drive may fail, especially during a RAID rebuild.
    Depending on the controller/drives and the size of your array, a rebuild may take a long time to complete, even days in some cases. During that time a RAID 5 works very hard to rebuild itself, and it is vulnerable to another HDD failure.
    I'd go 1+0 in your case.
     
  12. BusyDoingNothing

    Joined:
    Nov 10, 2005
    Messages:
    28
    Likes Received:
    0
    I decided to go a different route. I got 2 WD 500GB Green drives to use as a RAID 1 for backup. I got 3 Hitachi 1TB drives to either turn into a 2TB RAID 5 or a 1TB RAID 1 for my most important non-backup data (i.e. film and music projects) and the extra 1TB to use as just an extra data drive. I don't know...is RAID 5 really as bad as some things I've read? Or should I go with it?
     
  13. Cr0nJ0b

    Cr0nJ0b Golden Member

    Joined:
    Apr 13, 2004
    Messages:
    1,041
    Likes Received:
    1
    I have about 8TB at home right now. I use Linux and software RAID 5. If you don't care about performance and you want the best bang for the buck, it's the best way to go. SW RAID is another performance step down, but it's also free. I like to set up several volumes of no more than 5 drives each. That gets me some good performance and relatively low risk. I wouldn't look at RAID 6 unless you have a very high SLA and are running a 24x7 operation. For me that's not an issue. What RAID 6 buys you is time between a first and second failure before data loss. I personally am not really worried about a double disk fault. I have seen a number of other issues, like file system corruption and user error, that are more frequent. I back up from one set to a separate server on a relatively frequent basis, so I'm protected there.

    In a nutshell: 4+1 RAID5 stripes using the Linux mdraid tools (mdadm).
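
    For reference, a rough sketch of building one such 4+1 stripe with mdadm (the device names are hypothetical; point it at your own disks):

    # mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf    (create the 5-disk RAID 5)
    # cat /proc/mdstat    (watch the initial resync)
    # mkfs.ext4 /dev/md0  (then format and mount as usual)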

    good luck.
     
  14. pjkenned

    pjkenned Senior member

    Joined:
    Jan 14, 2008
    Messages:
    629
    Likes Received:
    0
    Two thoughts. First, with consumer drives you have to worry about the lack of TLER (or the vendor's equivalent): a long error-recovery cycle can get a disk kicked out of a degraded RAID 5 array. Second, I know that I'm not the only person who has accidentally pulled the wrong drive or knocked a cable in a bad way. User error can also kill a RAID 5 array, especially as the spindle counts rise.

    All that being said, I'm running one WHS VM right now, letting it control the disks directly instead of creating a RAID array on the Areca. Performance isn't super, but it is sufficient. Moving to 30+ drives, I know that I will be seeing a few failures per year. With 2TB drives and RAID 6 + hot spare it is 7 drives for 8TB of capacity (2TB MBR partitions * 4) vs. 8 drives for 8TB (and another 8TB for duplication) with WHS managing, plus with WHS a two-drive failure is a partial data-set loss, not a total loss.
     
  15. sub.mesa

    sub.mesa Senior member

    Joined:
    Feb 16, 2010
    Messages:
    611
    Likes Received:
    0
    FreeBSD 8.0 has ZFS support up to v13, so you won't miss anything. I can strongly recommend playing with ZFS. It makes all other storage setups, especially those under Windows, obsolete in almost every way.
     
  16. pjkenned

    pjkenned Senior member

    Joined:
    Jan 14, 2008
    Messages:
    629
    Likes Received:
    0
    No OCE (online capacity expansion) in RAID-Z / RAID-Z2 :-(
     
  17. sub.mesa

    sub.mesa Senior member

    Joined:
    Feb 16, 2010
    Messages:
    611
    Likes Received:
    0
    No, but you can add a second RAID-Z array to the same pool, so you start with a 4-disk RAID-Z and later add a second 4-disk RAID-Z; same overhead as an 8-disk RAID6 but much more resilience against data loss.
     
  18. pjkenned

    pjkenned Senior member

    Joined:
    Jan 14, 2008
    Messages:
    629
    Likes Received:
    0
    You mean raid-z2 = raid 6 I think :), but you need to add a minimum of four drives with raid-z2. With HW RAID you can add 1 disk, 2 disks, or a bunch more to an existing array.

    Also, at 4 drives, RAID 1 makes a strong case in any event. You trade the speed of RAID 6 (which matters less over a single NIC) for the ability to survive any two drives in the array failing; on the third drive failure you lose 100% of the data. With two RAID 1 arrays you would have to lose both disks of the same pair to lose data, and even then you would lose only half the data on the four drives.

    So really RAID-Z2 starts making a good case at 5+ drives, which means you are adding 5+ drives at a time. For people with large arrays this is no issue, since expanding is an expensive undertaking anyway. For those with small arrays (<10 drives), it means that each time you add capacity you are adding 5+ drives versus one or only a few.

    I like raid-z2 but things like OCE are really useful, especially if you aren't adding 2TB/mo of data (since unused capacity has an electric cost, warranty cost, opportunity cost, and will cause wear on a drive without needing to).

    That being said, my next toy is certainly a big Raid-Z3 array :)
     
  19. sub.mesa

    sub.mesa Senior member

    Joined:
    Feb 16, 2010
    Messages:
    611
    Likes Received:
    0
    Two RAID-Z (RAID5) arrays have the same 'overhead' as one RAID-Z2 (RAID6) array.


    My point is you don't need OCE:
    1) start with 4-disk RAID-Z (1 disk lost due to overhead)
    2) after 6 months buy another 4-disk RAID-Z (total 2 disks overhead same as RAID6)
    3) after a year add another 4-disk RAID-Z

    So you started with 3TB (assuming 1TB drives), then expanded to 6TB, and later to 9TB, all without ever using capacity expansion. All you did was add a second (and third) array to the existing storage pool; that just works. All free space is shared, and ZFS basically behaves as if it were one big RAID-Z2 array.
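
    In zpool terms that growth path is just two commands; a rough sketch, assuming FreeBSD-style device names (da1..da8 are made up):

    # zpool create tank raidz da1 da2 da3 da4    (step 1: the initial 4-disk RAID-Z, ~3TB usable with 1TB drives)
    # zpool add tank raidz da5 da6 da7 da8       (step 2: six months later, a second 4-disk RAID-Z joins the same pool)
    # zpool status tank                          (shows both raidz vdevs; writes are striped across them)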

    If you require more redundancy, try setting copies=2 for the datasets/directories you find important. That doubles the number of copies of those files, and ZFS will try to store the copies on different physical disks, so it can withstand even more HDD failure or corruption.
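
    A quick sketch of that, assuming a dataset called tank/projects (the name is just an example):

    # zfs create tank/projects
    # zfs set copies=2 tank/projects    (newly written files in this dataset get two copies; existing files are not rewritten)
    # zfs get copies tank/projects      (verify the setting)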
     
  20. Emulex

    Emulex Diamond Member

    Joined:
    Jan 28, 2001
    Messages:
    9,759
    Likes Received:
    0
    well if you had two raid-5 the logical thing to do would be to stripe them to raid-50
     
  21. sub.mesa

    sub.mesa Senior member

    Joined:
    Feb 16, 2010
    Messages:
    611
    Likes Received:
    0
    That's what ZFS automatically does when you add another array to an existing pool. So you can 'expand' the RAID0 part, and just put multiple RAID-Zs in a single pool (think of the pool as one big volume if you are unfamiliar with ZFS). This will both increase the storage space available on your existing volume and increase performance, since ZFS has multiple arrays to read from and write to.
     
  22. Emulex

    Emulex Diamond Member

    Joined:
    Jan 28, 2001
    Messages:
    9,759
    Likes Received:
    0
    how do you enforce boundaries like say drive 1 bay 1 mirrors to drive 1 bay 2 ?

    to prevent failure if a drive bay/cage fails? what happens if the hot-spare is only in one bay? could you unplug the dead drive and move the spare to other bay? (cold)
     
  23. sub.mesa

    sub.mesa Senior member

    Joined:
    Feb 16, 2010
    Messages:
    611
    Likes Received:
    0
    All my disks are labeled, so ZFS knows about "disk1" and I know which physical drive that is. When I remove one, I'll check that I removed the right one; otherwise I plug it back in. :)

    Hardware RAID often has beeping alarms, LEDs, etc. That's useful, but it's no disaster if you pull the wrong drive or mix up the order you connect them; ZFS will detect each disk individually, no matter how they are connected.
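
    For anyone curious how that labeling looks on FreeBSD, a rough sketch (the disk devices and label names are made up):

    # glabel label -v disk1 /dev/ad4    (write a permanent label onto the first disk)
    # glabel label -v disk2 /dev/ad6
    # glabel label -v disk3 /dev/ad8
    # glabel label -v disk4 /dev/ad10
    # zpool create tank raidz /dev/label/disk1 /dev/label/disk2 /dev/label/disk3 /dev/label/disk4    (build the pool on the labels, not the raw device names)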
     
  24. Emulex

    Emulex Diamond Member

    Joined:
    Jan 28, 2001
    Messages:
    9,759
    Likes Received:
    0
    Well, the RAID edition drives and SAS drives have a WWN on them, so you can keep them in order all day long; before formatting you can locate the drive.

    I wish they still had beeping RAID controllers - all silent nowadays - I prefer a screaming RAID controller to let someone know it's not happy!
     
  25. sub.mesa

    sub.mesa Senior member

    Joined:
    Feb 16, 2010
    Messages:
    611
    Likes Received:
    0
    Well, I use drive cages with LEDs on them, so I simply read from a disk and then know which one it is. I use a simple dd read for that; crude, but effective. :)
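
    Something like this, assuming the glabel names from my earlier post (the label is made up; point it at whichever disk you want to identify):

    # dd if=/dev/label/disk1 of=/dev/null bs=1m count=10000    (a long sequential read; the activity LED on that drive's bay stays lit while it runs)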