AnandTech Forums > Hardware and Technology > Memory and Storage
Old 02-10-2010, 09:38 PM   #1
BusyDoingNothing
Member
 
Join Date: Nov 2005
Posts: 28
Best RAID for home NAS?

I'm planning on building a home server using Ubuntu (most likely) which will be mainly used for backup, storage, file serving, and possibly video encoding. In anticipation, I got an Adaptec 2610SA 6-port hardware RAID adapter off eBay. I'm trying to figure out which RAID solution is best for me, and I've narrowed it down to RAID-5 and RAID-10.

Performance isn't necessarily my biggest concern. As far as writing goes, I think I'll be limited mostly by my network speed. I shoot film with an HD cam, so I'll be writing a lot of large video files (likely around 10GB a pop) to the array. I'd also like to use the box to encode the videos into different formats, but I think I'll be more limited by CPU than anything when it comes to that.

I guess my biggest concerns are space, reliability, and cost. I don't want to break the bank. Regardless of which route I go, I'm gonna buy 3 or 4 Hitachi 1TB Deskstars for the array, so it'll end up costing me about $300 tops. I'd like to get the most space possible. I'll be backing up my PCs and storing my music and film projects, so I want to feel secure with my data. I can always do a backup to a different drive outside the array (internal or external) for my most important stuff.

What do you guys suggest? It seems to me that RAID 10 is most highly regarded. It's gonna cost more per gigabyte of usable space, but it seems more reliable, since I may be able to lose two drives and still recover. Any input is welcome. Thanks!
__________________
Intel C2D E6600 || 2GB G.Skill DDR2-800 || XFX 8800GTX || Asus P5W DH Deluxe || Creative X-Fi Platinum || Antec P180 || Enermax Liberty 620W || Arctic Freezer 7 Pro || Dell 2007FPW
Old 02-11-2010, 12:39 AM   #2
pjkenned
Senior Member
 
Join Date: Jan 2008
Posts: 629
With 4x 1TB drives:
RAID 10 = 2TB usable (one or two drives can fail, depending on which drives; low overhead)
RAID 5 = 3TB usable (a single drive can fail)
RAID 6 = 2TB usable (any two drives can fail)
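The usable-capacity numbers above are easy to sanity-check with a bit of shell arithmetic; the drive count and size here are just this thread's 4x 1TB example:

```shell
# Usable capacity for N equal drives in each RAID level (sizes in TB).
drives=4
size=1
raid10=$(( drives / 2 * size ))    # mirrored pairs: half the raw space
raid5=$(( (drives - 1) * size ))   # one drive's worth of parity
raid6=$(( (drives - 2) * size ))   # two drives' worth of parity
echo "RAID10=${raid10}TB RAID5=${raid5}TB RAID6=${raid6}TB"
```

which prints `RAID10=2TB RAID5=3TB RAID6=2TB`, matching the table.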

I've been using Raid 1 for 2x1TB OS drive(s) and Raid 6 for the 12+ 1.5TB storage drives for over a year.

For the new build (new motherboard today) I think I'll stay with RAID 6, but I've been contemplating just using the WHS built-in duplication. Then again, WHS only supports 32 drives, so that limit becomes an issue if I don't go RAID 6.
__________________
My Home Server Site and Current Project: ServeTheHome RAID Calculator
Feedback on the calculator much appreciated as we add new features.
Old 02-11-2010, 06:11 AM   #3
tuteja1986
Diamond Member
 
Join Date: Jun 2005
Posts: 3,676
Well, in my file server I have a different setup:
1x 200GB IDE = OS drive (don't care if it dies)
2x 500GB WD Enterprise in RAID 1 = personal data
4x 1TB WD Green in RAID 5 = less important data
SeaSonic S12II 430B PSU (a good solid 80 Plus Bronze PSU; don't get a 600W+ PSU)
1GB RAM (never needed more than that, 2GB max)
Intel Celeron 430 (LGA 775) with Tuniq Tower 120 (you don't need a power-hungry CPU)
Gigabyte P35-DS3P
Thermaltake A2309 (VERY IMPORTANT: keeps the hard drives cool, fits three HDDs)
A case that will take two Thermaltake A2309 cages


Also make sure to get a UPS with about 700W load capacity ($150+) that can last 20+ minutes when the power goes out, and make sure your file server can shut down in less than 1 minute 30 seconds. I use a heavily cut-down version of Windows 2003 that shuts down in under 30 seconds. Linux or Unix would be better, but I didn't have compatible drivers for my motherboard's RAID controller.

Make sure to get an intelligent UPS that does a full recharge before starting the server back up once power is restored.

Also keep two spare hard drives in the drawer, ready to take over if a drive starts showing signs of failure, and use a good HDD monitoring tool.
__________________
"A person can gain everything without having nothing. But a person with everything can not gain more happiness than a person with nothing. If you truly understand this than you can be anything. "

Join Anandtech Steam community Group

Old 02-11-2010, 07:25 AM   #4
taltamir
Lifer
 
 
Join Date: Mar 2004
Posts: 13,523
You want OS-based (pure software) RAID 1 arrays (plural), or maybe RAID 10; do NOT use a mobo controller under any circumstances.
Avoid RAID 5 and RAID 6, and never use mobo-controller RAID. Quality hardware RAID is costly and locks you in, but it is very fast and reliable if you are willing to pay. OS-based RAID works best with Solaris from genunix.org, which lets you use ZFS.
ZFS is by far the best file system available right now, at least 2 or 3 generations ahead of any other file system in terms of what it can do.
http://en.wikipedia.org/wiki/Zfs
There is one equivalent file system, btrfs (aka "better FS"), currently in development, but it will be some time before it's usable (the current version is 0.19, an unstable alpha).
http://en.wikipedia.org/wiki/Btrfs
__________________
How to protect your data guide
AA Naming Guide

I do not have a superman complex; for I am God, not superman!
The internet is a source of infinite information; the vast majority of which happens to be wrong.

Old 02-11-2010, 02:20 PM   #5
Knavish
Senior Member
 
 
Join Date: May 2002
Location: MA
Posts: 843

FreeBSD supports ZFS / RAID-Z as well.


Quote:
Originally Posted by taltamir View Post
...you are willing to pay. OS based raid works best with solaris from genunix.org which allows you to use ZFS.
ZFS is by far the best file system available right now, being at least 2 or 3 generations ahead in terms of what it can do compared to any other file system currently available.
...
Old 02-11-2010, 06:39 PM   #6
tuteja1986
Diamond Member
 
Join Date: Jun 2005
Posts: 3,676

If you do go OS RAID, make sure you get a UPS!
Old 02-11-2010, 07:19 PM   #7
Emulex
Diamond Member
 
Join Date: Jan 2001
Location: ATL
Posts: 9,540

who doesn't rock a solid ups these days?
__________________
-------------------------
NAS: Dell 530 Q6600 8gb 4tb headless VHP
KID PC1: Mac Pro Dual nehalem - 6gb - GF120 - HP ZR30W
Browser: Dell 530 Q6600 4GB - Kingston 96gb -gt240- hp LP3065 IPS - 7ult
Tabs: IPAD 1,2,3 IPOD3,HTC flyer, Galaxy Tab - all rooted/jb
Couch1: Macbook Air/Macbook White
Couch2: Macbook Pro 17 2.66 Matte screen - 8GB - SSD
HTPC: Asus C2Q8300/X25-V - Geforce 430- 7ult - Antec MicroFusion 350
Old 02-11-2010, 07:25 PM   #8
taltamir
Lifer
 
 
Join Date: Mar 2004
Posts: 13,523

Quote:
Originally Posted by Knavish View Post
FreeBSD supports ZFS / RAID-Z as well.
Correct. In fact, before putting a single file on it, I tested importing a ZFS raidz2 array between Solaris, Nexenta, and FreeBSD (it was successful every time).
Just type:
# zpool import -f tank
and after 1 to 5 seconds you have your array in the new/different OS.

Quote:
Originally Posted by tuteja1986 View Post
If you do go OS raid Make sure you get a UPS !
that can be said for any type of storage or RAID solution.
Old 02-11-2010, 07:33 PM   #9
BusyDoingNothing
Member
 
Join Date: Nov 2005
Posts: 28

Thanks for all the input so far, guys. Keep in mind, I have a hardware RAID controller that I will be using to construct the RAID. Apparently it doesn't support RAID 6, so that's out of the question (view specs here: http://support.dell.com/support/edoc...H/en/index.htm).

The OS will not be on this RAID; I'll most likely be running the OS off a Compact Flash card or a much smaller hard drive.

Is ZFS the best file system option? I thought I read that it's still experimental. Does Ubuntu support it?

It looks like RAID 10 or dual RAID 1 might be my best bet?
Old 02-11-2010, 08:16 PM   #10
Emulex
Diamond Member
 
Join Date: Jan 2001
Location: ATL
Posts: 9,540

IIRC OpenSolaris has the best ZFS and RAID-Z(2) support, but weaker iSCSI (persistent reservations a la SCSI-3)?
Old 02-12-2010, 12:18 PM   #11
bigi
Golden Member
 
 
Join Date: Aug 2001
Posts: 1,386
I use RAID 6 myself because I'm afraid a second drive may fail, especially during a RAID rebuild.
Depending on the controller, the drives, and the size of your array, a rebuild can take a long time to complete, even days in some cases. During that time a RAID 5 array works very hard to rebuild itself, and it is vulnerable to another HDD failure.
I'd go 1+0 in your case.
__________________
"I'm sick of being so healthy" - H.Simpson
Old 02-14-2010, 01:04 PM   #12
BusyDoingNothing
Member
 
Join Date: Nov 2005
Posts: 28

I decided to go a different route. I got 2 WD 500GB Green drives to use as a RAID 1 for backup. I got 3 Hitachi 1TB drives to either turn into a 2TB RAID 5 or a 1TB RAID 1 for my most important non-backup data (i.e. film and music projects) and the extra 1TB to use as just an extra data drive. I don't know...is RAID 5 really as bad as some things I've read? Or should I go with it?
Old 02-14-2010, 01:14 PM   #13
Cr0nJ0b
Senior Member
 
 
Join Date: Apr 2004
Posts: 952
RAID 5 is the way to go

I have about 8TB at home right now. I use Linux and software RAID 5. If you don't care about performance and you want the best bang for the buck, it's the best way to go. SW RAID is another performance step down, but it's also free. I like to set up several volumes of no more than 5 drives each; that gets me decent performance and relatively low risk. I wouldn't look at RAID 6 unless you have a very high SLA and are running a 24x7 operation. For me that's not an issue. What RAID 6 buys you is time between a first and second failure before data loss, and I'm personally not that worried about a double disk fault. I have seen a number of other issues, like file system corruption and user error, that are more frequent. I back up from one set to a separate server on a relatively frequent basis, so I'm protected there.

In a nutshell: 4+1 RAID5 stripes using linux MDRAID tools.
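For anyone following along, a 4+1 RAID 5 stripe with the Linux mdraid tools looks roughly like this; the /dev/sd[b-f] device names are placeholders for whatever your five drives actually enumerate as:

```shell
# Create a 5-drive (4 data + 1 parity) RAID 5 array with mdadm.
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

mkfs.ext4 /dev/md0                          # put a filesystem on the array
mdadm --detail --scan >> /etc/mdadm.conf    # save config so it assembles at boot
cat /proc/mdstat                            # watch the initial sync progress
```

The initial sync can take hours on big drives; the array is usable (at reduced speed) while it runs.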

good luck.
__________________
ASUS P8Z68-V Pro, 4 x 4GB DDR (1600MHz) Corsair CMZ8GX3M2A1600C9B
i5 3570K stock speed, XFX HD6870 Radeon ; 1 x 180GB Intel SSD G2 SATAII in JBOD (Boot)
4 X 750GB SATAII RAID0 (Storage); DVD-RW(SATA) , CD-RW (IDE) OptiArc
Old 02-14-2010, 03:05 PM   #14
pjkenned
Senior Member
 
Join Date: Jan 2008
Posts: 629

Quote:
Originally Posted by Cr0nJ0b View Post
I wouldn't look at RAID 6 unless you have a very high SLA and are running a 24 x 7 operation. For me that's not an issue. What RAID 6 buys you is time between a first and second failure before data loss. I personally am not really worried about a double disk fault.
Two thoughts. First, with consumer drives you have to worry about a TLER-type (or other vendor-equivalent) error-recovery timeout kicking a disk out of a degraded RAID 5 array. Second, I know I'm not the only person who has accidentally pulled the wrong drive or knocked a cable in a bad way; user error can also kill a RAID 5 array, especially as spindle counts rise.

All that being said, I'm running one WHS VM right now, letting it control the disks directly instead of creating a RAID array on the Areca. Performance isn't super, but it's sufficient. Moving to 30+ drives, I know I'll be seeing a few failures a year. With 2TB drives, RAID 6 + hot spare takes 7 drives for 8TB of capacity (four 2TB MBR partitions), versus 8 drives for 8TB (plus another 8TB for duplication) with WHS managing; plus a two-drive failure is a partial data-set loss, not a total loss.
Old 02-16-2010, 05:15 AM   #15
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611

FreeBSD 8.0 has ZFS support up to pool version 13, so you won't miss anything. I can strongly recommend playing with ZFS. It makes all other storage setups, especially those under Windows, obsolete in almost every way.
Old 02-16-2010, 12:52 PM   #16
pjkenned
Senior Member
 
Join Date: Jan 2008
Posts: 629

Quote:
Originally Posted by sub.mesa View Post
FreeBSD 8.0 has ZFS support up to v13 so you won't miss anything. Can strongly recommend playing with ZFS. It makes all other storage setups - especially those under Windows - be obsolete in almost every way.
No OCE (online capacity expansion) in RAID-Z/RAID-Z2 :-(
Old 02-16-2010, 03:42 PM   #17
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611

No, but you can add a second RAID-Z array to the same pool, so you start with a 4-disk RAID-Z and later add a second 4-disk RAID-Z; same overhead as an 8-disk RAID 6 but much more resilience against data loss.
Old 02-17-2010, 12:41 AM   #18
pjkenned
Senior Member
 
Join Date: Jan 2008
Posts: 629

You mean RAID-Z2 = RAID 6, I think :-), but you need to add a minimum of four drives with RAID-Z2. With HW RAID you can add 1 disk, 2 disks, or a bunch more to an existing array.

Also, at four drives RAID 1 makes a strong case in any event. You trade the speed of RAID 6 (which isn't that noticeable over a single NIC) for the ability to survive any two drive failures in the array; on a third drive failure you lose 100% of the data. With two RAID 1 arrays you would have to lose both disks of the same mirror to lose data, and even then you would lose only half the data across the four drives.

So RAID-Z2 really starts making a good case at 5+ drives, which means you're adding 5+ drives at a time. For people with large arrays that's no issue, since it's an expensive undertaking anyway. For those with small arrays (<10 drives), it means that each time you add capacity you're adding 5+ drives instead of one or a few.

I like RAID-Z2, but things like OCE are really useful, especially if you aren't adding 2TB/month of data (since unused capacity has an electricity cost, a warranty cost, an opportunity cost, and wears a drive without needing to).

That being said, my next toy is certainly a big RAID-Z3 array :-)
Old 02-17-2010, 04:46 AM   #19
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611

Two RAID-Z (RAID 5) arrays have the same 'overhead' as one RAID-Z2 (RAID 6) array.


My point is you don't need OCE:
1) start with a 4-disk RAID-Z (1 disk lost to overhead)
2) after 6 months, buy another 4-disk RAID-Z (total 2 disks of overhead, same as RAID 6)
3) after a year, add another 4-disk RAID-Z

So you started with 3TB (assuming 1TB drives), then expanded to 6TB, later to 9TB, all without ever using capacity expansion. All you did was add a second (and third) array to the existing storage pool; that works. All the free space is shared, and ZFS basically behaves as if it were one big RAID-Z2 array.

If you require more redundancy, try setting copies=2 for directories you find important. That doubles the number of copies and spreads them across different physical disks where possible, so it can withstand even more HDD failure or corruption.
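As a sketch, the grow-by-adding-a-vdev approach (plus the copies=2 trick) looks like this at the command line; the pool name and disk labels are made up:

```shell
# Year 0: create a pool from one 4-disk raidz vdev.
zpool create tank raidz disk1 disk2 disk3 disk4

# Later: add a second 4-disk raidz vdev; ZFS stripes across both,
# so the pool simply grows and all free space is shared.
zpool add tank raidz disk5 disk6 disk7 disk8

# Keep two copies of everything in an important filesystem.
zfs create tank/projects
zfs set copies=2 tank/projects

zpool status tank   # shows both raidz vdevs in the one pool
```

Note that copies=2 only applies to data written after the property is set, and it doubles the space those files consume.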
Old 02-17-2010, 05:22 AM   #20
Emulex
Diamond Member
 
Join Date: Jan 2001
Location: ATL
Posts: 9,540

Well, if you had two RAID 5 arrays, the logical thing to do would be to stripe them into RAID 50.
Old 02-17-2010, 05:54 AM   #21
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611

That's what ZFS automatically does when you add a new array to an existing pool, so you can 'expand' the RAID 0 part: just put multiple RAID-Z arrays in a single pool. This both increases the storage space available on your existing volume and increases performance, since ZFS then has multiple arrays to read from and write to.
Old 02-17-2010, 06:00 PM   #22
Emulex
Diamond Member
 
Join Date: Jan 2001
Location: ATL
Posts: 9,540

How do you enforce boundaries, like say drive 1 in bay 1 mirrors to drive 1 in bay 2, to prevent data loss if a whole drive bay/cage fails? And what happens if the hot spare is only in one bay? Could you unplug the dead drive and move the spare to the other bay (cold)?
Old 02-18-2010, 04:41 AM   #23
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611

All my disks are labeled, so ZFS knows about "disk1" and I know which one that is. When I remove it, I'll check that I removed the right one; if not, I plug it back in.

Hardware RAID often has beeping, LEDs, etc. That's useful, but it's no disaster if you pull the wrong drive or mix up the order you connect them; ZFS detects each disk individually, no matter how they are connected.
Old 02-18-2010, 05:51 AM   #24
Emulex
Diamond Member
 
Join Date: Jan 2001
Location: ATL
Posts: 9,540

Well, the RAID edition drives and SAS drives have a WWN on them, so you can identify them all day long; before formatting you can locate the drive.

I wish they still made beeping RAID controllers. They're all silent nowadays, and I prefer a screaming RAID controller that lets someone know it's not happy!
Old 02-18-2010, 06:51 AM   #25
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611

Well, I use drive cages with LEDs on them, so I simply read from a disk and watch which one lights up. I use a simple dd query for that; crude but effective.
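That dd query is just a harmless sequential read that keeps the disk's activity LED lit long enough to spot the right bay; /dev/da0 here is a placeholder for whichever disk you're hunting:

```shell
# Read ~1GB off the suspect disk and watch for the solidly-lit activity LED.
dd if=/dev/da0 of=/dev/null bs=1M count=1000
```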