RAID Pointless in a Home Media Server?

Status
Not open for further replies.

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
RAID 5 and RAID 6 I have regretted... I was warned against them but decided to try them anyway... both in hybrid mode (mobo controller) and full software (OS based)... I never tried those $300+ pure hardware controllers because that price is ridiculous.

I am very interested in this: Why did you regret it? Especially, why did you regret your RAID 6/Z setup (because that seems as good as it gets)?

Honestly, what the above poster put is what I have read: writes are terrible on RAID 5/6 without a dedicated card, but reads can be as fast or faster with software RAID than with a card.

Currently I am trying out unRaid with three drives to see if it has ANYWHERE near the performance I want. I don't like how it uses a Linux that is so basic that I can't expand it easily to do other tasks (I originally planned for my media server to be a MythTV backend as well), but I will dedicate a box to the task if it can stream my Avatar Blu-ray properly.

If not, I am back to independent disks (square one) with tons of guilt that such a setup has little automatic redundancy! ;)
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
You might consider looking in to Greyhole. It is still in late beta, but it is being released with Amahi servers now. It is a Linux version of WHS drive redundancy. I'm still waiting for full release before I build my server, but from everything I have seen it will do exactly what I want :)

That seems pretty cool, but I don't like the SAMBA emphasis. I have used SAMBA in the past to connect my machines and NFS just allows for faster transfers.

I have also been looking a "Flexraid." So many options, and none are exactly what is needed. Sigh.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I am very interested in this: Why did you regret it? Especially, why did you regret your RAID 6/Z setup (because that seems as good as it gets)?

Honestly, what the above poster put is what I have read: writes are terrible on RAID 5/6 without a dedicated card, but reads can be as fast or faster with software RAID than with a card.

Currently I am trying out unRaid with three drives to see if it has ANYWHERE near the performance I want. I don't like how it uses a Linux that is so basic that I can't expand it easily to do other tasks (I originally planned for my media server to be a MythTV backend as well), but I will dedicate a box to the task if it can stream my Avatar Blu-ray properly.

If not, I am back to independent disks (square one) with tons of guilt that such a setup has little automatic redundancy! ;)

My mobo RAID 5 I regretted because it kept getting lost... I learned how to recover it, but it was scary as heck... plus performance was utterly atrocious...

My 5-drive raidz2 (RAID 6 in ZFS) I am regretting for 2 reasons:
1. It is very, very slow: much slower than a lone drive or RAID 1. Faster than RAID 5 on my mobo back in the day (and performance has been improving with various updates), but still too slow. I thought I wouldn't care about speed for bulk storage, but I was surprised.
2. Upgrading capacity is a PITA. I have to replace all drives, one by one, with higher-capacity drives and let it "heal" after each change (which is risky, btw)... OR I need to build a new array and migrate, which is problematic due to a lack of SATA ports and no option to go from RAID 6 to RAID 5 mode (it's been on the todo list since 2004)...

If I had used 2 RAID 1 arrays I could have added a third RAID 1 array, and when I next needed to upgrade capacity I would replace just the 2 smallest drives. Every time I ran out of space (and ONLY then) I would replace the two smallest drives with bigger ones. But with RAID 6 I need to replace all 5 drives at once in a risky, time-consuming, and complicated procedure.
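The capacity arithmetic behind that trade-off is easy to sketch (a rough illustration; drive sizes in TB are made up, and a real array rounds down to its smallest member):

```python
# Rough capacity math for the RAID 1 pairs vs. 5-drive RAID 6 comparison.

def raid1_pairs_usable(pairs):
    """Usable capacity of independent RAID 1 pairs: each pair
    contributes the size of its smaller drive."""
    return sum(min(a, b) for a, b in pairs)

def raid6_usable(drives):
    """Usable capacity of one RAID 6 array: (n - 2) times the
    smallest drive, since two drives' worth goes to parity."""
    return (len(drives) - 2) * min(drives)

print(raid1_pairs_usable([(1, 1), (1, 1)]))  # 2 TB from four 1TB drives
print(raid6_usable([1, 1, 1, 1, 1]))         # 3 TB from five 1TB drives

# Upgrading the pairs: swap ONE pair for 2TB drives, capacity grows now.
print(raid1_pairs_usable([(2, 2), (1, 1)]))  # 3 TB, only two drives bought

# Upgrading RAID 6: one bigger drive buys nothing until all five match.
print(raid6_usable([2, 1, 1, 1, 1]))         # still 3 TB
print(raid6_usable([2, 2, 2, 2, 2]))         # 6 TB, but five drives bought
```

RAID 6 wins on parity efficiency, but the pairs can grow two drives at a time.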
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
1. It is very, very slow: much slower than a lone drive or RAID 1. Faster than RAID 5 on my mobo back in the day (and performance has been improving with various updates), but still too slow. I thought I wouldn't care about speed for bulk storage, but I was surprised.

Sorry to keep bugging you, but you are really helping my understanding of this. I have to ask:

Is all of it slow? Small writes/reads, big writes/reads, etc.?

Would it be fast enough, you think (fast enough to be usable), if you threw what is in your main box at it? What I plan to use as a file server is basically exactly your main mobo and a Q6600.

Or is it no hardware RAID, no go? Makes sense if that is the case.

Last question: on a RAID 1, are the reads at least 50 MB/s?
 

DesktopMan

Junior Member
May 3, 2010
24
0
0
I think people in general underestimate software RAID, at least well-implemented ones such as md on Linux.

I have a software RAID 6 array on Linux, and the performance of the array is faster than any of the individual drives. In addition you have data safety, and multiple reads from the array at the same time are faster than multiple reads of a single drive, as was mentioned earlier in the thread. The only time I'd actually recommend hardware RAID would be for corporate servers.

Even a 5400 RPM Western Digital 2TB Green drive has sustained read speeds at or above 100 MB/s. That means that on that one drive I can serve two clients a copy of an Avatar-sized Blu-ray (which has 48 Mbit/s peaks)
Seems there is some confusion about speeds here. A typical hard drive can do 100+ MByte per second. Blu-ray tops out at about 50 Mbit per second. That's only 6.25 MByte per second, way slower than the hard drive. As pointed out by someone else, the seek time on the hard drive is the bigger issue, but the number of simultaneous 6.25 MB/s reads you can have on a single drive should be more than two. Don't have data on it, though; give it a whirl. In either case, you'll be able to do more simultaneous streams off a RAID 5 or RAID 6.
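The unit conversion here can be double-checked in a few lines (taking the quoted 100 MB/s sequential figure and a 50 Mbit/s Blu-ray peak at face value):

```python
# Drive throughput is quoted in megaBYTES/s, Blu-ray bitrate in megaBITS/s.
drive_mb_per_s = 100                    # sustained sequential read, MB/s
bluray_peak_mbit = 50                   # Blu-ray peak bitrate, Mbit/s
bluray_peak_mb = bluray_peak_mbit / 8   # 6.25 MB/s

# Ignoring seek overhead, the purely sequential ceiling is:
max_streams = drive_mb_per_s // bluray_peak_mb
print(max_streams)   # 16.0 streams in theory; seeks cut this down sharply
```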

As far as reliability goes, moving a Linux RAID 6 to another machine is as easy as connecting the hard drives and installing md. Debian and Ubuntu already come with md, so if you connect the drives your array is automatically assembled. It doesn't rebuild unless there was a drive failure or inconsistent state (such as a power loss), nor does it write anything until it finds something that needs to be written during the check. Replacing a failed drive is as easy as adding a new one to the array.

Last time I tried software RAID on Windows was with Windows Server 2003. At that point it was horribly slow compared to Linux on the same machine (30 MB/s vs 130 MB/s or so), so I went with Linux. It might be better now (and hardware is faster in general), but I'd still recommend Linux with md.

Any RAID that comes with your mobo should be shunned like witches. (The bad kind.) I had a go at nVidia RAID once, and the stuff I had to do to get my data back was a nightmare. Never had such issues with md RAID, even though I've had about twelve hard drive failures in the last five years or so. Not a single data loss, and all drives were replaced on a running system.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Sorry to keep bugging you, but you are really helping my understanding of this. I have to ask:

Is all of it slow? Small writes/reads, big writes/reads, etc.?

Would it be fast enough, you think (fast enough to be usable), if you threw what is in your main box at it? What I plan to use as a file server is basically exactly your main mobo and a Q6600.

Or is it no hardware RAID, no go? Makes sense if that is the case.

Last question: on a RAID 1, are the reads at least 50 MB/s?

I gave it 4GB of RAM; I recently upgraded the CPU from an Athlon X2 @ 2GHz to an Athlon II X4 @ 3GHz...
It did not show much improvement. It's slow when I write stuff to it, big or small.

The biggest speed improvement was actually when I enabled jumbo frames...
I am using gigabit Ethernet going via a switch (always use a switch; even if your router is GigE, use a switch!).

I tried dedup for a while but it was ungodly slow so I disabled it. I do have compression on and I think it actually improves performance (since CPU power is plentiful and it results in less data to write).

Read speed is good, though.

EDIT: It's been a while since I last benchmarked, and as I said performance improves with each version. I just tested a large ISO and it is actually decent: I am getting a solid 60 MB/s.
When I just started I used to get 5 to 15 MB/s on this very same array (just with a much earlier version of OpenSolaris).
I need to retest small files though.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
EDIT: It's been a while since I last benchmarked, and as I said performance improves with each version. I just tested a large ISO and it is actually decent: I am getting a solid 60 MB/s.
When I just started I used to get 5 to 15 MB/s on this very same array (just with a much earlier version of OpenSolaris).
I need to retest small files though.


Heck, even around 40 sounds good to me. Can't wait to hear back...
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I used to wish for 40... but performance has improved wonders with newer versions... Using the latest, I actually get very good performance; my talk of slowness was mostly bitter memories and was plain wrong of me.
Also, I did recently upgrade the CPU from an Athlon X2 @ 2GHz to an Athlon II X4 @ 3GHz.

My OpenSolaris b134 raidz2 5x750GB array:
[benchmark screenshot: osolb134raidz25x750GB.png]

My WD640 2-platter drive:
[benchmark screenshot: WD640GBtwoplatterdrive.png]

My Intel X25-M 80GB G2:
[benchmark screenshot: IntelX25-M80GBG2.png]
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Ok, you basically just convinced me to go software RAID 6. If I get half of what you are getting then I would be super happy.

I will probably go the Linux route because:

A. I am not comfortable with Solaris
B. I really want the box to double as a MythTV backend

I will put in an EP35-DS3R and either a Q6600 or an E8400 (I have both sitting around; need to find out which is better for this, I guess). I will get 4GB of RAM and start filling it with 2TB drives!

Thank you very much all, I feel like I got a lot from this and it means a lot to me...
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
After messing with software RAID for a while (and really not liking it), I found a solution that gave me everything I wanted: Unraid.

I get a JBOD of any disks I want, with one-disk parity protection. Even though the data is not striped for extra performance, I find Unraid is able to max out my cheap gigabit switch anyway. Since it's not striped, a two or three disk failure doesn't mean total data loss. Unraid uses the exact same pile of mobos and cheap PCIe 1x SATA adapters I was using in my old server (as it's just Slackware), plus it runs off a pen drive to allow every SATA port to be hooked to a data drive.

The only downsides are slow write speeds and license cost, which are two things I can deal with for a media server. I already rebuilt my old server into a 10-drive Unraid box; now I am working on a 16-drive one...
 

zephyrprime

Diamond Member
Feb 18, 2001
7,512
2
81
Even a 5400 RPM Western Digital 2TB Green drive has sustained read speeds at or above 100 MB/s. That means that on that one drive I can serve two clients a copy of an Avatar-sized Blu-ray (which has 48 Mbit/s peaks).
That's 100 MByte/sec versus 48 Mbit/sec. So in theory, one drive could serve up 16 simultaneous clients.
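A crude model shows why concurrent streams fall well short of that sequential ceiling once seeks enter the picture (the 15 ms average seek and 1 MB chunk size below are assumed illustration values, not measurements):

```python
# Interleaved streams force a seek before each chunk read, so effective
# throughput drops well below the drive's sequential rating.

def effective_streams(seq_mb_s=100, seek_s=0.015, chunk_mb=1.0, stream_mb_s=48 / 8):
    per_chunk = chunk_mb / seq_mb_s + seek_s   # seconds per chunk incl. seek
    effective_mb_s = chunk_mb / per_chunk      # throughput with seeks
    return int(effective_mb_s // stream_mb_s)

print(effective_streams())             # 6 streams once seeks are counted
print(effective_streams(seek_s=0.0))   # 16, the pure-sequential ceiling
```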
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Note that those are sustained SEQUENTIAL read speeds.
Multiple concurrent accesses to the same drive significantly slow it down.
 

pjkenned

Senior member
Jan 14, 2008
630
0
71
www.servethehome.com
After messing with software RAID for a while (and really not liking it), I found a solution that gave me everything I wanted: Unraid.

I get a JBOD of any disks I want, with one-disk parity protection. Even though the data is not striped for extra performance, I find Unraid is able to max out my cheap gigabit switch anyway. Since it's not striped, a two or three disk failure doesn't mean total data loss. Unraid uses the exact same pile of mobos and cheap PCIe 1x SATA adapters I was using in my old server (as it's just Slackware), plus it runs off a pen drive to allow every SATA port to be hooked to a data drive.

The only downsides are slow write speeds and license cost, which are two things I can deal with for a media server. I already rebuilt my old server into a 10-drive Unraid box; now I am working on a 16-drive one...

Full disclosure on unRaid / RAID 4: NetApp is the only top-tier storage vendor to use RAID 4. Even with a proprietary file system (WAFL), purpose-built hardware, etc., RAID 4 is currently only offered for backwards compatibility. More or less, RAID 1, 10/1+0, 5, 6, and to a lesser extent RAID 0, are all used in enterprise systems. There is a reason for this, and putting unRaid in a low-power spare PC will not make a better system than $10,000+ filers with custom systems.

I am fairly biased though since I started on higher-end NAS products before trying unRaid.

Also, don't discount hardware RAID. Online capacity expansion, easy management, OS-independent RAID, and battery-backed write cache are all really strong features.

Two thoughts: install Hyper-V, ESXi, and/or VirtualBox and test your configuration before deployment. Test how failures are remedied before they happen, remote management, etc. Spend a LOT of time on this. When you lose a drive, there is a lot of comfort in having the recovery process be very easy. I remember losing a drive in an 18TB RAID 6 array and hearing the Adaptec beep for about 5 seconds before shutting off. By the time I sat down and logged into Adaptec Storage Manager, the array was already being rebuilt using the hot spare. A WHS drive failure (non-virtualized instance) can be easily remedied by clicking through a few menus and inserting another drive. Unlike traditional RAID arrays, you can get back to duplication/redundancy (assuming you have the free space) before you insert another drive. Each implementation has strengths and weaknesses, and those are weighted differently for each person.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Note that those are sustained SEQUENTIAL read speeds.
Multiple concurrent accesses to the same drive significantly slow it down.

Agreed. It is enough to serve two clients' worth now, though, and that is all I have. When I grow beyond that, my plan is to clone my server for the extra clients.
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
For what it's worth, I have a 3-disk (Seagate 1.5TB LP) software RAID 5 array on an Ubuntu machine and it's running fine: it can write as fast as a gigabit link, and so far I have had no issues with streaming 2 streams at a time (well, since I wired my PlayStation; wireless really blows).
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
I know you've already seemingly made your decision, but given your original concerns, your best choice seems to be RAID 1. It gives you enough redundancy to avoid redoing all your rips and increases your read speeds, but lets you do it in disk pairs for a lower barrier to entry, even when upgrading. You can have multiple RAID 1 arrays, so when you do upgrade, you just throw in two new disks and have that much more capacity with very little setup.

I'm really just echoing taltamir, but it stood out to me that he recommended RAID 1 several times but you ended up landing on the full-blown RAID 6 option.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
I would not use Linux OS-based RAID 5 or 6 either... I would use Linux OS RAID 1.
Only time you should even consider RAID 5 or 6 is if you have a $300 controller
Care to explain? Software RAID 5 on Linux seems to be fairly popular.


edit: wait...

I never tried those $300+ pure hardware controllers because that price is ridiculous.
Then how can you possibly make the above recommendation for something you've never even tried?
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
I know you've already seemingly made your decision, but given your original concerns, your best choice seems to be RAID 1. It gives you enough redundancy to avoid redoing all your rips and increases your read speeds, but lets you do it in disk pairs for a lower barrier to entry, even when upgrading. You can have multiple RAID 1 arrays, so when you do upgrade, you just throw in two new disks and have that much more capacity with very little setup.

I'm really just echoing taltamir, but it stood out to me that he recommended RAID 1 several times but you ended up landing on the full-blown RAID 6 option.

RAID 1 is kind of a non-starter to me because it doubles my storage costs. I wouldn't think of anything else for financial files or something, but for my media server it's expensive overkill. In fact RAID 1-style mirroring is pretty much what WHS offers, which is why WHS was a non-starter for me. It's not just about the individual drive costs: each drive also carries the bay cost of whatever it takes to put that drive on the network. When you double all those costs, suddenly losing everything with JBOD seems like a bargain!

That is why I went with Unraid: I lose only one disk in an (up to) 20-drive array to parity.

Well, first I looked into software RAID 6, but research kept telling me about a "RAID write hole." So then, as suggested, I looked at RAIDZ, but I couldn't find a way to easily grow an array.

Unraid gives me everything I want, with downsides I can live with.
 

Barnaby W. Füi

Elite Member
Aug 14, 2001
12,343
0
0
RAID 1 is kind of a non-starter to me because it doubles my storage costs. I wouldn't think of anything else for financial files or something, but for my media server it's expensive overkill. In fact RAID 1-style mirroring is pretty much what WHS offers, which is why WHS was a non-starter for me. It's not just about the individual drive costs: each drive also carries the bay cost of whatever it takes to put that drive on the network. When you double all those costs, suddenly losing everything with JBOD seems like a bargain!

That is why I went with Unraid: I lose only one disk in an (up to) 20-drive array to parity.

Well, first I looked into software RAID 6, but research kept telling me about a "RAID write hole." So then, as suggested, I looked at RAIDZ, but I couldn't find a way to easily grow an array.

Unraid gives me everything I want, with downsides I can live with.

RAID 5 only requires one extra disk! Considering how cheap drives are, and how many man-hours it could take to replace terabytes of data, it's quite a bargain.

Unraid appears to use RAID 4, which is essentially the same as RAID 5 but implemented a bit differently.
 

LokutusofBorg

Golden Member
Mar 20, 2001
1,065
0
76
It was my understanding that WHS doesn't use RAID; they basically just use a JBOD format that allows you to add and remove disks from the disk group at will. If I were to build a home media server, that's what I would use. But then I have MSDN so I can get it for free, and I'm not a Linux guy.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Barnaby W. Füi;30058789 said:
taltamir said:
I would not use Linux OS-based RAID 5 or 6 either... I would use Linux OS RAID 1.
Only time you should even consider RAID 5 or 6 is if you have a $300 controller
Care to explain? Software RAID 5 on Linux seems to be fairly popular.


edit: wait...


Then how can you possibly make the above recommendation for something you've never even tried?

1. I said "the only time you should EVEN CONSIDER"... this is a far cry from a "recommendation." I recommend you DO NOT use the types I have tried (OS-based and mobo-based). I neither support nor oppose $300+ controller-based RAID 5.

2. Trustworthy experts claim that the problems that plague RAID 5 aren't an issue with such controllers. Trustworthy experts warn against other forms of RAID 5 (both mobo- and OS-based); I did not listen and tried both myself... I found out first hand how horrific RAID 5 is.

So should you use $300+ controller-based RAID 5? I can't really tell you... I have no experience with that one. But I can tell you that you sure as heck shouldn't use the other types of RAID 5.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Barnaby W. Füi;30058823 said:
RAID 5 only requires one extra disk! Considering how cheap drives are, and how many man-hours it could take to replace terabytes of data, it's quite a bargain.

This pushed me away from RAID 5 forever:

http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

I want to build big (10+ drive) arrays with big (1.5TB+) disks. RAID 5 seems scary in those conditions.
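The ZDNet argument is just probability arithmetic, and reproducing it makes the fear concrete (this assumes the consumer-drive spec of one unrecoverable read error per 1e14 bits; real-world rates vary):

```python
# Odds that rebuilding a degraded 10 x 1.5TB RAID 5 completes without
# hitting a single unrecoverable read error (URE) on the surviving drives.

def rebuild_success_probability(drives, tb_each, ber_bits=1e14):
    bits_read = (drives - 1) * tb_each * 1e12 * 8  # surviving drives, in bits
    p_bit_ok = 1 - 1 / ber_bits                    # chance one bit reads clean
    return p_bit_ok ** bits_read

p = rebuild_success_probability(drives=10, tb_each=1.5)
print(f"{p:.2%}")   # roughly 34%: about a 1-in-3 chance of a clean rebuild
```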

Unraid appears to use RAID 4, which is essentially the same as RAID 5 but implemented a bit differently.

The main difference between RAID 4 and RAID 5/6, from what I understand, is that the latter spread their parity data across all disks, while the former has a dedicated parity drive with the rest of the data striped. The downside of RAID 4 (and why it's pretty much dead tech) is that this dedicated drive creates a bottleneck for writes, which is unacceptable in a business environment. Unraid still has this bottleneck, but write speeds for a media server matter way less than read speeds.

The main difference between RAID 4 (and RAID 0/5/6) and Unraid is that Unraid does not stripe any of its data. This means that reads are slower than on a RAID 4/5/6 system, but as I said, in my tests Unraid is only a little slower than the disks by themselves, which is still more than fast enough to saturate my cheap gigabit network.

The upside to no striping is that you can have a failure of many disks and you won't lose all your data: you can always pull each drive out of the Unraid server and browse it with a Linux live CD. This means that you get the single parity drive like RAID 5, but unlike RAID 5, if you lose more than two drives you don't lose everything. This also means you can grow your array by replacing smaller drives with larger drives one at a time and get the extra capacity immediately (with RAID 4/5/6 the extra space doesn't show up until every drive has been replaced)! In fact, on my second Unraid server I plan to never buy more than one disk at a time and to spread them out among manufacturers, because Unraid lets me and it seems like I would be less vulnerable to massive disk failure from getting a bad batch of drives.

If I am wrong about any of this, someone smarter please correct me. Prior to this thread I didn't know what parity was, so I am still the village idiot on storage in my mind.
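For what it's worth, the single-parity mechanics described above can be shown with a toy (byte-wise XOR, which is how RAID 4/5 parity works; this is an illustration, not unRaid's actual code):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_drives = [b"movie001", b"movie002", b"movie003"]  # toy drive contents
parity = xor_blocks(data_drives)  # the dedicated parity drive

# One drive dies; XOR of the survivors plus parity rebuilds it exactly:
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
print(rebuilt)   # b'movie002'

# Lose TWO drives and parity can't help, but the surviving data drives
# are still readable as-is, since nothing was striped across them.
```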
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
It was my understanding that WHS doesn't use RAID; they basically just use a JBOD format that allows you to add and remove disks from the disk group at will. If I were to build a home media server, that's what I would use. But then I have MSDN so I can get it for free, and I'm not a Linux guy.

Sorry, my bad. What I meant to say is that WHS uses 1-to-1 mirroring like RAID 1. From what I understand, you pick what you care about the most, and it's mirrored on the server. It most certainly does not use RAID, and it is a mighty fine option for a media server.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
This pushed me away from RAID 5 forever:

http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162

I want to build big (10+ drive) arrays with big (1.5TB+) disks. RAID 5 seems scary in those conditions.

That is why traditional filesystems are obsolete and filesystems like Btrfs and ZFS have a golden future. By checksumming your data you can solve the BER problem; ZFS is now virtually untouched by BER.

Also, using 4K Advanced Format (EARS) drives can help, as these have a lower bit-error rate to begin with, though that doesn't translate into better specs. But specs say little, honestly.
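The checksum idea can be sketched in miniature (a toy, not ZFS's on-disk format; real ZFS keeps fletcher/SHA-256 checksums in parent block pointers and self-heals from redundant copies):

```python
import hashlib

def write_block(data: bytes):
    # Store a checksum alongside the data, as a block pointer would.
    return {"data": bytearray(data), "checksum": hashlib.sha256(data).digest()}

def read_block(block, mirror=None):
    if hashlib.sha256(bytes(block["data"])).digest() == block["checksum"]:
        return bytes(block["data"])
    # Checksum mismatch: silent corruption detected. Heal from the mirror.
    if mirror and hashlib.sha256(bytes(mirror["data"])).digest() == mirror["checksum"]:
        block["data"] = bytearray(mirror["data"])  # repair the bad copy
        return bytes(mirror["data"])
    raise IOError("unrecoverable checksum error")

blk = write_block(b"precious media bytes")
mir = write_block(b"precious media bytes")
blk["data"][3] ^= 0x01            # simulate one flipped bit (the BER case)
print(read_block(blk, mir))       # b'precious media bytes', detected and healed
```

Without the checksum, the flipped bit would be returned to the application as valid data, which is exactly the BER problem plain RAID can't see.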
 