unRAID Server build

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
http://www.newegg.com/Product/Produc...82E16813130663

Contemplating changing out my Gigabyte Q6600 + P35C-DS3R board for the above MSI A85X board and a 65W FM2 Trinity quad-core.

Would be used for a storage server, most likely at this point running unRAID (boots off of USB).

The MSI A85X board lacks an additional IDE port, both boards have 8 SATA ports.

The upside of the MSI, is being able to use the onboard video, and not having to take up a PCI-E x16 slot for video like the Gigabyte, leaving two whole PCI-E x16 (as x8/x8) free for RAID cards.

The case is a ChiefTec Dragon, which has six 5.25", floppy, and six 3.5". It currently has one 4-in-3 cage on top with four 5400RPM Hitachi 2TB HDDs, and six 7200RPM Hitachi 2TB HDDs in the internal bays. It also has an IDE DVD-RW in the lowest 5.25" bay, which would have to be removed for another drive cage.

I also have a Lian-Li/Rocketfish full-tower case, which also has six internal 3.5" bays, and six 5.25" bays. Unknown at this point if I can put in two 4-in-3 cages into the Rocketfish.

I had originally designed the hardware around WHS v1, but the many limitations of that setup, combined with the multitude of bug reports, made me choose unRAID instead. Currently running the eval version, which only allows two data drives and one parity drive.

Edit: After doing some research on controller cards, I may pick up a couple of these:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115029
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
I guess I should mention, I already own the MSI A85X mobo, I got it off of a friend for $40.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Given how many drives you have (10), I don't think I would go with unRAID because you'd have to buy the $120 Pro version. unRAID is great for people who want to start small and grow a drive or two at a time, but it stops making sense once you have enough drives to add/replace big sets at a time.

What I'd probably do instead is run FreeNAS with ZFS, which will be faster and cheaper. Create 3 RAIDZ1 vdevs (4x 5400RPM, 3x 7200RPM, 3x 7200RPM) so that you can grow in units of 3-4 drives.
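That layout's capacity works out roughly as follows. This is an illustrative Python sketch, not anything FreeNAS-specific: it assumes idealized 2 TB drives and ignores filesystem overhead.

```python
# Idealized usable-capacity math for the suggested FreeNAS/ZFS layout:
# three RAIDZ1 vdevs of (4, 3, 3) drives, all 2 TB. RAIDZ1 gives up one
# drive per vdev to parity. No filesystem overhead is modeled.

DRIVE_TB = 2

def raidz1_usable(drives_per_vdev):
    """Usable TB for a pool of RAIDZ1 vdevs: each n-drive vdev yields n-1 data drives."""
    return sum((n - 1) * DRIVE_TB for n in drives_per_vdev)

layout = (4, 3, 3)              # 4x 5400RPM, 3x 7200RPM, 3x 7200RPM
raw = sum(layout) * DRIVE_TB
usable = raidz1_usable(layout)

print(f"raw: {raw} TB, usable: {usable} TB, lost to parity: {raw - usable} TB")
```

So the 10-drive, 20 TB raw pool nets about 14 TB usable, at the benefit of one-drive redundancy per vdev.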
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Well, I dug up ye olde storage server again, and hooked it up tonight.

Lost the old flash drive with the unRAID OS and drive config, so I had to make a new one. I reconfigured using the same /dev/sd#, and it seems like it's the same config.

I had been thinking of building a new storage server with the aforementioned A85X mobo (actually, I have both the MSI and the Biostar ones), using either an NZXT Source 220 or a Rosewill Line Glow case. Both have 8 internal drive bays and 3 optical bays. The Rosewill is a tiny bit more expensive, but comes with two front fans pre-installed, which would save me quite a bit of time and expense.

I'm on a pretty small budget.

One reason I wasn't really using my "big" server was indeed the cost of the unRAID Pro license.

So, how is WHS 2011? That's a lot cheaper than unRAID Pro.
 
Last edited:

code65536

Golden Member
Mar 7, 2006
1,006
0
76
FreeNAS is even cheaper than WHS 2011.

If you're going to go the Redmond route, I'd strongly suggest considering the new Storage Spaces in 8/8.1. It's one of the understated hidden gems of the new OS.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
Well, I dug up ye olde storage server again, and hooked it up tonight.

Lost the old flash drive with the unRAID OS and drive config, so I had to make a new one. I reconfigured using the same /dev/sd#, and it seems like it's the same config.

I had been thinking of building a new storage server with the aforementioned A85X mobo (actually, I have both the MSI and the Biostar ones), using either an NZXT Source 220 or a Rosewill Line Glow case. Both have 8 internal drive bays and 3 optical bays. The Rosewill is a tiny bit more expensive, but comes with two front fans pre-installed, which would save me quite a bit of time and expense.

I'm on a pretty small budget.

One reason I wasn't really using my "big" server was indeed the cost of the unRAID Pro license.

So, how is WHS 2011? That's a lot cheaper than unRAID Pro.

WHS 2011 is less expensive than unRAID, but leaves you without much in the drive-pooling or RAID category. Unless I had a need to run programs under a Windows environment, I would cough up the extra $$$ for unRAID.

As mentioned above, FreeNAS is free. It's a little more complicated than unRAID, but not much, and you get performance and data-protection features that you wouldn't get from unRAID.

OTOH, FreeNAS stripes data across the array, which makes it much harder to expand the array later on. Also, if you mix and match different-size HDDs, you get penalized: every drive in FreeNAS (or any other ZFS-based RAID) is treated as equal to the smallest drive in the array.
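That smallest-drive penalty is easy to see with a quick sketch. The drive sizes here are made up purely for illustration:

```python
# Illustration of the smallest-drive penalty (made-up sizes): within a
# ZFS vdev, every member effectively contributes only as much as the
# smallest drive; the extra space on bigger drives goes unused.

def vdev_usable_data_tb(drive_sizes_tb, parity_drives=1):
    """Usable data TB of one RAIDZ vdev: smallest drive times data-drive count."""
    smallest = min(drive_sizes_tb)
    return smallest * (len(drive_sizes_tb) - parity_drives)

mixed = [2, 2, 3, 4]                     # hypothetical mixed-size vdev
usable = vdev_usable_data_tb(mixed)      # every drive counts as 2 TB
wasted = sum(mixed) - min(mixed) * len(mixed)

print(f"usable: {usable} TB, raw space simply ignored: {wasted} TB")
```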
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Well, I did some research last night.

WHS 2011 = $50-60 at Newegg
StableBit DrivePool add-on for $20
FlexRaid for parity protection is $50

So we're looking at $120-130 for the WHS solution.

Whereas, unRAID Pro is $120.

I really like unRAID the best overall, as far as storage efficiency, protection, and ease of use (being able to mix and match drive sizes as my array grows; this is one of the downfalls of using RAIDZ1/Z2 pools).

I think that, as of now, I'll probably just pony up for an unRAID Pro license in March.
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
FlexRaid for parity protection is $50

It is meant to be good for snapshot RAID, and the basic version comes with its own version of drive extender.

Though if you have enough space, or don't need all the data protected from drive failure, StableBit's system does have a file-level RAID 1 setup on selected folders. That makes it a somewhat better option if that's all the data protection you want.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
Well, I did some research last night.

WHS 2011 = $50-60 at Newegg
StableBit DrivePool add-on for $20
FlexRaid for parity protection is $50

So we're looking at $120-130 for the WHS solution.

Whereas, unRAID Pro is $120.

I really like unRAID the best overall, as far as storage efficiency, protection, and ease of use (being able to mix and match drive sizes as my array grows; this is one of the downfalls of using RAIDZ1/Z2 pools).

I think that, as of now, I'll probably just pony up for an unRAID Pro license in March.

If you use FlexRAID, you don't need DrivePool.

Also, SnapRAID is a free option that will let you create snapshot protection in RAID 5 or 6. It's best if you are comfortable with a CLI, but you can also try the Elucidate GUI for it. I never could get Elucidate to work, but I also didn't spend much time troubleshooting before I gave up.

Personally, I went the WHS 2011 + FlexRAID route just because it was the simplest and most straightforward way for me to get what I wanted.

I had seriously considered unRAID and had messed around with it, but FlexRAID let me import my drives with data already on them, and I can pull drives out of the array and slide them right into a separate Windows install, with access to all the data on the drive. Not to mention the flexibility of having a Windows OS.
 
Last edited:

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
OTOH, FreeNAS stripes data across the array, which makes it much harder to expand the array later on. Also, if you mix and match different-size HDDs, you get penalized: every drive in FreeNAS (or any other ZFS-based RAID) is treated as equal to the smallest drive in the array.

You can make multiple zpools; you don't have to have just one big one. I would especially recommend that in Larry's case because he doesn't have homogeneous drives. Mixing 5400RPM and 7200RPM drives in a RAID array (any RAID array) just brings them all down to the performance of the 5400RPM drives, so it's not recommended.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
You can make multiple zpools; you don't have to have just one big one. I would especially recommend that in Larry's case because he doesn't have homogeneous drives. Mixing 5400RPM and 7200RPM drives in a RAID array (any RAID array) just brings them all down to the performance of the 5400RPM drives, so it's not recommended.

But if I make three zpools, I lose three out of my 10 drives to parity info. Whereas with unraid, I just have one parity drive. Plus, when reading, unraid only spins up that one drive, and when writing, only that drive and the parity drive.

Edit: Not to mention, the four 5400RPM drives are connected via a PCI 4-port SATA 1.5Gbit/sec controller card.
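The parity tradeoff being described works out like this. An idealized Python sketch, assuming ten 2 TB drives and ignoring filesystem overhead:

```python
# The parity arithmetic, spelled out (idealized, ten 2 TB drives):
# unRAID dedicates a single parity drive to the whole array, while three
# RAIDZ1 vdevs each give up one drive to parity.

DRIVE_TB = 2
TOTAL_DRIVES = 10

unraid_usable = (TOTAL_DRIVES - 1) * DRIVE_TB            # one parity drive total
zfs_usable = sum((n - 1) * DRIVE_TB for n in (4, 3, 3))  # one parity drive per vdev

print(f"unRAID, 1 parity:    {unraid_usable} TB usable")
print(f"3x RAIDZ1, 3 parity: {zfs_usable} TB usable")
```

The flip side, of course, is that the three-vdev layout can survive one failure per vdev, while single-parity unRAID can only reconstruct one failed drive.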
 
Last edited:

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
But if I make three zpools, I lose three out of my 10 drives to parity info. Whereas with unraid, I just have one parity drive. Plus, when reading, unraid only spins up that one drive, and when writing, only that drive and the parity drive.

If you're running a 10 drive array with only one parity drive, you're just asking to lose data IMHO. That's further compounded when you have drives that are at different parts of their lifecycles and different performance characteristics. If you drive the 5400RPM drives to their limits trying to keep up with the 7200RPM ones, you're going to kill them more quickly. Even if you go unRAID, don't mix and match drive speeds in the same volume.

Edit: Not to mention, the four 5400RPM drives are connected via a PCI 4-port SATA 1.5Gbit/sec controller card.

That's even more reason not to put them in the same storage volume (zpool, unRAID volume, etc.). They're going to be horribly slow.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
If you're running a 10 drive array with only one parity drive, you're just asking to lose data IMHO.

Another reason I went with FlexRAID. You can use an unlimited number of parity drives.

That's further compounded when you have drives that are at different parts of their lifecycles and different performance characteristics. If you drive the 5400RPM drives to their limits trying to keep up with the 7200RPM ones, you're going to kill them more quickly. Even if you go unRAID, don't mix and match drive speeds in the same volume.

You are missing the point of unRAID and other parity setups. They don't spin up unused drives. There are no slow drives trying to keep up with fast drives. Data isn't striped, so the only drive that spins up is the drive that actually contains the data.
 

velillen

Platinum Member
Jul 12, 2006
2,120
1
81
If you're running a 10 drive array with only one parity drive, you're just asking to lose data IMHO. That's further compounded when you have drives that are at different parts of their lifecycles and different performance characteristics. If you drive the 5400RPM drives to their limits trying to keep up with the 7200RPM ones, you're going to kill them more quickly. Even if you go unRAID, don't mix and match drive speeds in the same volume.

unRAID goes one drive at a time. It spins up only the drive stuff is being moved to, plus the parity. So there's pretty rarely going to be a time when the 5400RPM drives are racing to keep up with another drive. That's one of the nice things about unRAID: if you don't access the data on a disk, that disk will pretty much never spin up. Only the disk being accessed spins up to provide the data.



That's even more reason not to put them in the same storage volume (zpool, unRAID volume, etc.). They're going to be horribly slow.

But at the same time, the NIC would come into play as to whether that's really the limit. Also, how often would he really be pulling or storing data to all four drives at the same time anyway? With unRAID he can set up the user shares to use only one disk from that card and the rest from onboard or whatever, as well.

Add in a cache drive and you also gain some benefit. It'll store the data on the cache drive until a set time (3am is mine, I believe), then move the files into the array. In the meantime you can still access the data; it just isn't "protected".



Like any setup, though, it's all about wants vs. needs vs. cost vs. protection needed. For a basic storage point for media/files, I like unRAID. It's a bit more secure than just a single drive, provides basic parity protection, and losing one disk isn't the end of the world. Lose two disks and you don't lose the entire array, either.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
You are missing the point of unRAID and other parity setups. They don't spin up unused drives. There are no slow drives trying to keep up with fast drives. Data isn't striped, so the only drive that spins up is the drive that actually contains the data.

No, I know quite well how it works. A 5400RPM will have to keep up with a 7200RPM drive in a couple of scenarios:

(a) Your parity drive is 5400RPM
(b) You are accessing data in multiple files. Each file only lives on a single drive, but files themselves are distributed across drives.

Granted, these are avoidable, but it requires some extra setup and knowledge from the user.

unRAID goes one drive at a time. It spins up only the drive stuff is being moved to, plus the parity. So there's pretty rarely going to be a time when the 5400RPM drives are racing to keep up with another drive. That's one of the nice things about unRAID: if you don't access the data on a disk, that disk will pretty much never spin up. Only the disk being accessed spins up to provide the data.

No, unRAID keeps each file on a single drive. You presumably want to access more than one file, so you will have to spin up multiple drives, or keep the data all on one drive and get really bad performance and increased drive wear as it seeks all over the place.

But at the same time, the NIC would come into play as to whether that's really the limit. Also, how often would he really be pulling or storing data to all four drives at the same time anyway? With unRAID he can set up the user shares to use only one disk from that card and the rest from onboard or whatever, as well.

Random access can easily beat a bunch of spinning drives to death long before you saturate a Gigabit NIC. So it depends upon the usage pattern. See above for why keeping all files on one drive is bad for performance.

Add in a cache drive and you also gain some benefit. It'll store the data on the cache drive until a set time (3am is mine, I believe), then move the files into the array. In the meantime you can still access the data; it just isn't "protected".

Having a write (!!!!!) cache that isn't protected is a pretty terrible idea from a data integrity point of view.

Like any setup, though, it's all about wants vs. needs vs. cost vs. protection needed. For a basic storage point for media/files, I like unRAID. It's a bit more secure than just a single drive, provides basic parity protection, and losing one disk isn't the end of the world. Lose two disks and you don't lose the entire array, either.

I agree with the first statement, and I'm glad you made the tradeoff super-clear for Larry. That point sometimes gets lost in the unRAID marketing materials.
 
Last edited:

velillen

Platinum Member
Jul 12, 2006
2,120
1
81
First off, I'm not trying to argue, just expanding my knowledge. I'm familiar with unRAID, hence referencing it all the time :)

No, unRAID keeps each file on a single drive. You presumably want to access more than one file, so you will have to spin up multiple drives, or keep the data all on one drive and get really bad performance and increased drive wear as it seeks all over the place.

How is that different from a FlexRAID, RAID 5, or RAID 6 setup, then? With unRAID you can at least set up high-water levels and have it keep, say, all your music on one disk. So anytime I listen to music (be it the living room, bedroom, or both), only that disk has to spin up. If that disk fails, that's where the parity disk comes in and I can recover. With each song spread out across the entire array, of course you'd be spinning disks up and down, but that would be the same across any array if you had individual music files on every disk.

Yes, high-water keeps all the files on one disk and might "increase wear from seeking," but I don't see how that's any different from having to seek on multiple drives and having them all spin up?



Random access can easily beat a bunch of spinning drives to death long before you saturate a Gigabit NIC. So it depends upon the usage pattern. See above for why keeping all files on one drive is bad for performance.

As you mentioned, though, proper setup can avoid that. Also, I think you are forgetting this is a storage server, so I would assume that both I and the OP aren't terribly concerned with huge amounts of accesses and transfers coming in all at once.



Having a write (!!!!!) cache that isn't protected is a pretty terrible idea from a data integrity point of view.

Which I should have expanded on, so my fault. You can set each share to use a cache disk or not. Important things (in my case, pictures) go straight to the array. Non-important things (recorded TV shows) go to the cache disk first. Using a cache is definitely up to the user, but it's a nice added benefit of unRAID, IMO, for those files you aren't too concerned about.



I agree with the first statement, and I'm glad you made the tradeoff super-clear for Larry. That point sometimes gets lost in the unRAID marketing materials.

It gets lost in ALL marketing from ALL types of setups. Every company thinks its software/hardware is the best solution that everyone should be using, which just isn't true. No single setup is perfect. unRAID + storing data on another PC + cloud backups is a pretty dang solid approach, IMO. As would be FlexRAID/WHS/RAID5/RAID6 + local storage + cloud backup.
 
Last edited:

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
How is that different from a FlexRAID, RAID 5, or RAID 6 setup, then? With unRAID you can at least set up high-water levels and have it keep, say, all your music on one disk. So anytime I listen to music (be it the living room, bedroom, or both), only that disk has to spin up. If that disk fails, that's where the parity disk comes in and I can recover. With each song spread out across the entire array, of course you'd be spinning disks up and down, but that would be the same across any array if you had individual music files on every disk.

Yes, high-water keeps all the files on one disk and might "increase wear from seeking," but I don't see how that's any different from having to seek on multiple drives and having them all spin up?

It's not any different from other file-based parity schemes. It is different from more traditional RAID or ZFS, because the load is unevenly applied to one drive. If you have many drives working, each one has to do less, leading to better performance and less wear.

As you mentioned, though, proper setup can avoid that. Also, I think you are forgetting this is a storage server, so I would assume that both I and the OP aren't terribly concerned with huge amounts of accesses and transfers coming in all at once.

Ask 10 people what they do with their "storage server" and you'll probably get 11 answers. Many people (including myself) expect good performance from their primary storage server. If you said "archive server" or "tier-2/secondary storage server," then yeah, I would agree that performance isn't as big a deal.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
It's not any different from other file-based parity schemes. It is different from more traditional RAID or ZFS, because the load is unevenly applied to one drive. If you have many drives working, each one has to do less, leading to better performance and less wear.



Ask 10 people what they do with their "storage server" and you'll probably get 11 answers. Many people (including myself) expect good performance from their primary storage server. If you said "archive server" or "tier-2/secondary storage server," then yeah, I would agree that performance isn't as big a deal.

Finally, I think you are getting the point.

First of all, splitting files across disks doesn't imply any kind of game of "keeping up," regardless of the relative speeds of each drive. Let's say I have a jigsaw puzzle with pieces in 2 different boxes. How fast or slowly I can pull pieces out of Box A doesn't affect how fast I can pull pieces out of Box B in the slightest.

The fact that we can both agree that RAID does not equal backup makes the idea of a cache drive reasonable. Unless you are constantly backing up or syncing your files all day long, the risk to data integrity is basically the same. What are the odds of my cache disk failing versus the odds of your array failing? RAID is about uptime, not backup.

unRAID, FlexRAID, and SnapRAID all run on the same basic premise, with different file systems and applications. I have virtually no experience with SnapRAID, but FlexRAID is faster than unRAID. unRAID's overhead seems to max out the array around 80MB/s even with the best spinners, while FlexRAID is limited only by the speed of the drives. My FlexRAID setup will saturate my Gigabit connection because of the drives I use; unRAID would have a hard time doing that.

That being said, unless your life is a constant barrage of large file transfers, it's not going to matter. The OP isn't talking about some high-I/O situation; he wants to store music and media files. If the OP wants to serve up his FLAC files of the Stones concert at 640kbps, there will be no real-world difference. Even unRAID would provide enough speed to serve them to 50 different people at the same time, if other system limitations didn't prevent that from happening in the first place.

I don't understand your logic behind undue wear and tear on an unRAID server vs. a striped array. Your argument makes so little sense that I don't even know where to start disputing it.

ZFS and hardware RAID are nice and provide some great performance. ZFS has some awesome data integrity features. They have their place, but OP doesn't need to invest in enterprise level systems to store his mp3 files. His server isn't getting hundreds of thousands of I/O requests every hour.

Quite frankly, using hardware RAID on a home server is just an exercise in seeing whose ball sack swings bigger. It's just bragging rights for something that ends up getting in the way, anyway.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Forgive me if I don't respond to the ad hominem arguments in your post. I'm just going to snip those out because they have no place in a technical discussion.

First of all, splitting files across disks doesn't imply any kind of game of "keeping up", regardless of the relative speeds of each drive. Let's says I have a jigsaw puzzle with pieces in 2 different boxes. How fast or slow I can pull pieces out of Box A doesn't affect how fast I can pull pieces out of Box B in the slightest.

It most certainly does if your application is waiting on box A to finish up before it pulls the next item out of box B.

The fact that we can both agree that RAID does not equal Backup makes the idea of a cache drive reasonable. Unless you are constantly backing up or synching your files all day long, then the risk to data integrity is basically the same. What are the odds of my cache disk failing versus the odds of your array failing? RAID is about uptime not backup.

The odds of your cache drive failing are higher than the odds of your array failing because the cache drive is a single disk whereas the array is a redundant array of individual disks.

A main point of putting something on external archival storage is that you don't have enough space on your local machine. Let's say I drop a file onto the unRAID volume and it goes to the cache disk first. I delete that file off my local machine, and then the cache drive dies. Oops, I just lost data. If I had written it directly to a RAID volume, I would have had to lose two (or more) disks.

Now let's say I have my cache drive on an SSD, because SSDs are cool and fast. Unfortunately, that SSD sometimes returns the wrong bits (the same can happen with an HDD, of course). Since I didn't calculate parity right off the bat while the data was in memory, like I would with any sort of RAID, I now have no idea that the data I just committed to storage is corrupt. Oops, I just corrupted data.

The backup argument is a non-starter because this is about files on the array. You're not going to be backing up files on your local disk that you intend to delete (move to the array).
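The failure-odds comparison can be made concrete with a toy calculation. Every number here is an assumption chosen purely for illustration, and the array model is a crude first-order approximation:

```python
# Toy version of the failure-odds comparison (every rate here is an
# assumption, purely illustrative): an unprotected cache disk loses data
# whenever it alone fails, while a single-parity array only loses data if
# a second drive dies during the rebuild window of the first.

P_ANNUAL = 0.05      # assumed per-drive annual failure probability
N_DRIVES = 10        # drives in the array
REBUILD_DAYS = 2     # assumed time to rebuild onto a replacement drive

# Single cache disk: any failure is data loss.
cache_loss = P_ANNUAL

# Array, first-order approximation: one of n drives fails, then any of
# the remaining n-1 fails within the rebuild window.
window_p = P_ANNUAL * REBUILD_DAYS / 365
array_loss = N_DRIVES * P_ANNUAL * (N_DRIVES - 1) * window_p

print(f"cache-disk data-loss odds per year: ~{cache_loss:.4f}")
print(f"array data-loss odds per year:      ~{array_loss:.6f}")
```

Under these (made-up) rates, the lone cache disk is more than an order of magnitude more likely to eat data than the parity-protected array.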

I don't understand your logic behind undue wear and tear on an unRAID server vs. a striped array. Your argument makes so little sense that I don't even know where to start disputing it.

Let me restate my original post so that we have a point of reference.

"You presumably want to access more than one file, so you will have to spin up multiple drives, or keep the data all on one drive and get really bad performance and increased drive wear as it seeks all over the place."

I added emphasis to the part of my argument where I am presenting a choice. Keep in mind that this is in response to velillen's assertion that file-based parity is better because it keeps drives spun down more often.

Now let me rephrase the argument in the hopes that you'll get what I'm saying. Let's say I have a workload that requires 10,000 read IOPS, and I have 4 non-parity disks in my array (5 total). In a file-based RAID scenario, you have two options:

(a) Keep the files distributed across disks. This makes it no better than striped RAID from the perspective of disks being spun down, but it distributes the workload roughly evenly (2,500 IOPS per non-parity disk, same as striped RAID).

(b) Keep the files all on one disk. This means that 3 non-parity disks will be spun down, but I will send 10,000 IOPS worth of work to the active disk, resulting in unbalanced wear.

Therefore, file-based RAID is at best the same as striped RAID, and at worst worse, from a wear-leveling point of view.
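The (a)-vs-(b) choice, restated as numbers in a small sketch of the example above:

```python
# The (a)-vs-(b) choice as numbers: the same fixed read workload either
# spreads across all data disks or lands entirely on one.

WORKLOAD_IOPS = 10_000
DATA_DISKS = 4

def per_disk_load(active_disks):
    """IOPS each spinning disk must serve when files span `active_disks` disks."""
    return WORKLOAD_IOPS / active_disks

print(f"(a) distributed: {per_disk_load(DATA_DISKS):,.0f} IOPS/disk, 0 disks idle")
print(f"(b) one disk:    {per_disk_load(1):,.0f} IOPS/disk, {DATA_DISKS - 1} disks idle")
```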
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
Forgive me if I don't respond to the ad hominem arguments in your post. I'm just going to snip those out because they have no place in a technical discussion.



It most certainly does if your application is waiting on box A to finish up before it pulls the next item out of box B.



The odds of your cache drive failing are higher than the odds of your array failing because the cache drive is a single disk whereas the array is a redundant array of individual disks.

A main point of putting something on external archival storage is that you don't have enough space on your local machine. Let's say I drop a file onto the unRAID volume and it goes to the cache disk first. I delete that file off my local machine, and then the cache drive dies. Oops, I just lost data. If I had written it directly to a RAID volume, I would have had to lose two (or more) disks.

Now let's say I have my cache drive on an SSD, because SSDs are cool and fast. Unfortunately, that SSD sometimes returns the wrong bits (the same can happen with an HDD, of course). Since I didn't calculate parity right off the bat while the data was in memory, like I would with any sort of RAID, I now have no idea that the data I just committed to storage is corrupt. Oops, I just corrupted data.

The backup argument is a non-starter because this is about files on the array. You're not going to be backing up files on your local disk that you intend to delete (move to the array).



Let me restate my original post so that we have a point of reference.

"You presumably want to access more than one file, so you will have to spin up multiple drives, or keep the data all on one drive and get really bad performance and increased drive wear as it seeks all over the place."

I added emphasis to the part of my argument where I am presenting a choice. Keep in mind that this is in response to velillen's assertion that file-based parity is better because it keeps drives spun down more often.

Now let me rephrase the argument in the hopes that you'll get what I'm saying. Let's say I have a workload that requires 10,000 read IOPS, and I have 4 non-parity disks in my array (5 total). In a file-based RAID scenario, you have two options:

(a) Keep the files distributed across disks. This makes it no better than striped RAID from the perspective of disks being spun down, but it distributes the workload roughly evenly (2,500 IOPS per non-parity disk, same as striped RAID).

(b) Keep the files all on one disk. This means that 3 non-parity disks will be spun down, but I will send 10,000 IOPS worth of work to the active disk, resulting in unbalanced wear.

Therefore, file-based RAID is at best the same as striped RAID, and at worst worse, from a wear-leveling point of view.

I think we're just going to have to agree to disagree.