Configure this storage hardware with the following needs...

JimPhreak

Senior member
Dec 31, 2012
374
0
76
I've got the following components to use for a dedicated NAS box. I will need the data on this NAS to be accessed by multiple virtual machines (one of which will be serving up media files stored on this box constantly).

Components:

  • Intel Q6600 CPU
  • DFI Blood Iron LGA 775 board
  • 8GB DDR2-667 RAM
  • Perc 6i RAID controller
  • 8x2TB Hitachi Deskstar 7k3000 Sata 6.0 drives
  • 160GB SATA 3.0 drive
  • 16GB Micro Flash Drive (for installing ESXi if I go that route)

I have licensing for ESXi 5.1, so running VMs directly on this server is an option. Looking for suggestions on how best to configure this storage box. Data redundancy is very important, as I care a lot more about not losing my data than I do about space.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
ditch the PERC/6i - it is a big turd. serious turd.

If those drives are not enterprise drives, you will have an EPIC FAIL quickly anyway. Run them JBOD - I kid you not. Seriously.

If you can cross-flash that PERC 6/i into regular IT mode or single-drive mode, do it. DO NOT attempt RAID 5/6 or 1/1E/10 with standard drives and LSI controllers. EPIC fail will happen.

As for the flash drive, I'd be careful if it is slow at random I/O. There is a reason why 2GB was chosen (direct-access-mode SD versus block-mode SDHC).

The decent 2GB sticks that HP sells sustain 16/16/16/16 (read seq / write seq / read random / write random) over the entire 2GB zone. SLC, of course.

I'd honestly just throw a pair of SSDs on that PERC 6/i for the OS and for host caching (swap) since your RAM is so low.

http://forums.freenas.org/showthread.php?589-Anyone-use-the-DELL-SAS-6-IR-controller

^^ How to flash the controller to IT mode. Then use FreeNAS or Nexenta to do software RAID, since those drives are not TLER compliant.

Since you don't have VT-d, you can make RDMs of 2TB each and pass them to FreeNAS/Nexenta to build a software RAID (or Windows 2012).
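Roughly, an RDM pointer gets created from the ESXi shell with vmkfstools; this is only a sketch, and the device ID and datastore path below are placeholders you'd swap for your own:

# List the local disks to find their identifiers (output varies per host)
ls -l /vmfs/devices/disks/

# Create a physical-mode RDM pointer file on an existing datastore
# (the vml ID and the datastore1 path are example names, not yours)
vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX /vmfs/volumes/datastore1/rdms/disk0-rdm.vmdk

# Repeat per disk, then attach each .vmdk to the FreeNAS/Nexenta VM as an existing disk.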

Or just run Nexenta or FreeNAS on that box bare metal.

It's going to be slow with those drives and that controller. Really slow.

If I had a say I'd go with Nexenta, a couple of SSDs to cache the array, and RAM to help a little bit more.

Did you mention whether you have some decent NC360T or NC364T NICs for access? With two NICs you have 2 gigabit, which you could saturate without much trouble, but with an NC364T or two NC360Ts you could push 4 gigabit in each direction to your users.
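If you end up on FreeBSD/FreeNAS, link aggregation is how you would actually spread traffic across those ports; a rough sketch only - em0/em1 and the address are assumptions, and the switch has to be configured for LACP:

# Bond two gigabit ports into one LACP trunk (interface names are examples)
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1 192.168.1.10/24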

You did not specify how many users. I think the only poor choice is the RAID controller and drive combo. But that $20 RAID card can be flashed into IT mode, and software RAID (ZFS) can handle the rest.

Let me know if you want further opinions. ESXi is cool, but those older processors are not so fast at context switching, and you will feel the pain when running multiple VMs and not getting the near-native performance you'd expect. (Run 3 VMs and wonder why all 3 are slow; run 1 VM and be amazed at how close it is to native.)
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
Wow, that's a lot to digest, haha. Others had recommended the Perc 6i as a fairly inexpensive hardware RAID controller (I paid $70 for it; I don't know where you see it for $20).

I had considered running ZFS if I could get the drives into single-drive mode; however, since I don't have ECC memory, that concerns me, because data redundancy is much more important to me than space. I can't lose this data.

Honestly, I don't think I need enterprise-level hardware here. The only server that is of any kind of production grade (meaning I need it to be highly available) is my media streaming server, because it serves 5-10 clients (usually 2-3 at a time). Other than that, this storage is just for home use, although I did want to make use of the available space I have for backups.

When you say it's going to be really slow with the hardware I've got, what do you mean exactly? What is going to be so slow? And slow compared to what, exactly? Remember, I'm not running a business here, so I'm just trying to get an idea of the context of your statement.

I don't have any plans to buy new hardware, as I already just put money into a new VM server that is going to be running most of my VMs. The only reason I was even considering loading ESXi onto this box is in case I wanted to migrate a VM over to it for a short while so that I could take my VM server down for maintenance or whatever other reason.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
I also think you should dump the RAID controller. If you want a true striped RAID setup, go with something like ZFS RAIDz1 or z2 (since you said you really want redundancy). With 8 drives, you may want to bump the memory up a little.

You could also set up FlexRAID on a Windows or Linux OS if your HDDs aren't server class. You can use as many parity drives as your level of caution calls for, and since the data isn't striped you would probably never run into a situation where you would lose ALL of your data.

UnRAID would be the simplest and least demanding for your hardware, but parity is limited to just one drive. The rest of it is much like FlexRAID.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
As far as speed goes, ZFS will probably beat any hardware RAID you'll set up. Since FlexRAID and unRAID aren't striped, you'll just be limited to the speed of the individual drive that stores the data - probably in the neighborhood of 90MB/s and up. Plenty fast for multiple home media streams.
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
My board only supports 8GB of RAM, unfortunately, so I'm maxed out. Since I already have the Perc 6i with the two SAS-to-4xSATA cables, would it make sense to run the controller in single-drive mode just to connect all my drives, or do I not have enough memory to make ZFS (or another software RAID solution) work?
 


JimPhreak

Senior member
Dec 31, 2012
374
0
76
I get that ZFS will beat hardware RAID with regard to speed. However, priority #1 for me is redundancy. My priorities for my RAID setup are as follows:

1. Redundancy (data integrity is at a premium, I can't lose it)
2. Speed (mainly read speeds for serving up media)
3. Space (I'd like to be able to lose 2+ drives and still have my data be OK)
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
I'm one of the people out there who think there's no real point to hardware RAID, especially in light of the software versions available to the home user. You are not going to be managing huge numbers of I/O requests at once. Newer systems like ZFS are just as fast as or faster than a home hardware RAID setup, are more stable, and are less likely to end up with catastrophic data loss or corruption.

I would just use your card for the connections and use something in software. 8GB is enough to run ZFS, but a good rule of thumb is 1GB of RAM per TB of storage for optimal results.

I would still strongly suggest a solution that doesn't use striping, though. That way, if you do end up with multiple drive failures, your losses are minimized.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
I'd probably put FreeNAS on my flash drive and run RAIDz2, or run FlexRAID installed on top of Linux, Windows 7 or WHS 2011.

If your drives are server grade then I'd go ZFS. If not, I'd go FlexRAID.
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
I have licenses for WS 2008 R2 and WS 2012, so could I run RAIDz2 or FlexRAID on either of those?

If those software RAID solutions don't use striping, where does the performance boost come from, since I'm assuming there'd be some kind of mirroring going on for redundancy? I know you mentioned that without striping, if I lose a drive or two I at least only lose the data on those drives. But right now, with my RAID 10 array, I could technically lose 2 drives and not lose ANY data. I'd prefer not to lose any data at all. I'm not familiar with software RAID solutions, so forgive my ignorance about how they work.
 

bigboxes

Lifer
Apr 6, 2002
40,789
12,283
146
Just remember that RAID is not a backup. It is redundancy, but it is meant for uptime, not backup. So if you get data corruption, you have it on both mirrored drives. The same goes if you accidentally delete a file.
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
Right, that is why I bought a second set of drives to use for backups of my current RAID 10 array (4x2TB drives), but since I'm doing a lot of reconfiguration of my home network, it made me take a step back and analyze exactly how I'm doing things. Ideally, I'll probably wind up using two separate arrays (one for my main storage, and one for backups of that storage along with backups of some images, VM snapshots, etc.).
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
RAIDz1 is like RAID 5 (n-1) and RAIDz2 is like RAID 6 (n-2), where n is your total number of drives. With 8 2TB HDDs, RAIDz1 gives you 14TB of usable space and the ability to recover from 1 drive failure. RAIDz2 would give you 12TB of usable space and the ability to recover from 2 drive failures. RAIDz uses striping, so you would get significant performance increases over single-drive or JBOD setups. The only way I've ever set up ZFS RAID is via FreeNAS on a flash drive. But it is its own operating system and basically runs like an appliance. It stores data, does it quickly, and provides the maximum amount of protection from errors and bit rot. You really won't be able to run much else, but FreeNAS does have some torrenting and streaming plugins that you might find useful.
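To give you an idea, building the pool itself is a one-liner once the OS sees the disks. A sketch only, with assumed FreeBSD-style device names (check what yours are called with camcontrol devlist):

# 8x 2TB in RAID-Z2: any two drives can fail, roughly 12TB usable
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7
zpool status tank   # confirm the vdev layout
zfs list            # shows capacity and usage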

I was going to do FreeNAS till someone introduced me to unRAID (www.lime-technology.com) and then eventually to FlexRAID. UnRAID is like FreeNAS in that you run it from its own Linux-based OS on a flash drive, and there are a few good plugins and great community support. UnRAID is not for you, though, because it only supports recovery from 1 HDD failure.

FlexRAID has a lot in common with unRAID but lets you add as many parity drives as you want. You could set it up to recover from 5, 6, 20 or whatever number of drive failures you want; you're only limited by your resources. With 8 drives you could set it up with 5 data drives and 3 parity drives so that you could recover from the loss of 3 HDDs. It sounds like you'll want to go 6+2. You'll have to install an OS, and then FlexRAID acts as a driver to create storage pools and RAID parity.

UnRAID and FlexRAID do not break up files and stripe them across multiple drives like traditional RAID and ZFS. Instead, files reside on a single disk just like in a normal non-RAID setup. Because of this, speed will be limited by the speed of the drive that holds the file - much slower than a striped array that reads off several HDDs simultaneously. However, this also brings 4 big advantages:

1 - They require less powerful hardware because there are fewer or no parity calculations involved. A lot of people run unRAID with systems like a single core Sempron recovered from parts lying around the garage. Hardware requirements for FlexRAID depend on the OS you choose to install.

2 - Storage pool expansion is as easy as just adding another HDD to the pool. If you wanted to add a 9th HDD, you just import it and that's it - you just upped your storage space. If you want to do that with a striped setup, you have to back up your array, build a new array with the new HDD, and then restore the array from your backup. Also keep in mind that a striped array will limit your drive size so that all drives appear the same size as the smallest HDD in the array. If you want to add a 3TB HDD to your array of 2TB HDDs, 1TB will be unusable. With FlexRAID and unRAID you can add drives of any size as long as the parity drives are equal in size to the largest drive in the group. You can mix and match sizes all over the place and use everything; you could add a 500GB drive later and it would just add the space to the storage pool. This also means you can use normal desktop drives more safely. With a striped array, if one drive burps for a moment it knocks your whole array out until you restore it. Drives meant for NAS and server use have special firmware that prevents these pauses. That's a big chunk of what you pay for when you get Ultrastars instead of Deskstars, Constellations instead of Barracudas, and Reds instead of Greens. Your Deskstars + striped RAID = headaches.

3 - Because files aren't spanned across the array, FlexRAID and unRAID use less energy and generate less heat. If you wanna pull up a 40 GB Blu Ray rip with unRAID or FlexRAID then only the drive with that rip spins up. With a striped array they all spin up to pull each piece of info as it's spread across the array.

4 - Because files aren't spanned across the array, if you lose more drives than you have parity, you only lose the data on the dead HDDs. All the other drives are unaffected and you can still recover all of your data from unaffected HDDs. Total data loss is almost impossible.

I ended up going with FlexRAID on top of Win XP 64-bit. I run my torrents and a DLNA media server on it, and I use it to do any re-encoding with Handbrake. I run it headless, using remote desktop from my tower or laptop. I only run 1 parity drive because most of my data is DVD and BR rips that I can re-rip if I lose a couple of drives. I back up everything else to a 2TB external every night.

I hope this helps.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
I should add that any 2TB HDD should be able to serve 3 or 4 HD video streams simultaneously. Even the biggest Blu-ray file maxes out around 45 Mb/s. That's a long way from the 100+ MB/s that your HDDs can push.
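The quick math, just to put numbers on it:

45 Mb/s ÷ 8 ≈ 5.6 MB/s per stream (worst-case Blu-ray bitrate)
90 MB/s ÷ 5.6 MB/s ≈ 16 streams from a single drive, sequential best case

Seeking between concurrent streams is what drags the realistic number down to the 3-4 range.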
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
First off smitbret, I just want to say...thank you! Not only for being active in this thread but especially for that last post because it was EXTREMELY helpful. Great explanation and that clears up a lot of questions I had about these different software RAID solutions.

I think from what you're saying, FlexRAID sounds ideal for what I want to do. However, I have the following question. I want to use these 8 drives not only to store my media files, but also my personal files (documents, pics, etc.) and to back up the different computers and virtual machines I run (on a separate server). Therefore, would it make more sense for me to pick 2 drives out of the 8 and set them up in their own (mirrored?) array to use just for backups, while the other 6 drives are used to store my files? Obviously I lose space, but 8TB of usable space is PLENTY for me right now. My current RAID 10 array is only using 2TB at this moment.

Also, on a side note, I see you mentioned you run a DLNA server on the same box you use FlexRAID on, right? What are the hardware specs of your box? I ask because the main thing I do with my network at home is run Plex Media Server, not only for myself at home but also shared with multiple external clients (my sister relies on it completely since she has no cable service, and my parents and friends use it as well through Roku boxes). However, I was planning on running my Plex Media Server box as a VM on my new VM server, since that box has much beefier hardware (i7 3770K @ 4.3GHz, 32GB of RAM, etc.) and I was hoping to speed up the transcoding that Plex has to do regularly.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
I'm running an AMD FX-6100. I have a couple of WD Live boxes that don't need transcoding, but I have a couple of DirecTV boxes that have to have everything transcoded. My DLNA server lets me set the quality/speed of the transcode for each location, so I dial things down for the small TVs, and I have not had a problem with not having enough CPU power. I've played with Plex, but not enough to know how its transcoder performs.

As far as splitting up your array, I have zero experience with VMs. You could always install Plex on your beefier system and just point the library at the shares and use it when necessary. Just as an FYI, unRAID has a Plex plugin. The unRAID community is really good, the FlexRAID community not so much. If I could have found a way to do Handbrake and Mezzmo on unRAID, I would have gone that way instead of FlexRAID.

Also, because there's no striping, you don't have to pool your drives with FlexRAID or unRAID. You can have specific drives allocated for specific duties and still have parity protection. It comes in 3 flavors:

Pooling only
Parity only
Pooling + Parity

With different prices to match.

FlexRAID www.flexraid.com
UnRAID www.lime-technology.com
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
Plex plugin, huh? That means I'd have to run my Plex Media Server on the same box as my drives though, right? I bought the 3770K partly so I could overclock it and be sure I was getting the best performance when transcoding. There are times when 4-5 people are streaming videos at the same time, and a lot of my videos are AVIs, so they need to be transcoded.

Damn, I really need to read up on both FlexRAID and UnRAID, as they both seem to have a lot of good features. Just gotta figure out which one is best for me.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
I had considered running ZFS if I could get the drives into single-drive mode; however, since I don't have ECC memory, that concerns me, because data redundancy is much more important to me than space. I can't lose this data.
How would this be much different from hardware RAID? In fact, ZFS at least has some protection against RAM corruption. A bit flip now and then will not permanently destroy your data. The actual data may be corrupted, but on the next encounter this corruption would be fixed. Defective RAM can typically be identified by checksum errors appearing at random across all disks. Severe corruption can cause multiple blocks in the same stripe to become corrupted, defeating the protection ZFS offers. But for the casual bit flip from ordinary non-ECC memory, ZFS is not as bad as some say it is.

But there is one major exception: ZFS loses its end-to-end data integrity whenever RAM corruption occurs. It can serve corrupt data to your applications in that case, including to your backup medium. To transfer data safely to your backups, you should utilise ZFS send/receive over SSH.
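A minimal sketch of that send/receive pattern, with pool, dataset and host names as placeholders:

# Snapshot, then replicate the snapshot to the backup machine over SSH
zfs snapshot tank/media@2013-01-20
zfs send tank/media@2013-01-20 | ssh backuphost zfs receive backuppool/media

# Later runs only need to send the changes between two snapshots:
zfs snapshot tank/media@2013-01-27
zfs send -i tank/media@2013-01-20 tank/media@2013-01-27 | ssh backuphost zfs receive backuppool/media

The incremental form keeps the regular transfer small once the first full copy is across.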

I don't have any plans to buy new hardware as I already just put money into a new VM server that is going to be running most of my VM's. The only reason I was even considering loading ESXi onto this box is in case I wanted to migrate a VM over to it for a short while so that I could take my VM server down for maintenance or for whatever reason.
Personally, I recommend you use one box for ESXi and the other for ZFS. ESXi in particular has some issues with disk passthrough (RDM, I believe?) combined with the newer FreeBSD 9.1+ versions utilised in NAS4Free and ZFSguru.

If you want to use ZFS, you should adjust your hardware to match that task. This means no hardware RAID, but chipset plus non-RAID controllers, as much RAM as you can fit (8GiB is OK without virtualization), and non-TLER harddrives.

If your drives are server grade then I'd go ZFS. If not, I'd go FlexRAID.
It is the other way around?! You should use consumer disks without TLER for ZFS. Other solutions with no protection should use expensive drives that are more reliable by themselves. These two philosophies are the opposite of each other. RAID was traditionally utilised to address this very issue: create a more reliable volume out of several less reliable but much cheaper disks. That made it a viable alternative to simply spending a lot of money on drives that were more reliable on their own.

ZFS works great with high-capacity 5400rpm drives. Generally I recommend 1TB-platter harddrives. Do note that some combinations are more optimal than others. For example, I recommend using 10 disks in RAID-Z2 instead of 8. Ten is more optimal for 4K-sector harddrives, resulting in a performance gain as well as more available storage space.
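The arithmetic behind that, roughly: ZFS splits its default 128KiB record across the data disks, and you want each disk's share to be a whole multiple of the 4KiB physical sector.

8-disk RAID-Z2:  128KiB / (8 - 2)  ≈ 21.3KiB per disk  ->  not a multiple of 4KiB
10-disk RAID-Z2: 128KiB / (10 - 2) = 16KiB per disk    ->  lines up with 4KiB sectors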
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
ZFS will still struggle with non-TLER drives. He could get some WD Reds since RE drives would certainly be overkill. If his Deskstars are CCTL enabled then ZFS should work fine.

As far as bit flips and error correction go, these are real problems for large data centers doing millions of transactions - enough that it could be troublesome on a daily basis. This is hardly the OP's situation. I don't remember where I read it, but during my research I believe it said the average non-ECC user could expect something like one error every 3 years of the kind that ZFS redundancy could have corrected. And since most of the OP's usage will be reads, not writes, bit rot will be a far more likely issue.
 

smitbret

Diamond Member
Jul 27, 2006
3,382
17
81
ECC info

http://en.m.wikipedia.org/wiki/ECC_memory#section_1

You'll have to decide if you think it's an issue or not.

It would be cheaper to replace the MB and CPU than it would to replace 8 HDDs.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
ZFS will still struggle with non-TLER drives.
Why?

It is in fact the hardware RAID controller that requires TLER disks, because otherwise the firmware can and will detach the drive, making it invisible to ZFS.

But if you use proper non-RAID controllers, also known as HBAs, then you should be fine. I actually recommend disabling TLER when using ZFS, because TLER can be dangerous once you have lost your redundancy: it cuts off the drive's last chance at recovering unreadable data. Even if the chance is small, you want that opportunity in the cases where you need it (a degraded RAID-Z).
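For what it is worth, you can check and toggle that timeout from smartmontools when the drive supports the SCT commands; a sketch only, with the device name as a placeholder:

# Show the drive's current error-recovery-control (ERC/CCTL) timeouts, if supported
smartctl -l scterc /dev/ada0

# Set a 7-second limit (TLER-like behaviour, what hardware RAID firmware expects)
smartctl -l scterc,70,70 /dev/ada0

# Disable the limit, as suggested above for ZFS on a plain HBA
smartctl -l scterc,0,0 /dev/ada0

On most consumer drives the setting is volatile, so it has to be reapplied after every power cycle.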

I recommend an IBM M1015 plus the onboard chipset SATA ports. That would allow a 10-disk RAID-Z2, which is an optimal number for 4K-sector harddrives. But ditch the hardware RAID controller; hardware RAID and ZFS are largely incompatible.
 

JimPhreak

Senior member
Dec 31, 2012
374
0
76
Good discussion going on in here :).

First off, sub.mesa, I'm not buying any additional hardware. I've already dropped over a grand on a new VM server, so buying anything else is out of the question at this point, as I've stated earlier in this thread.

Secondly, I plan on using the hardware RAID controller only as a means of connecting the drives, but doing so in single-drive (IT) mode. Regardless, the 8 drives I have are what I've got, and I won't be adding to them anytime soon. I personally feel like the hardware I have in hand right now is sufficient for a home NAS; it's just a matter of optimizing what I have.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
If this means you are forced to use your hardware RAID controller, you should look for another solution instead of ZFS. A Windows-based system with NTFS is an alternative. But since your disks do not have TLER (they have CCTL, which is volatile), the disks may not work well with hardware RAID either. TLER works with any controller because the controller doesn't have to do anything; CCTL and ERC have to be re-activated on every boot or power cycle. If your controller doesn't do this, it amounts to running non-TLER disks on hardware RAID.

In either case, despite the amount of money you have put into this box, I cannot see how you can get reliable storage without making at least some changes. I would also argue that one should seek advice before buying the hardware: decide first what software solution you will be running, and let the hardware reflect that choice, not the other way around.