Configure this storage hardware with the following needs...


JimPhreak

Senior member
Dec 31, 2012
sub.mesa said: If this means that you are forced to use your hardware RAID controller, you should look for another solution instead of ZFS. A Windows-based system with NTFS is one alternative. But since your disks do not have TLER (they have CCTL, which is volatile), the disks may not work well with hardware RAID either. TLER works on all controllers because the controller doesn't have to do anything; CCTL and ERC, by contrast, have to be activated on every boot or power cycle. If your controller doesn't do this, it is equivalent to running non-TLER disks on hardware RAID.
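Since CCTL/ERC does not survive a power cycle, the timeout has to be re-armed at every boot. A minimal sketch of how that could be scripted (Python calling smartctl; the drive list and the 7.0-second timeout are assumptions to adjust for your system), run from rc.local or a systemd unit:

Code:
import subprocess

# Drives whose CCTL/ERC must be re-armed after every power cycle.
DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # adjust

for dev in DRIVES:
    # scterc,70,70 = 7.0 s read / 7.0 s write recovery (units of 100 ms)
    result = subprocess.run(["smartctl", "-l", "scterc,70,70", dev],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(f"{dev}: ERC not set (drive may lack SCT ERC support)")
    else:
        print(f"{dev}: ERC timeout set to 7.0 s")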

In either case, despite the amount of money you have put into this box, I cannot see how you can provide reliable storage without making at least some changes. I would also argue that one should seek advice before buying the hardware: first choose the software solution you will be running, then let the hardware reflect that choice, not the other way around.

I didn't just purchase this hardware; I've had it. I purchased new hardware for a different purpose, and I'm left with these parts to build a NAS, as I don't currently have one. I was using these parts as a Windows fileserver with RAID10 volumes on my Perc 6i controller.

Are you saying that using my hardware RAID controller ONLY as a means of connecting the drives (by configuring it for IT/single mode), but not to actually configure them, is still a problem? How is that any different from simply connecting the drives to onboard SATA ports, for example?

What exactly is bad about connecting my drives through the RAID controller but then using something like FlexRAID or UnRAID to configure my drives in an array?
 

sub.mesa

Senior member
Feb 16, 2010
JimPhreak said: Are you saying that using my hardware RAID controller ONLY as a means of connecting the drives (by configuring it for IT/single mode), but not to actually configure them, is still a problem? How is that any different from simply connecting the drives to onboard SATA ports, for example?
Yes, this is the exact issue. It IS different. The difference lies in the fact that your hardware RAID controller contains an SoC with its own firmware, and that firmware is the actual problem. Virtually all hardware RAID controllers will detach disks that time out: if any disk doesn't respond to an I/O request within a certain window - as low as 10 seconds - the controller decides the drive is defective and makes it invisible to the host operating system.

To ZFS, this means the drives simply disappear. ZFS - or any other software solution - will not be able to stop this. Basically, if the controller finds a disk with a bad sector on it, the disk gets detached.

So using hardware RAID just as a controller to get some extra SATA ports is not going to work properly; hardware RAID pretty much means you're stuck with TLER/CCTL/ERC. Now, your Hitachi drives do support CCTL, so if your hardware RAID controller does too, there shouldn't be a big problem.
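Whether a particular drive actually honors CCTL/ERC commands can be checked without changing anything. A small sketch, with the device name as a placeholder:

Code:
import subprocess

# Read back (without modifying) the drive's current SCT ERC timeouts.
out = subprocess.run(["smartctl", "-l", "scterc", "/dev/sda"],  # adjust device
                     capture_output=True, text=True).stdout
print(out)
# A drive with ERC active reports something like:
#   Read:     70 (7.0 seconds)
#   Write:    70 (7.0 seconds)
# "SCT Error Recovery Control command not supported" means the drive
# will use its full, minutes-long internal recovery instead.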

I've owned Areca hardware RAID controllers (a 1210 and a 1230) with the exact same problem: used in combination with ZFS, they dropped about half the disks whenever a bad sector caused more than 10 seconds of recovery time. Chipset SATA ports or another non-RAID controller will not interfere with what ZFS wants. ZFS wants to know about the timeout, and will use redundant information to overwrite the bad sector instead. So instead of disks dropping, ZFS fixes everything without even flexing a muscle.

This means that hardware RAID is pretty dangerous, especially in combination with high-capacity hard drives (2TB+): data densities have grown while the unrecoverable bit error rate spec has stayed at uBER = 10^-14. This is relevant to the story described here:

Why RAID5 stops working in 2009
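To put a number on that uBER figure, here is a rough back-of-the-envelope calculation (the array size is illustrative, not anyone's exact setup) of the odds of hitting at least one unrecoverable read error while rebuilding a degraded RAID5:

Code:
# Odds of an unrecoverable read error (URE) during a RAID5 rebuild.
# Illustrative setup: 4x2TB consumer drives (uBER = 1e-14 per bit),
# so rebuilding the failed disk means reading the 3 survivors in full.
bits_read = 3 * 2e12 * 8            # three 2TB drives, in bits
p_per_bit = 1e-14                   # one unreadable bit per 1e14 read
p_clean = (1 - p_per_bit) ** bits_read
print(f"P(URE during rebuild) ~ {1 - p_clean:.0%}")   # ~38%
# A bad sector mid-rebuild is a realistic event, not a corner case.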
 

JimPhreak

Senior member
Dec 31, 2012
sub.mesa said: Yes, this is the exact issue. It IS different. [...]

Thank you for that explanation, I understand now. So couldn't I just buy a PCIe SATA card to connect my drives and sell my Perc 6i? I have 4 onboard SATA ports, so I'd only need 4 more if I'm using something like FreeNAS or UnRAID, since those can be installed on a USB stick.
 

smitbret

Diamond Member
Jul 27, 2006
sub.mesa said: Yes, this is the exact issue. It IS different. [...]

My apologies to both of you, then. Like I mentioned, I only had FreeNAS installed for a very brief time before deciding FlexRAID would be a better solution.

I thought that ZFS RAIDzX had the same issues with dropping an unresponsive HDD that hardware RAID does. I stand corrected.

As far as the hardware RAID controller goes, I side with sub.mesa, with an asterisk. If you decide to go ZFS, get a different SATA board. Because ZFS is a striped setup, you can't easily start over fresh if you find out your RAID controller keeps dropping drives from the array: lose one drive too many and you lose everything. If you decide to go unRAID or FlexRAID, you can always give the controller a shot; if things go bad with it, you can pick up another SATA board and just reimport the data back into the pool with the new controller card.

Also, I just looked up the specs on your MB. You don't say whether it's the P35 or P45, but both have 6 SATA ports onboard. Either way, you'll need an add-on card. This list gives a pretty good idea of what works well (and not just for unRAID):

http://www.lime-technology.com/wiki/index.php/Hardware_Compatibility#PCI_SATA_Controllers

Any decent controller card based on a well-supported chipset should work fine.

As far as changing hardware, I don't see any need for it. I don't think sub.mesa is wrong in his recommendations, but it's all relative. This is a non-commercial, low-use application; ZFS is by far the most robust, but error frequency shouldn't be an issue. Then again, there's always that chance, so I guess you'll have to decide. I'm not personally familiar with that Dell RAID controller, so I have no opinion on it. If it were a front-line piece of hardware, I would feel it's worth $50-75 to get one with a better reputation and without the RAID features. In a non-striped setup it might be worth trying out; with ZFS I wouldn't.

Your CPU is a little older but should be able to transcode multiple video streams on the fly. My Phenom II X4 benchmarks about 40% better than the Q6600, and I can transcode 1280x720 at medium quality at around 70fps, so you should be able to get 2 or 3 DVD-quality streams out of it at a time. Just take a trial run: install one of your OSes, set up Plex with just a couple of video files, and see how many you can do at once. If it works, great! Format the drive and set up your NAS. I'll bet you'll be fine with it. If you end up on unRAID, you'll have even more CPU resources available than if you were running ZFS or FlexRAID. You may also want to explore running a hot spare with the unRAID setup. Keep in mind that multiple-drive failures on RAID setups occur when a drive is lost and replaced and a second drive fails during the array rebuild, because the rebuild is so intense.
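On that trial run: one rough way to gauge transcode throughput before committing to an OS is to time ffmpeg on a sample file. A sketch only (Plex uses its own transcoder, so treat the number as a proxy; the filename is a placeholder):

Code:
import re
import subprocess

SRC = "sample_720p.mkv"  # any test video you have on hand

# Transcode 60 seconds to H.264 at a roughly Plex-like quality and
# discard the output; ffmpeg prints progress (incl. speed) on stderr.
cmd = ["ffmpeg", "-i", SRC, "-t", "60",
       "-c:v", "libx264", "-preset", "veryfast", "-crf", "23",
       "-f", "null", "-"]
stderr = subprocess.run(cmd, capture_output=True, text=True).stderr

# The last "speed=N.NNx" line is the steady-state rate; >1x per
# stream means the CPU keeps up in real time.
speeds = re.findall(r"speed=\s*([\d.]+)x", stderr)
if speeds:
    print(f"Transcode speed: {speeds[-1]}x real-time")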

The transcoding and Plex server can always be run on the other PC, too. I just like the idea of an always-on media server that takes care of everything for me regardless of what I'm doing with my personal PC.

Tough choice. And remember:

RAID is not the same as backup.
 

Viper GTS

Lifer
Oct 13, 1999
Do you have another machine that you plan to run ESX on, or do you want to do it all on one?

I have a storage setup with 24 x 1TB drives. Sixteen are on three IT-mode SAS controllers VMDirectPath'd to a Nexenta VM. The remaining eight are on an Areca 1680 in RAID10 for a VMFS datastore. Nexenta has plenty of RAM and CPU, and a couple of SSDs for L2ARC (also VMDirectPath). I run Plex in an Ubuntu VM with mounts to NFS exports of my media stored in Nexenta.

I have enterprise class drives for the Areca, but the sixteen drives in the ZFS triple mirror are just Samsung Spinpoint F3s. It's been running great for almost two years now. On the same box I also run OpenVPN, a management node for my wireless, and whatever else I'm playing with at any given time.

If you're going to go down this route and you truly care about your data as much as you say you do, get rid of the desktop hardware and buy yourself an entry-level server board and a Xeon so you can get ECC. The error-correcting features of ZFS do you no good whatsoever if you don't have ECC.

Viper GTS
 

JimPhreak

Senior member
Dec 31, 2012
smitbret said: My apologies to both of you, then. [...] RAID is not the same as backup.

Yeah, I have some deciding to do. And I'm definitely aware that a RAID setup is not a backup; that is why I want to take 2 of the 2TB drives and put them in their own mirrored array just for backups. I do like the idea of FlexRAID or UnRAID not being striped, because that way, if I lose more drives than I have parity for, I only lose the data on those drives. However, I'm guessing that with ZFS (since it's striped) the performance is better? I'm just worried that I have only half as much memory as storage, in terms of the recommended 1GB of RAM per 1TB of storage.

Basically, if anything, I can pick up a 2-port SATA card to give me the additional ports I need and sell my Perc 6i, but that's as far as I'm willing to go with buying new hardware. And yes, my board is a P35 Blood Iron.

I do like the idea of running Plex off my NAS as well, but I'm not keen on having to transfer my Windows configuration of Plex over to Linux, as I'm not even sure that's doable.
 

JimPhreak

Senior member
Dec 31, 2012
Viper GTS said: Do you have another machine that you plan to run ESX on, or do you want to do it all on one? [...] The error-correcting features of ZFS do you no good whatsoever if you don't have ECC.

Yes, I have a 2nd system that I plan to run ESXi on. What I have in hand between the two planned systems is the following:

Box #1 (current fileserver/planned NAS):

  • Intel Q6600 CPU
  • DFI Blood Iron P35 LGA775 board
  • 8GB DDR2-667 RAM
  • 160GB SATA OS Drive
  • RAID10 Array (4x2TB drives)
    - currently configured on a Dell Perc 6i and storing all my files
  • 4x2TB brand new drives (same drives as in my current array) not installed yet
  • 16GB Micro Flash Drive


Box #2 (planned VM server):

  • Intel i7 3770K CPU (got this to OC to 4.3-4.5GHz, as I transcode multiple files at a time with Plex)
  • Asus P8Z77-V ATX board
  • 32GB DDR3-1600 RAM
  • 500GB SATA drive
  • 2x128GB Crucial M4 SSDs (for frequently used VMs)
  • Intel EXPI9301CT PCIe x1 Gigabit Adapter
  • 16GB Micro Flash Drive

It sounds like without ECC there's not much point in using ZFS, so I think UnRAID or FlexRAID are my best options right now.
 

JimPhreak

Senior member
Dec 31, 2012
Looks like I'm just going to be adding a second RAID10 array to the Perc 6i. I don't want to spend the money, and I've already got an array set up, so I think I'd rather spend my time doing the things I bought my new VM server for instead of learning a new software RAID solution right now. Besides, I think I'd like to use my NAS for other things as well, which is why I'm going to keep Windows on it. I know I can use FlexRAID for that, but since running my Perc 6i in IT mode is unreliable from what I've been reading, I'm just going to stick with hardware RAID for the time being.
 

sub.mesa

Senior member
Feb 16, 2010
JimPhreak said: It sounds like without ECC there's not much point in using ZFS
I'm sorry, but this is simply incorrect. Let me say a bit more about this: I have run my own tests using a test ZFS server with defective memory, and I've come to very different conclusions.

Because the truth is actually quite the opposite: if you don't have ECC memory, you want to run ZFS more than anything else:

1) ZFS can cope with limited memory corruption when using non-ECC memory; your data on disk may be corrupted, but it will be corrected the next time ZFS touches that data. Name one alternative that can correct corruption caused by your RAM!
2) ZFS lets you know about corruption; you will see checksum errors spread at random over all member disks. Knowing is better than finding out years later that your data is corrupt.
3) Other systems have no protection at all against memory corruption. So if you want ECC, you should grant it to all workstations first and to the ZFS server last: ZFS at least has some kind of protection, where all the alternatives have none.

In other words, unless you need end-to-end data integrity that prevents corrupt data from ever being sent to applications, you should be fine with ZFS on a non-ECC system. If you have at least some redundancy, that should cope with any memory errors caused by non-ECC RAM. In my tests I used defective RAM that shows thousands of errors in memtest86+, and ZFS survived that kind of punishment. I don't know of any alternative that can match the level of protection ZFS offers.
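For what it's worth, the checksum errors mentioned above are easy to watch for: a periodic scrub forces ZFS to re-read and verify every block. A minimal monitoring sketch, assuming the standard zpool CLI and a hypothetical pool named 'tank':

Code:
import subprocess

POOL = "tank"  # hypothetical pool name

# Ask ZFS to re-read and verify every block against its checksum.
subprocess.run(["zpool", "scrub", POOL], check=True)

# Later (scrubs run in the background): '-x' reports only unhealthy
# pools, so a clean pool yields a one-line "is healthy" message.
status = subprocess.run(["zpool", "status", "-x", POOL],
                        capture_output=True, text=True).stdout
if "healthy" not in status:
    # Checksum errors scattered randomly across ALL member disks are
    # the bad-RAM signature described above; errors confined to one
    # disk point at that disk instead.
    print(status)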

If you want ECC memory, you should assign it to your ESXi and other workstation boxes before equipping your ZFS server with ECC. Everything but your ZFS box has no protection at all; you will never even know that corruption occurred. Since ZFS offers at least some protection, the lack of ECC memory hurts your ZFS system the least, I would argue.

If you can somehow run a ZFS server separate from your ESXi server, that would be a very good setup, where at least your storage is protected by ZFS. Snapshots add significant protection against unwanted changes and keep your ESXi volumes consistent. I think you should seriously consider this setup. Just consider what the level of protection ZFS offers can mean to your data, and how little maintenance ZFS demands.
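Snapshots like these are cheap enough to automate. A sketch of a simple daily rotation, suitable for cron (the dataset name 'tank/esxi' and the 14-day retention are assumptions):

Code:
import subprocess
from datetime import datetime, timezone

DATASET = "tank/esxi"  # hypothetical dataset backing the ESXi volumes
KEEP = 14              # days of snapshots to retain

# Take today's snapshot; date-stamped names sort chronologically.
stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{stamp}"], check=True)

# List auto- snapshots oldest-first and destroy everything beyond KEEP.
names = subprocess.run(
    ["zfs", "list", "-t", "snapshot", "-H", "-o", "name", "-s", "creation"],
    capture_output=True, text=True, check=True).stdout.splitlines()
for snap in [n for n in names if n.startswith(f"{DATASET}@auto-")][:-KEEP]:
    subprocess.run(["zfs", "destroy", snap], check=True)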

If you want to focus on hardware RAID or other options such as UnRAID or FlexRAID, then your protection will come from backups. Do not trust the protection these systems offer; consider it a nice extra, but do not rely on it. Your backups will protect you. With ZFS the story is a little different, since you won't have to rely on backups against as many dangers. Of course, I can only recommend ZFS, but you have to make up your own mind.

Just realize that buying a second hardware controller pretty much shuts off the road to ZFS, because you will have invested a lot of money in hardware that is virtually incompatible with ZFS. Also consider that you don't really need a hardware RAID controller if you limit yourself to RAID1 and/or RAID10.
 

JimPhreak

Senior member
Dec 31, 2012
Believe me, if I went software RAID, ZFS sounds like the way I'd want to go, because of exactly the kinds of protection you alluded to. The problem is I don't have ECC RAM for any of my systems, nor will I be getting any, because all 3 of my boxes (PC, VM server, and NAS) are maxed out on memory for their boards. Furthermore, I don't have any CPUs that support ECC.

Also, I won't be buying an additional hardware controller. The main reason for sticking with hardware RAID is that I already have the controller, with the 4 drives in my RAID10 array connected to one of the SAS ports; the other 4 drives for my new array would be connected to the second SAS port. I already have all 8 drives and the SAS-to-4xSATA cables, so with hardware RAID I don't have to purchase anything else.

My ESXi VM server will indeed be separate from my NAS box. Also, I will be reading from the NAS FAR more than I will be writing to it; the majority of the data on my current array is media that will be written once and read many times. Most of my VMs will run locally off my VM server's 2 SSDs, although I'm considering running a VM or two off the NAS for HA/vMotion testing.
 

smitbret

Diamond Member
Jul 27, 2006
It sounds like you've already got a pretty decent setup with hardware RAID. sub.mesa is right that there would be some advantages to ZFS, but since this is not an enterprise-use situation and the majority of your I/O is reads, not writes, the advantages would be minimal. Hardware RAID has served millions of people very well for a couple of decades, and there's no reason it wouldn't work for you now. Newer might be better, but it probably isn't better for you, and it's certainly not worth a significant investment.

I'm a big fan of "If it ain't broke, don't fix it".
 

smitbret

Diamond Member
Jul 27, 2006
Just as an FYI, I noticed that the developer of FlexRAID will be implementing ZFS as part of his FlexRAID system. It should happen in the next couple of months, and might be worth waiting for in your situation.
 

JimPhreak

Senior member
Dec 31, 2012
smitbret said: Just as an FYI, I noticed that the developer of FlexRAID will be implementing ZFS as part of his FlexRAID system. [...]

Thanks for letting me know! I will keep an eye on that; for now I'll be sticking with Windows and my hardware RAID10 setup. Since I'm sticking with Windows, it should be easier to transition to FlexRAID should I ever decide to go that route.
 

SolMiester

Diamond Member
Dec 19, 2004
JimPhreak said: First off smitbret, I just want to say... thank you! Not only for being active in this thread, but especially for that last post, because it was EXTREMELY helpful. Great explanation, and it clears up a lot of questions I had about these different software RAID solutions.

I think from what you're saying, FlexRAID sounds ideal for what I want to do. However, I have the following question. I want to use these 8 drives not only to store my media files, but also my personal files (documents, pics, etc.) and to back up the different computers and virtual machines I run (in a separate server). Would it therefore make more sense for me to pick 2 of the 8 drives and set them up as their own (mirrored?) array just for backups, while the other 6 drives store my files? Obviously I lose space, but 8TB of usable space is PLENTY for me right now; my current RAID10 array is only using 2TB at the moment.

Also, on a side note, I see you mentioned you run a DLNA server on the same box you use FlexRAID on, right? What are the hardware specs of your box? I ask because the main thing I do with my network at home is run Plex Media Server, not only for myself at home but shared with multiple external clients (my sister relies on it completely, as she has no cable service, and my parents and friends use it as well through Roku boxes). However, I was planning on running Plex Media Server as a VM on my new VM server, since that box has much beefier hardware (i7 3770K @ 4.3GHz, 32GB of RAM, etc.), and I was hoping to speed up the transcoding Plex has to do regularly.

+1, I'm in the same boat... I have a G5 ML350 that I want to take home from work. It has 4 cores and 12GB of RAM, so I don't want to waste the whole box on storage; with ESXi 5.1 I can have a few VMs as well as the storage pool...

I was looking at FreeNAS, but was unsure about hardware RAID versus ZFS/RAIDZ1, as the battery on the onboard E200i is knackered and writing to the drives is terribly slow at present. I'm waiting for the P800 in another old box to become available, but for now perhaps I'll just use unRAID...

Cheers guys