Best NAS OS / config for storage server, 8x2TB HDDs?

VirtualLarry

No Lifer
Aug 25, 2001
Curious what people's opinions are on NAS OSes. Free is best, but paid solutions could be acceptable.

Bonus if it supports VMs and/or Docker containers.

Current hardware config is 8x2TB HDDs, an A6-5400K APU, an A85X board, and 4x8GB DDR3.
 
Feb 25, 2011
Linux.

Use Debian. Most directions for Ubuntu will work fine, but you'll get patches a little faster. (Ubuntu gets them once Debian gets them. Mint gets them once Ubuntu gets them, etc. Follow the family tree up.)

Install a base Linux install w/ KVM.

Create a VM in KVM. This is your NAS. Pass through or assign your drives to it (you should be able to do this somehow, although if your hardware is incompatible with virtIO, you may need to replace a motherboard or CPU. I don't imagine you'll shy away from that.)
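
Something like this, for example - a rough sketch assuming libvirt's virt-install, with placeholder image paths and disk IDs (repeat the --disk line for each of the eight drives; use /dev/disk/by-id so the assignment survives device renames):

Code:
    # Create the NAS VM and hand it whole disks by ID
    virt-install \
      --name nas \
      --memory 8192 \
      --vcpus 2 \
      --disk path=/var/lib/libvirt/images/nas-os.qcow2,size=20 \
      --disk path=/dev/disk/by-id/ata-EXAMPLE_DISK_1,bus=virtio \
      --disk path=/dev/disk/by-id/ata-EXAMPLE_DISK_2,bus=virtio \
      --cdrom /var/lib/libvirt/images/debian-netinst.iso \
      --os-variant debian8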

Your choice for a NAS OS may vary, but I would recommend another Linux install, mdadm to control the drives (RAID-6), ext4 as a file system, and some fairly simple file sharing configuration. You definitely don't need a GUI for this, but webmin may be helpful. Any OS in a file server role will use pretty much all the RAM available as a file system cache, so MOAR RAMS IS BETTERS.
Or you can use ZFSonLinux, but I don't think the effort/benefit is there for your application. (I would use ZFS on a file server with a crapton of RAM, that hosted a lot of shares, a lot of clients, etc. But in most home-use cases, ZFS is using an AA gun to swat a mosquito.)
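
If you go the mdadm route above, the NAS VM side is only a handful of commands. A minimal sketch, assuming the eight passed-through disks show up as /dev/vdb through /dev/vdi:

Code:
    # RAID-6 across all 8 disks: ~12TB usable, survives any 2 drive failures
    mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/vd[b-i]
    mkfs.ext4 /dev/md0
    mkdir -p /srv/storage
    mount /dev/md0 /srv/storage
    # persist the array and mount across reboots
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    echo '/dev/md0 /srv/storage ext4 defaults 0 2' >> /etc/fstab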

I would avoid FreeBSD or FreeBSD-based distros like FreeNAS just because BSD is not Linux but it's really damn similar and I get confused easily.

Create another VM to be your Docker host environment if you really want to mess around with Docker containers. They have their pros and cons.
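
The Docker host VM itself is not much work once the VM exists - on a Debian guest, something like this (a sketch; the nginx bit is just an example container):

Code:
    apt-get update && apt-get install -y docker.io
    docker run --rm hello-world        # sanity-check the install
    docker run -d -p 8080:80 nginx     # example: throwaway web server container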
 

grimpr

Golden Member
Aug 21, 2007
Create a VM in KVM. This is your NAS. Pass through or assign your drives to it (you should be able to do this somehow, although if your hardware is incompatible with virtIO, you may need to replace a motherboard or CPU. I don't imagine you'll shy away from that.)

Do you pass the drives to the NAS OS VM and make the RAID-6 within the VM, or with the host Debian OS?
 

VirtualLarry

No Lifer
Aug 25, 2001
Thanks, Dave. I didn't really want to do things "the hard way" though. I was hoping I could use a NAS-as-appliance OS distro: FreeNAS, unRAID, NAS4Free, Rockstor, etc.

Also, what kind of drive config would you suggest for 8 drives? RAID-6?
 

daxzy

Senior member
Dec 22, 2013
Any OS in a file server role will use pretty much all the RAM available as a file system cache, so MOAR RAMS IS BETTERS.
Or you can use ZFSonLinux, but I don't think the effort/benefit is there for your application. (I would use ZFS on a file server with a crapton of RAM, that hosted a lot of shares, a lot of clients, etc. But in most home-use cases, ZFS is using an AA gun to swat a mosquito.)

I would avoid FreeBSD or FreeBSD-based distros like FreeNAS just because BSD is not Linux but it's really damn similar and I get confused easily.

I'm not getting your counterargument to ZFS/FreeNAS (because you get it confused with Linux?). Also, you don't need that much RAM in a ZFS deployment. It's better if you care about write performance, but with only 16TB worth of disks (less with parity), the memory requirements are actually quite low.

But I think the strongest advice would be to get ECC memory in any software-based RAID. Without ECC, you could literally be writing corrupt data to the disks and not know it.
 

XavierMace

Diamond Member
Apr 20, 2013
Solaris + napp-it since you've got enough RAM for ZFS. Or FreeNAS if you really want to take the easy route.
 

VirtualLarry

No Lifer
Aug 25, 2001
Or FreeNAS if you really want to take the easy route.

Yes, I do want an "easy project", and FreeNAS was my 1st choice thus far. unRAID costs money for the number of drives that I have, unfortunately.

Also, it looks like I might be able to take the four 5400RPM 2TB Hitachi drives I've got in my QNAP 431 NAS, and fit them back into my 4-in-3 cage, and fit that cage into the three 5.25" bays in the NZXT Source 210 case that I'm using.

Would it be better to have an 8x2TB server NAS, and a 4x2TB QNAP NAS, or a 12x2TB server NAS? Is there greater utility from having more spindles on one NAS unit, or having multiple NAS units?

I think of things like RAID rebuild times, and I think a single RAID-5 / 6 of 8 drives is already nearly too large. (Edit: That's where unRAID's parity-drive model comes in handy.)
 
Feb 25, 2011
I'm not getting your counterargument to ZFS/FreeNAS (because you get it confused with Linux?).

I'm saying ZFS is a "don't bother" because while it's got some really awesome features as a file system, as a RAID controller, it loses points to mdadm for things like inability to "grow" an array. The file system features are mostly available in other file systems now.
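
To make the contrast concrete, here's roughly what "growing" looks like with mdadm when you add a ninth disk - a sketch, with /dev/vdj as a placeholder (and note the reshape takes a long time on arrays this size):

Code:
    mdadm --add /dev/md0 /dev/vdj             # add the new disk as a spare
    mdadm --grow /dev/md0 --raid-devices=9    # reshape the RAID-6 onto it
    # once the reshape completes, grow the filesystem to match:
    resize2fs /dev/md0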

I'm saying FreeBSD is annoying because it's not the same as Linux.* Larry's a Linux Mint guy, so Debian makes sense. You can use ZFS on Linux just fine.

*I use both. All day. Every day. It's annoying. Moving between them is fine, and logging in to do basic shell stuff or text editing is fine, but as soon as you have to do anything the least bit sysadmin-ey, it's, "waitaminute... is that in /usr/local/etc or /etc or /boot/ or...? is it called apache or httpd...? dammit!"

And I'm saying don't use a canned appliance OS like FreeNAS if you want to set up VMs, Docker hosts, and actually tinker with stuff. (Like VirtualLarry usually does.)

Also, you don't need that much RAM in a ZFS deployment. It's better if you care about write performance, but with only 16TB worth of disks (less with parity), the memory requirements are actually quite low.

But I think the strongest advice would be to get ECC memory in any software-based RAID. Without ECC, you could literally be writing corrupt data to the disks and not know it.

That's true with or without RAID, on any system, at any time. I don't see a lot of laptops with ECC.
 
Feb 25, 2011
Do you pass the drives to the NAS OS VM and make the RAID-6 within the VM, or with the host Debian OS?

RAID in the VM. You can import the array somewhere else if you need to, and you can have the bare metal system do double duty as a NAS and a VM host, but to my mind, the VM host is for VM hosting and the NAS OS VM is for NAS OS VMing.
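
The importing part is one of mdadm's nicer tricks - the RAID metadata lives on the disks themselves, so any Linux box can reassemble them. Roughly (device names assumed):

Code:
    mdadm --examine /dev/sd[b-i]    # read the superblocks off the moved disks
    mdadm --assemble --scan         # find and assemble any arrays present
    mount /dev/md0 /mnt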
 
Feb 25, 2011
Thanks, Dave. I didn't really want to do things "the hard way" though. I was hoping I could use a NAS-as-appliance OS distro: FreeNAS, unRAID, NAS4Free, Rockstor, etc.

Sissy. :)

IMHO, the appliance distros are only easier as long as everything works perfectly. Once you have a problem, you're down a rabbit hole.

Also, what kind of drive config would you suggest for 8 drives? RAID-6?

Yes. -5 would probably be fine, but I'm a worry-wart.

Yes, I do want an "easy project", and FreeNAS was my 1st choice thus far. unRAID costs money for the number of drives that I have, unfortunately.

Also, it looks like I might be able to take the four 5400RPM 2TB Hitachi drives I've got in my QNAP 431 NAS, and fit them back into my 4-in-3 cage, and fit that cage into the three 5.25" bays in the NZXT Source 210 case that I'm using.

Would it be better to have an 8x2TB server NAS, and a 4x2TB QNAP NAS, or a 12x2TB server NAS? Is there greater utility from having more spindles on one NAS unit, or having multiple NAS units?

I think of things like RAID rebuild times, and I think a single RAID-5 / 6 of 8 drives is already nearly too large. (Edit: That's where unRAID's parity-drive model comes in handy.)


12x2. More spindles generally equate to better performance. Up to a point.

If you do use ZFS, iirc, there are performance benefits to having specific numbers of drives - I think it's... maybe I'm lying here, but I think it's the base stripe should be a power of 2 (2, 4, 8, 16, etc.). So, like, 10 drives in a RAID-Z2, where 2 are parity and the other eight are the stripe size (8 = 2^3).
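
If that layout appeals, the creation side is simple enough - a sketch with placeholder disk IDs (and treat the power-of-2 rule as folklore until you've verified it):

Code:
    # RAID-Z2 across 10 disks: 2 parity + 8 data; ashift=12 for 4K-sector drives
    zpool create -o ashift=12 tank raidz2 \
      /dev/disk/by-id/ata-DISK_01 /dev/disk/by-id/ata-DISK_02 \
      /dev/disk/by-id/ata-DISK_03 /dev/disk/by-id/ata-DISK_04 \
      /dev/disk/by-id/ata-DISK_05 /dev/disk/by-id/ata-DISK_06 \
      /dev/disk/by-id/ata-DISK_07 /dev/disk/by-id/ata-DISK_08 \
      /dev/disk/by-id/ata-DISK_09 /dev/disk/by-id/ata-DISK_10
    zfs set compression=lz4 tank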
 

daxzy

Senior member
Dec 22, 2013
I'm saying ZFS is a "don't bother" because while it's got some really awesome features as a file system, as a RAID controller, it loses points to mdadm for things like inability to "grow" an array. The file system features are mostly available in other file systems now.

I'm saying FreeBSD is annoying because it's not the same as Linux.* Larry's a Linux Mint guy, so Debian makes sense. You can use ZFS on Linux just fine.

And I'm saying don't use a canned appliance OS like FreeNAS if you want to set up VMs, Docker hosts, and actually tinker with stuff. (Like VirtualLarry usually does.)

Point taken.

That's true with or without RAID, on any system, at any time. I don't see a lot of laptops with ECC.

I think it's a desktop system. But yes, please get ECC on any storage system unless you can afford to lose your data.
 

Ranulf

Platinum Member
Jul 18, 2001
Sissy. :)

IMHO, the appliance distros are only easier as long as everything works perfectly. Once you have a problem, you're down a rabbit hole.

That was my experience with FreeNAS. It didn't seem to like being shut down. Setup was a little odd but no real problems getting it up and running. Good transfer speeds. Two weeks later I'm spending a couple of hours trying to figure out why none of my machines can see the NAS on the network. ZFS seems great and all but yeah, adding more drives is a pain.
 

gea

Senior member
Aug 3, 2014
There is no best-of-all solution.

- the best filesystem is ZFS, with Solaris (where it comes from) as the OS with the best performance and ZFS integration
- the best base for virtualisation of any VM OS is ESXi (BSD, OSX, Linux, Solaris, Windows)
- the best (easiest) NAS OS is an appliance software with a web UI

- the best base for Linux VMs is Linux with a lightweight method like Docker or LX zones, e.g. on SmartOS/OmniOS (a Solaris clone), or a virtualiser with the best Linux support, like ESXi

So you can use a ZFS appliance software like FreeNAS or OmniOS, or (with some restrictions) OMV, and virtualise within or on top.

Or you can use ESXi (free) and virtualise everything on top, including the NAS part.
This is what I prefer; see the howto for my All-In-One setup, which I have used for many years with OmniOS, a free Solaris fork, but you can use other systems in a similar config:
http://napp-it.org/doc/downloads/napp-in-one.pdf
 

Jeff7181

Lifer
Aug 21, 2002
I've been working on basically the same thing. My home server right now is a 4U Rosewill rackmount case with a quad-core CPU, 32 GB of RAM, and a PERC6i, where I have two old WD Green drives in RAID 1 for my boot drive and six Toshiba 2TB drives in RAID 5 for data, plus a 1TB 850 Evo just cabled up to the motherboard's onboard SATA controller for VMs. I'm running Ubuntu 16.04 as the native OS.

I've been experimenting with different OSes, different file systems and different virtualization technologies.

Some things I've decided on:
I won't be using btrfs - the reports of bugs that cause irreparable corruption scare me
I will be using ZFS - I want a filesystem with checksums
I'll be using VirtualBox as opposed to QEMU/KVM - storage performance is 5-10x better with VirtualBox for some reason
I will continue using a hardware RAID controller - the ease of replacing drives and having rebuild/expand operations handled by dedicated hardware is nice

I still have no idea what to use for the OS. I really want to use something like FreeBSD or Solaris because I believe they are inherently better for server-type workloads and stuff like Ubuntu/Mint etc. are better as desktop/workstation OSs. However, I have to keep telling myself, this is for home use... easier is better. I spend many hours at work using CLI and PowerShell and such... it's nice to have stuff that "just works" or only requires a couple point and click operations at home.

I'm also trying to decide which is better - SMB or NFS. I'm leaning toward SMB at this point, because I really don't want to run LDAP at home, and NIS appears to be more or less a deprecated technology from the little bit of research I've done... and without those two things, I can't find a good way to manage unix file permissions on an NFS export that's shared between multiple devices with local users.
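
For what it's worth, the SMB route avoids the identity problem entirely, since Samba keeps its own password database - no LDAP or NIS needed for a handful of users. A minimal sketch for a Debian-ish host (the share name, path, and user are made up):

Code:
    apt-get install -y samba
    # add a share stanza to /etc/samba/smb.conf:
    printf '[storage]\n  path = /srv/storage\n  read only = no\n  valid users = jeff\n' >> /etc/samba/smb.conf
    smbpasswd -a jeff          # set a Samba password for the user
    systemctl restart smbd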

*EDIT* What would be awesome is if someone like NetApp would follow the lead of Sophos (I've been using UTM 9, and ASG before that, since version 7) and make a home edition of their enterprise product available free for somewhat limited use. Like ONTAP Home - maybe it doesn't support replication or encryption or any of the tools like Unified Manager or Performance Manager, but does support deduplication, compression, NAS and block protocols, etc. Maybe supports up to 50TB of storage. That would be awesome.
 

poofyhairguy

Lifer
Nov 20, 2005
I like Unraid personally. I think it is well worth the money. It is so nice to have mismatched drive sizes in the same array so you can upgrade gradually, and hardware upgrades are easy (power off, install compatible hardware, power on). Also, the VM functionality in the new version makes it as deep as any other Linux solution; I just have an Ubuntu VM that handles all the network stuff I need. I love that, worst case, the drives are readable by themselves - no chance of data being locked into a corrupt array.

Normally I prefer free solutions myself, but Unraid has served me for six years now and I never regret going in that direction. Hell, Unraid has probably saved me way more than it cost, because I can upgrade drives gradually as they go on sale.
 

Essence_of_War

Platinum Member
Feb 21, 2013
FreeNAS sounds perfect for you, but I don't know how well supported the Realtek NICs are under FreeBSD. You might consider checking the FreeBSD hardware compatibility lists for your board model and NIC model. Generally for FreeBSD, Chelsio NICs are tier-1 godlike, Intel NICs are tier-2 great, and Realtek is kind of all over the place. My experience is that Linux usually supports Realtek NICs better, but YMMV with the model.

On the plus side, everything is a GUI, you don't have to command-line anything, and it has great support for SMB, NFS, AFP, and iSCSI shares. I'd double-check hardware support - who makes the NICs on that board?

I don't find the "array doesn't expand well" argument particularly convincing for your case, VL, since you're starting with essentially a full complement of disks. If you're planning to add a SAS card and additional shelves of disks, let us know, but it seems unlikely that you'll run into raid expansion issues starting with 8 drives on a board and case that probably don't support more than 8 drives.

If you do use ZFS, iirc, there are performance benefits to having specific numbers of drives - I think it's... maybe I'm lying here, but I think it's the base stripe should be a power of 2 (2, 4, 8, 16, etc.). So, like, 10 drives in a RAID-Z2, where 2 are parity and the other eight are the stripe size (8 = 2^3).

Matt Ahrens has written about this:

https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz

[Figure: RAID-Z block layout]
RAID-Z parity information is associated with each block, rather than with specific stripes as with RAID-4/5/6. Take for example a 5-wide RAIDZ-1. A 3-sector block will use one sector of parity plus 3 sectors of data (e.g. the yellow block at left in row 2). An 11-sector block will use 1 parity + 4 data + 1 parity + 4 data + 1 parity + 3 data (e.g. the blue block at left in rows 9-12). Note that if there are several blocks sharing what would traditionally be thought of as a single "stripe", there will be multiple parity blocks in the "stripe". RAID-Z also requires that each allocation be a multiple of (p+1), so that when it is freed it does not leave a free segment which is too small to be used (i.e. too small to fit even a single sector of data plus p parity sectors - e.g. the light blue block at left in rows 8-9 with 1 parity + 2 data + 1 padding). Therefore, RAID-Z requires a bit more space for parity and overhead than RAID-4/5/6.

A misunderstanding of this overhead has caused some people to recommend using "(2^n)+p" disks, where p is the number of parity "disks" (i.e. 2 for RAIDZ-2), and n is an integer. These people would claim that, for example, a 9-wide (2^3+1) RAIDZ1 is better than 8-wide or 10-wide. This is not generally true. The primary flaw with this recommendation is that it assumes that you are using small blocks whose size is a power of 2. While some workloads (e.g. databases) do use 4KB or 8KB logical block sizes (i.e. recordsize=4K or 8K), these workloads benefit greatly from compression. At Delphix, we store Oracle, MS SQL Server, and PostgreSQL databases with LZ4 compression and typically see a 2-3x compression ratio. This compression is more beneficial than any RAID-Z sizing. Due to compression, the physical (allocated) block sizes are not powers of two; they are odd sizes like 3.5KB or 6KB. This means that we cannot rely on any exact fit of (compressed) block size to the RAID-Z group width.
 

grimpr

Golden Member
Aug 21, 2007
I like Unraid personally. I think it is well worth the money.

It is the best solution for home users along with Stablebit Drivepool.
 

grimpr

Golden Member
Aug 21, 2007
RAID in the VM. You can import the array somewhere else if you need to, and you can have the bare metal system do double duty as a NAS and a VM host, but to my mind, the VM host is for VM hosting and the NAS OS VM is for NAS OS VMing.

Thanks for the answer, Dave. I'm thinking of going the road you proposed, but with Proxmox rather than vanilla Debian. Proxmox is Debian-based and comes with a great web GUI for firing up my virtual machines; I hope you have tried it and can share some info. I like your idea of a bare-metal hypervisor OS and running everything on top of it.
 

grimpr

Golden Member
Aug 21, 2007
OK, I'm up and running a fully featured Linux KVM hypervisor using Proxmox, with my NAS running as a Windows 7 VM with SMART data passthrough on the hard disks. Performance is great using the SCSI and Ethernet VirtIO paravirtualized drivers. I presume OpenMediaVault will work without problems, since it's Debian-based like the Proxmox hypervisor and all the guest features are already built into the Linux kernel. This is great.
 

PingSpike

Lifer
Feb 25, 2004
Have you taken a look at OpenMediaVault?
http://www.openmediavault.org/

I have been using it for over a year now with no problems.

OMV has a SnapRAID plugin, which is somewhat similar to the unRAID parity model, although it has some advantages over it and some disadvantages (even with the plugin, it seems harder to set up to me). It's worth looking into if you liked unRAID, though. unRAID has a whole KVM virtualization layer now, too. Personally, I found unRAID to be worth the money.
 

daxzy

Senior member
Dec 22, 2013
unRAID has a whole KVM virtualization layer now, too. Personally, I found unRAID to be worth the money.

So out of curiosity, for the price differential, why wouldn't you just run Linux and buy a HW RAID card, like the LSI 9260-8i (which is running around $100-140 on eBay)? You could probably buy a cheaper processor and get away with less RAM as well with the HW RAID card.
 

poofyhairguy

Lifer
Nov 20, 2005
So out of curiosity, for the price differential, why wouldn't you just run Linux and buy a HW RAID card, like the LSI 9260-8i (which is running around $100-140 on eBay)? You could probably buy a cheaper processor and get away with less RAM as well with the HW RAID card.

That is apples and oranges. Unraid uses a special kind of software RAID 4 that no hardware solution can duplicate. Unlike real hardware RAID, Unraid allows for mismatched drive sizes, drives that are readable outside of the array, the ability to upgrade a single drive in place, and the ability to fine-tune how much is stored on each disk. The trade-off compared to hardware RAID 5/6 is that it is much slower (but plenty fast enough for, say, media).

With Unraid I can take out a 1.5TB drive, pop in a 3TB drive, and as long as my parity drive is 3TB or bigger it will upgrade my array to use the 3TB drive (even if every other non-parity drive is still 1.5TB or even 2TB) without losing any data or any sort of migration headache. Also, unlike hardware RAID, if I am streaming a Blu-ray rip off that 3TB drive, all my other drives are powered down and not spinning, which saves on their life cycle (thereby making it easier to use consumer HDs).
 

daxzy

Senior member
Dec 22, 2013
Unraid uses a special kind of software RAID 4 that no hardware solution can duplicate....

With Unraid I can take out a 1.5TB drive, pop in a 3TB drive, and as long as my parity drive is 3TB or bigger it will upgrade my array to use the 3TB drive (even if every other non-parity drive is still 1.5TB or even 2TB) without losing any data or any sort of migration headache. Also, unlike hardware RAID, if I am streaming a Blu-ray rip off that 3TB drive, all my other drives are powered down and not spinning, which saves on their life cycle (thereby making it easier to use consumer HDs).

Using this as a guide for RAID-4/Unraid:

https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_4
http://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Add_One_or_More_New_Disks

Wouldn't it be impossible to stream the data off a single drive (powering down all the others), since RAID-4 is block-level striping with a dedicated parity drive? So at best you'd be powering up n-1 drives instead of n drives for RAID-5/6. From what I gather - and the same recommendation goes on the FreeNAS forums - powering drives up and down (or head parking) is much more stressful than actually running a drive 24/7. They actually recommend disabling it or setting the parking timeout to 5 minutes.
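
For reference, these are the knobs in question if you want to manage spin-down yourself - the hdparm flags are standard, though the right values are debatable and the device name is a placeholder:

Code:
    hdparm -S 242 /dev/sdb    # spin down after 60 min idle (241-251 = 30-minute units)
    hdparm -B 255 /dev/sdb    # disable APM head-parking entirely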

Also, your example of adding a 3TB drive to an array of 1.5/2TB drives seems inaccurate, as the parity drive must always be the largest. So if you only add a single 3TB drive in that example, you'd net no usable disk space.