
YANB (Yet Another NAS Build) - this thread is not like the others, I hope!


frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Unitrends Free will back up free ESXi, so that is another option. It runs as a VA (virtual appliance).

I use and recommend Nakivo. It's even easier to use than Veeam, so that's saying something. It's not free, unfortunately.

Are you actually planning on backing up your movie/music collection? Or are you just wanting to back up certain 'core' VMs like Plex, router/firewall, etc.?
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Ahh, didn't realize it had a size restriction. Guess he could still use it to back up his other VMs and figure out another solution for his media NAS.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Like Veeam? :p

No, it's not the most user friendly option but it's free and has no issues backing up my 9TB datastore.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
I was under the assumption he was working with free ESXi. Although looking back I think he is on the fence about either Essentials or free.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Unitrends Free will back up free ESXi, so that is another option. It runs as a VA (virtual appliance).

I use and recommend Nakivo. It's even easier to use than Veeam, so that's saying something. It's not free, unfortunately.

Are you actually planning on backing up your movie/music collection? Or are you just wanting to back up certain 'core' VMs like Plex, router/firewall, etc.?

I'm not too concerned with backing up the actual media storage with this approach, just the VMs for peace of mind.

I don't have a plan to back up the media as of yet; I have to figure something out for that. The rest of what will be on the server will mostly be system backups from other systems and the VM backups. So outside of the media, whose only copy resides on the NAS, everything else will already basically be a second copy. That works for me unless a fire or tornado goes tearing through the place, in which case I'd be screwed. lol

My main system backup might also still reside on a 4TB external disk I have, but that doesn't do anything for disaster recovery.

I still have to look into some other means of backup in the long run. I've investigated tape a little, but it'd be expensive to get into capacities that make sense. Shuffling multiple tapes would be a pain in the rear.

One possibility would be building a JBOD machine that only gets turned on for scheduled replication from the array. But that's still under the same roof.
Unless I change ISPs, we have a data cap with our local cable provider and a slow upload speed, so an online service doesn't make much sense.
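
If I go the cold-JBOD route, the replication job itself could be dead simple. A minimal sketch of the idea (Python wrapping rsync; the host name and paths are just placeholders, not my actual layout):

```python
#!/usr/bin/env python3
"""Minimal pull-replication sketch for a cold-storage JBOD box that only
powers on for the job. Host name and paths are placeholders."""
import subprocess
import sys

SOURCE = "nas.local:/mnt/tank/media/"   # hypothetical NAS share, pulled over SSH
DEST = "/mnt/coldpool/media/"           # local path on the JBOD box

def replicate() -> int:
    # -a preserves perms/times, -H keeps hard links, --delete mirrors removals
    cmd = ["rsync", "-aH", "--delete", "--partial", SOURCE, DEST]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(replicate())
```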

I was under the assumption he was working with free ESXi. Although looking back I think he is on the fence about either Essentials or free.

I really, really don't want to cough up $500+ for a license. It doesn't seem to get me any extra capabilities in the hypervisor itself; it just brings vCenter into the mix.

I'll definitely be getting ESXi Free, but I'll have to get some time with that to determine if paying for any more features is warranted.

Like Veeam? :p

No, it's not the most user friendly option but it's free and has no issues backing up my 9TB datastore.

From what I've read, Veeam does not support ESXi Free, because they moved away from whatever method they used before and now use the storage APIs, which are disabled in the free edition of ESXi.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I still have to look into some other means of backup in the long run. I've investigated tape a little, but it'd be expensive to get into capacities that make sense. Shuffling multiple tapes would be a pain in the rear.

I know, broken record here, but you can get surplus autoloaders for very reasonable prices.

From what I've read, Veeam does not support ESXi Free, because they moved away from whatever method they used before and now use the storage APIs, which are disabled in the free edition of ESXi.

Not just Veeam; any VM-level agentless backup. I just forgot about that detail, as I'm not running the free version. Prior to Veeam, I just used FreeFileSync to replicate the contents of the media server's repository onto a second server.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Not necessarily.

Option A: DVR/Transcoding on the NAS.
Option B: DVR/Transcoding on a VM using the NAS as storage.

While option A may seem more efficient, that may not be the case. Yes, the throughput will be limited by the network connection. If you're running a single GbE link off the NAS, it's going to be a noticeable choke point. If you're running 10GbE off the NAS, it's not going to be.

However, your host was going to have a beefy CPU regardless. If you want the NAS to do the heavy lifting for the media, it needs a beefy CPU (and therefore more power usage) as well, and now your host is probably mostly sitting around doing nothing. Therefore I'd argue option B would actually be the more efficient option.

While the network storage option does mean two boxes, it also means you have more flexibility with the case for your host. We are talking about such a low level of power consumption either way that, personally, I wouldn't consider it a factor in running two boxes.

Also, I'll again recommend Veeam for your VM backups. You're hard pressed to beat it IMO.

I'm not really following the logic.

For the NAS to do heavy lifting, which is planned, I'm looking at Xeon D packages, with the ones I'm eyeing listed at 35W TDP, and that includes the PCH.
Most of the COTS NAS systems with 8 disks are going to have about the same power consumption at idle as my planned system, and even during activity it shouldn't use much more than one of those systems: perhaps 80W if my estimates are right, compared to a DS1815+ with 8 disks at about 70W.

And then factor in a second system with enough beef to transcode full Blu-ray: if you make it a cheap system, it won't be nearly as efficient as the Xeon D packages. Factor in that it's not a SoC package, so the network controllers and the platform itself will consume a decent amount of wattage under full load. Idle might be nice, but both systems idling should still add up to roughly what my planned system would draw at full load.
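
Quick back-of-envelope math on that, using the wattage figures above (every number here is my own rough estimate or outright guess, not a measurement):

```python
# One Xeon D box doing everything vs. a COTS-style NAS plus a separate
# transcode host. All wattages are rough estimates/guesses.
HOURS_PER_YEAR = 24 * 365

one_box = {"idle": 70, "load": 80}            # planned Xeon D build with 8 disks
two_box = {"idle": 70 + 15, "load": 70 + 60}  # COTS-style NAS + cheap transcode box

def yearly_kwh(avg_watts: float) -> float:
    return avg_watts * HOURS_PER_YEAR / 1000

for name, box in (("one box", one_box), ("two boxes", two_box)):
    avg = 0.9 * box["idle"] + 0.1 * box["load"]   # assume ~90% of the day at idle
    print(f"{name}: ~{avg:.0f} W average, ~{yearly_kwh(avg):.0f} kWh/yr")
```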

Add in the fact that it is unlikely to actually save me anything up front, considering 8-bay COTS NAS units are around $1,000 diskless, and I just don't see how it ever works in my favor: likely double the energy expenditure, and two boxes instead of one tidy one. Sure, they could both be smaller than a 2U rack unit, but what does that get me in the end? That's still two boxes that have to find a home; even if the ESXi box were a NUC, it still has to be plugged in, connected to a switch, and sit somewhere.

As it stands I'd still plan on a small rack regardless; that has been in my plans to clean up my media center, since I currently have an 8-port switch fully utilized there. Get a nice RF or WiFi controller to replace my IR Harmony remote, and I can get everything out of the media center and just run a single HDMI cable from the receiver to the TV. Super clean, and future SO approval factor met for sure. :D
Any servers will definitely go in the rack as well, so it's really no different in the end. Any standing mini-tower was already going to sit on a shelf in the rack anyway, probably right next to the HTPC (a Node 304).
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I completely forgot you were planning on doing an "inception" NAS. I was thinking you would be running two boxes in either situation. You really should consolidate everything into a single thread. LOL.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
If you aren't backing up the actual media like movies/music and stuff, and you want to use free ESXi, then either Unitrends Free or ghettoVCB like you mentioned are just about your only choices for backup. Other than doing it manually through rsync, I suppose, which was mentioned earlier.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I completely forgot you were planning on doing an "inception" NAS. I was thinking you would be running two boxes in either situation. You really should consolidate everything into a single thread. LOL.
Haha, that was the original goal of this very thread, but then I had other specific questions that I thought would be better suited to single-topic threads in the right subforums, mostly to get wider opinions... but it turns out all my answers in the other threads are being provided by the same crew from this thread. Had not expected that. lol
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
In regard to RAIDZ2, is an 8-disk array actually not a good idea?

I'm seeing suggestions that 6 or 10 disks is preferred, but I am also seeing this mentioned as specifically applying to Advanced Format 4K disks. So I assume this also applies to 512e disks, but what about 512n?
 
Feb 25, 2011
16,992
1,621
126
(Number of disks in an array) - (number of parity disks) should be a power of 2 for best performance.

2^2 + 2 = 6, 2^3 + 2 = 10.

An 8-drive RAIDZ2 will work, but it will give up a few percentage points on the performance side of things.

Don't worry about it. It also has nothing to do with Advanced Format or 4K sectors. As long as your controller cards and base OS support those, you can safely ignore it.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
(Number of disks in an array) - (number of parity disks) should be a power of 2 for best performance.

2^2 + 2 = 6, 2^3 + 2 = 10.

An 8-drive RAIDZ2 will work, but it will give up a few percentage points on the performance side of things.

Don't worry about it. It also has nothing to do with Advanced Format or 4K sectors. As long as your controller cards and base OS support those, you can safely ignore it.

Well in searching I stumbled upon one site that referenced this post:
That advice is not entirely incorrect.

The 'no more than 9 devices in a vdev' advice is outdated and should be nuanced. The problem is that 30 devices in a RAID-Z3 vdev will still have the random I/O performance of a single drive. IOps scale with the number of vdevs, load balanced.

In fact, 10 drives in RAID-Z2 is probably the very best ZFS pool configuration for many home users typically storing large files and want both good protection but good economy/cost as well. RAID-Z2 gives you double parity so pretty good protection at only 20% overhead: 2 parity drives on 10 = 80% usable storage.

Furthermore, some ZFS pool configurations are much better suited towards 4K advanced format drives.

The following ZFS pool configurations are optimal for modern 4K sector harddrives:
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives


The trick is simple: subtract the number of parity drives and you get:
2, 4, 8, 16, 32 ...

This has to do with the recordsize of 128KiB that gets divided over the number of disks. Example for a 3-disk RAID-Z writing 128KiB to the pool:
disk1: 64KiB data (part1)
disk2: 64KiB data (part2)
disk3: 64KiB parity

Each disk now gets 64KiB which is an exact multiple of 4KiB. This means it is efficient and fast. Now compare this with a non-optimal configuration of 4 disks in RAID-Z:
disk1: 42,66KiB data (part1)
disk2: 42,66KiB data (part2)
disk3: 42,66KiB data (part3)
disk4: 42,66KiB parity

Now this is ugly! It will either be downpadded to 42.5KiB or padded toward 43.00KiB, which can vary per disk. Both of these are non optimal for 4KiB sector harddrives. This is because both 42.5K and 43K are not whole multiples of 4K. It needs to be a multiple of 4K to be optimal.

Hope this helps.

Cheers,
sub.mesa ;)
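
That math is easy to sanity-check yourself. A quick sketch (128KiB recordsize and 4KiB sectors, per the post above; the script is just my own illustration):

```python
# Per-data-disk share of a 128KiB record for a few RAID-Z layouts.
# The "optimal" layouts are the ones where the share is a whole multiple of 4KiB.
RECORDSIZE_KIB = 128
SECTOR_KIB = 4

def per_disk_kib(total_disks: int, parity_disks: int) -> float:
    return RECORDSIZE_KIB / (total_disks - parity_disks)

for total, parity in [(3, 1), (4, 1), (6, 2), (8, 2), (10, 2)]:
    share = per_disk_kib(total, parity)
    aligned = (share % SECTOR_KIB) == 0
    print(f"{total}-disk RAID-Z{parity}: {share:6.2f} KiB per data disk "
          f"{'(4K-aligned)' if aligned else '(NOT 4K-aligned)'}")
```

Which is exactly why 6 or 10 disks keeps coming up for RAIDZ2 and my planned 8-disk layout is the odd one out.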
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
So, just to get a better idea of things:

How would a quad-core Xeon E3-1260L v5 compare to the Xeon D-1500 series? Especially the D-1528, a 6-core, 35W TDP model.

Could I reasonably expect a solid quad-core to provide enough oomph for FreeNAS with high-bitrate transcode support, Sophos XG or UTM, AND perhaps have something left over for a third VM to play around with from time to time?

Is a 6-core Xeon more likely to handle that better? I'd like the 8-core Xeon D-1500 models, but I don't think I'm prepared to drop $900-1,000 just on the mobo/CPU package. But I might, might just consider a FlexATX Supermicro board with the D-1537 (8-core, 35W TDP), as it also has an LSI 2116 SAS controller onboard, which removes a cost I'd otherwise incur with any other package.
 
Feb 25, 2011
16,992
1,621
126
So, just to get a better idea of things:

How would a quad-core Xeon E3-1260L v5 compare to the Xeon D-1500 series? Especially the D-1528, a 6-core, 35W TDP model.

Could I reasonably expect a solid quad-core to provide enough oomph for FreeNAS with high-bitrate transcode support, Sophos XG or UTM, AND perhaps have something left over for a third VM to play around with from time to time?

Is a 6-core Xeon more likely to handle that better? I'd like the 8-core Xeon D-1500 models, but I don't think I'm prepared to drop $900-1,000 just on the mobo/CPU package. But I might, might just consider a FlexATX Supermicro board with the D-1537 (8-core, 35W TDP), as it also has an LSI 2116 SAS controller onboard, which removes a cost I'd otherwise incur with any other package.

The higher-clocked quad-core should provide equivalent performance most of the time (4x 2.9GHz cores vs. 6x 2.0GHz cores), including while transcoding. And it will probably idle close to the same (single-digit wattage) anyway, so I wouldn't drool over the lower TDP too much, either.

With hyperthreading, the quad shouldn't have any trouble bouncing between the needs of a half-dozen or fewer VMs. I run 8 VMs on my dual core server, although only one is doing anything "oomph-ey" (PLEX transcoding) and I've never noticed an issue. (A Haswell Xeon E3 is on my upgrade-when-used-ones-are-cheap list, but only because... well... I want one, dammit.)
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
The higher-clocked quad-core should provide equivalent performance most of the time (4x 2.9GHz cores vs. 6x 2.0GHz cores), including while transcoding. And it will probably idle close to the same (single-digit wattage) anyway, so I wouldn't drool over the lower TDP too much, either.

With hyperthreading, the quad shouldn't have any trouble bouncing between the needs of a half-dozen or fewer VMs. I run 8 VMs on my dual core server, although only one is doing anything "oomph-ey" (PLEX transcoding) and I've never noticed an issue. (A Haswell Xeon E3 is on my upgrade-when-used-ones-are-cheap list, but only because... well... I want one, dammit.)

So with ESXi, you can over-provision the number of vCPUs relative to the number of physical cores/threads? For instance, say you make 6 VMs, each with 2-4 vCPUs. That would be 12-24 vCPUs assigned at once (if all are running).

Of course, not all of them are likely to be pushing max load at the same time, so I get that. I just for whatever reason had it in my mind that I would be making hard assignments: this VM gets access to 3 cores, that VM gets 2, and there would be 1 left to assign.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
And I don't think I'm ready for this yet, but VMUG Advantage seems to be a pretty damn awesome program. $200/year gets you discounted training materials and 365-day, full-featured evaluation licenses for non-business use. Re-up each year and get the latest version.

You can get vCenter Server Standard and vSphere with Operations Management Enterprise Plus... all the features! Not a bad deal at all, especially if you are truly intent on learning with a lab environment. Depending on how things go in the future, I may find myself interested enough to make that investment.

https://www.vmug.com/evalexperience
 
Feb 25, 2011
16,992
1,621
126
So with ESXi, you can over-provision the number of vCPUs relative to the number of physical cores/threads? For instance, say you make 6 VMs, each with 2-4 vCPUs. That would be 12-24 vCPUs assigned at once (if all are running).

Of course, not all of them are likely to be pushing max load at the same time, so I get that. I just for whatever reason had it in my mind that I would be making hard assignments: this VM gets access to 3 cores, that VM gets 2, and there would be 1 left to assign.
That is correct. What you're describing is called "oversubscription" and every hypervisor I'm aware of* allows it. Sure, you're screwed if all your applications demand attention at once, but how often does that happen? ():)

*cue the comment from the guy who knows one that doesn't and a snide remark about my awareness. :D
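
To put numbers on your example (a 4-core/8-thread host and six VMs at 2-4 vCPUs each; the exact per-VM split below is made up):

```python
# Oversubscription ratio: total assigned vCPUs vs. physical threads on the host.
host_threads = 8                 # 4 cores + hyperthreading
vm_vcpus = [2, 2, 3, 3, 4, 4]    # hypothetical vCPU counts for the 6 VMs

assigned = sum(vm_vcpus)
print(f"{assigned} vCPUs across {len(vm_vcpus)} VMs on {host_threads} threads "
      f"-> {assigned / host_threads:.1f}:1 oversubscription")
# The hypervisor only schedules vCPUs that actually want CPU time, so a ratio
# like this is routine as long as the VMs aren't all busy at once.
```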
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
That is correct. What you're describing is called "oversubscription" and every hypervisor I'm aware of* allows it. Sure, you're screwed if all your applications demand attention at once, but how often does that happen? ():)

*cue the comment from the guy who knows one that doesn't and a snide remark about my awareness. :D

Well, let's say I have the router getting stressed fairly well performing AV, my download is maxed out with large files, those files are going to the NAS, and I am also concurrently watching something from the NAS storage that is being transcoded by Plex for subtitles (most of the time I should rarely ever need transcoding for downscaling or anything; I don't watch on mobile or tablets - I need my big screen! :p).

Think that's going to make the system crawl? With the quad-core mentioned previously? And with the 6-core Xeon D?

I figure that right there would be the heaviest use at any given time, and it likely won't happen often. But I could set it up to grab and stream through the NAS, so that's similar: full download speed plus concurrently watching it.

Just not sure what kind of CPU usage we're talking about here.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
I'd pick the E3 over the Xeon D. I like a higher clock rate, to be honest.

As far as vCPUs go, I doubt you'll ever stress that E3 enough to make it break a sweat. You should start all of your VMs with a single vCPU anyway; you can adjust up if you need more. I doubt you will need more than one vCPU for a router VM. Perhaps two vCPUs for your NAS if it is transcoding for Plex, or one vCPU if you aren't doing any transcoding. Throw in a couple more VMs of Linux, or maybe Server 2012 as a DC to play around on, each with a single vCPU, and your processor will sit idle most of the time.

The limiting factor in how many VMs you can run on a host is, 99% of the time, memory and not CPU.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
I'd pick the E3 over the Xeon D. I like a higher clock rate, to be honest.

As far as vCPUs go, I doubt you'll ever stress that E3 enough to make it break a sweat. You should start all of your VMs with a single vCPU anyway; you can adjust up if you need more. I doubt you will need more than one vCPU for a router VM. Perhaps two vCPUs for your NAS if it is transcoding for Plex, or one vCPU if you aren't doing any transcoding. Throw in a couple more VMs of Linux, or maybe Server 2012 as a DC to play around on, each with a single vCPU, and your processor will sit idle most of the time.

The limiting factor in how many VMs you can run on a host is, 99% of the time, memory and not CPU.

Got ya - yeah when it comes time I'll just have to play around with the vCPU allocation and see if things get bogged down during "stress testing" with what I feel may be the most stressed moments this system would face.

I just don't know about the Xeon E3-1200 v5s... they are not cheap, at least not once you add in a C236 motherboard with enough functionality.
A nice one from Supermicro, the X11SSZ-TLN4F, looks wonderful and has the X550 10GbE connectivity that I'm now basically considering a must-have for future-proofing. Built-in is significantly cheaper than adding it with an Intel expansion card. All the other boards have just two gigabit ports, which wouldn't be enough, so I'd have to add at least two more via expansion.

And I'll need an LSI HBA, as the only C236 board with an onboard LSI controller uses an LSI 3008, which is not well supported at all in FreeBSD at the moment.

Just for CPU, motherboard, and HBA, that's already looking at about $800-900.

The X10SDV-6C-TLN4F, with the D-1528 (6c/12t), plus an HBA expansion card, would likely cost about $750.

The X10SDV-7TP4F, with the D-1537 (8c/16t, 1.7GHz base, 2.3GHz turbo) and an onboard LSI 2116, should be about $900. The only downside to that model is the use of SFP+ for 10Gb connectivity, which in some ways is a perk due to lower power and lower latency compared to 10GBASE-T, but it still requires an added expense to connect it to something.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Got ya - yeah when it comes time I'll just have to play around with the vCPU allocation and see if things get bogged down during "stress testing" with what I feel may be the most stressed moments this system would face.

I just don't know about the Xeon E3-1200 v5s... they are not cheap, at least not once you add in a C236 motherboard with enough functionality.
A nice one from Supermicro, the X11SSZ-TLN4F, looks wonderful and has the X550 10GbE connectivity that I'm now basically considering a must-have for future-proofing. Built-in is significantly cheaper than adding it with an Intel expansion card. All the other boards have just two gigabit ports, which wouldn't be enough, so I'd have to add at least two more via expansion.

And I'll need an LSI HBA, as the only C236 board with an onboard LSI controller uses an LSI 3008, which is not well supported at all in FreeBSD at the moment.

Just for CPU, motherboard, and HBA, that's already looking at about $800-900.

The X10SDV-6C-TLN4F, with the D-1528 (6c/12t), plus an HBA expansion card, would likely cost about $750.

The X10SDV-7TP4F, with the D-1537 (8c/16t, 1.7GHz base, 2.3GHz turbo) and an onboard LSI 2116, should be about $900. The only downside to that model is the use of SFP+ for 10Gb connectivity, which in some ways is a perk due to lower power and lower latency compared to 10GBASE-T, but it still requires an added expense to connect it to something.

This is where you start looking at the refurb market if you don't want to spend a lot of cash. Enterprise hardware isn't cheap, unfortunately. You are also looking at v5, which is Skylake and basically the newest, most expensive option out right now. I'd look at v4 or v3 (Broadwell/Haswell). I just upgraded an older host in my business to an E3-1230v3. I was using the 1220 variant, but it lacked HT, so I upgraded. The 1230 with its 8 threads is now almost overkill for me. If it were me, I'd buy a yesteryear-model processor used on eBay.

Unless you are building out a $10K+ SAN, you don't need 10GbE. It just isn't needed in a home networking environment. One gigabit is fine.
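
For a rough sense of scale on the 10GbE question (ballpark figures I'm assuming, not benchmarks):

```python
# Ballpark, assumed throughput figures to put GbE vs. 10GbE in perspective.
GBE_MB_S = 117        # ~usable MB/s over 1GbE
TEN_GBE_MB_S = 1170   # ~usable MB/s over 10GbE
HDD_SEQ_MB_S = 160    # single 7200rpm disk, sequential (rough)
REMUX_MB_S = 40 / 8   # ~40 Mbit/s Blu-ray remux stream -> 5 MB/s

print(f"Blu-ray remux stream: {REMUX_MB_S:.0f} MB/s "
      f"(GbE has ~{GBE_MB_S / REMUX_MB_S:.0f}x headroom)")
print(f"Single HDD sequential: {HDD_SEQ_MB_S} MB/s (already more than GbE can carry)")
print(f"10GbE (~{TEN_GBE_MB_S} MB/s) only really matters for big bulk copies to/from the array.")
```

Streaming and everyday use won't come close to saturating gigabit; it's only the big bulk transfers to and from the array where 10GbE earns its keep.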