Advice for building a small virtual server host?

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
I've run several virtual machines (DNS/mail, web, file server) in an ancient VMware Server installation at home for a looong time. I've updated the guest OSes occasionally over the years, but never saw a compelling reason to tear apart the underlying host. But the current guest OSes are out of long-term support now and the hardware is ancient, so I figure it's finally time to start fresh. However, I haven't followed the tech scene in general lately, and certainly not the virtualization niche, so I have little idea of what's out there or what best practices are these days. I'm hoping some knowledgeable folks here can help.

My demands on the system are minimal. I need a VM to handle DNS and email for my little family domain. I've always run a web server VM too, mainly for webmail, but that never gets used now that we all have smartphones that can just connect to IMAP directly. I'll probably still keep a web server running for the rare times that I want to serve up a file without relying on a cloud service, but it will mostly just idle. And I need a VM to handle file service for our media library and documents. The VMs will all run some flavor of Linux (likely Debian).

My current setup, apart from being ancient and out of support, has been great for me. The host OS is Linux and boots to a pair of mechanical drives in a kernel-based RAID 1 array. All of the guest OSes and their data live on that array, except for the file storage data. That lives on a single, large mechanical drive. All the data (email, web pages, media, etc.) backs up once a week to an eSATA external drive via rsync. The configuration of the various services almost never changes, so I just make an occasional tarball of each VM and back those up to the external drive. It's a lightweight backup and recovery solution, but it's been good enough for me. I do also have a virtual server out there on the internet to act as a secondary MX and DNS server in case of connection/hardware problems at home. The server itself is headless and managed via either SSH or VMware's web-based console. I have no problem with command-line tools.
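For concreteness, the weekly job boils down to something like this. The paths are placeholders; the sketch below uses temp directories and a fake VM directory so it's safe to run anywhere:

```shell
#!/bin/sh
# Sketch of the weekly backup scheme: tarball the VM's directory,
# then rsync everything to the external drive. Real paths would be
# the VM store and the eSATA mount point; these are stand-ins.
set -e

SRC=$(mktemp -d)    # stands in for the VM/data directory
DEST=$(mktemp -d)   # stands in for the mounted external drive

# Fake a VM directory with a config file in it.
mkdir -p "$SRC/vm-dns"
echo "hostname=dns" > "$SRC/vm-dns/config"

# 1. Snapshot the VM directory as a dated tarball.
tar -czf "$SRC/vm-dns-$(date +%F).tar.gz" -C "$SRC" vm-dns

# 2. Mirror everything to the backup drive; --delete keeps the copy
#    exact. Fall back to cp -a if rsync isn't installed.
if command -v rsync >/dev/null 2>&1; then
    rsync -a --delete "$SRC"/ "$DEST"/
else
    cp -a "$SRC"/. "$DEST"/
fi

ls "$DEST"
```

In the real job, $SRC and $DEST would be fixed paths and the script would run from cron.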

I don't have a specific budget in mind for this project. I can spend whatever needs to be spent to have an effective and trouble-free solution. But clearly my needs are minimal, so I don't want to waste money on power I don't need. So, questions...

1) Virtualization platform - That's the big choice. I've browsed a bit, but don't know all the pros and cons. Linux's KVM looks capable enough for my minimal needs. I'd prefer to stick with a solution that runs on top of Linux so that I can take advantage of the Linux networking and backup tools that I already know. Thoughts?

2) CPU - With very little work to be done, my main concern is low power operation. Probably I want something with 4 cores. What's cheap and good enough here?

3) Storage and Reliability - Way back when I built the current host, SSDs were still too expensive and small to consider, hence the kernel-based RAID 1 setup. But I've never been entirely comfortable with that - making that soft-RAID array bootable is awkward, though I have recovery procedures documented very carefully (on paper) in case of a problem. With SSDs having become more reliable, could I get away with a simpler single-drive setup (apart from media and documents, which would still live on a big mechanical drive)? Uptime is not tremendously important - if a drive died, the host could stay down for a day or two. Clearly, some kind of real hardware RAID solution would offer more ease of use and reliability, but it just doesn't seem worth the cost to me, unless decent RAID cards are a lot cheaper now than they used to be. If I do go with SSDs, are there any particular models that would be more suitable for always-on server use?

Thanks for any wisdom you all can provide. And if there are other questions I should be considering, please send a clue my way.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Looking around at RAID cards a bit... this Areca isn't too outrageous and might serve my needs well enough. It's old and only supports SATA II, but I can't imagine it would matter for me. $150 to have a simple hardware RAID 1 array that just plain works... I could probably get behind that.
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
If you prefer to run your VMs on top of Linux, then I'd just use Linux containers. Super fast and lightweight.

But I really don't see a need to run virtualized unless you just *want* to do that. If you can run everything you want from a Linux base OS I'd just set it up that way. You're only talking DNS, email, and maybe a web server for a very lightly used family setup.

I'd avoid hardware-based RAID here for a simple home setup. A good hardware RAID card will be north of $300. Everything cheaper is fake RAID, like what comes on motherboards nowadays.

I run a physical Plex server at home. My root drive is a single Samsung SSD running Ubuntu. I back that up via Clonezilla, so if it ever craps out, I can slap another SSD in, restore the clone to the new SSD, and be on my way. My media is held on two 6TB drives managed by MD RAID (RAID 1). The media backs up to an external 6TB drive via rsync once a week.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Thanks for the reply.

If you prefer to run your VMs on top of Linux, then I'd just use Linux containers. Super fast and lightweight.
I'd never heard of containers before. Interesting. I am indeed happy with an all-Linux solution - I can't imagine needing to run a Windows or BSD kernel for any reason. I'll look into it some more. Thanks.

But I really don't see a need to run virtualized unless you just *want* to do that. If you can run everything you want from a Linux base OS I'd just set it up that way.
The main reason I want to stick with virtualization is keeping a split between public and private IP space. My email, DNS, etc. run on a /29 public IP block, but the file server needs to be accessible on the private network. If I put all the services on one real machine, then I have a single host running potentially vulnerable public services with access to the internal network. While I have no reason to believe that bad guys really care about my crappy little network, that's still too loose a security situation for me to be comfortable with. My current virtual host has an external, public address and a physical network connection to the internal switch with no IP address assigned to it. Only the virtual file server has an actual address on that cable. And the virtual host offers no public services beyond SSH. It's not bulletproof, but I think that's a safer design.
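For what it's worth, that same split maps cleanly onto a container host: two bridges, one per physical NIC, with no host address on the internal one. A sketch assuming Debian's ifupdown and bridge-utils, with made-up interface names and documentation addresses:

```
# /etc/network/interfaces (sketch; names and addresses are made up)
# br-ext carries the public /29; the host and the public-facing
# containers attach here.
auto br-ext
iface br-ext inet static
    bridge_ports eth0
    address 203.0.113.10
    netmask 255.255.255.248
    gateway 203.0.113.9

# br-int faces the internal switch. No IP on the host side;
# only the file-server container gets an address on this bridge.
auto br-int
iface br-int inet manual
    bridge_ports eth1
```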

I'd avoid hardware-based RAID here for a simple home setup. A good hardware RAID card will be north of $300. Everything cheaper is fake RAID, like what comes on motherboards nowadays.
The Areca card I linked is "real" RAID. Whether it's "good" or not is unclear to me - reviews are mixed, but there aren't many thorough ones out there. With inexpensive hardware like that, it's often hard to tell whether reviewers are using it properly in the first place. It looks like there are a couple of other similarly-priced cards like the Areca out there - usually only 2 SATA ports and not blazing fast, but I have no need for top speed.

I run a physical Plex server at home. My root drive is a single Samsung SSD running Ubuntu. I back that up via Clonezilla, so if it ever craps out, I can slap another SSD in, restore the clone to the new SSD, and be on my way.
Yeah, I may go with a recovery solution like that. It's not that big a deal if the server goes down for a day or two while I buy a replacement disk. I'm still undecided.
 

frowertr

Golden Member
Apr 17, 2010
1,371
41
91
That's about as limited a RAID card as you can get. Only two drives? Not much of an upgrade path there.

I'd only suggest hardware RAID if you want to run a bare metal hypervisor (e.g. ESXi, Xen, etc...)

If you want to run your VMs out of Linux (Type 2 hypervisor), just use MD. On modern hardware, it's going to be faster than any RAID card you can buy, and you don't have to mess with drivers, hardware compatibility, etc. Plus, you have nearly infinite upgrade paths with MD when the time comes to add or remove arrays and drives.

You could set up two SSDs in RAID 1 to host your root/boot (and containers), while an additional two mechanical drives in RAID 1 hold your media. That, along with backups, would make it pretty bulletproof.
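For the record, that two-mirror layout is only a few commands with mdadm. A sketch with example device names (run as root; not something to paste blindly):

```shell
# Two SSDs mirror root/boot, two spinners mirror the media.
# Device names here are examples only.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

mkfs.ext4 /dev/md0          # root filesystem
mkfs.ext4 /dev/md1          # media filesystem

# Persist the arrays so they assemble at boot (Debian/Ubuntu paths).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Health check at any time:
cat /proc/mdstat
```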
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
1) Virtualization platform - That's the big choice. I've browsed a bit, but don't know all the pros and cons. Linux's KVM looks capable enough for my minimal needs. I'd prefer to stick with a solution that runs on top of Linux so that I can take advantage of the Linux networking and backup tools that I already know. Thoughts?
KVM is great. If everything is Linux and you're OK with all the guests sharing the same kernel, you might consider containers, as frowertr mentioned above! There are a couple of different types, but OpenVZ and LXC are big ones.

If you want to run your VMs out of Linux (Type 2 hypervisor), just use MD. On modern hardware, it's going to be faster than any RAID card you can buy, and you don't have to mess with drivers, hardware compatibility, etc. Plus, you have nearly infinite upgrade paths with MD when the time comes to add or remove arrays and drives.

Totally agree.

Ignore hardware RAID. Not worth the money. Linux software RAID (mdadm + LVM) is excellent, easy to find documentation on, and hardware-agnostic. You should be able to swap disks into a new motherboard transparently if you ever have a problem, and there is no RAID card as an additional point of failure. If you need SAS or A TON of disks, an HBA is fine, but I'd still do any RAID/volume management with mdadm and LVM. Or ZFS on Linux if it is well supported by your distro (for reference, it should be baked into Ubuntu 16.04).
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
I would prefer to spend less than $1000, but my budget is flexible - I'll spend what's necessary to have a system that will work reliably for a long time. My current virtual host machine has been running 24/7 for almost 10 years now. But I don't want to waste money on hardware I don't need and I know the machine doesn't really do all that much. I am planning to build this machine entirely from new parts, short of maybe some cables or a CD drive. So factor a case and PSU into the above also.

I'll probably stick with MD as advised. Expandability doesn't matter much to me, but the minor hassle of setting up and documenting a bootable MD array seems well worth saving the $150-$200+ a RAID card would run me.
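One note on the bootable-MD hassle: most of it comes down to making sure GRUB lives on both members of the mirror so either disk can boot alone. A sketch with example device names (run as root):

```shell
# Install the bootloader to both members of the RAID 1 pair, so the
# box still boots if either disk dies. Device names are examples.
grub-install /dev/sda
grub-install /dev/sdb
update-grub

# Worth documenting a degraded-boot test too: the array should
# assemble and boot with one member missing.
mdadm --detail /dev/md0
```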

Containers look perfect for me. I think I should be able to make my network and backup scheme work with them. Just need to play around a bit.

Thoughts on a CPU/mobo combo? I see a number of low-power server boards out there, but it's not obvious what would be best in my case. This machine won't do a lot, but it will need to do a bit more than a barebones NAS would.

For storage, I'll likely go with an updated version of my current setup - a RAID 1 array of two small mechanical drives to hold the OSes and a single large mechanical drive to hold the file server data. That will all back up to another big drive in an external enclosure once or twice a week.
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
OP, do you have a preferred distro? I was reading about the current state of Linux containers, and it seems like LXD/LXC is the way to go if you're on Ubuntu or Debian; it looks to be baked into Ubuntu 16.04, which should launch this week. OpenVZ seems to be popular for Gentoo and Funtoo.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
I use a Haswell Celeron on a cheap motherboard with 16GB of RAM, running Windows 10 Pro with StableBit DrivePool and 3 Hyper-V VMs (Plex, torrents, etc.) on an ARC 100 240GB SSD. It runs pretty great, but the aforementioned Linux-based solutions are great too. At minimum, get an i3 for Plex transcodes; Celerons aren't suited to transcoding in Plex since the program doesn't use Quick Sync. Stick with Ubuntu and software RAID or ZFS if you don't want to use Windows.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
You could set up two SSDs in RAID 1 to host your root/boot (and containers), while an additional two mechanical drives in RAID 1 hold your media. That, along with backups, would make it pretty bulletproof.

:thumbsup: This is the best advice for a bulletproof home setup: RAID 1 mirrors all the way for fault tolerance, just like nature does it, 2 kidneys, 2 eyes, etc.
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
Rough idea of a good start:

PCPartPicker part list: http://pcpartpicker.com/p/M9RBf7
Price breakdown by merchant: http://pcpartpicker.com/p/M9RBf7/by_merchant/

CPU: Intel Core i5-6400T 2.2GHz Quad-Core OEM/Tray Processor ($149.00 @ Amazon)
CPU Cooler: Cooler Master Hyper T2 54.8 CFM Sleeve Bearing CPU Cooler ($11.99 @ Newegg)
Motherboard: Gigabyte GA-B150M-DS3H Micro ATX LGA1151 Motherboard ($66.88 @ OutletPC)
Memory: G.Skill Ripjaws 4 series 16GB (2 x 8GB) DDR4-2133 Memory ($54.99 @ Newegg)
Storage: Crucial BX200 480GB 2.5" Solid State Drive ($109.99 @ Newegg)
Storage: Crucial BX200 480GB 2.5" Solid State Drive ($109.99 @ Newegg)
Case: Silverstone PS09B MicroATX Mid Tower Case ($38.99 @ Directron)
Power Supply: XFX XT 400W 80+ Bronze Certified ATX Power Supply ($30.98 @ Newegg)
Total: $572.81
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-18 15:06 EDT-0400

Could throw in two more SSDs and do a RAID 10 or other. Definitely would be a very quick, low power rig.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Forgive the thread necro - I never got around to rebuilding the server last year but I really do want to get to it now. All of the considerations listed above are basically unchanged, only that I'm keeping more of an eye towards Plex or some similar video streaming solution. My two kids are approaching the age where they'll have their own devices and may want to watch videos from our library, so some kind of centralized video storage solution is attractive going forward. There would never be more than 3 streams running simultaneously and even that many seems very unlikely.

Having explored the various server CPU and mobo options, I don't think the management, stability, and power-saving features are worth several hundred extra dollars of hardware to me in a two-bit home server like this one. But feel free to try and convince me otherwise.

With that in mind, how does this look for the core of a system?

CPU: Intel Core i3-7100 Kaby Lake - $120 - This has a Passmark score of ~6200 and the Plex guidelines I've seen recommend 2000 for each stream if you need any transcoding. So this seems sufficient to me without spending the additional money and power for an i5.

Motherboard: Asus H170M-Plus/CSM - $100 - I would do a 200-series board, but they were just released and I can't find any clear indication of where Linux support is at. Considering I'll use Debian, which tends not to have a cutting-edge kernel, sticking to the 100-series seems safer. I'm an Asus loyalist and have always been happy with their boards, but I could be convinced to use something else.

RAM: 8GB (2x4GB) G.Skill Ripjaws DDR4 2400 - $60 - 16GB seems like overkill? I'm running happily on 2GB right now.

SSDs: 2x SanDisk SSD PLUS 2.5" 120GB SATA III MLC Internal Solid State Drive - 2x $45 = $90 - This is the El Cheapo option, but I'm not sure I need anything more than this. There's no way I'll ever use 120 gigs on the system drive - I only use 30 gigs now. A higher-end SSD like a Samsung 850 Evo runs about double the price, but the system drives aren't doing any heavy I/O work. Would a more expensive drive be any more reliable, enough to be worth the premium if I'm already mirroring? Not sure... feel free to comment.

Storage Drives: 2x WD Red 4TB - 2x $145 = $290 - One drive installed in the system and the other in an enclosure taking weekly rsync backups. I might spring for a third drive to run a mirror + backup, but I've lived with a single drive + backup setup for the last 10 years just fine. We don't add content to the storage drive very often, so going a week back to the rsync backup isn't that scary. And with only two drives to purchase, it makes an eventual capacity upgrade easier to swallow later on. Clearly I could spend more money for 6 or 8 TB drives, but we don't have a huge video collection and the price difference is significant. I think 4TB of storage should take us a long time. Right now we don't have any video and we're using about 400GB.

Other hardware (PSU, case, fans, enclosure, etc.) may be reused from parts I have around or else purchased new, but I don't think those should impact functionality very much.

Comments and suggestions very much appreciated.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81

I would get a Haswell build, mainly for the mature BIOS and stable support in the Linux kernel, along with Windows 7 support. In general I avoid the latest hardware with Linux distros, especially Debian and CentOS. Get a motherboard with an Intel network adapter and a cheap 250GB MLC SSD to run the basic NAS OS and virtual machines; it's no problem if it's an older model. Hitachi HGST drives are better than WD Reds; look at the HGST 4TB Deskstar NAS.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Going back to Haswell seems awfully conservative to me. Can you cite any specific and widespread evidence of problems with Skylake and 100-series chipsets on desktop hardware? There was the flare-up prompted by Garrett's comments on power management almost a year ago, but that was specific to mobile hardware. I see plenty of reports of Skylake running well on 170 chipsets, as well as reports that the Kaby Lake mobile chips released late in 2016 have improved the mobile power management issues that Skylake had.

Windows 7 support is irrelevant to me. I may well put an Intel NIC in - I might even have one lying around. I'll look into the Hitachi drives.
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,256
3,855
75
I have a Skylake build on Ubuntu (Xubuntu) (see my sig), and it's worked quite well for me. You probably do want Ubuntu 16.04 or later. And I had to turn off HW acceleration in Firefox, if that matters, while I was using the onboard graphics.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
I'll be running Debian, so I know I'll want to get the Jessie backport for the 4.x kernel. Between that, installing the root onto a mirror, and getting containers set up, I know configuration will require some effort. I'm not going into this expecting it to all just work. But I know my way around kernel, module, and initrd configuration, and I know how to use the Debian kernel tools if necessary.
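For anyone finding this later, pulling the newer kernel and LXC from jessie-backports is only a couple of commands (run as root; suite name per the Debian backports docs):

```shell
# Enable the jessie-backports suite, then install the backported
# kernel and LXC from it. The -t flag pins the install to backports.
echo "deb http://ftp.debian.org/debian jessie-backports main" \
    > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get -t jessie-backports install linux-image-amd64 lxc
```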

The machine won't run X. It will only be hooked up to a display long enough for me to get the basic hardware running. After that, I'll manage it via ssh.
 

grimpr

Golden Member
Aug 21, 2007
1,095
7
81
You may consider running something like Unraid, which is a NAS with hypervisor ability, or run a bare-metal KVM hypervisor like Proxmox on the SSD and run Debian or OpenMediaVault as a NAS virtual machine. Unraid is the best for SOHO usage and just plain works. Proxmox and OpenMediaVault are free, both based on Debian Jessie, very stable, and run great; performance is great too using VirtIO and SCSI under KVM. Proxmox also supports TurnKey Linux templates in LXC containers.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
You may consider running something like Unraid, which is a NAS with hypervisor ability, or run a bare-metal KVM hypervisor like Proxmox on the SSD and run Debian or OpenMediaVault as a NAS virtual machine.
Can you elaborate on the advantages of doing something like this? I'm not dismissing the suggestion, I just don't know enough about the software involved. Right now I'm just serving files up via Samba on one of the three virtual machines. It's simplistic but also simple to set up and manage. Presumably a NAS solution would offer better throughput to the file storage for other clients on the network? But then if I'm serving video via Plex, the Plex server has to mediate that storage access anyway. It seems simpler just to give the Plex server direct access to the storage. But I may not be understanding the suggestion.
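For context, the Samba side of my setup really is tiny; a share stanza along these lines (hypothetical path and group names) is about all the file-server guest carries beyond the defaults:

```
# smb.conf share for the media library (names are examples)
[media]
    path = /srv/media
    read only = yes
    guest ok = no
    valid users = @family
```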
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Or is the suggestion to just set up a NAS and ditch Plex completely? That's a possibility, I suppose, though I like some of the organization and management features that Plex offers, both in the free and paid versions. Especially if my kids want to watch a movie on a tablet, I'm thinking that's going to be easier to manage with Plex.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
I looked into Proxmox and Docker. Neat projects, but I think I'll stick with plain LXC 2.0. I don't have heavy management needs and don't need to move containers from one machine to another. I just need 2-3 lightweight virtual machines that will rarely be modified once they're up and running, and plain LXC seems fine for that. The main advantages of virtualization for me are the internal/external network separation on a single box and the ease of getting a new server image installed and mostly configured while leaving the old one running.
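For reference, the plain-LXC workflow I'm describing is only a handful of commands (run as root; container names are just examples):

```shell
# Create a Debian container from the download template, start it,
# and get a shell inside it.
lxc-create -n mail -t download -- -d debian -r jessie -a amd64
lxc-start -n mail
lxc-attach -n mail

# The "build the new server alongside the old" trick is just a
# second container on the same host:
lxc-create -n mail-new -t download -- -d debian -r jessie -a amd64
lxc-ls -f    # list containers and their state
```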

I went ahead and ordered the hardware roughly as I outlined above, except for swapping in a Hitachi drive for the main file storage. I'll stick with the cooler, slower spinning WD Red for the backup drive since it will be running in an enclosure. The exact software, storage, and virtualization scheme are all subject to change. The old server is so old that I don't need any of its parts, so I can keep it online and take all the time I like setting up the new one and experimenting. Probably it will be at least a month before I have the new one set up to my liking and am ready to swap it in. I'll report back if anything interesting happens.

Thanks for the advice everyone!
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Should anyone be following this, a post-build report...

At this point, I've got everything important configured - the old email/DNS server and the file/Plex server are running as containers on the new box. Overall, it all looks great and I have no regrets about hardware or configuration. Containers took a little getting used to only because I had never used them before, but running a Debian Jessie host with backported kernel 4.8 and LXC 2.0 and straight Jessie in the containers presented no serious problems.
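In case it helps anyone, wiring a container to a specific bridge is just a few lines in its config file. A sketch with example names (LXC 2.0 uses the lxc.network.* keys):

```
# /var/lib/lxc/mail/config -- network section (sketch; bridge and
# container names are examples). The mail container attaches to the
# public-facing bridge; the file server would use the internal
# bridge instead, keeping the public/private split.
lxc.network.type = veth
lxc.network.link = br-ext
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
```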

The hardware I chose seems a reasonable fit for my demands. When the Plex server is transcoding 1080p video, top reports about 150% CPU usage on the Kaby Lake i3-7100. From what I can tell, this is roughly equivalent to two of the four virtual cores (or one physical core), though the technical details get beyond me quickly. I plan to upgrade my network infrastructure and main TV player so that I can Direct Play/Stream 1080p apart from cases where I need to burn in subtitles. Once that's done, the server should easily be able to handle the 1-2, occasionally 3, concurrent streams I was aiming for. In the meantime, it can handle transcoding to the single TV without breaking a sweat, and it will be a little while before my kids start streaming video on tablets anyway. Power usage is a little under 25W at idle. The 3 PWM-controlled Noctua fans in the box are inaudible at any practical distance.

Overall, I'm loving the server for the ~$700 cost.
 