Server build for a small business - primarily a VM host

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
I'm replacing a dated server with a custom build and I'm looking for opinions about my component choices.

Relevant information

  1. Host/VM Configuration
    • Host - vSphere Hypervisor (free version)
    • VM0 (SBS 2011): DC + file server, AD, DNS, etc.
    • VM1 (Ubuntu): API server (pulling this off of EC2)
    • VM2 (Ubuntu): web server (pulling this off of EC2)
    • VM3 (W10 Pro): app server
    • VM4 (Ubuntu): API dev server
    • VM5 (Ubuntu): web dev server
    • VM6 (Ubuntu): MySQL server
    • VM7 (Ubuntu): Redis server
    • VM8-12: I would like to have enough capacity to add at least 5 more VMs
  2. Connectivity & Traffic
    • The office connection is 100/100 fiber
    • The internal network has 15-20 users running W10 Pro on desktops and laptops
    • The API server is used to feed internal W10 apps and external mobile apps
    • The web server will be used for the company's external WordPress site
    • API + Web traffic will be fairly low (a handful of requests per second)
  3. The budget is approximately $4000
  4. The current server will be used as a backup DC + file server

In a nutshell, I know I don't need a ton of computing power, but I don't want to do this again anytime soon, so I'd like to plan for light expansion.

My general thoughts are:
  • Dual-socket Xeon E5-2630 v4 (10 cores @ 2.2 GHz, which I may OC just a little)
  • OBR10 (one big RAID 10) w/ six Western Digital SE 2 TB drives
  • 128 GB DDR4-2133 (4x16 GB per CPU)

Questions/Discussion
  • I may leave one of the sockets empty for now and then add a second CPU later if the need arises.
  • Should I switch the RAM from 4x16 to 8x8 for each CPU? I don't want to configure the memory incorrectly and end up with a bottleneck.
  • Is OBR10 preferable to a RAID 1 SSD setup for the OS and RAID 10 for the file server storage?
  • Where should the VM images live if the disks are divided and/or should the VM images get their own array?
  • Is there a better OS to use instead of Ubuntu for each of the Linux based VMs?
  • I've never used vSphere Hypervisor and the free version is a bit of a mystery to me. Should I be looking at Hyper-V or one of the other free hypervisors (KVM, Xen, etc.)?

Current Build
  • 1x or 2x Xeon E5-2630 v4 (10 cores @ 2.2 GHz)
  • Noctua NH-L12 heatsink(s)
  • Asus Z10PE-D16 WS motherboard
  • 2x Crucial 64 GB (4x16) Registered DDR4-2133
  • 6x Western Digital SE 2 TB

Your thoughts and opinions will be greatly appreciated.
 
Last edited:
Feb 25, 2011
16,980
1,616
126
Welcome to server-ville. Repeat after me:

"Warranties. Support. Reliability. Warranties. Support. Reliability. Warranties. Support. Reliability. Warranties. Support. Reliability."

Good, now keep chanting that until you throw up.

Done? Good. Now:

1) Don't build your own.
2) Don't overclock.
3) Don't make me smack you with a trout.

As to your questions:

1) You can leave expansion options open, but that will limit how much RAM you can install on most multi-socket boards - with one socket empty, the DIMM slots tied to that socket's memory channels sit unused. Just something to keep in mind.

2) 8x8 will give you quad channel to both CPUs, which is probably better.

3) OBR10 is not preferable since you'd be mixing disk types. OBR10 your data drives, RAID1 your boot volume.

4) Do not divide the disks. OBR10 the data drives, pass that through to the hypervisor as a datastore. VMs get some number of "virtual" disks which exist as files on that system. Don't pass through hardware directly to VMs unless you're really sure you want to do that. (You don't sound sure.)

5) Ubuntu vs. other Linux is a theological question. I've found Ubuntu generally friendlier to n00bs. It will probably come down to documentation - whoever wrote your applications will probably have better, more up to date documentation for their particular distro. Nothing wrong with, say, running your MySQL server on Ubuntu while your API server runs CentOS or something.

6) VMware ESXi (the vSphere Client is the desktop application that manages ESXi) is the market-share leader and is pretty easy to use/admin. Probably the easiest learning curve of the available options. I use it and like it.

Nothing wrong with Hyper-V either.

KVM is an excellent choice if you want to save some money. But I would suggest using CentOS as your bare-metal OS for everything in that case - see my above comment about documentation. (Directions for Red Hat Enterprise will more or less work on CentOS without drama. And RH has gone to a lot of trouble to make their Enterprise Linux product a viable alternative to VMware.)
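If it helps to see how little is involved, here's a rough sketch of standing up one of your Ubuntu guests under KVM on a CentOS 7 host (the ISO path, sizes, names, and the br0 bridge are placeholders, not anything you're required to use):

    # install the KVM/libvirt stack and start the daemon
    sudo yum install -y qemu-kvm libvirt virt-install
    sudo systemctl enable libvirtd
    sudo systemctl start libvirtd

    # carve out one guest: 2 vCPUs, 4 GB RAM, 40 GB thin-provisioned qcow2 disk
    sudo virt-install \
        --name api-server \
        --ram 4096 \
        --vcpus 2 \
        --disk size=40,format=qcow2 \
        --cdrom /var/lib/libvirt/images/ubuntu-16.04-server-amd64.iso \
        --network bridge=br0

After that it's virsh start/shutdown/list to manage the guests, which is what most of the GUI tools are wrapping anyway.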
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Welcome to server-ville. Repeat after me:

"Warranties. Support. Reliability. Warranties. Support. Reliability. Warranties. Support. Reliability. Warranties. Support. Reliability."

Good, now keep chanting that until you throw up.

Done? Good. Now:

1) Don't build your own.
2) Don't overclock.
3) Don't make me smack you with a trout.

lol. Point taken. I figured that would be one of the first responses because it's logical and reasonable. Price is a bit of a problem: the server above will cost $8000 from Dell or $4000 DIY. I completely understand the difference between DIY home computers and DIY servers, but I'm having trouble swallowing the cost difference. I'm not as stuck in the middle as I may sound - time is definitely worth money - but I can't quite convince myself not to give this a try, since spending the money on good components has to count for something.

As to your questions:

1) You can leave expansion options open, but that will limit how much RAM you can install in most multi-socket boards. Just something to keep in mind.

2) 8x8 will give you quad channel to both CPUs, which is probably better.

3) OBR10 is not preferable since you'd be mixing disk types. OBR10 your data drives, RAID1 your boot volume.

4) Do not divide the disks. OBR10 the data drives, pass that through to the hypervisor as a datastore. VMs get some number of "virtual" disks which exist as files on that system. Don't pass through hardware directly to VMs unless you're really sure you want to do that. (You don't sound sure.)

5) Ubuntu vs. other Linux is a theological question. I've found Ubuntu generally friendlier to n00bs. It will probably come down to documentation - whoever wrote your applications will probably have better, more up to date documentation for their particular distro. Nothing wrong with, say, running your MySQL server on Ubuntu while your API server runs CentOS or something.

6) VMware ESXi (the vSphere Client is the desktop application that manages ESXi) is the market-share leader and is pretty easy to use/admin. Probably the easiest learning curve of the available options. I use it and like it.

Nothing wrong with Hyper-V either.

KVM is an excellent choice if you want to save some money. But I would suggest using CentOS as your bare-metal OS for everything in that case - see my above comment about documentation. (Directions for Red Hat Enterprise will more or less work on CentOS without drama. And RH has gone to a lot of trouble to make their Enterprise Linux product a viable alternative to VMware.)

Thanks for the feedback.
  1. Yeah, that's what I was thinking as well. I'll probably populate both sockets if I end up going with a two socket setup.
  2. I couldn't find that written anywhere, but it's the reason I asked the question. 8x8 it is.
  3. Sorry - I didn't phrase my question correctly. I meant is it preferable to use OBR10 for everything OR would it be better/faster to go RAID1 (boot) + RAID10 (data)? In the latter case, I would purchase additional drives for the RAID1.
  4. See above
  5. Yeah, the theology part of this is real and I get that. I was mostly wondering if there's a good technical reason to choose one or the other, but you basically answered that by suggesting CentOS. I was going to ask about CentOS, but I figured someone would mention it if it was a good replacement.
  6. Thanks for the correction. I've never used ESXi, and the introduction slides aren't the best at making the point you highlighted clear. My only question at this point is whether the free version of ESXi will work for me, specifically with regard to backups. I've been googling for backup solutions that work with the free version because I can't afford to license a two-socket system. This is one reason I may go to a single socket with more cores.
 

frowertr

Golden Member
Apr 17, 2010
1,372
41
91
Dave has got you straightened out on the warranties. Sure, it's cheaper to roll your own, but the warranty is well worth the additional price you pay HP/Dell when something breaks (and it always does!).

You need to be booting ESXi off a USB thumb drive. No need to have a RAID 1 boot drive at all. RAID 10 the data drives and off you go.

I've gotten off the Ubuntu bandwagon as of late, but it's fine. CentOS is the gold standard, but it's a bit trickier to learn if you have never worked with *nix.

The problem (as you noted) with free ESXi is that most backup programs won't have access to the backend APIs they need to back up your VMs. The Essentials license ($500) will give you the ability to purchase and use backup software.

You could also go the Hyper-V route. It is free by itself. Xen is also a great candidate.
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
You can get vSphere Essentials for $560. That's enough for three dual socket hosts.

Rather than building a single server, you should be looking at 2-3 small, cheap servers - something Xeon D-based would be reasonable; the 8-core version is surprisingly powerful.

Look at these for example:

https://www.cdw.com/shop/products/S...4T-Xeon-D-1540-0-MB-0-GB/3788710.aspx?pfm=srh

Stretch your budget to get some entry level shared storage. When you need to expand you can add a host, increase your shared storage, etc.

$4k is an absurdly low budget for what you are trying to do, but putting it all in one machine (particularly a self-built one) is just asking for disaster. I would even look at used, relatively recent-generation hardware from eBay before doing what you were thinking of.

Viper GTS
 
Last edited:

jlee

Lifer
Sep 12, 2001
48,518
223
106
You can get vSphere Essentials for $560. That's enough for three dual socket hosts.

Rather than building a single server you need to be looking at 2-3 small, cheap servers - Something Xeon D based would be reasonable, the 8 core version is surprisingly powerful.

Look at these for example:

https://www.cdw.com/shop/products/S...4T-Xeon-D-1540-0-MB-0-GB/3788710.aspx?pfm=srh

Stretch your budget to get some entry level shared storage. When you need to expand you can add a host, increase your shared storage, etc.

$4k is an absurdly low budget for what you are trying to do but putting it all in one machine (particularly a self built one) is just asking for disaster.

Viper GTS

I couldn't agree more. When I saw 'everything on one host', my first thought was how much money would be lost per hour if that machine were to fail.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Dave has got you straightened on the warranties. Sure it's cheaper to roll your own, but warranties are well worth the additional price you pay from HP/Dell when something breaks (it always does!).

You need to be booting ESXi off a USB thumb drive. No need to have a RAID 1 boot drive at all. RAID 10 the data drives and off you go.

I've gotten off the Ubuntu bandwagon as of late but its fine. CentOS is the gold standard but it's a bit more tricky to learn if you have never worked with *nix.

The problem (as you noted) with free ESXi is that most backup programs won't have access to the backend API to allow you to backup your VMs. So this could be a problem. The Essentials License ($500) will give you the ability to purchase and use backup software.

You could also go the Hyper-V route. It is free by itself. Xen is also a great candidate.

Booting off of a thumb drive makes things easier, but is that (I don't know what word to use here...) legit for a server? So I leave the USB drive plugged in all the time and that's it? I hadn't come across vSphere Essentials, but it seems like a great way to go regarding ease of use and support.

I'm well versed in Linux, so I'm fine with CentOS over Ubuntu. Thanks for the tip.
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Booting off of a thumb drive makes things easier, but is that (I don't know what word to use here...) legit for a server? So I leave the USB drive plugged in all the time and that's it? I hadn't come across ESXi Essentials, but it seems like a great way to go regarding ease of use and support.

I'm well versed in Linux, so I'm fine with CentOS over Ubuntu. Thanks for the tip.

Yes, it's legitimate, though in my experience it can be slow (as in ESXi taking 30 minutes to boot) if you don't have a very good USB/SD device. Most servers have an internal USB port/SD slot for doing exactly this, as bare-metal hypervisors don't require much disk space, and devoting a full disk pair to them is pointless since they are nearly stateless. If one fails, your VMs are going to end up on another host anyway.

Viper GTS
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
I couldn't agree more. When I saw 'everything on one host', my first thought was how much money would be lost per hour if that machine were to fail.

I'm not sure I understand this line of thinking quite yet. There are obviously single points of failure in a single system, so I certainly understand the premise of your argument, but is it really that bad? Keep in mind this is a small business with 15-20 people.

A lot of the more likely failure scenarios can be mitigated unless I'm mistaken. I would be using redundant power supplies on separate UPS units that are on separate circuit breakers. RAID 10 should pretty well cover the disks, especially with a hot and a cold spare in the box ready to go. I know I've only touched on a handful of the infinite failure scenarios, but I'm wondering how many more 'big' ones there are barring theft, fire, etc.

Thanks for the opinions and advice. It is much appreciated.

Edit: I just noticed vSphere Essentials covers three physical hosts/six CPUs, so that makes the multi-server scenario much more palatable.
 
Last edited:

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Yes, it's legitimate though in my experience it can be slow (as in ESX taking 30 mins to boot) if you don't have a very good USB/SD device. Most servers will have an internal USB port/SD slot for doing exactly this as bare metal hypervisors do not require much disk space and devoting a full disk pair for them is pointless since they are nearly stateless. If one fails, your VMs are going to end up on another host anyway.

Viper GTS

Ah ha! I saw a picture of an internal SD card slot and it confused the hell out of me. Now it makes much more sense.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
What network infrastructure do you have behind this?

A Cisco ASA 5500 series firewall (I don't know the exact model number at the moment) and a few managed switches. The current DC is the DHCP server among other things, but I'm probably going to move DHCP to the ASA. That's about it at the moment... Oh, and everything is connected over gigabit Ethernet if it matters. I'd love to get a SAN, but I don't think that's in the budget right now.
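For what it's worth, moving DHCP to the ASA should only be a few lines at the config prompt - something along these lines, where the interface name, address range, and DNS server are placeholders rather than our actual layout:

    ! hand out leases on the inside interface and point clients at the DC for DNS
    dhcpd address 192.168.1.50-192.168.1.200 inside
    dhcpd dns 192.168.1.10
    dhcpd domain example.local
    dhcpd lease 86400
    dhcpd enable inside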

There are 14 desktops, 7 laptops, 3 printers, and 1 server. I'd be adding a second (or third, fourth, ??) server at the completion of this project.

If that's not what you were asking, please let me know and I'll try to be more specific.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Is this type of server a bad deal for some reason? I know it's refurbished, but some of my server needs aren't mission critical and the low price is helpful. I could run all of the development VMs on a server like this.

I could get four or five of these servers and then a lot of the concerns mentioned in previous posts would be mitigated. At that point, I could potentially get a SAN as well.
 

jlee

Lifer
Sep 12, 2001
48,518
223
106
A Cisco ASA 5500 series firewall (I don't know the exact model number at the moment) and a few managed switches. The current DC is the DHCP server among other things, but I'm probably going to move DHCP to the ASA. That's about it at the moment... Oh, and everything is connected with gige if it matters. I'd love to get a SAN, but I don't think that's in the budget right now.

There are 14 desktops, 7 laptops, 3 printers, and 1 server. I'd be adding a second (or third, fourth, ??) server at the completion of this project.

If that's not what you were asking, please let me know and I'll try to be more specific.

That's fine - I wanted to make sure everything wasn't running on a consumer-grade router (or switches).

I'm not sure I understand this line of thinking quite yet. There are obviously single points of failure in a single system, so I certainly understand the premise of your argument, but is it really that bad? Keep in mind this is a small business with 15-20 people.

A lot of the more likely failure scenarios can be mitigated unless I'm mistaken. I would be using redundant power supplies on separate UPS units that are on separate circuit breakers. Raid 10 should pretty well cover the disks especially with a hot and cold spare in the box ready to go. I know I've touched on a handful of the infinite failure scenarios, but I'm wondering how many more 'big' ones there are barring theft, fire, etc.

Thanks for the opinions and advice. It is much appreciated.

Edit: I just noticed VSphere Essentials covers three physical devices/six CPUs, so that makes the multi-server scenario much more palatable.

Any incident (hardware failure, update, etc.) that requires a host reboot will take your entire environment offline; and even with multiple hosts, your redundancy options are quite limited without shared storage.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
That's fine - I wanted to make sure everything wasn't running on a consumer-grade router (or switches).

I see. I'm glad I passed the test!

Any incident (hardware failure, update, etc.) that requires a host reboot will take your entire environment offline; and even with multiple hosts, your redundancy options are quite limited without shared storage.

That makes sense. It's sounding more and more like a SAN (or something else if I'm not using the term correctly) is necessary to really harden the network. A combination of smaller servers and a SAN seems feasible with my budget...
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Another point to make while you are looking at configuration options: like most people, you are going way overboard on CPU and not spending nearly enough on memory. This is one host in my 4-node primary cluster:

[attached screenshot: esxhost.PNG]


Those are similar CPUs to the ones you are looking at (dual socket, 10 core/20 thread), albeit 25% higher clocked. Given your budget limitations you should be trading CPU for RAM/disk performance whenever possible.

Viper GTS
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Is this type of server a bad deal for some reason? I know it's refurbished, but some of my server needs aren't mission critical and the low price is helpful. I could run all of the development VMs on a server like this.

I could get four or five of these servers and then a lot of the concerns mentioned in previous posts would be mitigated. At that point, I could potentially get a SAN as well.

The issue with those is they are very, very old and well past the end of support. An Rx10/Rx20 series would be far more desirable, and they're readily available as pulls.

I see. I'm glad I passed the test!



That makes sense. It's sounding more and more like a SAN (or something else if I'm not using the term correctly) is necessary to really harden the network. A combination of smaller servers and a SAN seems feasible with my budget...

You probably want a NAS-type device, not a SAN. I'd go NFS for your datastore for simplicity of use and management.
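To give you an idea of how simple that is, here's roughly what it looks like with a Linux-based NAS exporting over NFS - the paths, names, and addresses are placeholders, not a recommendation for any particular box:

    # on the NAS: export the share to the ESXi hosts' subnet (one line in /etc/exports)
    /volume1/vmstore  10.0.0.0/24(rw,sync,no_root_squash)
    # then run: exportfs -ra to apply it

    # on each ESXi host: mount the export as a datastore
    esxcli storage nfs add --host=10.0.0.5 --share=/volume1/vmstore --volume-name=nfs-vmstore

Every VM then lives as a folder of files on that datastore, which any host can see.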

Viper GTS
 
Last edited:

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Another point to make when you are looking at configuration options. Like most people you are going way overboard on CPU and not nearly enough on memory. This is one host in my 4-node primary cluster:

[attached screenshot: esxhost.PNG]


Those are similar CPUs to the ones you are looking at (dual socket, 10 core/20 thread), albeit 25% higher clocked. Given your budget limitations you should be trading CPU for RAM/disk performance whenever possible.

Viper GTS

I was actually trying to be mindful of that trade-off even though it may not seem that way. I don't think I'll need more than 128 GB of RAM as the current server is functioning (barely...) on 16 GB. The API and web servers have pretty minimal requirements (2 threads, 4 GB of RAM) and Redis will need 16 GB. I was having trouble coming up with a reason to get more, but that doesn't mean there isn't one.

To balance that equation, I looked for a CPU that had enough cores to handle all of the various VMs I could potentially run without breaking the bank. I considered going to a single 12 C/24 T setup to save a little bit of money and complexity.

In terms of disk performance, I wanted to get SSDs, but it makes the price go up quite a bit. My impression was 6x 2 TB drives in RAID10 would be pretty speedy. Is that not the case?
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Don't forget that VMware will happily schedule CPU time on far fewer physical cores than you have provisioned in your VMs. Over-provisioning is what makes virtualization work; if you try to buy a physical core for every vCPU you provision, you will run out of budget really quickly.

Dual quad-core (like an E5620 or better) would probably suit you just fine - 10-12 core CPUs are simply way outside your budget and completely unnecessary for your minimal workload. (Back of the envelope: your whole VM list at two vCPUs apiece is only 20-30 vCPUs, and a light workload like yours can comfortably run at 3-4 vCPUs per physical core.)

Viper GTS
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
Don't forget that VMware will happily schedule CPU time on far fewer physical CPUs than you have provisioned in your VMs. Over-provisioning is what makes virtualization work, if you are trying to buy a core for every one you provision in a VM you will run out of budget really quickly.

Dual quad core (like E5620 or better) would probably suit you just fine, 10-12 core CPUs are simply way outside your budget and completely unnecessary for your minimal workload.

Viper GTS

Ok, that's a good point and it potentially changes a lot. I wasn't necessarily trying to provision a core or thread for every VM, but it's probably over-specced as you are suggesting.

I have a lot to think about obviously. My current train of thought is to get (as you initially suggested) 2 or 3 smaller servers and some type of NAS. In this scenario, is it even necessary to have local storage in the servers? I don't really know how that would work yet (network booting from a NAS), but it seems like that's what has been suggested so far.

I've used NFS for many years as an electrical engineer doing simulations and design work, but I've suddenly realized I don't understand how it works. I'll look into that as it will probably answer a lot of my questions.
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Your local storage will just be for VMware to boot off of and store its configuration. After that, everything is done via the network storage. You could in theory boot from an iSCSI target, but that's a lot of hassle and over-engineering when a USB stick or SD card will suffice.

Viper GTS
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
I've given a lot of thought to the replies in this thread and it made me realize I'm in over my head. I understand what needs to be done and I'm familiar with all of the various steps involved in this process, but I don't have enough experience to do this without stumbling, which can't happen in a production environment obviously.

When I was younger, I may have ignored this feeling, but I've been in this type of situation enough times to have a healthy respect for what could go wrong beyond the scenarios I can foresee. I'll find a way to cut my teeth on a less critical installation.

Once I've had a chance to bin my list of priorities into needs and wants, I'll come back to see if any of you or any other helpful souls have further advice. I appreciate everything all of you have given me so far, regardless of whether you have time to contribute in the future.

It was fun to think about building a new server, but this feels like the wiser choice.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I completely agree with the others about not whiteboxing it for a business.

That said, IF you were to whitebox it, don't do it with consumer components. A proper Supermicro setup gets you the reliability of a Dell/HP, albeit without the warranty. My whitebox Supermicro ZFS SAN has been rock solid for years now, providing storage for two ESXi hosts.
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
If you don't have lights-out remote control, you are screwed from the start. Get a Dell with a DRAC - there's nothing worse than having to go to a physical site when you could have done everything from the couch at home.