
ESXi 6 - whitebox suggestion?

ts10012

Junior Member
I'd like to replace my (old) VM infrastructure with a newer whitebox running ESXi 6 Free.

I'd like a suggestion on supported motherboard, cpu and videocard (if not built-in).

The main requirements I have for it:

- it should support at least 32GB of RAM (64GB would be better, but optional)
- it should be quiet, preferably with less than 400W PSU. I plan to use a case with really quiet fans as well.
- it should be stable and relatively simple to install (preferably, direct support by ESXi 6 for main hardware)

Video or CPU performance does not matter. The box will be used primarily to run low-CPU, high-RAM VMs with certain products that I will be using: Oracle, MySQL and/or SQL Server databases on Linux and Windows guests, and Win7/Win8 guests with VPN clients.

Sound support does not matter. All disks will be either internal SATA SSDs/HDDs or external USB 2 or 3 (I can convert them to eSATA if I have to).

I'd like the server to be cheap (e.g. using non-ECC DDR3 RAM), but stability is much more important. I do not plan to OC it either.

Thank you
 
Supporting 64GB of RAM on current-gen would require jumping to the Socket 2011 platform, which adds at least a couple hundred dollars to the overall cost (motherboard and DDR4).

Can you put a figure on "cheap"?
When you say "whitebox" are you looking for a rackmount server or a desktop-class PC?
 
Compatibility guide should be the first place to start:
http://www.vmware.com/resources/compatibility/search.php

Yes, I definitely looked there. But it really focuses on complete servers, not components. If it listed common motherboards that support ESXi 6, it would make my life much easier. And most servers listed there are rack-mounted, which means small form factors and small (very loud) fans.
I am not looking for performance. If I could find an Intel Atom setup that worked with ESXi 6 and supported 32-64GB of RAM, that would be ideal. But I do not think those actually exist 🙂
 

My preference would be a desktop-class PC, though I can go for a tower server as well.
Re: "cheap" - I would be willing to spend more on the right setup. My preference would be for motherboard+CPU to cost less than RAM 🙂 I do not have a set budget, but I planned for about $700-800 for motherboard/CPU and 32GB RAM, without HDD. I have a case with PSU, and HDDs/SSDs already. I could get a used server off eBay for this, but those would be higher-performance boxes that are LOUD.
 
Does the free edition support more than 32GB of RAM?

That limit has been gone since 5.5, though I haven't used a free version since 5.1.

Honestly the Essentials licensing is probably worthwhile if you want to do anything real with it as you won't have vCenter otherwise (and thus no web client, and no access to the newer hardware versions).

Viper GTS
 
Honestly the Essentials licensing is probably worthwhile if you want to do anything real with it as you won't have vCenter otherwise (and thus no web client, and no access to the newer hardware versions).

:thumbsup:

Yes, I definitely looked there. But it really focuses on complete servers, not components. If it listed common motherboards that support ESXi 6, it would make my life much easier.

VMware pretty much does not care about validating consumer grade motherboards because there's no return on investment for them. So you have to infer based on what works for others.

If you stick to motherboards with Intel NICs and Intel SATA controllers, you'll be in decent shape.

Something like this should work:

Xeon E3-1231 V3 $257
ASRock H97M Pro4 $80
G.Skill DDR3 1866 32GB $227
Total: $564

Note that you have to use ESXi 6.0 for out of the box support for the Intel i218v NIC.
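To illustrate the "stick to Intel NICs" advice: on an installed ESXi host, `esxcli network nic list` reports which driver claimed each NIC. Below is a small Python sketch that checks a captured copy of that output for known Intel drivers. The sample output and the driver list are illustrative assumptions, not an authoritative compatibility list.

```python
# Checks captured `esxcli network nic list` output for Intel-driven NICs.
# The SAMPLE output below is illustrative (an I218-V bound to e1000e);
# real column layout and driver names vary by ESXi release.

SAMPLE_ESXCLI_OUTPUT = """\
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Description
vmnic0  0000:00:19.0  e1000e  Up            Up           1000   Intel Corporation Ethernet Connection I218-V
"""

# Driver names commonly associated with natively supported Intel NICs
# (illustrative, not exhaustive).
INTEL_DRIVERS = {"e1000", "e1000e", "igb", "igbn", "ixgbe", "ixgben", "ne1000"}

def intel_nics(esxcli_output: str) -> list[str]:
    """Return names of NICs whose driver column is a known Intel driver."""
    found = []
    for line in esxcli_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3 and fields[2] in INTEL_DRIVERS:
            found.append(fields[0])
    return found

print(intel_nics(SAMPLE_ESXCLI_OUTPUT))  # ['vmnic0']
```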
 

Awesome. That's exactly the type of advice I was looking for. Thanks for the note on the NIC as well. I plan to use it to run VMs with software that is used daily, not for training, so sticking to ESXi 6 is not going to be an issue.

Another question if you do not mind. Any specific reason to use Xeon CPU instead of i3 or i5, or even G34xx series? They're at least $100 (or more) cheaper. Does it make a difference for ESXi 6.0 hardware support?
 
When it comes to ESXi, you need to be concerned with a very limited number of things as far as compatibility goes:

* Network card hardware
* SCSI card hardware / Local chipset hardware for storage

For the most part, the rest is very forgiving (although there can be some abnormalities).

Right now, I am running 5.5 on an old repurposed Athlon X4 (actually an unlocked X3) from my previous desktop. I have added an LSI-compatible SCSI adapter connected to 4 x 250GB SATA drives in RAID 5. I also have three larger drives connected directly to the board, outside the RAID array.

The ONLY trouble I have ever had with this config is that one of the drives disconnected a few months ago. I rebooted and it was back just fine, so it was probably something that would have happened on any operating system with the system being up as long as it had been.

I would agree that you could simply use an i3/i5 type of CPU, but it depends on what you intend to do with your platform. If you're putting production data on it, using a Xeon and ECC memory is recommended. If you're not, you can avoid it and take the risks. I take the risks myself as mine is simply running a MythTV backend and an Ubuntu fileserver. Both for home use.

As for the memory amount you are after, if you have an intended purpose, that makes plenty of sense. If not, you may be going overboard. With 8-16GB, you could still run 5-10 VMs with 1GB apiece with no issue. But it simply depends on the workload you anticipate. If you're looking at 2-4 cores on the CPU side, I can't imagine you're going to get a whole lot out of that RAM.

I think mfenn's build is good, but like you I would lean toward a desktop proc simply because it's a whitebox, but that is just me. You can pretty much use whatever CPU you want so long as you are using supported disk controllers and network controllers.

Now, as you're building a purpose-built system, it may be wise to just start with a Xeon and an ECC-capable platform... but it depends on your intended use and budget.
 


Very good points.

Here is what I use my existing VMs for -

I often connect to 3-5 clients at the same time, each with a different VPN. The VPN software is primarily Windows-based. So far I could get by with Win2003 and 0.5GB per machine. However, lately many clients have switched to connections that require an RDP version and features not supported by Win2003, so I have to switch to Win7 and/or Win8. We have site licenses for workstation software, so that part is not an issue. The problem is that Win7/Win8 really need at least 2GB of RAM each, and with 5 of those I would need at least 10GB, not including the RAM used by the hypervisor and whatever else I want to run.
In addition, I occasionally run a few DB and app servers where I test/develop things (primarily Oracle, SQL Server and MySQL). Most of those are Windows as well (Win2008 and Win2012). Anyway, everything is saved on local disks and backed up elsewhere daily. And even power loss is not really that big of an issue (while I would not be nearly as efficient, I can connect to one client at a time from a "physical" Win8 PC). I ran my existing VM box with non-ECC RAM for 7 years, and it has only 8GB of RAM 🙂 So, getting 32GB is going to be a huge improvement. Since I switch VM boxes VERY infrequently, I think the advice to go with a Xeon may indeed be sound.
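A rough back-of-the-envelope RAM budget for the workload described above can be sketched as follows. The 2GB-per-VPN-VM figure comes from the post; the DB server count and sizes and the hypervisor overhead are assumptions for illustration, not measurements.

```python
# Rough RAM budget for the described workload, in GB.
vpn_vms = 5 * 2          # five Win7/Win8 VPN-client VMs at 2 GB each (from the post)
db_vms = 3 * 4           # assumed: three occasional DB/app server VMs at 4 GB each
hypervisor_overhead = 2  # assumed: ESXi itself plus per-VM overhead

total_gb = vpn_vms + db_vms + hypervisor_overhead
print(total_gb)        # 24
print(total_gb <= 32)  # True: fits in 32 GB with headroom to spare
```

Under these assumptions, 32GB covers everything running at once with room left over, which matches the thread's conclusion that 64GB is optional.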
 
Another question if you do not mind. Any specific reason to use Xeon CPU instead of i3 or i5, or even G34xx series? They're at least $100 (or more) cheaper. Does it make a difference for ESXi 6.0 hardware support?

As of v5.5, ESX does a core count check, so it won't install on a single-core CPU.

For server/virtualization features, typically, Celeron/Pentium/i3 support ECC RAM but not device passthrough, and i5/i7 (non-K) support VT-d for PCI or device passthrough, but not ECC. So small home server people can use ECC RAM in their homebrew NAS, and i5/i7 desktop users can pass a GPU through to their gaming VM, etc.

The motherboard in question would also have to support those features. Dunno if the ASRock board supports VT-d, but it doesn't support ECC RAM.

And I think ESX won't do device passthrough without a certain license. So it may be a moot point.

As it happens, I know that ESX 5.5 free will install just fine on a Celeron-equipped motherboard with supported NICs.
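The ECC/VT-d split described above can be summarized as a small lookup table. This sketch encodes only the typical Haswell-era segmentation as the post states it; always verify the exact SKU on ark.intel.com, since exceptions exist and the motherboard must support the feature too.

```python
# Typical Haswell-era Intel feature segmentation, as described in the post.
# Illustrative only: check ark.intel.com for the actual SKU.
FEATURES = {
    # family:             (supports_ecc, supports_vt_d)
    "Celeron/Pentium/i3": (True,  False),
    "i5/i7 (non-K)":      (False, True),
    "Xeon E3":            (True,  True),
}

def can_do(family: str, feature: str) -> bool:
    """Look up whether a CPU family supports 'ecc' or 'vt-d'."""
    ecc, vt_d = FEATURES[family]
    return ecc if feature == "ecc" else vt_d

print(can_do("Celeron/Pentium/i3", "ecc"))  # True
print(can_do("i5/i7 (non-K)", "ecc"))       # False
print(can_do("Xeon E3", "vt-d"))            # True
```

The Xeon E3 row shows why it keeps coming up in this thread: it is the only family in the table with both features.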
 

I have used VMDirectPath to pass through storage controllers with no issues on the free version, and more recently on essentials. I'm not aware of any licensing requirement for this.

Viper GTS
 
Another question if you do not mind. Any specific reason to use Xeon CPU instead of i3 or i5, or even G34xx series? They're at least $100 (or more) cheaper. Does it make a difference for ESXi 6.0 hardware support?

Xeon E3's are basically i7's that cost less. I picked that because it's a powerful quad core that can handle as many VMs as you can fit into 32GB and the build was already coming in well below your target of $700-800.

You could certainly go for an i3 4160, etc. if you wanted to come in around the $400 range instead of the originally quoted one.
 
Pretty good info here. The ASRock MBs I can confirm work great with ESXi.

My build a few years ago was 2 systems with the following:

ASRock Z77 Pro3
Core i5 3470
32GB memory
Intel dual gigabit NICs

That basically gave me 5 NICs. I had a Nexenta free edition box for shared storage, and everything worked great. I'd definitely suggest the ASRock boards; they're simple and reliable.
 


Xeon it is then 🙂
ECC is really not a requirement for me. I have not had RAM failures in my VM servers that ECC could have helped with, and I do not plan to OC. Pass-through is very nice for disk I/O, but GPU passthrough I could not care less about. There will be no gaming VMs there, only project-related items.

Really good info in this thread. Thanks, everyone, for the help!
 
ECC is also worthwhile if you ever think you will run a file system like ZFS. For me ECC is less about surviving a DIMM failure (particularly since I'm not using any of the technologies that would allow me to survive a failure) and more about knowing my data is going to be intact.

Viper GTS
 