First server build review needed!

KAJINOFE

Junior Member
Dec 30, 2014
Hello all,

I'm going to be building a server machine soon and wanted to get some opinions on the list of components I'll be putting together.

1. What YOUR PC will be used for. That means what types of tasks you'll be performing.
Everyday use (my only machine)
Multiple VMs at the same time (Windows 7/8/server, Linux)
Cisco network emulations: GNS3/IOU, VIRL, and Nexus 1000v via nested ESXi on VMware Workstation/Player
Gaming (very light to none for now)

2. What YOUR budget is. A price range is acceptable as long as it's not more than a 20% spread
$3000

3. What country YOU will be buying YOUR parts from.
U.S.

4. IF you're buying parts OUTSIDE the US, please post a link to the vendor you'll be buying from.
We can't be expected to scour the internet on your behalf, chasing down deals in your specific country... Again, help us, help YOU.
N/A

5. IF YOU have a brand preference. That means, are you an Intel-Fanboy, AMD-Fanboy, ATI-Fanboy, nVidia-Fanboy, Seagate-Fanboy, WD-Fanboy, etc.
Intel Xeon build

6. If YOU intend on using any of YOUR current parts, and if so, what those parts are.
RAM (already have from old rig)
CORSAIR Vengeance 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CMZ16GX3M4A1600C9

Case (already have from old rig)
COOLER MASTER CM Stacker STC-T01-UWK Black/ Silver Aluminum / Steel ATX Full Tower Computer Case

PSU (already have from old rig)
IN WIN COMMANDER IRP-COM1200 1200W ATX 12V 2.3 / EPS 12V 2.91 SLI Ready CrossFire Ready 80 PLUS Certified Modular Active PFC Power Supply

GPU (already have from old rig)
EVGA 04G-P4-3673-KR GeForce GTX 670 FTW+ 4GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready SLI Support Video Card

7. IF YOU plan on overclocking or run the system at default speeds.
No overclocking planned

8. What resolution, not monitor size, will you be using?
1920 x 1080

9. WHEN do you plan to build it?
Note that it is usually not cost or time effective to choose your build more than a month before you actually plan to be using it.
ASAP

The rest of the components are as follows:
CPU (buy 2)
Intel Xeon E5-2620 v2 Ivy Bridge-EP 2.1GHz 15MB L3 Cache LGA 2011 80W Server Processor BX80635E52620V2

CPU Heatsink/Fan (buy 2)
SUPERMICRO SNK-P0050AP4 Heatsink for Supermicro X9DR3-F Motherboard

Motherboard (buy 1)
SUPERMICRO MBD-X9DR3-F-O SSI EEB Server Motherboard Dual LGA 2011 DDR3 1600

SSD (buy 1)
SAMSUNG 850 Pro Series MZ-7KE512BW 2.5" 512GB SATA III 3-D Vertical Internal Solid State Drive (SSD)

If you see any compatibility issues or have suggestions on alternatives, I’m all ears. Thanks!
P.S.: I'll be increasing the RAM soon but just not yet. (end goal is 128GB)
 

mfenn

Elite Member
Jan 17, 2010
So I'm a little confused here. You say that you want a server, but you have a GPU and such specced out and plan to use VMware Workstation. So are you looking for a server, or a workstation with virtualization capabilities?
 

KAJINOFE

Junior Member
Dec 30, 2014
So I'm a little confused here. You say that you want a server, but you have a GPU and such specced out and plan to use VMware Workstation. So are you looking for a server, or a workstation with virtualization capabilities?

The GPU is there since I'll be salvaging it from my current rig. I went with server CPUs for maximum memory support, to run many VMs and Cisco router/switch instances at the same time.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
As a Virtualization guy, let me say that for studying use, your specs appear vastly overprovisioned in CPU and underprovisioned in memory.

For the most part, VMs do not use *that* much CPU. Out of all 32GHz of CPU resource available in my ESX 5.5 home environment, I'm only averaging about 10GHz of CPU usage right now, 1/3 of which is due to a high-powered Minecraft server. That's running about 15 virtual machines.

What *is* constrained though is memory. I'm using all 64GB of memory in my hosts and would happily trade half my CPU resources in exchange for another 64GB of memory.

Consider getting just one processor and getting your memory way up. Keep in mind that most systems are limited to 32GB of Unbuffered non-ECC memory, which you seem to want to re-use. If that's the case, I'd definitely push for getting 32GB in there, especially if you're using a lot of Windows VMs.
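To put rough numbers on it, the host-memory math is just a sum plus hypervisor overhead. The per-VM figures below are made-up placeholders, not recommendations; plug in your own guest sizes:

```python
# Rough host-memory estimate for a VM lab. All per-VM numbers here are
# illustrative assumptions, not measurements.
def host_memory_needed_gb(vms, hypervisor_overhead_gb=4):
    """vms: list of (count, gb_per_vm) tuples."""
    return sum(count * gb for count, gb in vms) + hypervisor_overhead_gb

lab = [
    (6, 2),   # 6 Windows VMs at 2 GB each (assumed)
    (10, 1),  # 10 router/switch instances at 1 GB each (assumed)
    (2, 8),   # 2 nested ESXi hosts at 8 GB each (assumed)
]
print(host_memory_needed_gb(lab))  # 42
```

Even a modest lab like this blows past 32GB once you stop starving the Windows guests, which is why memory fills up long before CPU does.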
 
Feb 25, 2011
That motherboard requires ECC RAM; your existing memory won't work.

The motherboard's onboard video is adequate to run VMware. Sell the GPU to defray the cost of ECC RAM.

What's the SSD for?
 

Cerb

Elite Member
Aug 26, 2000
As a Virtualization guy, let me say that for studying use, your specs appear vastly overprovisioned in CPU and underprovisioned in memory.
To elaborate: the old days are gone. Your CPU has hardware to handle an additional level of page tables, plus hardware exception support to go along with it, so reading a "physical" cache line from a VM is only slightly more complicated than a typical virtual address lookup within a physical OS, and a virtual line from within a VM's program is pretty much just looked up twice, the first time. Usually, you'll get 90-100% of the same performance as if the VM weren't a VM.

I rarely run more than 3 VMs on my desktop at a time, and 16GB just suffices for the time being. If you want all the power the VM's OS can give you, you have to give it RAM like it's not a single-program appliance, and that adds up. Giving each 1-2GB can work, sure, but that's a Pyrrhic victory when they start thrashing their respective page files/partitions, or slow down a lot due to not being able to keep files cached.
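If you want to confirm your CPU exposes that hardware before buying, here's a sketch that checks a /proc/cpuinfo-style flags line for the base virtualization extensions ("vmx" is Intel VT-x, "svm" is AMD-V). Flag names for the second-level page tables ("ept"/"npt") vary by kernel, so treat this as an assumption-laden quick check, not a definitive probe:

```python
# Check a /proc/cpuinfo-style dump for hardware virtualization flags.
# "vmx" = Intel VT-x, "svm" = AMD-V. On Linux you could feed it
# open("/proc/cpuinfo").read(); here we parse a sample string instead.
def has_hw_virt(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return bool(flags & {"vmx", "svm"})
    return False

sample = "flags\t\t: fpu vme msr sse2 vmx ept aes"
print(has_hw_virt(sample))  # True
```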
 

KAJINOFE

Junior Member
Dec 30, 2014
As a Virtualization guy, let me say that for studying use, your specs appear vastly overprovisioned in CPU and underprovisioned in memory.

For the most part, VMs do not use *that* much CPU. Out of all 32GHz of CPU resource available in my ESX 5.5 home environment, I'm only averaging about 10GHz of CPU usage right now, 1/3 of which is due to a high-powered Minecraft server. That's running about 15 virtual machines.

What *is* constrained though is memory. I'm using all 64GB of memory in my hosts and would happily trade half my CPU resources in exchange for another 64GB of memory.

Consider getting just one processor and getting your memory way up. Keep in mind that most systems are limited to 32GB of Unbuffered non-ECC memory, which you seem to want to re-use. If that's the case, I'd definitely push for getting 32GB in there, especially if you're using a lot of Windows VMs.

The end goal for memory is 128GB; sorry I didn't make that clear (the OP is edited with this info). The reason I'm staying with my old 16GB of desktop RAM for now is that I might be getting some buffered ECC memory from a friend in 1-2 months' time. That's still pending, so I may end up having to buy some after all; hence the P.S. in the OP. The dual-processor setup is my safety net for future labbing that might involve virtualization scenarios from other networking vendors (F5, Juniper, Avaya, Arista, etc.) running alongside the Cisco routers. Thanks for your input!

Quick question for you since you're a VM guy: on a different forum, someone pointed out that 128GB of RAM would be too much and that I would run into disk I/O issues. Any idea what they meant?


That motherboard requires ECC RAM; your existing memory won't work.

The motherboard's onboard video is adequate to run VMware. Sell the GPU to defray the cost of ECC RAM.

What's the SSD for?

The motherboard should be able to handle non-ECC UDIMMs:
http://www.supermicro.com/products/motherboard/xeon/c600/x9dr3-f.cfm
Up to 128GB DDR3 ECC/non-ECC UDIMM
The GPU is staying for now as I may have a different use for it; the SSD is for the OS installation.

To elaborate: the old days are gone. Your CPU has hardware to handle an additional level of page tables, plus hardware exception support to go along with it, so reading a "physical" cache line from a VM is only slightly more complicated than a typical virtual address lookup within a physical OS, and a virtual line from within a VM's program is pretty much just looked up twice, the first time. Usually, you'll get 90-100% of the same performance as if the VM weren't a VM.

I rarely run more than 3 VMs on my desktop at a time, and 16GB just suffices for the time being. If you want all the power the VM's OS can give you, you have to give it RAM like it's not a single-program appliance, and that adds up. Giving each 1-2GB can work, sure, but that's a Pyrrhic victory when they start thrashing their respective page files/partitions, or slow down a lot due to not being able to keep files cached.

Again, my bad for the lack of info regarding the RAM; the OP is edited with this info. Thanks!
 

mfenn

Elite Member
Jan 17, 2010
The end goal for memory is 128GB, sorry I didn't make that clear(OP is edited with this info). The reason I'm staying with my old 16GB desktop RAM for now is that I might be getting some Buffered ECC memory from a friend (in 1-2 months time), this is still pending so I may have to buy some after all but it hasn't been decided yet, hence my P.S. in the OP. The dual processors setup is my safety net for future labbing possibilities where it might involve different networking vendors (F5, Juniper, Avaya, Arista, etc...) virtualization scenarios along with the Cisco routers running at the same time. Thanks for your input!

I agree with the other posters here: you are severely overestimating your CPU needs and underestimating your memory/IO needs. Networking devices don't have fast CPUs to begin with, and your traffic loads in a lab environment are going to be minimal.

So again, it really sounds like you're looking for a high-memory workstation with virtualization capabilities, *not* a server. You'll have a much better time overall if you stick with a workstation board and not a server board. The server board that you have picked out has no audio, limited USB 2.0 ports, and the HSFs you have picked out will be loud. It's just going to be an all around uncomfortable machine to work with.
 

KAJINOFE

Junior Member
Dec 30, 2014
I agree with the other posters here: you are severely overestimating your CPU needs and underestimating your memory/IO needs. Networking devices don't have fast CPUs to begin with, and your traffic loads in a lab environment are going to be minimal.

So again, it really sounds like you're looking for a high-memory workstation with virtualization capabilities, *not* a server. You'll have a much better time overall if you stick with a workstation board and not a server board. The server board that you have picked out has no audio, limited USB 2.0 ports, and the HSFs you have picked out will be loud. It's just going to be an all around uncomfortable machine to work with.

Thanks for your input; good points on the HSF and sound. I've been looking at some Noctuas, and I have an old SB X-Fi sound card (PCIe x1). Do you have any recommendations on a *workstation* config that's capable of more than 32GB of memory?
 

Cerb

Elite Member
Aug 26, 2000
If you are willing to forgo ECC, any X79 or X99 board will work, with X79 being the more cost-effective and mature option. But once DDR4 starts becoming mainstream, RAM upgrade costs may be lower with it than with DDR3.

Then you can build it like any other, with a good quiet case and a beefy quiet HSF.

MSI X79A-GD45 Plus
ASUS X79-Deluxe

Those look good to me for DDR3. The ASUS has integrated AC Wi-Fi and BT, if that gives any value to you.

For X99, any of the ASRocks look good to me, depending on wanted features and form factor (the Killer ones have an Intel and an Atheros NIC, so you can treat them as single-NIC boards with an Intel).

All the X99s claim ECC support with Xeons. With ECC RDIMMs, I'd stick to either the QVL or models selected by Crucial or Kingston, if available (I haven't looked for non-prebuilts like that in years now--it's always OEM boxes).
 

thecoolnessrune

Diamond Member
Jun 8, 2005
Meaning it only accepts ECC.

Nested into what? Also, eew.

From the OP, it sounds like he is going to be nesting ESXi inside a VMware Workstation build. A completely acceptable setup for lab builds. Since Windows 8 Pro supports up to half a TB of memory, it can host a ton of VMs if you really want it to.
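For reference, the usual prerequisite for nesting is exposing VT-x/EPT to the Workstation guest. These are the .vmx entries commonly cited for a nested ESXi 5.x guest; exact keys vary by Workstation version, so treat this as a sketch rather than gospel:

```
# Added to the nested ESXi guest's .vmx file (commonly cited settings)
vhv.enable = "TRUE"       # expose hardware VT-x/EPT to the guest
guestOS = "vmkernel5"     # identify the guest as ESXi 5.x
```

Newer Workstation versions expose the first setting in the GUI as the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox.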
 

KentState

Diamond Member
Oct 19, 2001
From the OP, it sounds like he is going to be nesting ESXi inside a VMware Workstation build. A completely acceptable setup for lab builds. Since Windows 8 Pro supports up to half a TB of memory, it can host a ton of VMs if you really want it to.

That's what I do for most of my lab work at home. Like others pointed out, memory is the biggest bottleneck that I run into.
 

mfenn

Elite Member
Jan 17, 2010
If you are willing to forgo ECC, any X79 or X99 board will work, with X79 being the more cost-effective and mature option. But once DDR4 starts becoming mainstream, RAM upgrade costs may be lower with it than with DDR3.

Then you can build it like any other, with a good quiet case and a beefy quiet HSF.

MSI X79A-GD45 Plus
ASUS X79-Deluxe

Those look good to me for DDR3. The ASUS has integrated AC Wi-Fi and BT, if that gives any value to you.

For X99, any of the ASRocks look good to me, depending on wanted features and form factor (the Killer ones have an Intel and an Atheros NIC, so you can treat them as single-NIC boards with an Intel).

All the X99s claim ECC support with Xeons. With ECC RDIMMs, I'd stick to either the QVL or models selected by Crucial or Kingston, if available (I haven't looked for non-prebuilts like that in years now--it's always OEM boxes).

:thumbsup: Agree with this. You can fit 64GB of DDR3 in your typical X79 8-slot board (cheapest option), and you can scale much higher with RDIMMs in an X99 board, but at increased cost of course.