build system for virtualization

remusrigo

Junior Member
Mar 18, 2013
3
0
0
I want to build a new system for virtualization. I wanted to buy an LGA1155 Ivy Bridge CPU and a motherboard that supports 64GB of RAM, but LGA1155 supports a max of 32GB. I was wondering if I should stick with Intel or switch to AMD.

Any advice or tips are welcome.

thanks
 
Feb 25, 2011
16,975
1,607
126
Well, in terms of getting 64+ GB of RAM, you're either looking at an Opteron rig or an LGA2011 setup.

What's your intended usage model? (Software you're running?) Budget?

If you're not going to be limited by lower single-threaded performance, then go with the Opteron rig. (MySQL, for instance, scales quite well across multiple CPU cores.)

If single-threaded performance is important (some game servers, on the other hand, don't scale well across multiple cores) then you'll be happier with a (probably more expensive) LGA2011 setup.

An AMD CPU/motherboard combo will tend to be cheaper, so you can throw the "savings" at your I/O subsystem. VMs on SSD arrays are niiiiiiice.
 

remusrigo

Junior Member
Mar 18, 2013
3
0
0
I don't have a big budget and LGA2011 is too expensive, so I'm thinking about an AMD system with a motherboard that supports 64GB of memory. I'm gonna use it for virtualization (servers & clients), my own lab for testing and learning.
 

heymrdj

Diamond Member
May 28, 2007
3,999
63
91
AMD is great bang for the buck for home labs. I have one myself from when I couldn't afford to go Xeon. That said, what hypervisor are you going to run, or do you want to be able to run them all? Hyper-V is the most forgiving since it runs on Server 2008/2012, so driver support isn't really an issue. Citrix XenServer is the next most forgiving, with a very open driver-modding community, and most of the time everything just works out of the box. VMware has a very strict hardware compatibility list, so if that's what you want to run you'll need to do some serious research to make sure all your parts are compatible. We can help with that.
 
Feb 25, 2011
16,975
1,607
126
Yeah, I don't get to say this often, but go AMD.

Although if it's just a play/learning system, you probably don't need 64GB of RAM.
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
ESXi will run on just about anything these days. Yes, the official list is fairly specific, but there are plenty of things that will work just fine in a home environment even if they are not officially blessed by VMware.

One thing to keep in mind: if you plan to exceed 32 GB, ESXi won't even let you power on a VM unless you're on a paid version. This can be surprisingly affordable ($560), but if you plan to use ESXi at some point, don't go over 32 GB unless you are willing to either pay for vSphere or physically remove memory.

If you're budget-constrained, I too would look to AMD. You can get high core counts and massive memory support for less than the Intel solutions, though the resulting performance won't be the same.

If you stick with a single-socket Xeon you don't have to spend a ton. Server boards generally give you IPMI (a definite plus) and the ability to use ECC memory.

Personally, I would find an affordable dual-socket LGA1366 Xeon board and a pair of used E5620s on eBay. That should give you at least six DIMM sockets, so 96 GB is easily within reach on affordable DIMMs ($135 apiece for registered ECC DDR3).
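
Quick sanity check on that math (assuming 16 GB registered ECC DIMMs, which is what 96 GB across six sockets implies; actual prices will vary):

Code:
# Rough capacity/cost check for the dual-LGA1366 suggestion above.
# Assumes 16 GB registered ECC DDR3 DIMMs at the ~$135 mentioned;
# adjust to whatever you actually find on eBay.
dimm_size_gb = 16
dimm_price_usd = 135
dimm_sockets = 6  # "at least six DIMM sockets" on a dual-socket board

total_ram_gb = dimm_size_gb * dimm_sockets
total_cost_usd = dimm_price_usd * dimm_sockets
print(f"{total_ram_gb} GB for about ${total_cost_usd}")  # 96 GB for about $810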

Viper GTS
 

franzy

Junior Member
Mar 19, 2013
5
0
0
I think it's better to go with a cheap server setup and step away from the desktop idea, because you would spend way more money on a top-range desktop than on a mid-range server with the same specs or capacity...
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
ESXi will run on just about anything these days. Yes, the official list is fairly specific, but there are plenty of things that will work just fine in a home environment even if they are not officially blessed by VMware.

Yeah, ESXi basically uses slightly modified Linux drivers for network and storage, so they have tons of drivers in there now. Hell, they even support the rtl8169 that is integrated onto the cheapest H61 boards!
 

imackin

Junior Member
Mar 17, 2013
3
0
0
Long-winded response, sorry.

Running server-grade hardware is awesome; I've been doing it for years in my home lab. However, I recently switched to some good consumer-grade hardware and get better overall performance.

Noise:
Server-grade hardware is LOUD. You have to cool everything well, especially the memory, so lots of fans and heatsinks. ECC/FB-DIMMs run hot and they are not cheap to get in dense configs; a single 2GB FB-DIMM will use 20-40 watts depending on load. Have a bunch of them and the bank of RAM is a space heater. 64GB of ECC/FB-DIMMs will cost a fortune and, depending on density, will still generate a lot of heat.

CPU:
It really doesn't matter which CPU you choose unless you are worried about VM mobility between ESX hosts. Either AMD or Intel will be fine as long as it supports the feature set you want: x64 plus VT-x/VT-d in the Intel world, or the corresponding AMD equivalents (AMD-V/AMD-Vi).
Also, I would rather have a single CPU with four cores than two CPUs with two cores each.

Power:
My two servers were idling at around 500 watts. The current setup uses less than half of that (175 watts at the UPS). I don't use monster power supplies now, just good-quality units, and I don't have two rack-mount 3kVA UPSes anymore; one 1.5kW tabletop UPS is all I need for proper shutdown and 20 or so minutes of runtime.

RAID:
Well, I still use hardware RAID, since BIOS-level (software) RAID kinda sucks. OS-level RAID, depending on the implementation, can be okay. I guess I am a little old-school with this.

Cost:
It's expensive to power all that server-grade hardware, and all the heat then needs to be handled by your house AC. I noticed a considerable difference in my power bill, not only from being more efficient but also because my AC units don't run all the time (even at night).
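
To put rough numbers on the power savings (the $0.12/kWh electricity rate below is just an assumption; plug in your own):

Code:
# Rough yearly cost difference between the old ~500 W idle setup and the
# new ~175 W one described above. The $0.12/kWh rate is an assumption;
# substitute your local rate.
old_watts = 500
new_watts = 175
rate_per_kwh = 0.12  # USD, assumed

hours_per_year = 24 * 365
saved_kwh = (old_watts - new_watts) * hours_per_year / 1000
saved_usd = saved_kwh * rate_per_kwh
print(f"~{saved_kwh:.0f} kWh/year saved, roughly ${saved_usd:.0f}/year")
# -> ~2847 kWh/year saved, roughly $342/year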

Other considerations:
Computer case:
Server motherboards usually are not ATX, mATX, etc.; they are CEB or EATX (old spec), basically 12x13 inches. You would need a server-grade case to handle a server-grade motherboard. This is a pain in the ass and can get really pricey, especially once you get into hot-swap bays and such. $200-$400 per case would be a decent price range for one with 6 to 8 SATA hot-swap bays. They are very nice, though...
Power supply:
You can use generic PSUs with server-grade motherboards, but they will require some additional power connector whips and/or adapters. I have seen generic PSUs blow up when attached to server boards, which is not fun when you have just invested a crapload of money in your very nice server-grade hardware. Server-grade PSUs are also a bit pricey.

Software:
I use both Microsoft and ESX hypervisors. 2008 R2 runs perfectly on the hardware below, no issues. ESX also runs perfectly, except for the setup: you have to put in a supported NIC to get it installed, and you can take the extra NIC out after the onboard NIC's drivers are loaded. One neat feature I bought this hardware for is VT-d support; it's pretty cool to hand a RAID adapter or video card to a VM at the hardware level.
With dynamic hypervisor memory allocation there is not a direct 1:1 correlation between the RAM you assign to VMs and physical RAM unless you hard-set it. You can basically oversubscribe your RAM in the hope that not all of the machines get busy at once. If they do, you will swap like mad, but overall it is a home test lab.
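
A minimal sketch of that overcommit math, with made-up VM names and sizes purely for illustration:

Code:
# Toy example of memory oversubscription: sum what the VMs are allowed
# to use versus what the host physically has. VM names and sizes here
# are hypothetical, not anyone's actual lab.
physical_ram_gb = 32

vm_allocations_gb = {
    "dc01": 2, "dc02": 2, "sql01": 8, "web01": 4,
    "file01": 4, "win7-test": 4, "win8-test": 4, "lab-esxi": 12,
}

allocated = sum(vm_allocations_gb.values())
ratio = allocated / physical_ram_gb
print(f"Allocated {allocated} GB on a {physical_ram_gb} GB host "
      f"(overcommit ratio {ratio:.2f}x)")
# -> Allocated 40 GB on a 32 GB host (overcommit ratio 1.25x)
# Fine as long as the VMs don't all demand their full allocation at once;
# if they do, the hypervisor starts swapping and performance tanks.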

Current setup:

- Windows 2008 R2 Hyper-V server / file server
Intel Core i3-2100T
Intel Executive series DQ67SWB3 mATX motherboard
Antec 650W 80 Plus power supply
16GB of RAM
3ware SATA RAID controller with an 8-drive RAID 5 array

- VMware ESXi 5.1 server
Intel Core i5-2500T
Intel Executive series DQ67SWB3 mATX motherboard
Antec 650W 80 Plus power supply
32GB of RAM
No RAID; all individual drives, backed up to the RAID set on the other server and then to tape.
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
If you go with Windows Server 2012 you can use dedup on non-boot volumes. This can save you a lot of space on VHDs if you're not using them for production (so you can have more VMs, keep backups of VMs, or put them on an SSD instead of a spindle drive).
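
To see why near-identical lab VHDs dedupe so well, here's a toy chunk-hashing sketch of the idea; the file names are placeholders, and real Server 2012 dedup uses its own variable-size chunking engine rather than anything like this:

Code:
# Toy illustration of block-level dedup: chunk two files, hash the chunks,
# and count how many chunks are shared. VHDs built from the same base OS
# image overlap heavily, which is where the space savings come from.
import hashlib

CHUNK = 64 * 1024  # fixed 64 KiB chunks, purely for illustration

def chunk_hashes(path):
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

# Hypothetical file names -- point these at two of your own lab VHDs.
a = chunk_hashes("server2012-lab1.vhdx")
b = chunk_hashes("server2012-lab2.vhdx")

total = len(a) + len(b)
unique = len(set(a) | set(b))
print(f"{total} chunks total, {unique} unique "
      f"(~{100 * (1 - unique / total):.0f}% would dedupe away)")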
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Running server-grade hardware is awesome; I've been doing it for years in my home lab. However, I recently switched to some good consumer-grade hardware and get better overall performance.

Noise:
Server-grade hardware is LOUD. You have to cool everything well, especially the memory, so lots of fans and heatsinks. ECC/FB-DIMMs run hot and they are not cheap to get in dense configs; a single 2GB FB-DIMM will use 20-40 watts depending on load. Have a bunch of them and the bank of RAM is a space heater. 64GB of ECC/FB-DIMMs will cost a fortune and, depending on density, will still generate a lot of heat.

If you're talking about FB-DIMMs, you were obviously running older-generation hardware. Doing power/noise comparisons isn't really fair when dealing with hardware from that generation. Yes, FB-DIMMs use a ton of power; nobody is surprised by this.

Here's the storage part of my setup:

(attached photos: storage_1.jpg, storage_2.jpg)


That's a dual quad (E5606) on a Tyan server motherboard, 48 GB ECC DDR3, 24x1 TB 7200 RPM HDD, Areca 1680, etc. It's very quiet thanks to 100% Noctua fans and the AX1200 running at very low load levels. It draws about 400W from the wall under my normal load last I checked.

I ran it in my living room prior to moving to my new house and it now runs right beside my desk. My desktop PC is louder.

Now, if you're running a 2950 or something, yes, it's going to scream. But if you build it yourself and build for silence, there is no reason it HAS to be noisy.

Viper GTS
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
64GB of RAM just to play around with??? Yeesh, if you want to tinker with a setup and different OSes and such, you don't need anything close to that.

What's your budget, OP? That'd give people a better idea of what to spec out for you.
 

Dahak

Diamond Member
Mar 2, 2000
3,752
25
91
Personally, I have been running an ESXi machine for testing/playing around on desktop hardware, and the biggest thing that annoys me is disk I/O; that is usually where the bottleneck is.

So I would pull back on the RAM, say to 32GB, and look at improving disk I/O instead.
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Is that a full-sized rack or are you just happy to see me? :awe:

Roughly half-sized, 25 or 26U I think. I have a lot of room for expansion; I've only got the 6U at the bottom, plus the switch/PDU/KVM at the top taking another 2U.

My movers hated me when they had to carry that up two flights of stairs.

Viper GTS
 

mvbighead

Diamond Member
Apr 20, 2009
3,793
1
81
Personally, I have been running an ESXi machine for testing/playing around on desktop hardware, and the biggest thing that annoys me is disk I/O; that is usually where the bottleneck is.

So I would pull back on the RAM, say to 32GB, and look at improving disk I/O instead.

32GB is still insane for a non-production box. A 2012 box of any sort should be fine with 4GB of RAM. Unless you're running production jobs on 8 VMs, that just seems like massive overkill.
 

heymrdj

Diamond Member
May 28, 2007
3,999
63
91
32GB is still insane for a non-production box. A 2012 box of any sort should be fine with 4GB of RAM. Unless you're running production jobs on 8 VMs, that just seems like massive overkill.

Depends on what he's testing. I run a full MDOP lab on mine (3 SCCM servers, 2 replicated database servers, 2 DCs, 2 read-only DCs, 3 RDS servers, 2 App-V servers, plus 5 XP, 5 Windows 7, and 5 Windows 8 VMs). I love having 96GB of RAM.
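
Just to show how fast a lab like that adds up (the per-VM RAM figures below are hypothetical round numbers, not heymrdj's actual allocations):

Code:
# Totalling the lab above: 29 VMs. Per-VM RAM sizes are assumptions
# just to show why 96GB stops feeling like overkill.
lab = {
    # role: (count, assumed GB each)
    "SCCM servers":      (3, 4),
    "DB servers":        (2, 4),
    "DCs":               (2, 2),
    "Read-only DCs":     (2, 2),
    "RDS servers":       (3, 4),
    "App-V servers":     (2, 2),
    "XP clients":        (5, 1),
    "Windows 7 clients": (5, 2),
    "Windows 8 clients": (5, 2),
}

vm_count = sum(count for count, _ in lab.values())
ram_gb = sum(count * gb for count, gb in lab.values())
print(f"{vm_count} VMs, ~{ram_gb} GB of RAM if they all run at once")
# -> 29 VMs, ~69 GB of RAM if they all run at once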
 

Viper GTS

Lifer
Oct 13, 1999
38,107
433
136
Depends on what he's testing. I run a full MDOP lab on mine (3 SCCM servers, 2 replicated database servers, 2 DCs, 2 read-only DCs, 3 RDS servers, 2 App-V servers, plus 5 XP, 5 Windows 7, and 5 Windows 8 VMs). I love having 96GB of RAM.

Yep I used mine to do a full VCP lab with storage, multiple virtualized ESXi hosts, vCenter, guests running on those virtualized hosts, etc. With 48 GB I was comfortable, but would have liked 96.

My normal daily load though is far lower.

Viper GTS
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
32GB is still insane for a non-production box. A 2012 box of any sort should be fine with 4GB of RAM. Unless you're running production jobs on 8 VMs, that just seems like massive overkill.

4GB? That's crazy low unless you love to swap. VM swapping of course adds to the already-saturated I/O subsystem. Not a good way to go IMHO.
 

remusrigo

Junior Member
Mar 18, 2013
3
0
0
Thanks to all for the replies. I guess I'll stick with an Intel CPU, and 32GB of RAM will be OK.