CPU for running Virtual Machine Server

Peroxyde

Member
Nov 2, 2007
186
0
76
Hi,

I am looking for advice on building a computer which will be used as a virtual machine server. Basically, it will run Ubuntu x64 and VirtualBox. From 4 to 6 virtual machines (VMs) will run permanently on this box. The actual users of the VMs will access them via Remote Desktop.

The 3 most important hardware factors for this scenario are RAM, disk I/O and CPU.

The VMs are used for software testing and development. The guest OSes will be various Windows Server versions and Windows development tools. There is no need for multimedia capability (no games, no sound, no video).

To keep the budget reasonable, I am thinking of a motherboard that supports up to 16 GB. The disks will be RAID 0, striping 2 or 3 SATA hard drives. Video is the lowest priority because most of the time the box (the VM server) will have its monitor turned off.

I am at a loss regarding the CPU. There are so many of them; here are my requirements, and I hope you can point me to a brand / model number:

- The CPU must have 4 cores (more cores are better for running more VMs).

- Preferably, the CPU should not run too hot. If all VMs are running actively, I expect the CPU load will probably be in the range of 80% to 100%.

- The CPU doesn't need to be super powerful. There is no need for overclocking. In short, it's OK for a VM to perform a task 2 seconds slower but stay stable, rather than run 0.5 seconds faster on a machine made unstable by heat or by pushing the hardware to its limits.

From what I've read, that CPU could be a Phenom II X4, an Intel Q9xxx or an Intel i7. But which one exactly? None of the benchmarks I've read cover using these CPUs as a VM server.

If you happen to have experience with virtual machine servers, I would greatly appreciate it if you could share some of it.

Thanks very much in advance.
 

ihyagp

Member
Aug 11, 2008
91
0
0
I'd drop a little extra for a dual-socket board and a single Xeon 5410. That way, if/when you need more power, you can just drop in another CPU. You'd need FB-DIMMs, which do put out some heat, but you'd have plenty of room to add more later on (most dual-Xeon boards take 32-64 GB).

Obviously, if you're not in a huge rush, wait until the Xeon 5500s become more available.

Also, VirtualBox might not be the best pick for this. Consider Xen or KVM.
 

Peroxyde

Member
Nov 2, 2007
186
0
76
Just learned something from you: FB-DIMM = Fully Buffered DIMM, and Xeon. I will look them up later for further documentation. Seems expensive, though. Can you roughly estimate the cost?

Does FB-DIMM cost twice as much as DDR2 for the same capacity? How much do a motherboard + Xeon CPU cost?

Originally posted by: ihyagp
Also, VirtualBox might not be the best pick for this. Consider Xen or KVM.
Hmm, this is more worrying. I am going to read more about these two. But can you quickly give the main technical reasons?

 

ihyagp

Member
Aug 11, 2008
91
0
0
Originally posted by: Peroxyde
Just learned something from you: FB-DIMM = Fully Buffered DIMM, and Xeon. I will look them up later for further documentation. Seems expensive, though. Can you roughly estimate the cost?

Does FB-DIMM cost twice as much as DDR2 for the same capacity? How much do a motherboard + Xeon CPU cost?

Go check Newegg. FB-DIMMs are more expensive; the price difference vs. regular DDR2 varies by capacity and speed. They're ECC memory, which you will want for this project. A Xeon 5410 is around $275 IIRC; a board would be $200-300. Don't skimp on hardware that will grant you better stability for something like this, especially if you have people doing software testing, where a hardware error may appear to the user as a software one and send them on a wild goose chase. Let alone multi-hour compile jobs, if they're doing builds on this.

Also, VirtualBox might not be the best pick for this. Consider Xen or KVM.
Hmm, this is more worrying. I am going to read more about these two. But can you quickly give the main technical reasons?

I'm not an expert on this one. The #1 reason, though, is the lack of SMP on the guests: each guest OS will see only one CPU. This would be especially wasteful if you went with an i7, which presents 8 logical cores. It shouldn't be a cause for worry, though.
 

Peroxyde

Member
Nov 2, 2007
186
0
76
Originally posted by: ihyagp
Also, what's your reason for using RAID 0? If I/O is an issue, consider an SSD.

RAID 0 is to try to max out the bandwidth of the SATA2 bus. Since it's risky (no redundancy), I will just create a job to back up the VM images periodically to an external eSATA disk (maybe once a week). It's pretty cheap to combine RAID 0 with 3x 500 GB SATA drives, which gives a total of 1.5 TB of capacity. And I bet you it will beat any SSD. On the other hand, a 128 GB SSD will probably cost a fortune.
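The weekly backup job could be as simple as a cron entry copying the image directory to the eSATA mount (a sketch; both paths here are made up, and the VMs should be shut down or paused while it runs):

```shell
# /etc/cron.d/vm-backup -- weekly copy of the VM images to the eSATA disk.
# Example paths: /var/vm holds the VirtualBox images,
# /mnt/esata is the external disk's mount point.
# Runs Sunday at 03:00 as root.
0 3 * * 0  root  rsync -a --delete /var/vm/ /mnt/esata/vm-backup/
```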

The guest OSes in the VMs I plan to set up don't need more than 1 CPU (although VirtualBox 2.x allows emulating 2 CPUs). However, the host does need a multi-core CPU, and hopefully it will know how to distribute the load among all the working VMs.
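Since the users will connect over Remote Desktop, the guests can run headless with VirtualBox's built-in VRDP server, something like this (a sketch; the VM name and port are made up, and the exact flag spelling varies between VirtualBox versions):

```shell
# Run a guest with no local display; VirtualBox exposes the console
# over VRDP (RDP-compatible), one TCP port per VM.
VBoxHeadless -startvm "WinServer-Test1" -vrdp on -vrdpport 3390 &
# Testers then point their Remote Desktop client at vmhost:3390.
```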

When I was looking for a VM solution, VMware and VirtualBox came up in the search results most of the time. If Xen or KVM were better, I would have expected to notice it.
 

Mogadon

Senior member
Aug 30, 2004
739
0
0
VMware ESXi is available for free these days; you might want to check that out.
 

Peroxyde

Member
Nov 2, 2007
186
0
76
Originally posted by: Mogadon
VMware ESXi is available for free these days; you might want to check that out.

Oh, super cool! I didn't know ESX had a free version. I have invested quite some learning in VirtualBox so far, so I will continue with VBox to keep the project timeline. Then I will evaluate the free ESXi on a spare server and we'll see how it goes. Thanks for the info.
 

somethingsketchy

Golden Member
Nov 25, 2008
1,019
0
71
If you have a high budget ($1100 and up) I would say go for the Core i7, if you want a high-end CPU. That being said, if your budget is fairly limited (less than $1000), then go for a Q9550. While a dual-socket motherboard is very nice, it is considerably more expensive (since you are limited to Xeon processors and FB-DIMM memory). At that rate you might as well go with an i7 and have 8 threads handled at once.

But for the most part, a Q9550 will handle most of the VMs you'll need, unless each VM takes up 100% of a core.
 

mooseracing

Golden Member
Mar 9, 2006
1,711
0
0
For 4-6 VMs I would go for a 2-CPU setup, plus a very good RAID card with cache, using RAID 1 for the OS and RAID 10 for the VMs, with SAS drives. SATA blows for VMs compared to SAS, even with RAID 0.

I would go with more than 16 GB of RAM. I would look at dedicating 2 GB to the host OS and 2-4 GB to each guest.
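The arithmetic behind that recommendation is worth spelling out (a quick sketch using the 4-6 VM figure from the original post):

```shell
# RAM budget: 2 GB for the host OS plus 2-4 GB per guest, 6 guests worst case.
host=2
vms=6
low=$((host + vms * 2))    # every guest at 2 GB
high=$((host + vms * 4))   # every guest at 4 GB
echo "${low}-${high} GB"   # prints "14-26 GB", so 16 GB is already borderline
```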

I would also look at dual NICs if this is all going to be RDP traffic.

My answers are based on using hardware virtualization, not software virtualization, and on the VMs having constant loads, like they are someone's desktops in a business environment.

If this is for home and you are only going to be running 1-2 VMs, the only thing I would worry about is disk I/O, for which I would go SAS no matter what.
 

Peroxyde

Member
Nov 2, 2007
186
0
76
Wow, thanks gentlemen for the new input. Sounds like I am moving from an amateur setup to a more professional one.

Dual NICs make sense. I will look into SAS drives in more detail (I just learned of these thanks to mooseracing).

Now I am confused about CPU + Mobo, two serious choices would be:

Option 1: 1 CPU, a quad-core i7. Is there any motherboard for i7 that allows more than 16 GB of RAM?

Option 2: 2 CPUs (2x Xeon 5410). I don't know which motherboard to pick, but I assume it will allow more than 16 GB.

I don't know hardware well, but Option 2 seems to be more expensive than Option 1. If Option 2 (Xeon) is way more expensive (purchase and maintenance) and will give just a little bit more power, then I think I would favor the i7 option.

A couple more questions.


Q1. Between Xeon and i7, which option is more reliable and more flexible?

Q2. No one seems to recommend the AMD Phenom II X4. Anything wrong with this CPU?
 

xSauronx

Lifer
Jul 14, 2000
19,582
4
81
Originally posted by: Peroxyde

Q2. No one seems to recommend the AMD Phenom II X4. Anything wrong with this CPU?

If you want a lot of performance, it can't touch the i7 or Xeon. It can keep up with some C2Q CPUs, especially with some overclocking.
 

themisfit610

Golden Member
Apr 16, 2006
1,352
2
81
A Dell server may be a great solution. The 1950 series (for 2 HDDs) or the 2950 series (for 6 HDDs) are amazing values, and can easily handle 2 Xeon processors, hardware SAS RAID, VMware ESX, and 32 GB of FB-DIMM memory.

If you need a really high-end system (minus ESX compatibility), the new Mac Pro from Apple is the only 8-core Nehalem-powered system available ATM. It's also pretty cheap ($3200).

~MiSfit
 

ihyagp

Member
Aug 11, 2008
91
0
0
Don't skimp on this. You don't want to throw a random desktop PC at this task and have it fail and take 6 VMs down with it.

The i7 is great, and so is the Phenom II, but you want ECC memory. That means Xeon or Opteron. Again, if you can stand to wait for the Bloomfield-based Xeons to become more available, a Xeon 3500-series will do the job well and not cost too much. Otherwise, get a couple of the new Opterons or Xeon 5400s.

RAID 0 is a mistake. Each SATA channel is its own bus, so dispel any idea of wanting to saturate it. Don't use southbridge RAID for this either, if you're still planning to run Linux. I've had dmraid mirrors break from GRUB updates. mdraid is a little better but eats CPU cycles (a real factor if you're switching VM contexts all the time). Save yourself the headache and get a real RAID controller. SAS is faster than SATA, but you'll need to decide for yourself whether it's worth the extra cost.
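For reference, the Linux software-RAID route being warned against here is just a couple of commands (a sketch; the device names and mount point are examples, and the striping work runs on the host CPU, which is the overhead mentioned):

```shell
# Linux software RAID via mdadm: stripe two whole disks into /dev/md0.
# All chunking and request splitting is done by the host CPU.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0
mount /dev/md0 /var/vm     # hypothetical mount point for the VM images
```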

Dual NICs are great, and many dual-socket Xeon boards come with them. It's probably not worth any extra cost, though; RDP is pretty lightweight.

i7s and their Xeon counterparts use triple-channel memory controllers, so plan on getting 12 or 24 GB of RAM.
 

Peroxyde

Member
Nov 2, 2007
186
0
76
Originally posted by: ihyagp
RAID 0 is a mistake. Each SATA channel is its own bus, so dispel any idea of wanting to saturate it. Don't use southbridge RAID for this either, if you're still planning to run Linux. I've had dmraid mirrors break from GRUB updates. mdraid is a little better but eats CPU cycles (a real factor if you're switching VM contexts all the time). Save yourself the headache and get a real RAID controller. SAS is faster than SATA, but you'll need to decide for yourself whether it's worth the extra cost.

Hi again,

Thanks for the extra information. Looks like my first task is going to be reviewing the budget / quality of the initial plan.

I have tried RAID from the BIOS + a Vista driver. It was fast, but I didn't like the hassle of the driver. Under Ubuntu, I'll pass on dmraid (FakeRAID); it would probably be a driver nightmare. mdadm (Linux software RAID) would be simpler, but probably not recommended for server usage, as you suggested.

You are right, a RAID controller would bring peace of mind. I just looked at Newegg; not cheap. Do you have any model to recommend? Is it still OK to go with a RAID controller + SATA2 drives?
 

themisfit610

Golden Member
Apr 16, 2006
1,352
2
81
If you get a Dell server, you will have a hardware PERC 6/i SAS/SATA RAID controller. Set it and forget it :)

~MiSfit
 

ihyagp

Member
Aug 11, 2008
91
0
0
There really aren't any driver hassles with any of these setups. You have to install a driver; that's it. Any modern Linux distro will see any ARC card just fine. I've had good luck with the 2420SA, but there's got to be a PCIe counterpart by now.
 

mooseracing

Golden Member
Mar 9, 2006
1,711
0
0
Originally posted by: themisfit610
If you get a Dell server, you will have a hardware PERC 6/i SAS/SATA RAID controller. Set it and forget it :)

~MiSfit


Yep, I would recommend this if you are unsure of the hardware to get.

We have two 2950s with PERC 5s and two with PERC 6s; they are good mid-range cards.

If you want to build your own, I would look at Supermicro motherboards, and stay away from desktop crap if you want good performance.