VMWare ESX server (Intel or AMD)

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Hello all, I'm wanting to get rid of a bunch of my computers and focus on putting together a speedy ESX server to run several virtual machines at the same time for various duties (learning new OSes/applications, a LAMP server, torrent server, media streaming server, Minecraft server, etc.).

What I'm trying to figure out is which platform offers better value for an ESX environment. My main concern is I/O speed; I want it to be as fast as possible (planning on using an SSD for the boot drive along with some mirrored drives for storage), as traditionally this is where virtual machines are painfully slow.

So, the FX-8320 looks like a nice option for the price, but would that hold me back a lot over, say, a 2600K (or whatever its Ivy Bridge replacement is)?

Trying to keep the budget to around $1K CAD or less (not including the RAID array).

Any help or suggestions here would be appreciated.

Thanks!
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
There is a reason why AMD has less than 5% of the server market.

Don't be fooled into equating core counts with performance. The whole-system price difference between an Intel and an AMD build is far smaller than the performance delta for virtualization. Get a Hyper-Threaded Ivy Bridge chip.
 

Zstream

Diamond Member
Oct 24, 2005
3,395
277
136
Hello all, I'm wanting to get rid of a bunch of my computers and focus on putting together a speedy ESX server to run several virtual machines at the same time for various duties (learning new OSes/applications, a LAMP server, torrent server, media streaming server, Minecraft server, etc.).

What I'm trying to figure out is which platform offers better value for an ESX environment. My main concern is I/O speed; I want it to be as fast as possible (planning on using an SSD for the boot drive along with some mirrored drives for storage), as traditionally this is where virtual machines are painfully slow.

So, the FX-8320 looks like a nice option for the price, but would that hold me back a lot over, say, a 2600K (or whatever its Ivy Bridge replacement is)?

Trying to keep the budget to around $1K CAD or less (not including the RAID array).

Any help or suggestions here would be appreciated.

Thanks!

Get the cheapest platform and some good storage and you're set.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
I recently completed a new server build. I'm not running ESX; I chose to go with Windows Server 2012 Datacenter with Hyper-V instead. I bought this motherboard and am currently running a single E5-2620 in it (I'll add the second soon) with 64 GB of RAM. For drives, I have the OS volume running on two Samsung 830 SSDs mirrored on the board's SATA ports, and I added an LSI MegaRAID 9261-8i for my data array (running 8 WD Red 3 TB drives).

I was skeptical at first, but I highly recommend a Supermicro server board with IPMI. The board won't cost that much more than a high-end desktop board, you'll get much more expandability, and the IP KVM alone is worth the difference. I'm pretty sure all my components are on VMware's HCL as well.

EDIT: Tweaking the RAID controller settings has given me about 900 MB/s write speed on a single RAID6 volume using the 8 WD drives. If you want to take a larger risk and go with RAID5, you'll get over 1 GB/s.
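
Those figures line up with a quick back-of-the-envelope estimate. Here's a minimal sketch, assuming roughly 150 MB/s of sequential throughput per 3 TB WD Red (an assumed ballpark, not a measured number) and full-stripe writes so parity adds no extra reads:

```python
# Rough sanity check: for large sequential writes that fill whole stripes, a
# parity array streams at roughly (drives - parity drives) x per-drive speed.
# The ~150 MB/s per-drive figure is an assumption; real results depend on the
# controller, cache settings, and stripe size.

def seq_write_estimate(drives: int, parity_drives: int,
                       per_drive_mb_s: float = 150.0) -> float:
    """Estimated streaming-write throughput (MB/s) for a parity array."""
    return (drives - parity_drives) * per_drive_mb_s

print(seq_write_estimate(8, 2))  # RAID6, 8 drives -> ~900 MB/s
print(seq_write_estimate(8, 1))  # RAID5, 8 drives -> ~1050 MB/s, i.e. over 1 GB/s
```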
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Awesome, thanks for the suggestions!

Indy, is there any advantage to using Hyper-V over ESX? What made you choose it for your solution?

I like that motherboard as it gives great expandability, but I fear the build would quickly add up to over $1K after factoring in the rest of the hardware. Hmm, I'll need to price this out.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
Awesome, thanks for the suggestions!

Indy, is there any advantage to using Hyper-V over ESX? What made you choose it for your solution?

Either should work, but I wanted a base OS with direct access to the RAID volume so I could run a media server (Plex) with a little better performance, which is why I went with Hyper-V. In a single-server environment they're probably fairly equivalent in terms of capabilities. Hyper-V is catching up fast and beat VMware to the punch with shared-nothing live migration, for example; I believe ESXi 5.1 has that now.

Keep in mind one very important point -- the free ESXi has a RAM limitation of 32 GB (IIRC), so if you think you will go above that, you would need to pay $495 for the VMware Essentials license to raise that limit. For me, I'm a Microsoft SharePoint engineer and do lots of dev and testing, so it made sense to subscribe to TechNet and just use my Windows Server 2012 Datacenter license, since the major purpose of my server is SharePoint lab and testing work.

I like that motherboard as it gives great expandability, but I fear the build would quickly add up to over $1K after factoring in the rest of the hardware. Hmm, I'll need to price this out.

I can give you a rough idea of what a core system will cost. The most important point is to watch for deals. Here is my main part list with estimated costs:

1. Supermicro board X9DR3-F-O: $375 (Newegg 15% off coupon).
2. LSI MegaRAID 9261-8i: $450 (Newegg 15% off coupon)
3. Initial 32 GB of RAM to start with: $260
4. Corsair HX850: $135 (Newegg 15% off coupon + $10 rebate)
5. Supermicro HSF: $50
6. Xeon E5-2620: $425
7. Two Samsung 830 SSDs for OS: $140
8. Xigmatek Elysium case: $150
9. Icy Dock four 2.5" hot swap bay (OS SSDs): $80 IIRC
10. Two Icy Dock 4 in 3 hot swap cages: $160
11. Cables for RAID card (SCA to SATA): $30
12. Optical drive: I had a spare laying around
13. 8 WD Red 3 TB drives: $1200

Items 2, 3, 6, 7, 9, 10, and 13 are "optional," meaning you can find cheaper alternatives or eliminate them altogether (there's a quick cost tally sketch after this list). Let me explain each one:

2. You could skip the RAID controller and use the board's onboard SCU (8 ports and it comes with SCA to SATA cables) or maybe just find a cheaper array controller. The 9261-8i seems to be a great controller, though.
3. You could start off by going with two 8 GB DIMMs instead (50% cheaper) and just expand RAM later.
6. You could choose a quad core instead of the hex and save about $200 on each CPU
7. You could probably go with cheaper spindle drives here.
9. and 10. You may be OK not using hot swap bays.
13. Lots of options here.
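
To put the budget question in perspective, here's a rough tally of the list above. This is a hypothetical sketch using the estimated prices from this post; the "core" figure simply drops the RAID card, the hot-swap bays, and the drive array, which are among the optional items:

```python
# Hypothetical tally of the part list above, using the rough prices quoted in
# this post. "Core" drops the RAID card, hot-swap bays, and drive array.

parts = {
    "Supermicro X9DR3-F-O": 375, "LSI MegaRAID 9261-8i": 450, "32 GB RAM": 260,
    "Corsair HX850": 135, "Supermicro HSF": 50, "Xeon E5-2620": 425,
    "2x Samsung 830 SSD": 140, "Xigmatek Elysium": 150,
    'Icy Dock 2.5" hot-swap bay': 80, "2x Icy Dock 4-in-3 cages": 160,
    "RAID cables": 30, "8x WD Red 3 TB": 1200,
}
dropped = ["LSI MegaRAID 9261-8i", 'Icy Dock 2.5" hot-swap bay',
           "2x Icy Dock 4-in-3 cages", "8x WD Red 3 TB"]

total = sum(parts.values())
core = total - sum(parts[p] for p in dropped)
print(f"Everything as listed: ${total}")   # ~$3455
print(f"Core system only:     ${core}")    # ~$1565
```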

Here are some tips:

1. You need a power supply with dual EPS connectors (8 pin CPU power) for this board. I chose the Corsair HX850. Fully specced, the online PSU calculators were estimating about 600 W of use, so you don't need to go "huge" here.
2. A standard LGA HSF will probably not work, as you need a "narrow" type. This one will work: http://www.newegg.com/Product/Produc...82E16816101683.
3. The board is extended ATX so you need a big case. I use the Xigmatek Elysium. Note that the upper holes on the board did not have corresponding mounting holes in the Xigmatek case, so I had to use nylon standoffs.

Please feel free to PM me with any questions or we can keep this thread going. :)
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
How important is the Minecraft server?

Given your use cases, your server is I/O limited except for the Minecraft server. Minecraft servers are horribly, horribly bound by single-thread performance. I run the default server with no mods on a 2500K clocked to 4.4 GHz, and with 3 people on the server doing relatively intensive stuff (cattle/chicken/pig farming, or anything involving many entities), it will drag my tick rate down to where it starts dropping below the 60 FPS mark, meaning you'd see increasing lag on the client side. If you use mods it gets 10x worse. The Minecraft server is truly a turd, and I wouldn't hold out for them rewriting it for better performance any time soon.

If you don't care that much about the MC server, you should go ahead and spend most of the budget on I/O. Make sure you have a glut of memory to give all those VMs. If your data doesn't need to be backed up right to the very moment of an HDD failure, I would get a large, fast SSD as your main drive and forgo RAID on that drive. Instead, use that money to get more hard drives for your daily/periodic backups and for media for the Plex server.

Are RAIDs nice to have? Yes. Are they necessary 100% of the time? No. Only get a RAID 1 setup if you need maximum uptime or if daily backups are not frequent enough; otherwise you will run through a $1,000 CAD budget very quickly. A decent RAID card is $400-600 alone. For that much money you could get enough HDD space to have rolling daily backups.

If you wanted RAID for throughput (RAID 0), get a beefy SSD instead, unless you need the throughput on the media drives, where terabytes of SSD space would be too expensive.

EDIT: Either way don't skimp on the RAM
 

Mir96TA

Golden Member
Oct 21, 2002
1,950
37
91
I have run VMware on a Xeon 3110 and a 1055T.
I found that when running multiple OSes, the 1055T had an advantage over the Xeon.
Granted, what I ran was a lab setup, and the 1055T box had 8 GB of RAM vs. 4 GB, so I think having a couple of extra cores and the extra memory did help the 1055T.
Things to look at are the NIC and the disk controller.
And mainly the NIC.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
How important is the Minecraft server?

Are RAIDs nice to have? Yes. Are they necessary 100% of the time? No. Only get a RAID 1 setup if you need maximum uptime or if daily backups are not frequent enough; otherwise you will run through a $1,000 CAD budget very quickly. A decent RAID card is $400-600 alone. For that much money you could get enough HDD space to have rolling daily backups.

Strongly disagree. RAID volumes are for fault tolerance and redundancy and should never be confused with backups. Even with solid RAID components in place, you should ALWAYS back critical data up, preferably every night. Data corruption and malware are mirrored with RAID1 or parity striped with RAID5, 6, etc. Only a good backup can save you from that.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
Great info!

I'm going to do a bake-off between Hyper-V and ESX by re-purposing one of my existing boxes. Once I have a better idea of what I really need performance-wise, I'll sell another 30 BTC or so and buy myself a proper server with a RAID controller.

In the meantime, I have this to work with. Do you think it's a good idea to buy a decent RAID controller card now that I can eventually carry over to my future server, or will I be able to use the onboard RAID with Hyper-V or ESX?

Quad-core A6-3670K running @ 3.4 GHz
Corsair A50 heatsink
Sapphire PURE Platinum A75 (PT-A8A75) motherboard, Socket FM1
8 GB of RAM (will toss in another two 8 GB sticks to bring me up to 24 GB total)
SATA-3 Kingston 120 GB SSD (SandForce based)
3x 2 TB WD Green HDDs
XFX 550 W power supply (Seasonic)
Corsair Carbide 200R case

Thanks again!


To answer some of the other questions in the thread:

-A large Minecraft server is not totally necessary; I just have a few friends I play with who were interested.
-I am not treating the planned storage setup as a backup solution; I learned that lesson a long time ago. Encrypted data uploaded to cloud storage services is my preferred method these days.
-I'm mostly focusing on good performance. I know the above setup will not provide great performance, but if it lets me get away with a Plex setup while running other VMs doing security vulnerability assessments, Bitcoin mining (with ASICs), and other tasks, I would be very happy.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
I don't believe you're going to be able to use onboard RAID with VMware; Hyper-V is more forgiving and you'd likely be OK. Remember, though, that if you want to move your RAID volume to another system, a RAID card is the easiest way; otherwise you'd need to ensure that both onboard chipsets are exactly the same. That, or you could use software RAID, which is easily portable, but obviously performance won't be as good.
 

holden j caufield

Diamond Member
Dec 30, 1999
6,324
10
81
Here is a decent setup that some people run for a VM lab:

Cheap CPU/mobo combo from Microcenter; pick a mobo/CPU combo with VT-d.

Install ESX off of a USB drive; it's not going to run any faster off of an SSD.

Use an SSD as a datastore to house your main VMs; this will fill up fast, so you'll need some kind of RAID controller(s).

For me, the beauty of VT-d is that I have my main ESX server and my main game machine in one box.

Pass through a video card and sound card, and now you're basically running a native machine alongside a vSphere client.

Or run VMware Workstation and run ESX inside of it; you can nest VMs with the newer chips. I think the reason everyone goes with Intel at colos and in their racks is that it consumes less power, and when you run racks of the stuff 24/7, that might mean thousands in savings.

I like Supermicros, and 64 GB and IPMI are great, but $439 is a bit steep. I got my CPU/mobo combo for $160 after rebate at Microcenter.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
I like Supermicros, and 64 GB and IPMI are great, but $439 is a bit steep. I got my CPU/mobo combo for $160 after rebate at Microcenter.

I waited until Newegg sent me 15% off coupons on the server boards and RAID controllers before I bought. So I only ended up paying $375 for that board, which was about the price of one of the high-end LGA 2011 desktop boards. I WISH I could've gotten 15% off on the Intel CPU though.
 

Madpacket

Platinum Member
Nov 15, 2005
2,068
326
126
I don't believe you're going to be able to use onboard RAID with VMware; Hyper-V is more forgiving and you'd likely be OK. Remember, though, that if you want to move your RAID volume to another system, a RAID card is the easiest way; otherwise you'd need to ensure that both onboard chipsets are exactly the same. That, or you could use software RAID, which is easily portable, but obviously performance won't be as good.

Yeah that's what I was worried about. I should just splurge on a decent hardware RAID controller card.

Here is a decent setup that some people run for a VM lab:

Cheap CPU/mobo combo from Microcenter; pick a mobo/CPU combo with VT-d.

Install ESX off of a USB drive; it's not going to run any faster off of an SSD.

Use an SSD as a datastore to house your main VMs; this will fill up fast, so you'll need some kind of RAID controller(s).

For me, the beauty of VT-d is that I have my main ESX server and my main game machine in one box.

Pass through a video card and sound card, and now you're basically running a native machine alongside a vSphere client.

Or run VMware Workstation and run ESX inside of it; you can nest VMs with the newer chips. I think the reason everyone goes with Intel at colos and in their racks is that it consumes less power, and when you run racks of the stuff 24/7, that might mean thousands in savings.

I like Supermicros, and 64 GB and IPMI are great, but $439 is a bit steep. I got my CPU/mobo combo for $160 after rebate at Microcenter.

Very interesting. Looking at the available hardware choices on Wikipedia, it appears there's now a fairly decent selection of hardware that supports IOMMU. Perhaps, with careful consideration, I could build one box to rule them all (VM + gaming behemoth). I need to learn more about this.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
Quad-core A6-3670K running @ 3.4 GHz
Corsair A50 heatsink
Sapphire PURE Platinum A75 (PT-A8A75) motherboard, Socket FM1
8 GB of RAM (will toss in another two 8 GB sticks to bring me up to 24 GB total)
SATA-3 Kingston 120 GB SSD (SandForce based)
3x 2 TB WD Green HDDs
XFX 550 W power supply (Seasonic)
Corsair Carbide 200R case


I know the above setup will not provide great performance, but if it lets me get away with a Plex setup while running other VMs doing security vulnerability assessments, Bitcoin mining (with ASICs), and other tasks, I would be very happy.

One thing I forgot to talk about was the server I recently replaced (good timing too, because it died yesterday and I still hope to resuscitate it). It had the following specs:

Q6600
Gigabyte P35 motherboard
8 GB RAM
Two 200 GB drives mirrored for OS (Windows 2008 R2 Enterprise)
Six 750 GB WD Black drives in RAID5 array (onboard RAID)
600 W OCZ power supply

That server ran 24/7 from August 2007 on, and at one point I had 8 VMs on it (it regularly ran 5 to 6). I eventually dialed that down, and by the end it was running 2 VMs, a Ventrilo server, a TeamSpeak 3 server, and Plex. I have Plex set to transcode since I'm streaming to Roku devices, and it works well with that config. So I think you're probably OK.

Note, though, that the disk performance was abysmal and the bad thing is that you don't get a lot of options to tweak with an onboard RAID controller. Therefore, it might be worth your while to get a good RAID card which you can carry over to your next system.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
One thing I forgot to talk about was the server I recently replaced (good timing too, because it died yesterday and I still hope to resuscitate it). It had the following specs:

Q6600
Gigabyte P35 motherboard
8 GB RAM
Two 200 GB drives mirrored for OS (Windows 2008 R2 Enterprise)
Six 750 GB WD Black drives in RAID5 array (onboard RAID)
600 W OCZ power supply

That server ran 24/7 from August 2007 on, and at one point I had 8 VMs on it (it regularly ran 5 to 6). I eventually dialed that down, and by the end it was running 2 VMs, a Ventrilo server, a TeamSpeak 3 server, and Plex. I have Plex set to transcode since I'm streaming to Roku devices, and it works well with that config. So I think you're probably OK.

Note, though, that the disk performance was abysmal and the bad thing is that you don't get a lot of options to tweak with an onboard RAID controller. Therefore, it might be worth your while to get a good RAID card which you can carry over to your next system.

Your six-drive RAID5 array on onboard RAID is exactly why your performance was awful with motherboard built-in RAID functionality. RAID5 only works well with a big, fat write cache in front of it. That same system would have been fine with mirrors (RAID 1) or striped mirrors (RAID 1+0). With a 6-disk parity (RAID 5) array, any write that does not complete an entire stripe becomes 2 reads, a parity calculation, then 2 writes. That will be horrible without a write cache to absorb it, and even with one it is not ideal for many small writes.
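
To put numbers on that write penalty, here's a minimal sketch using the usual rule-of-thumb penalties (RAID 0 = 1, RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6) and an assumed ~80 IOPS per 7200 RPM spindle. Uncached small random writes are exactly where onboard RAID5 falls apart:

```python
# Rule-of-thumb small-random-write penalty: each host write costs this many
# disk I/Os (RAID5: read data + read parity + write data + write parity = 4).
# The ~80 IOPS per 7200 RPM spindle is an assumed ballpark.

WRITE_PENALTY = {"RAID0": 1, "RAID1/10": 2, "RAID5": 4, "RAID6": 6}

def random_write_iops(level: str, drives: int, iops_per_drive: int = 80) -> float:
    """Effective small random-write IOPS with no write-back cache."""
    return drives * iops_per_drive / WRITE_PENALTY[level]

for level in ("RAID1/10", "RAID5"):
    print(level, random_write_iops(level, drives=6))  # 6 disks: 240.0 vs 120.0
```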

Integrated RAID functionality is perfectly fine; you just have to understand what is going on. If you had made it a 6-disk 1+0, you would have traded 1.5 TB of volume for a massive performance and reliability increase (RAID5 rebuild time would have been atrocious for you, leaving your data exposed in the meantime).

At the consumer level, it's typically cheaper and faster to throw more money at drives (or increase the capacity of your drives to offset it) and use mirrors instead of parity.
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
Your six-drive RAID5 array on onboard RAID is exactly why your performance was awful with motherboard built-in RAID functionality. RAID5 only works well with a big, fat write cache in front of it. That same system would have been fine with mirrors (RAID 1) or striped mirrors (RAID 1+0). With a 6-disk parity (RAID 5) array, any write that does not complete an entire stripe becomes 2 reads, a parity calculation, then 2 writes. That will be horrible without a write cache to absorb it, and even with one it is not ideal for many small writes.

Yes, I know exactly why performance was bad. The lack of cache was a killer. I'm thinking that if the power supply is the only thing wrong with the old server, I might rebuild it, add a RAID card, and maybe swap out the board and CPU. I wouldn't go big with a server board for this one; probably just a decent desktop board, a quad-core i7, and 16 GB of RAM to start. I just won't tell the wife. :D

Integrated RAID functionality is perfectly fine; you just have to understand what is going on. If you had made it a 6-disk 1+0, you would have traded 1.5 TB of volume for a massive performance and reliability increase (RAID5 rebuild time would have been atrocious for you, leaving your data exposed in the meantime).

Capacity and data protection were more important for me at the time, balanced with cost. It still had adequate performance for everything I needed, but when I built my new server, I was determined to use a dedicated card to improve performance and more importantly, get more bells and whistles such as online capacity expansion, additional RAID levels, etc. LSI has several good options under $500, as does Intel (whose cards are rebadged LSI cards).

At the consumer level, it's typically cheaper and faster to throw more money at drives (or increase the capacity of your drives to offset it) and use mirrors instead of parity.

And it is also less complex in many cases.
 

Mir96TA

Golden Member
Oct 21, 2002
1,950
37
91
For RAID 5, I will not do it without an Intel RS-series or LSI controller.
With RAID 5 you need to do complex calculations on the fly!
RAID 0/1 may be cheaper and faster :D
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
For RAID 5, I will not do it without an Intel RS-series or LSI controller.
With RAID 5 you need to do complex calculations on the fly!
RAID 0/1 may be cheaper and faster :D

RAID0 is way too risky (no fault tolerance) and RAID1 means you lose 50% of your total capacity.
 

Mir96TA

Golden Member
Oct 21, 2002
1,950
37
91
Because a half-decent RAID 5 card requires serious dough ($$$).
I usually take a snapshot and copy the VM somewhere else.
I really don't use RAID because it is an expensive setup.
However, I do have the host machine on an SSD :D
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
Losing 50% of your total capacity doesn't mean jack when it is cheaper to buy the capacity back up than it is to buy a controller that can handle parity decently (and you'll get all the benefits of 1+0 as well).

Parity RAID makes very, very little sense for consumer setups unless it truly is archival storage with no performance needs (a situation which you claimed you had, but obviously didn't, because it didn't meet your needs).
 

IndyColtsFan

Lifer
Sep 22, 2007
33,655
687
126
Losing 50% of your total capacity doesn't mean jack when it is cheaper to buy the capacity back up than it is to buy a controller that can handle parity decently (and you'll get all the benefits of 1+0 as well).

Let's do a quick analysis. Let's say I want 18 TB of usable space using 3 TB drives. For a RAID 10 setup, that's 12 drives and at $150 each, that's $1800. For a RAID 5 setup, that's 7 drives for a total of $1050. You can easily find decent RAID cards for under $750 (the price difference). Even if you want RAID 6, you still can find a decent LSI or Intel RAID card for well under $600.

Yes, the RAID 10 is faster. However, with it comes increased power usage, the need for a larger case, and, in the scenario above, more expense. Also, good luck trying to find a consumer-level board with 12 SATA ports for that RAID 10. You won't, so at a minimum you're going to need additional controller cards to connect them, and that's an expense I didn't even add.
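
For anyone who wants to play with the numbers, here's a quick sketch of that comparison, assuming the same $150 per 3 TB drive as above (drive counts are just usable capacity divided by drive size, doubled for mirrors or plus one for single parity):

```python
import math

# Drive count and disk cost to hit a usable-capacity target, using the
# assumptions from this post: 3 TB drives at roughly $150 each.

def drives_needed(usable_tb: float, drive_tb: float, level: str) -> int:
    data_drives = math.ceil(usable_tb / drive_tb)
    return 2 * data_drives if level == "RAID10" else data_drives + 1  # RAID5

for level in ("RAID10", "RAID5"):
    n = drives_needed(18, 3, level)
    print(f"{level}: {n} drives, ${n * 150} in disks")
# RAID10: 12 drives, $1800 vs. RAID5: 7 drives, $1050 -> $750 difference
```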

Parity RAID makes very, very little sense for consumer setups unless it truly is archival storage with no performance needs (a situation which you claimed you had, but obviously didn't, because it didn't meet your needs).

Are you referring to me? If so, can you please quote where I said it didn't meet my needs? I'm really stumped by this remark, because if you're referring to my old server, it certainly did meet my needs. If I gave the impression otherwise, I didn't mean to. Performance was bad, but it never really affected the core usage of the machine. The only real reason I built a new server was that my old one was maxed out on RAM and I needed more capacity. I also intend to re-architect my entire network and have all PCs backed up to the server at some point, which is why I have 18 TB of usable space. And yes, I did want to correct the disk performance issues in this new box.

I don't think we're discussing "normal" consumers here. I'm definitely not the typical consumer, and I understand that. I wouldn't advise Joe Public to spend $500+ on a RAID card for his HTPC because there are far more sensible and cheaper solutions available to him. He wouldn't likely need 18 TB of space either. I don't think the OP is a typical consumer either.

RAID 0/1 may be cheaper when it comes to cost!

See above. It definitely CAN be cheaper, but it depends on many factors. For Joe Public, yeah, it probably doesn't make sense to drop $500+ on a RAID controller.

For RAID 5, I will not do it without an Intel RS-series or LSI controller.
With RAID 5 you need to do complex calculations on the fly!
RAID 0/1 may be cheaper and faster :D

:D I have the LSI MegaRAID 9261-8i, which Intel rebadges as one of their RS series controllers. I paid like $440 for it and performance seems pretty good.

At any rate, I think we're veering off topic here and need to get back to the topic at hand. OP, have you looked at the Xeon E5-2620? Lots of cores for a decent price.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
You're seriously suggesting a $450 RAID card for a media-streaming HOME SERVER? Who cares about redundancy for Blu-ray rips?

You want him to spend twice as much on a RAID card as on the processor? You could literally build another whole physical server to handle additional tasks for the cost of that RAID card alone. Think about that.

Home server needs =/= enterprise server needs

EDIT: If you use this server for your livelihood, or the sites you are going to host are mission critical, then follow his advice on RAID; it would easily be worth the money. If not, it is over-engineering for a very minuscule benefit.
 