
ESX server build - fastest 1U possible

jordanl17

Junior Member
Apr 22, 2007
21
0
0
1 SUPERMICRO SYS-1026TT-TF 1U Rackmount Barebone Server
2 LSI MegaRAID SATA/SAS 9260-4i 6Gb/s PCI-Express 2.0 w/ 512MB onboard memory RAID Controller Card, Kit
12 Kingston 4GB 240-Pin DDR3 SDRAM DDR3 1333 ECC Registered Server Memory Model KVR1333D3D4R9S/4G
4 Intel Xeon L5640 Westmere 2.26GHz LGA 1366 60W Six-Core Server Processor Model B
8 OCZ Vertex 2 OCZSSD2-2VTX100G 2.5" 100GB SATA II MLC Internal Solid State Drive (SSD)

This is a 2-node 1U chassis; I will build a RAID 10 container out of the 4 drives per node. I picked low-wattage CPUs to keep the system cool and power efficient, plus with only 1 power supply for both nodes, I want to keep the load light.
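
Rough math on usable space per node (quick Python sketch; drive sizes are the nominal 100GB, actual formatted capacity will be a bit less):

Code:
# Back-of-the-envelope usable capacity per node (nominal GB, not formatted GiB).
DRIVE_GB = 100
DRIVES_PER_NODE = 4

def raid10_usable(n_drives, size_gb):
    # RAID 10 mirrors pairs, so half the raw capacity is usable.
    return n_drives * size_gb // 2

def raid5_usable(n_drives, size_gb):
    # RAID 5 loses one drive's worth of capacity to parity.
    return (n_drives - 1) * size_gb

print("RAID 10:", raid10_usable(DRIVES_PER_NODE, DRIVE_GB), "GB")  # 200 GB
print("RAID 5: ", raid5_usable(DRIVES_PER_NODE, DRIVE_GB), "GB")   # 300 GB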

What are people's thoughts on using this as a killer ESX server? (Both nodes will have their own ESX license.)

We are too small of a company to afford a good quality SAN, but we can afford $10,000 to build the above server.

12 cores per node!!

any thoughts?

also, is there a better place to post this build?
 
Last edited:

jordanl17

Junior Member
Apr 22, 2007
21
0
0
Yes, we've only got a few units open in our one rack, and I'd like to keep some space for the future. Plus, I looked at Supermicro's 2U options; I think this 1U is very impressive. What are your thoughts on the concept of the build?
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
I think I would rather have a single quad-socket system than 2 dual-socket systems for virtualization. When you partition up your memory space like that, provisioning the proper resources for each VM node can be tricky. Of course, if your VMs are relatively static or homogeneous, it's not as big of an issue.

Is 200GB going to be enough for your VMs? I would also question the wisdom of using SandForce drives in a mission-critical box. They're just not proven enough yet, IMHO.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
1) SATA = crap for ESX. Even the "sheer speed of SSD" doesn't mean much.
2) 200GB (after RAID 10) = high likelihood of not having enough space.
3) The memory:storage ratio seems off to me, especially with the limited IOPS that SATA can handle.
4) Do you need 12 cores per node? Again, the cores:storage ratio would concern me.

For $10K I could build you 2 Dell R610s that would come with a 3-year warranty and likely comparable or better performance than that gear.

Without any idea of what load you're putting on these machines, you can't even get a good recommendation for the build.

I already run 12 VMs on 2 x 4-core 2.4GHz Nehalems with 12GB of RAM. My needs might not be the same as yours.

**edit**
Be 100% sure that ESX even has drivers for your hardware. There is no "just install the drivers": it works or it doesn't, and VMware will not help you with that.
 
Last edited:

jordanl17

Junior Member
Apr 22, 2007
21
0
0
So is ESX's file system just not optimized at all for SATA SSDs? Will it bottleneck the speeds so much that it's just not worth it?!

Are you saying that I've got too much processor power and not enough storage space? As in: I can run lots of VMs with 12 cores per node, but there's no way 200GB will be enough. Makes sense to me. Most of our servers that will be virtualized are Win2003, and user data is stored elsewhere. I think I can squeeze 3 or 4 VMs onto 200GB.

So 15K SAS in RAID 10 will outperform SSD in RAID 10 under ESX?!? It just seems crazy to me. I'm sure you have a lot more experience than I do, but I don't get it!!!

ESX has built-in drivers for all this hardware.
 
Last edited:

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Well, I would need to look up and test the SSD performance compared to SAS. Many servers that have SATA/SAS run the SAS controllers in SATA encapsulation mode, which reduces performance, and most SATA disks don't have the controller logic to handle as many transactions at once. On SSDs you also have to worry about wear leveling. Is the 100GB disk the enterprise model? (I am not as well versed on SSDs yet, as we haven't had the need for them; the SAN keeps up fine even on magnetic disks.)

Yes, your power-to-storage ratio is out there. I personally find that many VMs run best with a single vCPU unless there is a true need for multiples. With a typical 2k3 boot disk taking 20-30GB, you could run out of disk well before you overloaded RAM or CPU.
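
To put rough numbers on that (quick sketch; the per-VM disk/RAM/consolidation figures below are my own assumptions, not measurements from your environment):

Code:
# How many typical Win2k3 VMs fit per node before each resource runs out?
# Assumptions: ~25 GB disk and ~2 GB RAM per VM, 1 vCPU each, ~4 VMs per core.
DATASTORE_GB = 200      # 4 x 100 GB in RAID 10, per node
RAM_GB = 24             # per node in the original spec
CORES = 12              # two L5640s per node

VM_DISK_GB = 25
VM_RAM_GB = 2
VMS_PER_CORE = 4

limits = {
    "disk": DATASTORE_GB // VM_DISK_GB,
    "ram": RAM_GB // VM_RAM_GB,
    "cpu": CORES * VMS_PER_CORE,
}
for resource, count in limits.items():
    print(f"{resource}: ~{count} VMs")
# disk: ~8, ram: ~12, cpu: ~48 -> disk is the first wall you hit.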
 

jordanl17

Junior Member
Apr 22, 2007
21
0
0
OK, I agree with what you said; my original spec was too powerful with not enough storage. How about this: I set up the 4 100GB drives in RAID 5, which will give me 300GB, and then I'll switch from 2 six-core CPUs to 2 quad-core CPUs. I'll keep the RAM at 24GB, because that doesn't cost a lot.

agree?
 

Chapbass

Diamond Member
May 31, 2004
3,147
96
91
Don't have much to contribute to this thread..but...

Holy crap, April 2007, and half of your posts are in THIS thread?

Lurker alert!
 

Chapbass

Diamond Member
May 31, 2004
3,147
96
91
Okay, I WILL contribute something after all... but in the form of a question:

What are these being used for, and how many VMs are you planning on running on this machine? Because even at 300GB, something tells me you're going to be mad at that later on... but it all depends on what you're using it for.
 

jordanl17

Junior Member
Apr 22, 2007
21
0
0
Ha! I guess I am a lurker... this is a pretty big purchase for me, I don't want to mess it up.
We only have about 9 servers in the entire company, so I'm not talking about a huge server farm here.
I will be P2V'ing a couple of older Windows 2003 terminal servers; those will be the main VMs in each node. Other VMs may be a domain controller, maybe a print server, and probably another server to run our BlackBerry Enterprise Server.

so you can see, not many servers.

How about this as a plan: I get the LSI RAID card that has 4 internal ports and 4 external ports, so I can add an external array later... thoughts?

Edit: how about not using the 2-node 1U server, and instead using two 1U servers? That would give me redundant power supplies AND 8 2.5" drive bays per server. I could throw 8 100GB SSDs in each server; RAID 10 would give me 400GB, or RAID 5 700GB... but 16 SSDs total would be really expensive.
 
Last edited:

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Here is the thing: I will warn you that vCenter Server will not work all that well with DAS (direct-attached storage). Migrations will need to move from disk to disk, and they move through the service console, so if a server needs to go down it can take hours to move the VMs. Also, you don't *need* SSD; depending on what you're doing, magnetic disks may exceed your needs. 10K or 15K 600GB drives are not that expensive and would give you 1.2TB in RAID 10 or 1.8TB in RAID 5.

You still haven't told us what the machines are really doing. I have 12 VMs running on two 1U machines with a quad-core and 12GB of RAM each, on a SAN, and the CPUs basically sit at 5% all day; RAM is my limitation at the moment at that location. Those are domain controllers, mail servers, and file servers.
 

modestninja

Senior member
Jul 17, 2003
753
0
76

None of those server types (DC, print server, BES, terminal servers) are, in my experience, limited by disk when you're using magnetic disks. Really, the only ones I've personally seen be disk-limited (I'm sure there are others, but I don't work with that many different types) are high-volume database servers, which I'd never put on a VM in the first place.
 

SnOop005

Senior member
Jun 11, 2000
932
0
76
I will chime in with my 2 cents. I've deployed a Citrix XenServer farm for the place I work at, and my budget was not all that high either.

1. Given what you said about the 9 servers (BES, print, DC, terminal), these do not seem like high-I/O or resource-intensive workloads that require SSDs or even 15K RPM hard drives. How many users are you currently serving?

2. Virtualizing 9 servers in one box is a double-edged sword. Yes, you will save tons of space and power, but you are also riding everything on 1 server; in the event of any failure, all 9 of your servers will be down. I would really look into some redundancy for this setup. You're spec'ing out enough hardware to spread across 2 or 3 servers: you have 48GB of RAM and four 6-core CPUs, which gives you 24 physical cores plus another 24 logical cores because the L5640 has Hyper-Threading, so you will have a grand total of 48 usable CPUs under ESX. You have to remember that not every server will be 100% utilized or require that much computing power; ESX will dynamically allocate CPU resources to whichever server needs them at any given time.

Is there no way to make more rack space? I would highly recommend splitting this configuration into 2 or 3 servers for load balancing, and then, if possible, implementing shared storage (SAN, NAS, etc.) so you can take advantage of the HA/failover features that are built into ESX.
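
For reference, the CPU math on that spec (quick sketch, going off the 4 x L5640 in the parts list, i.e. 2 sockets per node):

Code:
# Logical CPU count ESX would see across the whole chassis, and per node.
SOCKETS_TOTAL = 4          # 2 per node in the 2-node 1U chassis
CORES_PER_SOCKET = 6       # Xeon L5640
THREADS_PER_CORE = 2       # Hyper-Threading

physical_cores = SOCKETS_TOTAL * CORES_PER_SOCKET       # 24
logical_cpus = physical_cores * THREADS_PER_CORE        # 48
per_node = logical_cpus // 2                            # 24 per ESX host

print(f"{physical_cores} physical cores, {logical_cpus} logical CPUs "
      f"({per_node} per node) for ~9 mostly idle servers")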
 
Last edited:

jordanl17

Junior Member
Apr 22, 2007
21
0
0
We currently have 2 terminal servers (Windows 2003, NOT Citrix) for load balancing; at any given time I will have about 25 users on each server.
The current terminal server hardware is:
(Per server) 2 dual-core Opteron 240s, 4GB RAM, RAID 5 with three 36GB 15K drives. Pretty sweet setup in its day. You might be saying, "hey, that's plenty powerful to handle 25 users" - I know I need to spend some time optimizing the servers, BUT we are going to migrate to Windows RDS 2008 R2 in the future.
So I figured phase 1: buy the hardware, create the ESX install, and migrate the current terminal servers so they can stretch their legs.
Phase 2: migrate to Windows Server 2008 R2 RDS.

I have now (after reading all the posts in this thread) settled on getting two 1U servers (each with redundant power supplies). Each server will have 2 quad-core CPUs (because we can afford 2 sockets) and 24GB of RAM, and I'd like to fill all 8 2.5" drive bays with 147GB 15K RPM drives. (So that's down from the power/cost of 2 six-core CPUs, and I'll use magnetic drives.) Do you think an 8-drive 147GB RAID 5 is best, or RAID 10? (I know that's its own topic!)

How does that sound?
Really, I don't need all the HA features of a full VMware environment. If I have to manually move a VM it's not a big deal.

Thanks everyone for all the help.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
"Manually move a VM" You might want to test that in a test environment first... The Service consoles tend to move at a max of about 25meg a second due to VMWare's limits. Moving 500gig takes a long time. Without vCenter I recall it is a painful process to move since it won't do machine to machine (easily.)

Anyway, what is the load with your 25 users? Does the machine crawl, or do users not even notice? One thing to look at is that when you buy from a vendor, most machines are 'upgradeable' for the life of the machine, so you could start with one 6-core and 24GB of RAM and later call up Dell / HP / IBM and get the CPU kit to take it to 12 cores and 48GB. Most people look at the vendor price and cry long and hard that you can get it cheaper, but they miss the fact that vendors assume the warranty on user-added parts that came from them. So it might cost $20 more for a 4GB memory module, but Dell will replace it within the server warranty. To give you an example, I had a 4-hour-response replacement done on an "additional" memory module in one of my servers; I told them I bought it from Dell but it wasn't original, and a replacement showed up hand-delivered in 45 minutes.

So, back to the thread. I personally would look at something like an R610 from Dell. It looks like it would meet your needs easily and be well within your budget. The R610 can take 6 2.5-inch SAS drives, and the server has an internal USB port for USB sticks, so you can install ESXi on a 1GB memory stick and later run the machine diskless if you ever felt the need to.

For about $5,400 so far you can get a quad-core X5650, 24GB of RDIMMs, and 6 x 10K 146GB drives. If you don't have a business account with them, scour the web for Dell coupons; many times you can find a good deal. This is all off-the-website pricing. If you call them and set up an account they will often give you a discount right from the start, and if you can convince the company to buy desktops from them as well, they will give you some really nice deals. Sadly, I am bound by a nondisclosure agreement and can't really tell you my pricing... I will say: "well worth the time spent."

Buy VMware Essentials. It is worth the $1,000: up to 3 servers (ESX or ESXi), 2 sockets per server, and it includes vCenter, basic support, vStorage, etc. It only costs about $250 a year to maintain the subscription afterwards, and it vastly eases management of the system.
 
Last edited:

Bashbelly

Member
Dec 12, 2005
111
0
0
I can definitely say from experience: MORE RAM. Those dual quad-core boxes are so overpowered on the CPU side; like everyone else has said, normal servers end up consuming less than a gig of RAM each, and you still run into the brick wall known as "DAMNIT, I NEED MORE RAM". The other major con I see is that you are on local storage - backing up your VMs and the downtime per host are going to hurt. I highly recommend getting an iSCSI SAN if at all possible, something like a Dell EqualLogic (so damn easy to configure and use). For backup I can recommend vRanger Pro - good stuff and cheap. Oh, and don't forget to get as many Ethernet ports as possible (6+ recommended!).

Lastly, if at all possible, put vCenter on its own box.
 

jordanl17

Junior Member
Apr 22, 2007
21
0
0
OK, I'll put in 48GB of RAM per server, and I'll stick with the 2 GbE ports for now (I can add more later).

I'm going to be spending about $6,000 on hard drives for both of these servers. Can I get my hands on an EqualLogic for around that, or maybe $10,000? I know iSCSI is the way to go here, but local storage is just cheaper!

-Jordan
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0

You should have more than 2 GbE ports; normally you use 2 or more just for the Ethernet vSwitch. You might be able to go with SAS shared storage... I would need to look at that one.

If you don't need 1U and can afford 2U, your disk cost should go down because you can use more, smaller disks. Something like an R710 can take 8 drives, so just under 1TB with 8 146GB drives, for example.

I'm personally not convinced you need 48GB of RAM per machine with your current requirements.

Those servers also come with 4 GigE iSCSI offload ports.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Something like this:

http://www.dell.com/us/en/business/storage/pvaul_md3000/pd.aspx?refid=pvaul_md3000&s=bsd&cs=04

That would work for shared SAS. The MD3000 is just a disk enclosure that runs about $5,500 (add disks). It has four SAS ports and can be set up redundantly to 2 servers or single-channeled to 4 machines, and you export LUNs via its interface.

Buying something like this changes the design of the servers a bit. For example, you can put ESXi on a 1GB memory stick to boot each machine diskless and keep the VMFS on the shared DAS. The enclosure also has 15 disk slots, so you have room to grow.
 
Last edited:

jordanl17

Junior Member
Apr 22, 2007
21
0
0
I'll stick with internal storage for now. If I use Backup Exec 2010's ESXi backup feature, that will give me enough availability for my needs: I can do nightly backups of my VMs and then restore somewhere else if a host fails. That's all I need, and it's pretty cheap.

I'm down to just a few things:
1) Should I get the Adaptec 5805 or the LSI 9260 (both 8-port)? The Adaptec has its connectors on the back of the card, which will work best with the chassis I'm looking at.

2) I'm going to get two 1U servers; what hard drive configuration do I use? 8 10K 300GB drives, or 8 15K 147GB drives for performance? Should I do RAID 5 or RAID 10? Are 8 300GB 10K drives in RAID 10 better than 8 147GB 15K drives in RAID 5?
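
Here's the rough math I've been doing on those two options (sketch only; the per-spindle IOPS numbers are the usual ballpark figures for 10K/15K drives, and the RAID 5 write penalty of 4 / RAID 10 penalty of 2 are the textbook values):

Code:
# Rough capacity/IOPS comparison of the two 8-drive options.
# Ballpark: ~125 IOPS per 10K spindle, ~175 per 15K spindle.
def usable_gb(n, size_gb, level):
    return (n - 1) * size_gb if level == "raid5" else n * size_gb // 2

def write_iops(n, per_drive, level):
    penalty = 4 if level == "raid5" else 2
    return n * per_drive // penalty

options = [
    ("8 x 300GB 10K RAID 10", 8, 300, 125, "raid10"),
    ("8 x 147GB 15K RAID 5 ", 8, 147, 175, "raid5"),
]
for name, n, size, iops, level in options:
    print(f"{name}: {usable_gb(n, size, level)} GB usable, "
          f"~{n * iops} read IOPS, ~{write_iops(n, iops, level)} write IOPS")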

I'm learning a lot about ESX - can someone recommend a good book for me to read? I've really underestimated its power. The hardware I've spec'ed out (8 cores / 48GB RAM) can house LOTS of VMs, so, like was said earlier, I will want a lot of HD space; that makes perfect sense now. 8 146GB 15K drives in RAID 5 gives me over 1TB of storage. Not a bad starting point, AND by the time I get close to filling that with VMs, we'll be buying a dual-port 10GbE iSCSI box!!

Thanks for the advice thus far. It's been great.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Remember that you will need an ESXi subscription (vStorage is not free) to use the Backup Exec 2010 module. The other issue is that DAS will need to be accessed via NBD, which is limited to about 25MB a second; I needed the SAN option to hit decent speeds.
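
To put that 25MB/s limit in perspective (rough sketch, nightly-backup-sized VMs picked just as examples):

Code:
# How long a VM copy takes over the service console / NBD at ~25 MB/s.
NBD_MB_PER_SEC = 25

for vm_gb in (30, 100, 500):
    hours = vm_gb * 1000 / NBD_MB_PER_SEC / 3600
    print(f"{vm_gb} GB VM: ~{hours:.1f} hours")
# 30 GB ~0.3 h, 100 GB ~1.1 h, 500 GB ~5.6 h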
 

jordanl17

Junior Member
Apr 22, 2007
21
0
0
Imagoon,

What are your thoughts on a used/refurb MD3000i, w/
Hard Drives: 15 x Dell 600GB 15K SAS Hot-Plug Hard Drives
Controllers: 2 x Dual-Port Gigabit Ethernet Controllers

It can be had for about $12,000, which is doable for me, as I'm already about to pay $6,000 for DAS. One big question for me: I'm having trouble believing that my VMs will operate as fast over 2Gb worth of Ethernet. What kind of performance hit will I take, especially if I end up with 5 VMs on the iSCSI box? I just don't think it will run as fast as direct storage.
Am I wrong?

I must be wrong - everyone is doing the iSCSI thing. Is 2Gb enough? Does the MD3000i use all 4 ports and fall back to 2 ports if a controller fails?
 

SnOop005

Senior member
Jun 11, 2000
932
0
76

There are a couple of issues you need to be aware of. 1. In order to bond the gigabit Ethernet ports on the MD3000i to fully use the 2Gb, you will need a switch that supports LAG (802.3ad). 2. With 2GbE, your maximum sustained transfer rate is about 250MB/sec, which I think 15 x 15K SAS drives will easily surpass. That brings up another question: will any of your servers be doing intensive I/O that requires 15K drives? You're giving up usable space for speed, and at the same time you need to consider scalability. You have a total of 9TB of storage; if you go with a RAID 10 setup you will end up with half of that. 4.5TB of usable space may be enough for today, but also consider the possibility that one day your network will outgrow 4.5TB, and the cost and downtime involved in expanding or replacing storage.
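
Rough math behind those numbers (sketch only; the ~150MB/s-per-spindle sequential figure is a ballpark assumption for 15K SAS):

Code:
# Where the ~250 MB/s ceiling comes from vs. rough sequential throughput of the array.
links = 2
mb_per_sec_per_link = 1000 / 8          # 1 Gbit/s in MB/s, before protocol overhead
link_ceiling = links * mb_per_sec_per_link
print(f"2 x GbE raw ceiling: ~{link_ceiling:.0f} MB/s")   # ~250 MB/s, less after TCP/iSCSI overhead

drives = 15
seq_mb_per_drive = 150                  # ballpark sequential rate for a 15K SAS spindle
print(f"15 x 15K SAS, sequential: ~{drives * seq_mb_per_drive} MB/s")
# The spindles can out-run the pipes on big sequential transfers, but most VM
# traffic is small random I/O, where per-drive IOPS (and latency) matter more.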
 

jordanl17

Junior Member
Apr 22, 2007
21
0
0
I will get switches that support LAG, not a problem, but I still have an issue with this concept:
The MD3000i is housing 5 VMs, one of them a file server, and a person drags a file from the file server to his local machine (a 1GB file).
At the same time, another person sends a 40-page full-color print job through another VM that's a print server - that's 1.3GB worth of data pouring out to the printer.
And at the same time, 45 people are working in their terminal server sessions with Office apps...

What happens?! There's a crunch at that point, right? Or is iSCSI so good that it's still almost as fast as internal storage?

This is what I don't get: all of the above will be going over two 1Gb Ethernet links?!
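
Here's my own back-of-envelope attempt at it (rough sketch; assumes ~115MB/s usable per GbE link after overhead and that the traffic actually spreads across both links, which I gather isn't guaranteed):

Code:
# Transfer time for the worst-case burst above, assuming both GbE links are
# usable (~115 MB/s each after overhead) and the two big transfers overlap.
LINK_MB_S = 115
LINKS = 2

file_copy_gb = 1.0      # user dragging a file off the file-server VM
print_job_gb = 1.3      # spooled color print job

total_mb = (file_copy_gb + print_job_gb) * 1000
seconds = total_mb / (LINK_MB_S * LINKS)
print(f"~{seconds:.0f} seconds to move {file_copy_gb + print_job_gb:.1f} GB "
      f"over 2 x GbE")   # ~10 seconds
# The terminal-server office work is mostly small random I/O, so it is limited
# by IOPS on the array rather than by the GbE pipes.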