Server build for a small business - primarily a VM host

I'm working on getting a refurbished Dell R810. Here are the specs:

  1. 4x Xeon X7560 (8C/16T @ 2.26 GHz) - It's $140 to add a 3rd and 4th CPU, so why not?
  2. Dell H700 6 Gb/s RAID w/ 512 MB Cache
  3. RAID 10 (no drives - I'll add 6x new WD SE 1 TB drives)
  4. 64 GB PC3-10600R (they don't have an option for more unfortunately - this is the only problem with the build AFAIK)
  5. iDRAC6 Enterprise
  6. Quad Gigabit NICs
  7. Dell R810 Sliding Rails
  8. Redundant 1100 Watt PSU
  9. 5 year warranty

The price is $2,382.00 without disks; the disks will add ~$600 for a total of ~$3,000. That leaves enough in the budget to get a decent UPS, a new 27U rack, and some other odds and ends.

I've decided to leave the API, web, and MySQL servers at Amazon until the new server is configured and stable. At that point, I may be able to get enough money to purchase a second server, which would primarily be for redundancy.

As it stands, we currently have a single server as the DC + FS. When it goes down, obviously everything goes down, but most people are unaffected by that scenario unless it lasts longer than half of a day or so. The nature of the work being done at my office is such that file I/O is infrequent.

I'll probably put Hyper-V on this host as the external IT company I use when things are beyond my abilities is more familiar with that platform. It's not my first choice, but I think I should go with the setup that's most familiar to the people who will potentially need to assist me.

Does this sound reasonable or have I gone astray again?
 
Way too much CPU and not nearly enough RAM. That's also incredibly old... those CPUs were discontinued in 2012 (released in 2010).
 
If money is really that tight, but ongoing revenue will support it, maybe you should look at leasing a server (or servers) in somebody else's data center?

I mean, okay, your budget is $4k, and the realistic life of a server is 5 years, right? So that's $4000/60 = $67/mo.

If you can put your apps and databases on the cloud, and use a Drobo or Synology for local storage and network services, you'll probably be looking at <$1k to get everything deployed and working, and maybe $50-$100/month for hosting all that other stuff on somebody else's boxes. (There are hosting plans at <$10/month that would run your web and web-dev environments, and other applications you have could just as easily double up on them.)
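
To put the numbers side by side, here's a minimal sketch of that amortization math in Python; the dollar figures are just the rough assumptions above, nothing more:

[CODE]
# Rough 5-year cost comparison: owning a server vs. hosting elsewhere.
# All figures are assumptions pulled from the discussion above, not quotes.

MONTHS = 5 * 12  # assume a realistic server life of 5 years

# On-prem: one-time hardware budget amortized over its life
onprem_capex = 4000.00
onprem_per_month = onprem_capex / MONTHS  # ~$67/mo before power, UPS, etc.

# Hosted: small up-front spend (local NAS, migration) plus recurring fees
hosted_capex = 1000.00   # "<$1k to get everything deployed and working"
hosted_fee = 75.00       # midpoint of the $50-$100/month estimate
hosted_per_month = hosted_capex / MONTHS + hosted_fee

print(f"On-prem: ~${onprem_per_month:.0f}/month over {MONTHS} months")
print(f"Hosted:  ~${hosted_per_month:.0f}/month over {MONTHS} months")
[/CODE]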

If any of that stuff is customer facing, it should be in a proper datacenter anyway.

Amazon Cloud/EC2 is, honestly, kind of overpriced. I think it's popular mostly because there's no buy-in cost and it's pretty easy to get started.
 
Way too much CPU and not nearly enough RAM. That's also incredibly old... those CPUs were discontinued in 2012 (released in 2010).

Meh. It's Nehalem-EX. If you're going used/refurb anyway, going even one generation newer raises the price substantially for very small gains.

That said, it's absolute overkill; dropping down to an R710 would save some money both up front and in electricity costs.
 
If money is really that tight, but ongoing revenue will support it, maybe you should look at leasing a server (or servers) in somebody else's data center?

I mean, okay, your budget is $4k, and the realistic life of a server is 5 years, right? So that's $4000/60 = $67/mo.

If you can put your apps and databases on the cloud, and use a Drobo or Synology for local storage and network services, you'll probably be looking at <$1k to get everything deployed and working, and maybe $50-$100/month for hosting all that other stuff on somebody else's boxes. (There are hosting plans at <$10/month that would run your web and web-dev environments, and other applications you have could just as easily double up on them.)

If any of that stuff is customer facing, it should be in a proper datacenter anyway.

Amazon Cloud/EC2 is, honestly, kind of overpriced. I think it's popular mostly because there's no buy-in cost and it's pretty easy to get started.

I've been using AWS for several years and I'm pretty comfortable with it, but a lot of what you say is true. In this specific circumstance, the files we are using are pretty big, so storing all of them in the cloud is a non-starter. I've set up and deployed many apps that I wrote for my company on EC2 and/or Elastic Beanstalk and there haven't been any issues. Mostly I was looking to get some of it on an internal machine to facilitate better integration with the fileserver, but it's not an absolute necessity. I can keep things separated and it'll be fine.

The main reason I want a new server is because I want to have redundancy. We've been fine with a single server for 15 years because I proactively take care of it, but it's only a matter of time before something goes wrong that will take everything down longer than an acceptable period of time. Having a second DC is a significant part of the reason I'm investigating the purchase of a new server. Everything else is secondary, but still important.

Meh. It's Nehalem-EX. If you're going used/refurb anyway, going even one generation newer raises the price substantially for very small gains.

That said, it's absolute overkill; dropping down to an R710 would save some money both up front and in electricity costs.

In my previous career I was a VLSI designer specifically for Xeons, so I'm not worried about getting a generation or two behind the current spec because I know very well what to expect out of them. I totally agree with jlee that the specs of that machine are out of balance, but it was an extra $140 to add the 3rd and 4th CPUs and 64 GB was the highest memory option. It definitely won't hurt to have too much CPU capacity and, like I said in another post, we're currently running off of 16 GB and everything is functional.

All of the advice in this thread has been really helpful. I'm wondering if the scale at which this server will be deployed has been lost in translation, though. I think a lot of you are responsible for very large networks that are complicated and expensive. The network and server under discussion here is very small and having it go down doesn't immediately halt all work. That doesn't mean I should be flippant about it (and I'm certainly not trying to be), but from that frame of reference it seems like some of the specifications I've suggested should be okay.

Viper GTS suggested a NAS and I didn't really internalize that until this morning. Putting a professional NAS (something from QNAP maybe?) on the network and integrating it with AD should be a much more reliable solution than sticking the FS and DC in the same box, right? If I trim down the new server specs specifically because there will be a NAS, that means I'll get the benefit of standalone storage plus two DCs (one of which will be the primary and much newer/faster). I believe I can get a good NAS with 2-4 TB of storage and a new server for under $4,000 based on the prices I've seen today. I'm not scared of using a refurb, especially if it has a warranty and there's a backup DC, but I had Dell quote a new NAS + server for under $5,000, so there's probably a way to get it closer to my budget.
 
I'm currently considering a Dell NAS as well as a Synology NAS (link). They're both rackmountable and have redundant power supplies and quad gigabit ports with link aggregation and failover. I suppose I'm wondering if you guys have good or bad opinions of Synology NAS products. The Dell NAS is $900 more, but the difference in quality is probably obvious.

I'm going to buy 4x 2 TB Western Digital SE drives in either case. The Synology NAS doesn't come with any drives and Dell wants a ridiculous amount for their drives. I'm seeing a lot of really positive reviews for the Western Digital SE drives on various forums from people who use them in enterprise applications, so I'm not feeling like they're a bad option at this moment. I'm open to opinions, though.
 
In my previous career I was a VLSI designer specifically for Xeons, so I'm not worried about getting a generation or two behind the current spec because I know very well what to expect out of them. I totally agree with jlee that the specs of that machine are out of balance, but it was an extra $140 to add the 3rd and 4th CPUs and 64 GB was the highest memory option. It definitely won't hurt to have too much CPU capacity and, like I said in another post, we're currently running off of 16 GB and everything is functional.

I think you misunderstood what I was saying. I was pointing out that going slightly newer than what you have listed on your build doesn't gain enough to justify the extra cost when you're going used either way.
 
I think you misunderstood what I was saying. I was pointing out that going slightly newer than what you have listed on your build doesn't gain enough to justify the extra cost when you're going used either way.

Actually, I was trying to agree with your point, but that may not have been clear because I was responding to you and jlee at the same time. Sorry about the confusion. Anyway, yeah, I'm not sure the performance of the DC will matter much at this point because the FS will live somewhere else. I like the idea of using a NAS to host VM images, so this feels like a much more scalable solution that's also more reliable.
 
Try to get some SSD in that NAS purchase if you want to host VMs off of it. Four 2TB drives for file storage is fine, but that is not going to provide good performance for VM storage. At minimum you want some SSD caching; if you can provide dedicated SSD storage for the VMs, that is even better.

Viper GTS
 
Synology is good. Much better than QNAP in my opinion, but neither will have the build or service quality that Dell will have. That is the reason Dell charges a premium. If something happens, you call Dell and they fix it or send you a replacement pretty much immediately. If something happens with Synology, you e-mail them and *maybe* they respond within 24 hours. It's a night-and-day difference, really, when it comes to service/warranty.

Dell may not want to support you if you don't use their hard drives though. Generally, you need to use their equipment for warranty work.

Those SE drives are SATA, which could be fine for your needs, I suppose. It really depends on the workload.

Hosting VMs on the NAS should work, but if you want any kind of speed out of it you will need 10GbE connections and SSDs.
 
Try to get some SSD in that NAS purchase if you want to host VMs off of it. Four 2TB drives for file storage is fine, but that is not going to provide good performance for VM storage. At minimum you want some SSD caching; if you can provide dedicated SSD storage for the VMs, that is even better.

Viper GTS

Synology is good. Much better than QNAP in my opinion, but neither will have the build or service quality that Dell will have. That is the reason Dell charges a premium. If something happens, you call Dell and they fix it or send you a replacement pretty much immediately. If something happens with Synology, you e-mail them and *maybe* they respond within 24 hours. It's a night-and-day difference, really, when it comes to service/warranty.

Dell may not want to support you if you don't use their hard drives though. Generally, you need to use their equipment for warranty work.

Those SE drives are SATA, which could be fine for your needs, I suppose. It really depends on the workload.

Hosting VMs on the NAS should work, but if you want any kind of speed out of it you will need 10GbE connections.

Thanks guys. I showed this thread to the powers that be and it went a long way toward explaining why I need more budget to really make the network solid.

After the first NAS is up and running, I'll be able to purchase a second NAS for VM hosting. I'll be sure to get a 10 GbE connection + SSDs. I suppose I could also get a better NAS now with more bays and then add SSDs later, but that will be subject to budget constraints. I'm kind of bought into the separation, though.
 
I've mostly read through your thread, but if I were doing it, I'd probably use two identical hosts, split the VMs between them, and then replicate the VMs between the two so that if one host fails, you can spin up the remaining VMs on the 2nd host.

You still need a backup strategy that would back up the VMs to another host/NAS. But in an ideal world with the proper budget, this is how I'd do it. Of course, budgets never ever seem proper in the enterprise world. Failing that, there are plenty of businesses that just run a single host and back up to a single NAS. Just make sure you keep the warranty valid and that will go a long way toward peace of mind even with one host.
 
I've mostly read through your thread, but if I were doing it, I'd probably use two identical hosts, split the VMs between them, and then replicate the VMs between the two so that if one host fails, you can spin up the remaining VMs on the 2nd host.

You still need a backup strategy that would back up the VMs to another host/NAS. But in an ideal world with the proper budget, this is how I'd do it. Of course, budgets never ever seem proper in the enterprise world. Failing that, there are plenty of businesses that just run a single host and back up to a single NAS. Just make sure you keep the warranty valid and that will go a long way toward peace of mind even with one host.

In your configuration, which machine would be the virtualization host? Is a centralized host even necessary? It seems like there's some kind of 'host' machine that manages all of the VMs across different physical hosts, but maybe that's not right.

I've been looking for an example setup, but I can't find any specific details about this type of thing. It would probably help me tremendously if I could find a tutorial/whitepaper/something that has an example implementation of a virtualized environment with multiple servers so I could see what type of hardware is typically used and how everything is connected. I've read a good deal of the vSphere Installation and Setup guide, but the concept still eludes me. Note: I'm going to buy a vSphere Essentials license when I start setting up VMs.
 
In the case of VMware, the 'centralized host' you are referring to is not a host at all but a physical or virtual machine running either the vCenter appliance or a Windows-based vCenter installation. For a small environment like yours the vCenter appliance is more than adequate.

https://www.vmware.com/files/pdf/techpaper/vmware-vcenter-server6-deployment-guide.pdf

Essentials is a good place to start for VMware licensing, but you should make it a goal to get to at least Standard when your organization can come up with the budget. Unfortunately, Standard licensing at this point would consume your entire budget, but Essentials is missing some of the key things that make virtualization great. Being able to move a running VM from one piece of hardware to another with no downtime lets you replace hardware, perform BIOS/firmware updates, etc., without any downtime. Essentials is much better than nothing, but you're missing most of the good stuff, which is why it costs basically nothing.

[EDIT]Just looked and Essentials Plus will enable vMotion + HA:

http://www.vmware.com/products/vsphere/compare.html

That's much cheaper than Standard.[/EDIT]

Viper GTS
 
In your configuration, which machine would be the virtualization host? Is a centralized host even necessary? It seems like there's some kind of 'host' machine that manages all of the VMs across different physical hosts, but maybe that's not right.

I've been looking for an example setup, but I can't find any specific details about this type of thing. It would probably help me tremendously if I could find a tutorial/whitepaper/something that has an example implementation of a virtualized environment with multiple servers so I could see what type of hardware is typically used and how everything is connected. I've read a good deal of the vSphere Installation and Setup guide, but the concept still eludes me. Note: I'm going to buy a vSphere Essentials license when I start setting up VMs.

In my scenario, you are splitting the load of VMs between two hosts. So Host A runs 5 VMs and Host B runs 5 VMs. You would run replication software (e.g. Veeam) that replicates the VMs between the hosts, so in the event of a failure of a single host, one host could run all 10 VMs. In other words, both hosts have 10 VMs loaded, but 5 of them are always spun down. Does that make sense? So if Host A failed, Host B would be able to run Host A's VMs in addition to the regular VMs that Host B runs anyway.
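
If it helps, here's a tiny Python sketch of the sizing check that scenario implies: either host has to be able to carry all 10 VMs at once. The per-VM and per-host figures are placeholders, not numbers from this thread:

[CODE]
# Sanity check for the two-host replication scenario above: either host
# must be able to run ALL the VMs (its own five plus the replicas) if the
# other host dies. The per-VM and per-host numbers are placeholders.

vms = [{"name": f"vm{i}", "vcpus": 2, "ram_gb": 4, "disk_gb": 40}
       for i in range(1, 11)]                        # 10 VMs, normally split 5/5

host = {"cores": 16, "ram_gb": 64, "disk_gb": 2000}  # capacity of ONE host

need_ram = sum(vm["ram_gb"] for vm in vms)
need_disk = sum(vm["disk_gb"] for vm in vms)
need_vcpus = sum(vm["vcpus"] for vm in vms)

print(f"RAM:  need {need_ram} GB, host has {host['ram_gb']} GB")
print(f"Disk: need {need_disk} GB, host has {host['disk_gb']} GB")
print(f"vCPU: need {need_vcpus} on {host['cores']} cores "
      f"({need_vcpus / host['cores']:.1f}:1 overcommit during a failover)")
[/CODE]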

This isn't really HA (High Availability) in the strict sense of the word, but it's damn close. HA costs quite a bit of money. You need 3 hosts to do it with vSphere and the license alone is about $5000.

This is where vSphere loses its competitive edge in my opinion. It costs too damn much money for what Xen and Hyper-V can do for free.
 
You can do vSphere HA with Essentials Plus for $4500 list (three hosts + vCenter).

Your description does not reflect how shared storage works (unless you were referring to vSAN?). With shared storage, when a host fails the VMs just power right back up on another host. There is no replication between hosts; it just picks the VM up off shared storage and brings it right back. In neither case is any 3rd-party software (Veeam) necessary.

There's also FT but I have yet to encounter anyone actually using it.

Yes, VMware is expensive. But in a large deployment the licensing cost difference ends up being trivial. Hyper-V can be compelling for extremely cost sensitive deployments.

Viper GTS
 
You can do vSphere HA with Essentials Plus for $4500 list (three hosts + vCenter).

Your description does not reflect how shared storage works (unless you were referring to vSAN?). With shared storage, when a host fails the VMs just power right back up on another host. There is no replication between hosts; it just picks the VM up off shared storage and brings it right back. In neither case is any 3rd-party software (Veeam) necessary.

There's also FT but I have yet to encounter anyone actually using it.

Yes, VMware is expensive. But in a large deployment the licensing cost difference ends up being trivial. Hyper-V can be compelling for extremely cost sensitive deployments.

Viper GTS


There is no shared storage in my scenario. VMDKs are replicated to each host. The VMs would then run off the local storage in the host. That's why two identical machines would be needed. You just need to make sure that the hosts are sized properly in RAM, CPU, and drive space to allow all 10 VMs to run on a single host should one of the hosts fail.
 
For the first step of this process, I'm looking at a Dell NX430 NAS with 4x 2 TB drives. We're currently using 900 GB of storage, so my thought is RAID 10 will give us 4 TB of space, which should be plenty.

This is the configuration:


  • Dell Storage NX430 Performance Base
  • WSS2012 R2 Standard Edition
  • RAID 5, H330/H730 for SAS/SATA
  • 4x 2TB 7.2K RPM SATA 6Gbps 3.5in Hot-plug Hard Drive
  • On-Board LOM 1GbE Dual Port (BCM5720 GbE LOM)
  • Dual, Hot-plug, Redundant Power Supply, 350W
  • NEMA 5-15P to C13 Wall Plug, 125 Volt, 15 AMP, 10 Feet (3m), Power Cord, North America
  • ReadyRails™ Sliding Rails With Cable Management Arm
  • DVD ROM, SATA, Internal
  • iDRAC8 Enterprise with OpenManage Essentials, Server Configuration Management
  • 3 Years ProSupport with Next Business Day Onsite Service

A dual-port 1 GbE connection will be plenty for our file server needs. I'm not sure I need OpenManage Essentials, but iDRAC8 Enterprise has a ton of features I could definitely use (remote file sharing, VNC, etc.) and OME only adds $60 for what looks to be a lot of nice features. I confirmed with Dell support that I can switch to RAID 10 after I get the unit. RAID 5 sucks, yes?
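
For reference, here's a quick Python sketch of how usable capacity works out for the 4x 2 TB drives above under the common RAID levels (idealized math, ignoring formatting overhead):

[CODE]
# Idealized usable capacity for a set of identical drives. Ignores
# filesystem overhead, hot spares, and base-2 vs. base-10 marketing.

def usable_tb(drive_tb, count, level):
    if level == "RAID 10":
        return drive_tb * count / 2    # mirrored pairs, striped
    if level == "RAID 5":
        return drive_tb * (count - 1)  # one drive's worth of parity
    if level == "RAID 6":
        return drive_tb * (count - 2)  # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

drives, size_tb = 4, 2.0  # the 4x 2 TB drives in the config above
for level in ("RAID 10", "RAID 5", "RAID 6"):
    print(f"{level}: {usable_tb(size_tb, drives, level):.0f} TB usable")
# RAID 10 gives 4 TB here; RAID 5 gives 6 TB but with slower writes and
# long, risky rebuilds, which is why it's worth avoiding.
[/CODE]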

This is basically going to eat my entire budget, but I think it's a very necessary first step. Getting our mission-critical data into a unit specifically meant to deal with data should be a big step up in terms of reliability, and if the DC goes down we aren't nearly as inhibited. As I said before, once this is all set up and functioning, I'll be able to purchase a server and maybe even another NAS. The second NAS would be for VM hosting and it would have an SSD cache at the very least + 10 GbE connections to each server. I think we'll be all set after that stage of the process.

Edit: I'll back up the NAS to S3 using CloudBerry on the DC, which is what we currently use, and it seems to be capable of handling network shares. I also back up our current FS to rotated USB drives every night and I keep them in a fireproof safe a few miles from the office.
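
CloudBerry drives all of that through its own UI; purely for illustration, here's a minimal boto3 sketch of the same "copy the share to S3" idea. The bucket name, prefix, and share path are hypothetical:

[CODE]
# Minimal "push last night's backup set to S3" sketch using boto3.
# CloudBerry handles scheduling, retention, and block-level sync itself;
# this only illustrates the idea. The bucket, prefix, and UNC path below
# are hypothetical.
import os
import boto3

BUCKET = "example-office-backups"       # hypothetical bucket
PREFIX = "nas/nightly"
LOCAL_DIR = r"\\nas01\backups\latest"   # hypothetical path to the NAS share

s3 = boto3.client("s3")

for root, _dirs, files in os.walk(LOCAL_DIR):
    for name in files:
        path = os.path.join(root, name)
        rel = os.path.relpath(path, LOCAL_DIR).replace(os.sep, "/")
        key = f"{PREFIX}/{rel}"
        s3.upload_file(path, BUCKET, key)
        print(f"uploaded {path} -> s3://{BUCKET}/{key}")
[/CODE]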
 
You generally can't go from RAID 5 to RAID 10 without deleting the array and starting over. Don't have Dell do the RAID configuration. Just do it yourself when you get the unit in.

4TB isn't a huge array even by RAID 5 standards, but I'd still avoid it if at all possible. If 4TB is plenty of storage for the foreseeable future, stick with RAID 10.
 
You generally can't go from RAID 5 to RAID 10 without deleting the array and starting over. Don't have Dell do the RAID configuration. Just do it yourself when you get the unit in.

4TB isn't a huge array even by RAID 5 standards, but I'd still avoid it if at all possible. If 4TB is plenty of storage for the foreseeable future, stick with RAID 10.

I don't think they'll honor that request, but it's okay because I'm planning to wipe it as soon as it arrives unless you're telling me I can't switch it after the initial configuration. I asked a Dell rep that exact question, but I didn't get a clear answer. The datasheet says I can change it, so hopefully that's the case.

Yeah, 4 TB isn't huge, but it should be more than enough for many years because the data set is only 900 GB after 15 years. I'm figuring I'll have plenty of room to grow the company's data, store multiple backups of each VM, and still have at least 1 TB to spare. My production VMs are about 3.5 GB and 95% of that is the OS.
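
As a rough sanity check on that headroom, here's a small Python projection; the growth rate is just the historical average, and the backup and reserve allowances are guesses for illustration:

[CODE]
# Rough headroom check for a 4 TB usable volume. The growth rate is just
# the historical average (900 GB over 15 years); the backup allowance and
# reserve are assumptions for illustration.

usable_gb = 4000
data_gb = 900                  # current data set
growth_gb_per_year = 900 / 15  # ~60 GB/year if growth stays roughly linear
vm_backups_gb = 10 * 3.5       # e.g. ten retained copies of a ~3.5 GB VM
reserve_gb = 1000              # keep at least 1 TB free

years = 0
while data_gb + vm_backups_gb + reserve_gb <= usable_gb:
    data_gb += growth_gb_per_year
    years += 1

print(f"At ~{growth_gb_per_year:.0f} GB/year the volume stays within "
      f"budget for roughly {years} more years.")
[/CODE]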

My only choices for drives in the NX430 are SATA and NL-SAS. It looks like NL-SAS has some benefits, but I really don't think I would notice any of them given the expected usage. I don't have an exact number for the duty cycle of the drives in my current server, but I'd guess it's below 20% including backups.
 
If you tell them not to configure the RAID array, they won't. But it really doesn't matter. Just wipe it when you get it in.

I'd use SAS (and do) for production drives. The SAS protocol is much more 'robust' than SATA. Any idea what your IOPS requirement is for this NAS?

And I think it's a good idea to stay off RAID 5. You avoid the lengthy rebuild times, UREs, and slower writes that come with parity. I'm surprised Dell even gives you the option to use RAID 5; I thought their official recommendation was RAID 6 nowadays.
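
To help frame that IOPS question, here's a back-of-the-envelope Python estimate for a four-drive array; the per-drive IOPS and the read/write mix are assumptions, and the write penalties are the usual rules of thumb:

[CODE]
# Back-of-the-envelope random IOPS for a small array. The ~75 IOPS per
# 7.2K drive and the 70/30 read/write mix are assumptions; the write
# penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6) are common rules of thumb.

def array_iops(drives, per_drive_iops, write_penalty, read_fraction=0.7):
    raw = drives * per_drive_iops
    write_fraction = 1.0 - read_fraction
    # Each logical write costs `write_penalty` back-end I/Os
    return raw / (read_fraction + write_fraction * write_penalty)

for level, penalty in (("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)):
    print(f"4x 7.2K drives, {level}: ~{array_iops(4, 75, penalty):.0f} "
          f"IOPS at a 70/30 read/write mix")
[/CODE]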
 
You can do vSphere HA with Essentials Plus for $4500 list (three hosts + vCenter).

Your description does not reflect how shared storage works (unless you were referring to vSAN?). With shared storage, when a host fails the VMs just power right back up on another host. There is no replication between hosts; it just picks the VM up off shared storage and brings it right back. In neither case is any 3rd-party software (Veeam) necessary.

There's also FT but I have yet to encounter anyone actually using it.

Yes, VMware is expensive. But in a large deployment the licensing cost difference ends up being trivial. Hyper-V can be compelling for extremely cost sensitive deployments.

Viper GTS

This might be overkill, but have you looked at VCE's VxRail hyper-converged appliance? https://store.emc.com/Product-Famil...il-Appliance/p/VCE-VxRail?CMP=listenGENvxrail
 
I finally got the NAS. It's pretty awesome, especially with the LCD on the front bezel; that was an interesting surprise.

I've been looking through the various options for a few hours and it's somewhat overwhelming, but I suppose most of the settings make sense. I nuked the RAID 5 config and then set up RAID 10. The drives are currently initializing, which is taking far longer than I expected. I used the same VD configuration that Dell used, which is 120 GB for the OS on VD0 and * for data on VD1.

I need to start a new thread about managing my network infrastructure, but I thought I'd update this thread at least one more time. My next purchase will be a Dell server to run a few VMs.

BTW, Storage Server 2012 R2 is actually pretty nice, at least from what I can tell so far.

Pic
 
I'm interested to hear what kind of performance you get running VMs off the NAS. I would think it would be terrible compared to direct-attached SSD on your VM host.
 