
Failover Cluster Server Performance

Status
Not open for further replies.

Nak3dAdm1n

Junior Member
Here is the question: Has anyone noticed a difference in how services run, or how server performance behaves, between the two nodes of a two-server failover cluster setup?

Back story: I have multiple two-server failover cluster setups, and it often appears that one host runs "better" than the other. I've found this to be true with many different services, from our clustered print servers to our file share servers, and especially after we installed a new core switch (Cisco Nexus 7K). As time goes on I've found myself just shutting off one of the servers and not using the failover services at all, but I would like to identify why this is happening. Any thoughts or suggestions would be appreciated.

Environment:
(2) MS Windows 2008 R2 Enterprise virtual machines
Cisco Nexus 7k
VMware ESXi 5.5 using vCenter Server with a two-cluster setup
 
Are you talking about using the Microsoft Cluster service inside the VMs, or ESXi HA/FT for the Windows 2008 R2 VMs?
 
Do your hosts route into the same physical switch? Do the servers run through stacked switches? In my setups I've never seen a poorly running host in a failover pair, just a misconfigured one. The most common issue I've seen is a bad storage network setup, where one host runs MPIO to iSCSI storage while the other has fallen back to using only one or two links. Of course, this only applies if you're using MPIO-capable storage.
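One quick way to check for the path imbalance described above is to compare active storage paths on each ESXi host. A minimal sketch from the ESXi shell (run on both hosts and compare the counts; the grep pattern assumes the default `esxcli` output format):

```shell
# List the multipathing configuration per storage device
esxcli storage nmp device list

# Count active paths across all LUNs; both hosts in the pair
# should report the same number if multipathing is healthy
esxcli storage core path list | grep -c "State: active"
```

If one host reports fewer active paths than the other, that host has likely fallen back to a degraded path configuration.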
 
Yes, the host machines are using a 10 Gb fiber connection back to the core, and the storage network is using an 8 Gb Fibre Channel card (I think) to our SAN. I'm unsure what the stacking setup you're referring to is; perhaps we are using different terminology?

When I say performance, I should clarify that the VMs' I/O and CPU operations don't appear to differ from one VM to another; rather, the services on the cluster behave differently from one VM to the other. Most notable is the print server: when services are hosted on print server 1, things just seem to run a bit more smoothly than when they are hosted on print server 2. We have seen the print spooler service get wonky when it's hosted on server 2.
 

Specific to the print server, this means you don't have the drivers synced and are running alternate versions on one server or the other. Put the odd server into maintenance and drain the print server. Use the utility to purge the spooler entirely (back to new), then use Print Management to import the printers and drivers from the other cluster member. If you update drivers on one host, make sure it happens on the others.
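The export/import step above can be sketched with the built-in PrintBrm.exe migration utility (run from an elevated prompt; the export file path is an example):

```shell
:: On the healthy cluster member: back up printers, queues, ports, and drivers
:: (PrintBrm.exe ships with Windows under %WINDIR%\System32\spool\tools)
cd /d %WINDIR%\System32\spool\tools
PrintBrm.exe -b -f C:\Temp\printers.printerExport

:: On the drained, freshly purged server: restore the same set
PrintBrm.exe -r -f C:\Temp\printers.printerExport
```

The same backup/restore can be done through the Print Management console GUI if you prefer; the point is that both nodes end up with identical driver versions.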
 
How many NICs does each host have? Are the hosts identical? Is the A node on one host and the B node on the other?

Do you have all A nodes on one host and all B nodes on the other?

What NICs do the hosts have? What services are tied to each NIC? Is the management NIC separate from the VM traffic NIC(s)?

There are a lot of questions, but with virtual infrastructure there is a bit more complexity, so you'll need to determine what those differences are. For all you know, one of the hosts may be running at gigabit/full duplex while the other is running at 100/half duplex because of a switch/NIC port misconfiguration and auto-negotiation not completing correctly. Just some things to look for.
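A mis-negotiated link like the one described can be spotted quickly from the ESXi shell; both hosts should report the same speed and full duplex on their uplinks:

```shell
# On each ESXi host: show physical NIC link state, speed, and duplex
# in the Speed/Duplex columns of the output
esxcli network nic list
```

Compare the output between the two hosts, and against the switch-port configuration on the Nexus side, to rule out an auto-negotiation mismatch.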
 
How many NICs does each host have?

Each host has one network card with two ports, both trunked to the core.

Are the hosts identical?

Yes and no. The physical machines are slightly different, but they all run ESXi 5.5 and are HP models of varying generations.

Is the A node on one host and the B node on the other?

The A node is in cluster 1 and the B node is in cluster 2, but they can vary between hosts because of vMotion.

Do you have all A nodes on one host and all B nodes on the other?

No.
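Since vMotion can land both nodes of a pair on the same host, a DRS anti-affinity rule can pin them apart. A sketch using VMware PowerCLI, assuming hypothetical vCenter, cluster, and VM names:

```shell
# PowerCLI: keep the two cluster nodes on separate ESXi hosts
# (names below are examples, not from the thread)
Connect-VIServer vcenter.example.local
New-DrsRule -Cluster (Get-Cluster "Cluster1") -Name "SeparatePrintNodes" `
    -KeepTogether:$false -VM (Get-VM "PrintSrv1","PrintSrv2") -Enabled:$true
```

With `-KeepTogether:$false`, DRS treats the rule as anti-affinity and will not place both VMs on the same host, so a host failure can only ever take down one cluster node.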
 