risingflight

Junior Member
Dec 21, 2015
Hi everyone,

I am using HP blade servers (an HP C7000 enclosure). I have a full-height server (BL660c), I am using Virtual Connect, and I use two interconnect bays, Bay 1 and Bay 2, for networking (both go to trunk ports on the switch, and I am using VMware's distributed switch).

When installing ESXi 5.5, should I select just one network card, give it the management IP, and deselect the other three network cards, or should all four network cards be connected?

I also have a half-height BL460c Gen9 server - please guide me, experts.

I am new to HP and VMware, so please guide me.
 

Red Squirrel

No Lifer
May 24, 2003
From my understanding, you want a dedicated NIC for management: set up a vSwitch for that NIC and put the management interface on it, then do your teaming/trunking and set up your VLANs on the other NICs, which carry the actual VM traffic. I think the management NIC can also be used for vMotion and the like.

It's been a long time since I worked with vSphere clusters, though, so I'm just going from memory. You'll probably want to wait for someone who actually works with it day to day. I do have an ESXi server at home, but it's standalone: one NIC for management and the other for VM networks.
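For a standalone host, the layout above can be sketched with esxcli on ESXi 5.5. This is a hedged example, not anything from the thread: the vSwitch/port group names, vmnic numbers, and IP address are all illustrative assumptions.

```shell
# Dedicated management vSwitch on its own uplink (names/IPs are examples)
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="Management Network"
esxcli network ip interface add --interface-name=vmk0 --portgroup-name="Management Network"
esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=192.168.1.10 --netmask=255.255.255.0 --type=static

# Separate vSwitch for VM traffic on the remaining uplink(s)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network"
```

The same split can also be done from the DCUI or the vSphere Client; esxcli just makes the intent explicit.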
 
Feb 25, 2011
17,001
1,628
126
The reason to use a dedicated NIC for management (or for iSCSI, or for whatever) is mostly to have reliable, dedicated paths for each task, or at the very least, guaranteed minimum bandwidth. That's what VMWare recommends, and if you have a bunch of extra NICs and switch ports, there's no reason not to do it that way. Except laziness! :D

I would NOT just use a single link though - at the very least, you want failover tolerance. (2 NICs connected with one designated as a failover NIC.)

That's the hard way. The easy way is to connect all of your NICs (or at least as many as you have switch ports for) and blob them all into a single vSwitch port group. Set them all to active, and VMware will handle the rest.

Mindless, and completely adequate in (I'd imagine) the vast majority of circumstances.
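Both teaming options described above map to the standard-vSwitch failover policy. A sketch with esxcli (ESXi 5.5); the vSwitch name and vmnic numbers are assumptions for illustration:

```shell
# Option 1: two uplinks, one active, one designated standby (explicit failover)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# Option 2 ("the easy way"): attach every uplink and set them all active;
# the default load-balancing policy spreads traffic and handles link failure
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic3
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3
```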
 

risingflight

Junior Member
Dec 21, 2015
The bottom line is to make sure all the NICs are connected.
I will give an IP to one of the NICs (i.e. the management IP) and make sure the other three NICs are also connected.
 

yinan

Golden Member
Jan 12, 2007
Create one vSwitch, and use it for both management and virtual machine connections. This gives you the most flexibility.
 

drebo

Diamond Member
Feb 24, 2006
So much bad information.

Best practices are to have two NICs for management, two NICs for vMotion, and then whatever your requirements are for storage (if iSCSI, 2+ NICs) and data (2+ NICs). Obviously, each set of NICs would be connected to both of your fabrics.

The purpose is to isolate failure domains and guarantee that things like vMotion don't impact your ability to manage the kit or your ability for your VMs to hit the network.

That said, I almost never dedicate a network for vMotion. If you have 100+ hosts running DRS for 1000s of vApps, then you probably will want to. If you have a half dozen hosts and 20-30 VMs, it's not worth it, usually.
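The per-function split described above comes down to one vmkernel interface per service, each on its own port group. A hedged esxcli sketch (ESXi 5.5); the port group names, vmkernel numbers, IPs, vmnic numbers, and the `vmhba33` software iSCSI adapter are assumptions:

```shell
# vMotion: a dedicated vmkernel interface, tagged for the VMotion service
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.10 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion

# iSCSI: one vmkernel interface pinned to a single active uplink, then bound
# to the software iSCSI adapter for multipathing (repeat for the second fabric)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

Pinning each iSCSI port group to exactly one active uplink is what makes the port binding valid for multipath I/O.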
 

yinan

Golden Member
Jan 12, 2007
A lot of those things went away when 10GbE became popular/somewhat cheap. Most of the time vMotion won't be doing too much. If you have VMs moving ALL the time, you probably want to invest in more hosts.

I do agree with having 2 NICs for storage when using IP storage, but I prefer FC.
 

imagoon

Diamond Member
Feb 19, 2003
yinan said:
> A lot of those things went away when 10GbE became popular/somewhat cheap. Most of the time vMotion won't be doing too much. If you have VMs moving ALL the time, you probably want to invest in more hosts.
>
> I do agree with having 2 NICs for storage when using IP storage, but I prefer FC.

The only real change with 10GbE is the option of putting vMotion and management on the same 10-gig port group; otherwise it is still the same basic design: a port group for data, a port group for management, and a port group (or multipath I/O, etc.) for storage. With 1GbE the recommendation was to keep vMotion and management separate. As for FC: it's a fairly hefty extra cost for a tiny amount of performance that can be exceeded by just adding another Ethernet port.
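On a 10GbE design like the one described here, sharing management and vMotion is just a matter of tagging the existing management vmkernel interface for both services. A sketch, assuming the management interface is `vmk0`:

```shell
# Tag the existing management vmkernel interface to also carry vMotion
esxcli network ip interface tag add --interface-name=vmk0 --tagname=VMotion

# List the service tags currently assigned to vmk0
esxcli network ip interface tag get --interface-name=vmk0
```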
 

yinan

Golden Member
Jan 12, 2007
I prefer FC because network people can't mess it up.

Also, you are confusing port group and vSwitch. ESXi management traffic is basically negligible, and I would rather have those NICs and that bandwidth available for VM traffic than leave them essentially unused. You can also balance all of this in software with network resource pools and custom failover/usage policies if you want.
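Those custom failover policies can be set per port group on a standard vSwitch, so management and VM traffic can share all the uplinks while still preferring different NICs (network resource pools, by contrast, are a distributed-switch feature configured in vCenter). A sketch with assumed port group and uplink names:

```shell
# Per-port-group override: management prefers vmnic0, VM traffic prefers
# vmnic1, and each fails over to the other uplink
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VM Network" --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```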
 

imagoon

Diamond Member
Feb 19, 2003
yinan said:
> I prefer FC because network people can't mess it up.
>
> Also, you are confusing port group and vSwitch. ESXi management traffic is basically negligible and I would prefer to have the NICs and bandwidth available for VM traffic. Also, you can balance all of this in software with network resource pools and custom failover/usage policies if you want.

Not confusing it at all. I was simply using the terms from the switching side rather than the VMware side. A network port group attaches to a single vSwitch - VMware 101. Also, the management traffic *may* be negligible, but it isn't always, and it isn't a good situation to be in when the various vSphere agents start getting flooded off the network. Traffic goes up heavily when you add remote monitoring (like Ops Manager) or replication control at high VM:host ratios. And while you can create resource pools and custom policies until you're blue in the face, you can also simply toss a few 1-gig connections at your 10GbE-connected hosts and reduce the complexity.

As for FC, I have rarely seen it not messed up, usually due to poor storage design. Add the word "Zone" and most FC fabrics I have seen have been the wild wild west - normally stemming from a storage guy insisting "network people can't mess it up." Anyway, I generally see it fading away, as it no longer really offers any business advantage for the cost premium. I am not seeing that many FC boards in the VMAX 40Ks anymore.
 

yinan

Golden Member
Jan 12, 2007
In my rather large environment, all the storage we use is FC block from various high-end EMC arrays. There have been too many network issues to trust the network team not to inadvertently rip away the storage network and essentially cause a PSOD. I also have basically all the agents and such running, but most of those agents hit vCenter itself rather than the hosts, so vCenter is your bottleneck, not comms to the hosts.

Plus, sometimes getting additional 1Gb ports on switches is a challenge.

But then again, everyone's environment is different.
 
Feb 25, 2011
yinan said:
> I prefer FC because network people can't mess it up.

Why aren't you running your iSCSI network on a separate fabric that only you have the keys for? :thumbsdown:

(Ok, ok, hypocrite here - I only partially am, but that's only because I'm piggybacking on the IT trunks for replication between SANs in different racks.)
 

imagoon

Diamond Member
Feb 19, 2003
> Why aren't you running your iSCSI network on a separate fabric that only you have the keys for? :thumbsdown:
>
> (Ok, ok, hypocrite here - I only partially am, but that's only because I'm piggybacking on the IT trunks for replication between SANs in different racks.)

It sucks not being able to trust your team. I luck out: I manage my iSCSI devices and am one of the leads on the network side. We also do simple things like making sure the hostnames for storage-network devices are distinct from normal network devices, etc.