Etherchannel/802.3ad question

SoulAssassin

Diamond Member
Feb 1, 2001
So first things first: I'm a storage/backup guy, not a networking guy, but I'm trying to figure some stuff out, and I know there are some guys on here with enterprise experience.



Getting ready to do some lab testing of a Sun T5220 with dual quad-port cards (X4447A-Z) and an HP DL380 G5 running RHEL4, also with dual quad-port cards (NC364T), and a pair of Catalyst 6509 switches. Need to get EtherChannel bonding working with as many ports as possible (hopefully all 8) with the goal of increasing backup ingest rates. All of the backup clients will still be on single gigabit, but we will be bringing in a ton of simultaneous streams.
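On the RHEL side, the Linux bonding driver is what handles this. A rough sketch of what the config might look like on a RHEL4-era box (addresses and interface names are placeholders, and whether layer3+4 hashing is available depends on the exact kernel, so check `modinfo bonding` first):

```
# /etc/modprobe.conf - bond0 in 802.3ad (LACP) mode.
# xmit_hash_policy only controls which slave OUTBOUND traffic uses;
# inbound distribution is decided by the switch's own hash.
alias bond0 bonding
options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.10.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (one file per slave NIC)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```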

Question is... #1: is 4 ports from card 1 to switch 1 and 4 from card 2 to switch 2 the right way to do it, or is it better to cross two from each? Fault tolerance in the event of a NIC or switch failure is required, but we can live with reduced performance for a time period, and I wouldn't worry about slamming one NIC if a switch fails. #2: are there any good best-practice docs or white papers written on this? Been reading some stuff on Cisco's site, but the more the merrier.

I have a couple networking guys on this with me but trying to make sure we do this the right way the first time. Answers to the above or thoughts in general are appreciated.
 

spidey07

No Lifer
Aug 4, 2000
The 6509 can support up to 8 ports in an EtherChannel or LAG (link aggregation group). Ideally you'd just have two 10 Gigabit Ethernet ports to two different switches, but you may not have that capability. If you don't, then a single channel of eight gigabit ports to a single switch would be fine.

You're going to have to watch load/traffic distribution in the channel - it doesn't just use every link. There is an algorithm used to select which port/link each frame takes, and that is set at the switch level globally. You should also be able to select this on the NICs as well; that side controls how traffic egresses the host. Most times source and destination IP and TCP port are used to get a good balance. Backups are a little different in that they usually use a single TCP session to do the heavy moving, and that results in only one link being used per session. If you have one or two backup servers that are the destination for the traffic and the clients are lots of servers, then you'll get a somewhat even load.

Just make sure you guys understand this because a common complaint is "hey! It's not combining all of them to give me 8 gig!". That's the way it works.
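The per-flow pinning spidey07 describes can be sketched like this (a hypothetical Python illustration of the idea, not Cisco's actual hash; port 13724 and the addresses are just example values):

```python
import zlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Deterministically map a flow's 4-tuple to one channel member,
    the way an EtherChannel hash pins each flow to a single link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# One backup stream = one flow = one link, no matter how big it is:
link = pick_link("10.0.0.7", "10.0.1.5", 40321, 13724, 8)
assert link == pick_link("10.0.0.7", "10.0.1.5", 40321, 13724, 8)

# Many clients = many flows, which spread across the channel members:
links_used = {pick_link(f"10.0.0.{i}", "10.0.1.5", 40000 + i, 13724, 8)
              for i in range(1, 101)}
print(sorted(links_used))
```

A single flow can never exceed one member link's bandwidth; the aggregate only scales when there are many flows, which is exactly why lots of simultaneous client streams is the favorable case here.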

-edit-
I see you have two 6509s. Are these equipped with VSS supervisors and set up as a virtual switch? If so, you could do 4 ports to one switch and 4 to the other and still have it be one EtherChannel or LAG. If not, then run four from NIC 1 to switch one and four from NIC 2 to switch two to get two EtherChannels. Then use IP multipathing on the host.
 

SoulAssassin

Diamond Member
Feb 1, 2001
We found a major issue with IPMP in the latest version of NetBackup where it was creating traffic from both IP addresses, confusing the hell out of the master server and crippling our environment for 2 days, so IPMP is definitely off the table. :) I don't know yet if they are set up as virtual switches...I was reading about this earlier: "The SMLT protocol removes this limitation by allowing the physical ports to be split between two switches. Cisco's Virtual Switching System allows the creation of a Multichassis Etherchannel (MEC) allowing ports to be aggregated towards different physical chassis that conform a single "virtual switch" entity." Maintaining failover in the event of a switch failure is definitely important, so hopefully our Cisco guys already have this taken care of.
 

spidey07

No Lifer
Aug 4, 2000
You need the new VSS supervisors to take advantage of VSS and MEC. They've only been out a year so it's pretty new stuff.
 

Cooky

Golden Member
Apr 2, 2002
FYI - the Unix & RHEL hosts may not support LACP, so you may need to configure the EtherChannel statically (mode "on") on the switch side.
At least that was the case a year ago with a Solaris box we had.
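If the host end can't speak LACP, the switch side has to be forced to match. Roughly, in Catalyst IOS syntax (interface range and channel number are placeholders):

```
! Static channel - no negotiation protocol, so BOTH ends must be
! hard-configured or you risk a loop/blackhole on mismatch
interface range GigabitEthernet1/1 - 4
 channel-group 1 mode on
! If the host does support LACP, use "channel-group 1 mode active" instead
```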

Another thing we watch out for: if you have blade enclosures, make sure the load-balance algorithm is set to src-dst-ip, as Spidey pointed out above.
Some of those blade servers have burnt-in MAC addresses that are all even-numbered (HP ones do), and with MAC-based hashing you'd end up utilizing only one link.
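That algorithm is a global setting on the 6500. Roughly (Catalyst IOS; exact keywords can vary by version):

```
! Hash on source+destination IP instead of MACs so flows from many
! clients spread across the channel members
port-channel load-balance src-dst-ip
! Verify with: show etherchannel load-balance
```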

On a side note:
We just stood up our first VSS pair...very exciting stuff; we'll see how well it performs.