Does NIC bonding increase throughput?

Or is it application-specific?

I've got a box running Solaris that's pushing a ton of traffic over its NICs (average is ~2 TB worth of files getting moved onto it every day between 12 am and 4 pm), and I've been debating whether or not it might be worth setting up bonding on it for performance's sake (it's got the extra NIC, and I could hook it up to a second switch pretty easily).
 
Yes. I'm not sure how it works on Solaris, but on Linux you bond them; on Windows you use something like the HP teaming utility. Also, your switch has to support it.
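For the Solaris side specifically, recent releases configure aggregation with `dladm`. A minimal sketch, assuming Solaris 11; the link names `net0`/`net1` and the aggregation name `aggr0` are placeholders for illustration (check yours with `dladm show-link`):

```shell
# Sketch: build an LACP aggregation from two links on Solaris 11.
# Link names net0/net1 and the name aggr0 are assumptions.
dladm create-aggr -L active -l net0 -l net1 aggr0   # LACP in active mode
dladm show-aggr                                     # verify the aggregation

# Then plumb an address on the new aggregation link:
ipadm create-ip aggr0
ipadm create-addr -T dhcp aggr0/v4                  # or a static address
```

The switch ports on the other end have to be configured as a matching LACP channel for this to come up.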
 
Most consumer-level switches won't do this. You have to start getting into the Cisco/Juniper game for this to work.
 
What feature would I look for on the switch to tell? Off the top of my head, I think the box in question is connected to a Cisco 6509 (with a second one available for the secondary connection).
 
It's called link aggregation (LAG) or, in Cisco terms, EtherChannel. The distribution algorithm can be changed on the switch to suit your needs (hash on src/dst IP, or src/dst IP and port). Keep in mind, however, that if these transfers occur between two single hosts, only one NIC would still be used.
 

Translation:

If you're serving to one computer, you'll be stopped at 1 Gb/s. If the machine is serving many hosts, it can get close to 2 Gb/s, assuming the hardware can supply the data to the NICs.
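The single-flow limit follows from how the switch picks a member link: it hashes the packet headers, so a given src/dst pair always lands on the same physical port. A toy Python sketch of that idea (the CRC hash here is an illustration, not Cisco's actual algorithm):

```python
# Toy illustration of hash-based link selection in a 2-port LAG.
# A given (src, dst) pair always maps to the same member link, which is
# why a single host-to-host transfer can never use more than one NIC.
import zlib

def pick_link(src_ip: str, dst_ip: str, num_links: int = 2) -> int:
    """Deterministically map a flow to one member link."""
    key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(key) % num_links

# One pair of hosts: every packet takes the same link.
assert pick_link("10.0.0.1", "10.0.0.2") == pick_link("10.0.0.1", "10.0.0.2")

# Many distinct senders: flows can land on either link, spreading the load.
links = {pick_link(f"10.0.0.{i}", "10.0.0.200") for i in range(1, 101)}
print(links)
```

With many sources hashing into the group, the aggregate can approach the sum of the links; with one source it never can.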
 
thanks 😛

In the environment in question, I've got a couple hundred servers pushing data out over a private network to one machine (the receiving machine being the one I want to set up aggregation on).
 

😛 Just remember that 2 Gb/s is about 250 MB/s, so make sure the hardware can handle the feed; otherwise there's no point. E.g., if you're pushing only 80 MB/s (say, HDD-limited), a LAG group will have no effect. You may also need to set up LAG groups between some of the switches, depending on your layout.
 

In that case you won't run into any load-balancing restrictions. I think the default on 6500s is source and destination IP. You can check with "show etherchannel load-balance".
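For reference, the switch side would look roughly like this on a Catalyst 6500 running IOS; the interface range and channel-group number are assumptions for illustration, and `mode active` requests LACP to match an LACP-configured host:

```
! Sketch of the 6500 side; interface and channel numbers are assumptions.
interface range GigabitEthernet1/1 - 2
 channel-protocol lacp
 channel-group 10 mode active
!
port-channel load-balance src-dst-ip
!
! Verify with:
!   show etherchannel summary
!   show etherchannel load-balance
```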
 