NIC Teaming?

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
So my new UD3P has 2 gbit nics and the motherboard says I can do "teaming" if I use 2 cables. Is that sort of like SLI/Crossfire for NICs and would I see any benefit to doing that when playing games/downloading/surfing? If there's even a small boost, I'll grab a cable from monoprice but if it's simply hassle and no help, I won't bother. Thanks!
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
You need a switch that supports it, and even then, unless you have multiple clients pulling from it, you won't see a performance increase. The technology is called link aggregation (LAG), or EtherChannel in Cisco's implementation. Most managed switches will support it.
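For illustration, here's a rough Python sketch of the idea behind a hash-based team or EtherChannel (the addresses and the two-port team are made up, and real hash policies use more fields): every packet of a given flow hashes to the same member port, which is why one client never gets more than a single NIC's worth of bandwidth.

```python
# Toy model of hash-based link aggregation: a flow is pinned to one
# member port, so only multiple clients can spread load across the team.
def pick_port(src_ip: str, dst_ip: str, num_ports: int = 2) -> int:
    """Map a flow (src, dst) to one team member, roughly like an L3 hash policy."""
    src_last = int(src_ip.rsplit(".", 1)[1])   # low octet of source address
    dst_last = int(dst_ip.rsplit(".", 1)[1])   # low octet of destination address
    return (src_last ^ dst_last) % num_ports

server = "192.168.1.10"
for client in ("192.168.1.20", "192.168.1.21", "192.168.1.22"):
    print(client, "-> port", pick_port(client, server))
# Each client always maps to the same port; a single transfer can never
# use both links, but several clients together can.
```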
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
Teaming is only useful if you have multiple PCs simultaneously communicating with a single PC (the one with teaming enabled). The total traffic from the multiple clients can then be greater than a single NIC could handle. But the traffic between the PC with teaming enabled and any other single device will still be limited (at best) to the speed of a single NIC.

The most common uses for teaming are for a server that has many clients talking to it simultaneously and for redundancy if one of the NICs fails.
 
Last edited:

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,553
430
126
This is an example of what an Intel card with multiple ports can do.

It is a good read for understanding the issues involved in teaming (combining multiple network cards/ports).

Read it and see what (if anything) fits your needs and the network that you have (or envision).

Teaming Features
Teaming Features include failover protection, increased bandwidth through aggregation, and balancing of traffic among team members. Teaming Modes are AFT, SFT, ALB, Receive Load Balancing (RLB), SLA (Static Link Aggregation), and IEEE 802.3ad Dynamic Link Aggregation. Features available by using Intel's Advanced Networking Software (ANS) include:

  • Fault Tolerance
    Uses one or more secondary adapters to take over for the primary adapter should the first adapter, its cabling or the link partner fail. Designed to ensure server availability to the network.
  • Link Aggregation (this is the feature the NASCAR wannabes are looking for) :D
    The combining of multiple adapters into a single channel to provide greater bandwidth. Bandwidth increase is only available when connecting to multiple destination addresses. ALB mode provides aggregation for transmission only, while RLB, SLA, and IEEE 802.3ad dynamic link aggregation modes provide aggregation in both directions. Link aggregation modes require switch support, while ALB and RLB modes can be used with any switch.
  • Load Balancing
    The distribution of the transmission and reception load among the aggregated network adapters. An intelligent adaptive agent in the ANS driver repeatedly analyzes the traffic flow from the server and distributes the packets based on destination addresses. (In IEEE 802.3ad modes the switch provides load balancing on incoming packets.)
    Note: Load Balancing in ALB mode can only occur on Layer 3 routed protocols (IP and NCP IPX). Load Balancing in RLB mode can only occur for TCP/IP. Multicasts, broadcasts, and non-routed protocols are transmitted only over the primary adapter.
Teaming Modes
  • Adapter Fault Tolerance (AFT)
    Allows mixed models and mixed connection speeds as long as there is at least one Intel® PRO server adapter in the team. A 'failed' primary adapter will pass its MAC and Layer 3 address to the failover (secondary) adapter. Implemented in Microsoft Windows*, NetWare* 4.11 and above, UnixWare* 7.x with ddi8, and Linux* (32 bit). All adapters in the team should be connected to the same hub or switch with Spanning Tree (STP) set to Off.
  • Switch Fault Tolerance (SFT)
    Uses two (total) adapters connected to two switches to provide a fault-tolerant network connection in the event that the first adapter, its cabling, or the switch fails. This is determined by a link failure. Do not put clients on the link partner switches, as they will not fail over to the partner switch. Available in Windows NT* 4.0 and 2000, as well as in NetWare and Linux. Spanning Tree (STP) must be On.
    Note: Only 802.3ad DYNAMIC mode allows failover between teams.
  • Adaptive Load Balancing (ALB)
    Offers increased network bandwidth by allowing transmission over 2-8 ports to multiple destination addresses, and also incorporates Adapter Fault Tolerance. Only the primary receives incoming traffic. Only the primary transmits broadcasts/multicasts and non-routed protocols. The ANS software load balances transmissions based on destination address, and can be used with any switch. Simultaneous transmission only occurs to multiple addresses. Implemented in Microsoft Windows* 2000, Windows Server* 2003, and Windows NT 4; NetWare 4.11 and above; UnixWare 7.x with ddi8; and Linux. This mode can be connected to any switch.
    • Receive Load Balancing (RLB)
      • Offers increased network bandwidth by allowing reception over 2-8 ports from multiple addresses.
      • Can only be used in conjunction with ALB.
      • RLB is enabled by default when an RLB team is configured.
      • Only the adapters connected at the fastest speed will be used to load balance incoming TCP/IP traffic. The primary, regardless of speed, will receive all other RX traffic.
      • Can be used with any switch. Any failover will increase network latency until ARPs are re-sent. Simultaneous reception only occurs from multiple clients.
      • Available for Microsoft Windows.
The above is a quote from Intel's dual-port NIC software documentation.
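To make the ALB/AFT behavior above concrete, here is a hypothetical Python sketch (port names and addresses are invented; this is only a toy model, not Intel's actual ANS logic): transmits are spread by destination address, broadcasts and all receives stay on the primary, and a failed member is skipped.

```python
# Toy sketch of ALB-style teaming with adapter fault tolerance.
class Team:
    def __init__(self, ports):
        self.ports = ports                      # e.g. ["nic0", "nic1"]; first is primary
        self.up = {p: True for p in ports}      # link state per member

    def primary(self):
        return next(p for p in self.ports if self.up[p])   # first healthy member

    def tx_port(self, dst_ip, broadcast=False):
        if broadcast:
            return self.primary()               # broadcasts/multicasts: primary only
        healthy = [p for p in self.ports if self.up[p]]
        return healthy[int(dst_ip.rsplit(".", 1)[1]) % len(healthy)]

team = Team(["nic0", "nic1"])
print(team.tx_port("10.0.0.21"), team.tx_port("10.0.0.22"))  # spread by destination
team.up["nic1"] = False                          # simulate a failed adapter
print(team.tx_port("10.0.0.22"))                 # everything falls back to nic0
```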
 
Last edited:

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
No gain: your internet connection isn't enough to saturate even a single link, much less need two.
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
Thanks for all the great replies. I think I'll head down to Subway with the $5 instead... :)
 

hclarkjr

Lifer
Oct 9, 1999
11,375
0
0
my damn board I just bought has 4 NICs on it for this. Wish they would have put something useful there instead.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,553
430
126
my damn board I just bought has 4 NICs on it for this. Wish they would have put something useful there instead.

It costs them a few dimes to add each port, and the robot does the work.

Just another step in marketing to the ignorant.

"We are the Brand names of hardware for the Home users.

Our boards are the best they have Not just one NIC, but two or more.

Our 802.11g Wireless does 300 feet indoor, our new draft_N does x10 more".


They do not dare to cross into the Twilight Zone yet, so they are just implying that their wireless can do 3,000 feet indoors.
 

Jamsan

Senior member
Sep 21, 2003
795
0
76
When does NIC teaming become beneficial from a performance standpoint? Is there something specific to look for in performance monitor on the network interface that helps identify when the NIC starts becoming the bottleneck? Output queue length?
 
Last edited:

theevilsharpie

Platinum Member
Nov 2, 2009
2,322
14
81
When does NIC teaming become beneficial from a performance standpoint?

When the server is servicing multiple clients and the cumulative bandwidth approaches the bandwidth limit of your network interface.

Is there something specific to look for in performance monitor on the network interface that helps identify when the NIC starts becoming the bottleneck? Output queue length?

http://technet.microsoft.com/en-us/magazine/2008.08.pulse.aspx?pr=blog#id0120043
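Outside of perfmon, a quick-and-dirty way to watch how close a NIC is to saturation is to sample its byte counters; a sketch using the third-party psutil package is below (the interface name "Ethernet" and the 1 Gbit/s link speed are placeholders you would have to adjust).

```python
# Sample a NIC's throughput for a few seconds and compare it to link speed.
import time
import psutil

NIC = "Ethernet"                    # placeholder: use your interface's name
LINK_BITS_PER_SEC = 1_000_000_000   # placeholder: 1 GbE

old = psutil.net_io_counters(pernic=True)[NIC]
time.sleep(5)
new = psutil.net_io_counters(pernic=True)[NIC]

bits = (new.bytes_sent + new.bytes_recv - old.bytes_sent - old.bytes_recv) * 8
print(f"~{bits / 5 / LINK_BITS_PER_SEC:.0%} of link capacity used over the last 5 s")
# If this regularly sits near 100% while several clients are hitting the box,
# teaming (or a faster link) starts to pay off.
```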
 

MarkXIX

Platinum Member
Jan 3, 2010
2,642
1
71
That's it in a nutshell. There is next to nothing you are likely to do at home that would saturate even a single 100 Mbps or gigabit connection.
 

yinan

Golden Member
Jan 12, 2007
1,801
2
71
I saturate my GigE link at home all the time when doing transfers from my laptop to my server. I have 2 SSDs in RAID 0 and I max the pipe easily.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I saturate my gige link at home all the time when doing transfers from my laptop to my server. I have 2 SSDs in raid 0 and I max the pipe easily.

You might be hitting the upper limits, but how have you gotten past the upper write speed of about 105 MB/s that even the best SSDs have? 105 MB/s is still lower than gig's max speed. I know that they read at 200 MB/s+ (which is faster than gig), but you have to put that data someplace.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
You might be hitting the upper limits, but how have you gotten past the upper write speed of about 105 MB/s that even the best SSDs have? 105 MB/s is still lower than gig's max speed. I know that they read at 200 MB/s+ (which is faster than gig), but you have to put that data someplace.

Memory.
 

TemjinGold

Diamond Member
Dec 16, 2006
3,050
65
91
You might be hitting the upper limits, but how have you gotten past the upper write speed of about 105 MB/s that even the best SSDs have? 105 MB/s is still lower than gig's max speed. I know that they read at 200 MB/s+ (which is faster than gig), but you have to put that data someplace.

With RAID 0, maybe?
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Switch IP hashing usually ends up feeding one of the teamed NICs, so you end up with 1 gig in one direction.

True MPIO probably works better for utilizing both NICs purely from a bandwidth standpoint.

Which is why everyone is waiting for decent, affordable, real 10 GbE cards to become a reality, so we don't have to deal with teaming or MPIO limits on gigabit Ethernet.

Redundancy is another matter, of course.
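The difference is easy to picture: a hash-based team keeps a whole session on one port, while MPIO-style round robin rotates individual I/O requests across paths, so one iSCSI session can use more than one NIC. A toy Python sketch (the path names are made up):

```python
# MPIO-style round robin: successive I/O requests alternate between paths,
# unlike a switch hash, which would pin the whole session to one port.
from itertools import cycle

paths = cycle(["path-A", "path-B"])   # two 1 GbE paths to the same target
for request in range(6):              # pretend these are iSCSI read/write requests
    print(f"I/O {request} -> {next(paths)}")
```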
 

yinan

Golden Member
Jan 12, 2007
1,801
2
71
You have to remember that SSDs are measured in MB per second while network traffic is in Mb. So a gig-e connection is actually only 125 MB per second. Most SSDs reach that easily.
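The arithmetic, for anyone following along (the overhead figure is a rough estimate):

```python
# Gigabit Ethernet expressed in MB/s: divide bits by 8, then subtract overhead.
raw_MBps = 1_000_000_000 / 8 / 1_000_000     # 125 MB/s on the wire
# With 1500-byte frames, Ethernet/IP/TCP headers and inter-frame gaps leave
# roughly 95% for payload (approximate), so real file copies top out near:
print(raw_MBps, "MB/s raw, ~", round(raw_MBps * 0.95), "MB/s usable")
```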
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,217
540
126
I use it for one of my servers. It is running Solaris 10 and uses ZFS for the disks. One of the things with ZFS is that it can be configured to cache writes to system RAM, so if you are streaming a ton of data to the system, it will be extremely fast until it runs out of RAM; with 8 GB of RAM, I have never hit that limit except when doing initial benchmarking/performance tuning. My normal use doesn't deal with files much larger than 6 GB (HD video). You need to own a switch that supports this functionality (which is not your "consumer" level switch, but business class). I picked up a 16-port managed GbE switch a few years back for around $150, which was a steal and worth every penny.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I use it for one of my servers. It is running Solaris 10 and uses ZFS for the disks. One of the things with ZFS is that it can be configured to cache writes to system RAM, so if you are streaming a ton of data to the system, it will be extremely fast until it runs out of RAM; with 8 GB of RAM, I have never hit that limit except when doing initial benchmarking/performance tuning. My normal use doesn't deal with files much larger than 6 GB (HD video). You need to own a switch that supports this functionality (which is not your "consumer" level switch, but business class). I picked up a 16-port managed GbE switch a few years back for around $150, which was a steal and worth every penny.

With a single file and a single user you would only be using one port, even if you have a team set up. Teaming doesn't improve 1-to-1 connections (at least with typical things like SMB). You can do things like round robin on iSCSI or the like only because the protocol allows blocks to be rotated through many connections via MPIO, but the speed is never / will never be 2 Gb/s.

I was still looking at the SSDs a bit, because a single good SSD can go just over a 1 Gb/s link, and in RAID 0 they benchmark faster, but I have seen many "real world" tests showing they also behave like typical RAID 0 magnetics: they benchmark at 250 MB/s, but real-world performance is closer to running them as singles. I will need to pick up a pair of SSDs someday, but if they are anything like magnetic RAID 0, I would expect that they wouldn't really "feel" twice as fast in day-to-day use.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
The latency of iSCSI (or AoE, or whatever) eats most of the gain from the SSD.

Just wait for 10 GbE, or be happy with 1 gig or manual load balancing. We're almost there (10 GbE); we just need some more affordable switches.
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,217
540
126
I have automated processes pushing files from the HTPC to the server, as well as reading files from the server on the HTPC and on other systems and media players in the house. So, in my case, it does make a difference, because I usually have a few different simultaneous connections to my server reading or writing data.