howto pair/team gigabit NICs?

gherald

Member
Mar 9, 2004
Suppose I want to get 2000Mbps (4000Mbps full duplex) between two WinXP PCs

a) Can I do this using two single port NICs on each PC and running two crossover cables between them?

b) How about if I had a gigabit hub and ran two cables from 1 PC to the hub, and 2 cables from the other PC to the hub?

c) How about if I ran 1 cable from each PC to the hub, and 1 crossover cable between the PCs?

Maybe this shows what I am asking better:

scenario A:
PC1 NIC1 --> crossover <-- PC2 NIC 1
PC1 NIC2 --> crossover <-- PC2 NIC 2

scenario B:
PC1 NIC1 --> HUB PORT 1
PC1 NIC2 --> HUB PORT 2
PC2 NIC1 --> HUB PORT 3
PC2 NIC2 --> HUB PORT 4

scenario C:
PC1 NIC1 --> HUB PORT 1
PC2 NIC1 --> HUB PORT 2
PC1 NIC2 --> crossover <-- PC2 NIC2

Oh, and I'll be using cheap NICs

Also, suppose PC1 were to run Linux while PC2 had XP. Will this affect configuration at all?
 

AFB

Lifer
Jan 10, 2004
The short answer is no: you can't even get 1000Mbps out of a standard gigabit NIC, because Windows can't keep up. You would also need special software and hardware. It's just not worth it. Maybe with a server OS, but not with XP.
 

gherald

Member
Mar 9, 2004
But would it work at the network layer? Would it be _any_ faster?

I didn't realize the OS/other hardware would be such a factor. Well, suppose PC1 looks like this:

OS: Linux
Disks: RAID 5 with six 250GB SATA drives


So, on a two-NIC link, would PC2 (an XP client) be able to stream files faster than through one NIC? What % difference would you guesstimate?

See, PC2 is going to be a "thin" SFF client with only two 36GB Raptors for space. I want it to be able to get data off the RAID 5 as quickly and cheaply as possible...

I'd also like to know if there is any significant difference between scenarios A, B, and C. Also, it would be interesting to know how A, B, and C would pan out if all we were talking about was a 100Mbps network.
 

AFB

Lifer
Jan 10, 2004
The most I have ever seen out of a gigabit link was 800Mbps, and that was with a highly modified version of Linux. Bonding isn't usually used for that with gigabit links. It was used when gigabit was priced beyond reason and people would bond multiple links between switches, and sometimes servers. The hardware was/is expensive, but it was cheaper than the alternative. I don't even know if you can bond gigabit links. I do know that the increase, if any, would not be worth it. One place where bonding would be faster is between routers and switches.
 

gherald

Member
Mar 9, 2004
Hmm, bonding... well at least I have a technical term to go on, that'll make googling for info easier.

So forget my question about gigabit. How would A, B, and C compare with just 100Mbps NICs? Is there an easy way to configure this bonding? I can't just wire up two NICs on each PC and expect it to be faster, right?
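From some quick googling, it looks like on the Linux side at least this would use the kernel's bonding module plus the ifenslave tool. A rough, untested sketch is below; the interface names and addresses are made up, and I have no idea if the module options are right for my kernel:

```shell
# Untested sketch of Linux NIC bonding (interface names and IP are examples).
# Load the bonding driver in round-robin mode, checking link every 100ms.
modprobe bonding mode=balance-rr miimon=100

# Bring up the virtual bond0 interface, then enslave both physical NICs to it.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

No idea what, if anything, the XP machine on the other end would need to match this, which is part of what I'm asking.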
 

AFB

Lifer
Jan 10, 2004
There is more than one term for it (bonding, teaming, and a couple more). It will never happen. Just use a single gigabit link. Maybe I could help if you explain why you want this.
 

gherald

Member
Mar 9, 2004
On the one hand, I am looking to assemble a file/p2p server with > 1 TB of redundant storage. (PC1)

On the other hand, I want a SFF XP client with only a RAID 0 of 36GB Raptors (PC2) to be able to transfer files back and forth as quickly as possible (and indeed, network mount most of the file server's space).

I'm open to other options like firewire or even fiber, but gigabit seemed like the obvious choice, especially considering both motherboards already have a NIC onboard. $18 apiece for two additional ones seemed like a cheap upgrade, if it's feasible.

Simply put: You're talking to a RAID fanatic who's wondering if he can do the same thing with his NICs.

 

AFB

Lifer
Jan 10, 2004
You could try. Hell, I don't even know where you should start. But... I still doubt it will work.
 

gherald

Member
Mar 9, 2004
Ah, okay. Well, hopefully someone else around here has some more insight. If not, I guess I'll crosspost to Tom's.
 

SoulAssassin

Diamond Member
Feb 1, 2001
We use NIC teaming with two GbE NICs at work, all Compaq/HP hardware. You can very easily configure them to work together using NLB. However, even with quad P4 2.8s, Emulex 9802 2Gb/s HBAs, and EMC disk, you'll never max out the NIC. We use them in failover mode, so at any given point only one NIC is actually active; each goes to a separate switch, so in the event of a switch or card failure we're covered.

Bottom line, don't waste your time trying to test. Your bus and drives won't come close to moving the amount of data you need to tax even a single GbE NIC.
 

AFB

Lifer
Jan 10, 2004
Wait, you might try BBR. They have some big networking forums. I think you are going to get told the same thing.
 

AFB

Lifer
Jan 10, 2004
Originally posted by: SoulAssassin
We use NIC teaming with two GbE NICs at work, all Compaq/HP hardware. You can very easily configure them to work together using NLB. However, even with quad P4 2.8s, Emulex 9802 2Gb/s HBAs, and EMC disk, you'll never max out the NIC. We use them in failover mode, so at any given point only one NIC is actually active; each goes to a separate switch, so in the event of a switch or card failure we're covered.

Bottom line, don't waste your time trying to test. Your bus and drives won't come close to moving the amount of data you need to tax even a single GbE NIC.

Failover=Yes
Combined=No
 

spidey07

No Lifer
Aug 4, 2000
Intel has really nice NIC drivers. This is where the bonding or teaming would come in handy.

But like everybody has said, the computer can't keep up with one GigE NIC, let alone two. Also, the bonding and load distribution are normally handled by MAC address. If there is only one conversation going on, it will follow one path only.
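To see why one conversation only ever uses one link, here is a toy sketch of the MAC-based hash idea (the MAC values, function name, and use of only the last octet are all made up for illustration; real drivers hash the full source and destination MACs):

```shell
# Toy illustration of MAC-based slave selection in NIC bonding.
# Real drivers XOR the full source and destination MAC addresses;
# here we use only the (made-up) last octet of each to show the idea.

link_for() {
  local src=$1 dst=$2 n_links=$3
  # same MAC pair always hashes to the same slave link
  echo $(( (src ^ dst) % n_links ))
}

# PC1 -> PC2 is one fixed MAC pair, so every packet of that transfer
# lands on the same slave NIC: no speedup for a single file copy.
link_for 0x0A 0x1D 2   # -> 1
link_for 0x0A 0x1D 2   # -> 1 (same pair, same link every time)
link_for 0x0B 0x1D 2   # -> 0 (a second client can land on the other link)
```

So bonding can help many simultaneous conversations, but a single PC1-to-PC2 transfer still rides one wire.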

We have some 16-proc Sun boxes with teamed GigE cards. The most we've ever seen was 1.2 Gigabits/sec. And that is on a $200K server.
 

AFB

Lifer
Jan 10, 2004
I know that when I do multi-stream downloads, the benefit ends after ~6 streams. It starts hurting performance after ~8.
 

nightowl

Golden Member
Oct 12, 2000
Well, chances are that the NICs you are looking at do not support any type of bonding. Also, as others have said, your PC will not be able to keep up with one GbE NIC. Your PCI bus is going to be a limiting factor; that is why most server NICs are 64-bit PCI, PCI-X, or some other variation. Also, you do need a switch that supports bonding, if I remember correctly.

So your best bet will be to get a single GbE NIC for each PC, and make sure it is a quality NIC like an Intel to ensure you get the best performance. Also, remember that SMB is not the most efficient network protocol, especially on a Windows box. If you were to use Linux, you would probably get better network performance than with Windows.
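The bus math is worth spelling out. A back-of-the-envelope sketch, assuming a plain 32-bit/33MHz desktop PCI slot (which is what a cheap NIC would go into):

```shell
# Peak throughput of a standard 32-bit / 33 MHz desktop PCI bus,
# which is shared by every device on it (NICs, disk controllers, etc.)
bus_mbyte_s=$(( 33 * 4 ))          # 33 MHz x 4 bytes per transfer = 132 MB/s
bus_mbit_s=$(( bus_mbyte_s * 8 ))  # = 1056 Mbit/s theoretical peak
echo "${bus_mbit_s} Mbit/s"        # barely one gigabit NIC, before any disk I/O
```

And that 1056 Mbit/s is a theoretical ceiling; real PCI throughput is lower, so a second gigabit NIC on the same bus buys you nothing.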
 

nightowl

Golden Member
Oct 12, 2000
Well, for sharing files SMB is the best way to go, but it is less efficient compared to FTP. That is just something to keep in mind when looking at transfer rates. I do not know of a better protocol for sharing files. Also, NFS can be troublesome when integrating with Windows machines; at least that has been my experience.