Gigabit LAN

arcenite

Lifer
Dec 9, 2001
10,660
7
81
Hello all. I just upgraded my network to gigabit and am getting speeds averaging around 12-15 MB/s. I used Netgear switches and cards with the Realtek chipset. HELP :D

Thanks,
Bill
 

spike spiegal

Member
Mar 13, 2006
196
0
0
Is that bits per second or bytes per second?

If it's BYTES per second, you're already well past what 100 Mbit can do. A 100 Mbit network will bottleneck at about 10 megaBYTES per second of transfer speed because of overhead, collisions, and other factors. If you're getting 12-15 megaBYTES per second, the gigabit link is making a difference and you obviously have everything set correctly.
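The bits-versus-bytes arithmetic can be sketched in a couple of lines (the 0.75 efficiency factor here is an assumed fudge for overhead and collisions, not a measured value):

```python
def effective_mb_per_sec(link_mbps, efficiency=0.75):
    """Rough effective transfer rate in megabytes/sec for a given link speed.

    Divides by 8 to convert bits to bytes, then applies an assumed
    efficiency factor for protocol overhead, collisions, and disk limits.
    """
    return link_mbps / 8 * efficiency

print(effective_mb_per_sec(100))   # 9.375  -> about 10 MB/s on 100 Mbit
print(effective_mb_per_sec(1000))  # 93.75  -> gigabit's realistic ballpark
```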

Note that your vanilla onboard gigabit NIC isn't that efficient and can't move data like the higher-end Intel Pro 1000 cards can. My dual-Xeon and dual-core AMD servers hit around 30-40 megaBYTES per second sustained, and that's when the system being written to has a very fast disk subsystem.
 

arcenite

Lifer
Dec 9, 2001
10,660
7
81
Thank you both for your reply :) It appears everything is in order then.

Thanks,
Bill
 

BlueWeasel

Lifer
Jun 2, 2000
15,944
475
126
From EZLan
Giga means that the Internal Clock is running 1000MHz. Trying to attribute it to "Speed of Transfer" is a Marketing thing.

Well, you learn something new every day. All this time, I figured gigabit LAN was a ~10x improvement over a 100 Mbps network.

 

stardrek

Senior member
Jan 25, 2006
264
0
0

From EZLan
Giga means that the Internal Clock is running 1000MHz. Trying to attribute it to "Speed of Transfer" is a Marketing thing.

Hahahah, silly EZLan if only he/she knew that Giga is just a prefix that means billion (10^9).
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,545
422
126
Originally posted by: stardrek
Hahahah, silly EZLan if only he/she knew that Giga is just a prefix that means billion (10^9).
LOL. I can assure you that EZLAN knew that Giga is 1,000,000,000 at least 30 years before you were born (may be even more). :shocked:;):beer:

:sun:
 

BlueWeasel

Lifer
Jun 2, 2000
15,944
475
126
Originally posted by: JackMDS
Originally posted by: stardrek
Hahahah, silly EZLan if only he/she knew that Giga is just a prefix that means billion (10^9).
LOL. I can assure you that EZLAN knew that Giga is 1,000,000,000 at least 30 years before you were born (may be even more). :shocked:;):beer:

:sun:

:laugh:
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: spike spiegal
Note that your vanilla onboard gigaNIC isn't that efficient and can't move data like the higher end Intel Pro 1000 cards can. My dual Xeon and dual Core AMD servers hit around 30-40megaBYTES per second sustained. That's if the system being written to has a very fast disk system.

30 MB/s is not a problem for modern IDE drives and onboard NICs, which may actually beat add-on cards by avoiding the PCI bus. I'm getting it consistently, and even hitting up to 70 MB/s now when going from RAID to a 2x RAID 0 array. It also stands to reason that if a 2x RAID 0 array can do 70 MB/s, a single IDE drive can do at least 35 MB/s.

One of the biggest problems that I have with such testing is that source file caching gets in the way. When the sources are cached, I also get 50-70 MB/s now going from "IDE" to RAID. Easy to solve by re-booting, but who wants to do that?
 

ColdZero

Senior member
Jul 22, 2000
211
0
0
There are a lot of other things to consider for gigabit LAN besides just NICs and hard drives. Do the switches you are going to use support jumbo frames? Without them, gigabit is not that much of an improvement over 100Base-T. If they do support jumbo frames and you turn them on, then every device on that layer 2 segment needs to support jumbo frames as well, meaning 1000Base-T all around with jumbo frame support. Devices that don't won't be able to communicate with anything using jumbo frames. If you need to split the LAN apart, you will need a router between the two network segments.

It sounds like you aren't using jumbo frames and that you are hitting the limit of the PCI bus the cards are on. Or the switch could just not be very fast; providing a 1000 Mb/s link doesn't mean the switching is happening at wire speed.
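For a sense of what jumbo frames actually buy you, here is a back-of-the-envelope sketch (assuming plain IPv4/TCP with no header options, plus standard Ethernet framing overhead):

```python
def tcp_payload_efficiency(mtu):
    """Fraction of on-the-wire bits carrying TCP payload for a given MTU.

    Assumes plain IPv4 + TCP with no options (40 bytes of headers),
    plus per-frame Ethernet costs: 14-byte header, 4-byte FCS,
    8-byte preamble, and 12-byte inter-frame gap.
    """
    payload = mtu - 40             # strip IP + TCP headers
    wire = mtu + 14 + 4 + 8 + 12   # full on-the-wire cost per frame
    return payload / wire

print(f"1500 MTU: {tcp_payload_efficiency(1500):.1%}")  # 94.9%
print(f"9000 MTU: {tcp_payload_efficiency(9000):.1%}")  # 99.1%
```

The per-packet CPU cost (fewer interrupts per megabyte) mattered more than the ~4% wire efficiency on hardware of this era, which is why jumbo frames were pushed so hard for gigabit.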
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
I hit 70 MB/s without jumbo frames through a consumer switch. Get jumbo frames if you can, but you can still get significant benefits without them.

Edit: I agree there may be several other things to consider. Are your drives compressed? Heavily fragmented? Constantly being accessed for other tasks? Is there any encryption in the communication? Are your cables bad? Is your virus scanner stressing your system? Etc. etc.
 

ColdZero

Senior member
Jul 22, 2000
211
0
0
70 MB/s is pretty impressive for a network without jumbo frames. What's your CPU utilization at?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: ColdZero
70MB/s is pretty impressive for a network without jumbo frames. Whats your CPU utilization at?

I don't know offhand, because I run folding@home almost all the time -- this takes up the idle cycles and gives me 100% CPU utilization all the time. When I measure this again, I'll let you know what the CPU utilization is like without folding@home.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
I did many more tests, and was able through caching and using RAID on the receiving side to get performance up to 80 MB/s (peak 78.1 MB/s on a set of files totaling 1.25 GB, 83.4 MB/s on a single 568 MB file). Without (significant *) caching, I get 65-70 MB/s on a single 4.5 GB file going from RAID to RAID.

(* In this case, at most 1 GB would be cached on the source side, because that's how much RAM it has; further, the caching would have to be smart, in order to not roll over completely with every transfer of the 4.5 GB file; I saw some indication of such smarts, but this wouldn't make a significant difference overall because of the sizes and rate limits involved.)

I noticed that pushing seems to be significantly faster than pulling (i.e., when transferring files from machine A to B, it's faster to initiate the transfer by logging on to A than by logging on to B), so I focused on the best transfer mode instead of averaging the two or mixing them up. I guess this is because the file system / OS does some further optimization when reading the file locally, but perhaps not when those requests come in remotely. Of course, it may simply be due to some other unknown inefficiency; it's just a guess.

On a 4.5 GB file, I saw on the order of 20% CPU on the sending side (2.8 GHz Pentium) and 55% CPU on the receiving side (2.0 GHz Athlon 64). These figures are just eyeball estimates; the CPU utilization was somewhat erratic, going +/- 5-10% around the given figure. Network utilization was around 52%. I think some of the utilization differences are due to the processor differences; differences in the storage and network implementation on both sides might also figure.

In order to factor out the storage, I did some tests with a pure transfer utility. I chose a version of the "open source" TTCP. AnandTech and others reference the Microsoft version NTTTCP, which, in keeping with traditional practice for a lot of benchmarking, seems to no longer be available. I used PCAUSA's PCATTCP 2.01.01.08, with settings as close to AT's as I could get: -l 250000 -n 30000. This version comes with source.

With this utility, I was able to get transfers up to around 110 MB/s, with 90-95% network utilization. CPU utilization was high, around 50% on the sending side, and around 50% on the receiving side. Consecutive runs reported around a 5% difference.

Well, I'm not sure how this data might help you. My interpretation is that I can run the network pretty hard when there are no other bottlenecks; that I may be running into CPU bottlenecks with application processing during transfers (and jumbo frames, if I could use them, might help there); and that I currently cap somewhere around 70 MB/s effective throughput with fast drives, with a further benefit from lots of RAM for file caching, just like everyone else.
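For anyone who wants to reproduce this kind of raw-transfer test without hunting down PCATTCP, a minimal TTCP-style sender/receiver can be sketched in a few lines. The buffer length and count mirror PCATTCP's -l/-n options; note that a loopback run like this only exercises the TCP stack, not a real NIC or switch:

```python
import socket
import threading
import time

def ttcp_style_test(nbuf=400, buflen=250000):
    """Minimal TTCP-style TCP throughput test over loopback.

    Sends nbuf buffers of buflen bytes (mirroring PCATTCP's -n/-l
    options) and returns (bytes received, elapsed seconds).
    """
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # any free port
    srv.listen(1)
    port = srv.getsockname()[1]
    received = []

    def sink():
        # Receiver: drain the connection until the sender closes it.
        conn, _ = srv.accept()
        total = 0
        while chunk := conn.recv(65536):
            total += len(chunk)
        conn.close()
        received.append(total)

    t = threading.Thread(target=sink)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"\0" * buflen
    start = time.perf_counter()
    for _ in range(nbuf):
        cli.sendall(payload)
    cli.close()
    t.join()
    srv.close()
    return received[0], time.perf_counter() - start

total, secs = ttcp_style_test()
print(f"{total} bytes in {secs:.2f} real seconds = {total / secs / 1e6:.1f} MB/s")
```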
 

Fullmetal Chocobo

Moderator, Distributed Computing
Moderator
May 13, 2003
13,704
7
81
Originally posted in the 'Peer to Peer Giga Networks' link:
If you install Giga on Double Xenon Computers with fast SCSI RAID, and Server Software you might get 400% (x4) improvement.

Xeon. X-E-O-N. I don't have two light bulbs powering this system.
 

Zoinks

Senior member
Oct 11, 1999
826
0
76
Someone help me understand gigabit! What type of transfer rate should I expect? Is 110 MB/s for real? I'm only getting 95 KB/s. It doesn't seem realistic that I could be getting only 1/1000th of the bandwidth I should be.

I'm testing on two Dell Precision workstations with built-in Intel Pro/1000 MT NICs. Both are connected to an SMC 8-port gigabit switch (8508). Both are set to use 9104 jumbo frames.


C:\>pcattcp -t 192.168.13.2
PCAUSA Test TCP Utility V2.01.01.08
TCP Transmit Test
Transmit : TCP -> 192.168.13.2:5001
Buffer Size : 8192; Alignment: 16384/0
TCP_NODELAY : DISABLED (0)
Connect : Connected to 192.168.13.2:5001
Send Mode : Send Pattern; Number of Buffers: 2048
Statistics : TCP -> 192.168.13.2:5001
16777216 bytes in 172.08 real seconds = 95.21 KB/sec +++
numCalls: 2048; msec/call: 86.04; calls/sec: 11.90
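As a sanity check on that report (16 MB in 172.08 seconds), the rate it prints can be recomputed directly from its own figures:

```python
# Recompute pcattcp's reported throughput from the numbers in its output
bytes_sent = 2048 * 8192        # Number of Buffers x Buffer Size = 16777216
seconds = 172.08
kb_per_sec = bytes_sent / seconds / 1024
print(f"{kb_per_sec:.2f} KB/sec")  # 95.21 -- matches the reported figure
```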
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
zoinks,

make sure the network cards and switch are set to autonegotiate speed/duplex.

Speeds of 300-500 megabits/sec are normal for gigabit Ethernet (1000Base-T).

I'd bet you have a mismatch going on, or something with the cable. You'll want to use store-bought cables to ensure optimal performance.
 

Zoinks

Senior member
Oct 11, 1999
826
0
76
Both cards are using Intel PROSet v10.3.32.2.
Link status says "Speed: 1000 Mbps/Full Duplex."
Speed and Duplex are set to Auto Detect.

Diagnostics:
Connection test: Passed. It doesn't say much else, but the info says "if the adapter connects below its maximum speed the connection test reports the reason for this lower speed."
Test for best link speed options: "This adapter is running at maximum speed" and "Connected at maximum speed of link partner of 1000 Mbps Full Duplex."
Cable test: "No cable problems detected."
Hardware test: all passed.

There's not much I can do with the switch, but both ports have a green light by 1000BaseT.

Could it be a cable problem despite these diagnostics?
300-500 megabits/sec = 37.5-62.5 MB/s, right? So at well under 1 MB/s I'm not even getting 100Base-T speeds! How could things be that screwed up?
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
I'm thinking it is probably the cable then; either that or you still have a duplex mismatch.

If both the switch and NIC are showing 1000/full duplex, then it must be the cable. Intel's drivers are pretty top-notch, but you could always try updating them.

Also try without jumbo frames; that switch may not support them. Some NICs/switches have different meanings of what a jumbo frame size is in bytes, so try without.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Well, without a figure for just how big the jumbo frames it supports are, that spec means jack squat.

You need to know how big, exactly how big, because a single byte over the limit and the switch will drop the frame.

You might try forcing 9000-byte frames in the driver and retest. If it's still underperforming, try 8000-byte frames, then 4000, then 2000.

If it can't handle 8000-byte frames, then SMC is lying. And hence the confusion over jumbo frames: they aren't an IEEE standard, AFAIK.
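One way to probe which frame size actually survives the switch is a non-fragmenting ping at each candidate size; the ping payload has to leave room for the IP and ICMP headers. A small helper for the arithmetic (assuming IPv4: 20-byte IP header plus 8-byte ICMP header):

```python
def ping_payload_for_mtu(mtu):
    """Largest ICMP echo payload that fits in one frame of the given MTU.

    Send this size with fragmentation disabled (ping -f -l <size> on
    Windows, ping -M do -s <size> on Linux): if the ping gets through,
    every device on the path handles that MTU. Assumes IPv4: 20-byte IP
    header + 8-byte ICMP header = 28 bytes of overhead.
    """
    return mtu - 28

for mtu in (9000, 8000, 4000, 2000, 1500):
    print(f"MTU {mtu}: ping payload {ping_payload_for_mtu(mtu)}")
```

So to test a 9000-byte frame end to end, you'd ping with an 8972-byte payload and the don't-fragment flag set, then step down through the smaller sizes until one succeeds.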