Actual Speeds

warath

Member
Mar 3, 2002
31
0
66
I've tried to find information on this for a long time, and thought I'd ask here (sorry if it's been discussed; I couldn't find where).

100Mbit network, with a 100Mbit switch:
100Mbit = 12.5Kbytes/sec (theory)

1000Mbit/Gigabit
1000Mbit = 125Kbytes/sec (theory)

I have NEVER been able to get these results or even close to them.

On the 100Mbit I've gotten Windows to report about 50% utilization, and through programs like bpftp a sustained 5-8Kb/sec.
On the 1Gbit I've gotten only a max of 10% utilization, or 10-12Kb/sec.

Why is this? I know there is overhead, but that shouldn't matter. Also, for the 1Gbit, I'm transferring between machines on the same switch, a gigabit one, and each computer has striped RAID drives that benchmark at 100Mbytes/sec on average, so no bottleneck there either.

Does some network guru know why this happens, and ways to increase performance?
Thanks
 

freebsdrules

Member
Feb 20, 2005
137
0
0
For one, your numbers are off by a factor of 1,000 (K where you mean M): 100Mbit = 100mbps = 12.5Mbytes/second, and 1000Mbit/Gbit = 1000mbps = 125Mbytes/second.

If the most you've seen on a 100mbit network is 50% utilization, something's going on. You should be able to easily see 80-95%. Gigabit is a bit trickier, the most I've seen on pretty high end equipment on actual transfers was around 270mbps sustained. Using iperf, I was able to get around 918mbps, however.
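The conversion freebsdrules describes is just a divide by eight; a quick sketch of the theoretical peaks (plain arithmetic, nothing beyond what the thread states):

```python
# Line rate is quoted in megabits/sec; divide by 8 for megabytes/sec.
def mbps_to_mbytes(mbps):
    """Theoretical peak in Mbytes/sec for a link rated in Mbit/sec."""
    return mbps / 8.0

for rate in (100, 1000):
    print(f"{rate} Mbit/s = {mbps_to_mbytes(rate):.1f} Mbyte/s")
# -> 100 Mbit/s = 12.5 Mbyte/s
# -> 1000 Mbit/s = 125.0 Mbyte/s
```

Real transfers land below these peaks because of framing and protocol overhead, which is what the rest of the thread digs into.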
 

warath

Member
Mar 3, 2002
31
0
66
Sorry, you're right, the K's should be M's. /bonk self :)

As for the utilization, I'll have to check the 100Mbit one again, as I haven't used it for a long time, since I switched to Gigabit.
However, on the gigabit, why is the utilization so low??
 

OmegaXero

Senior member
Apr 11, 2001
248
0
0
First off, if you're going for high network utilization rates a good NIC is important. I've seen Realtek NICs that can't push much over 70% on 100mbit switched ethernet. A good Intel PRO NIC will certainly help.

Secondly, you will get different utilization numbers depending on what you're doing with the network. For example, writing from one computer to another over a network connection will always be slower than reading; generally, pushing data is slower than receiving it. If you're trying to write large files to your RAID array, that's most likely the cause of your lower network utilization rates.

Third, and this is the most deceiving part: gigabit is only truly gigabit under very optimal circumstances. There are many limitations in current network technology that prevent gigabit from performing at its peak. The first and most obvious is that even extremely fast RAID arrays (in your case, 100Mbyte/sec arrays) cannot keep up with the peak speed of gigabit (125Mbytes/sec). If your gigabit card is PCI based you will NEVER see peak gigabit performance, because the PCI bus does not have the available bandwidth for gigabit (unless you're running some sort of PCI-X based card, or you have an onboard chip that sits on a separate bus from PCI). And, to top it all off, the majority of lower end 'gigabit' switches are nothing more than faster clocked 10/100 ethernet switches; true gigabit switches will support jumbo framing.

If I were you, I would start with trying to figure out why you can't max your 100meg switched connection; based on your current information, you should easily be able to do this, at least on reads, maybe not on writes. As for maxing out a gigabit connection, let's just say you'll need some really fast, expensive hardware. =)
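The bottleneck reasoning above can be sketched numerically. The figures below are the ones quoted in this thread (GigE peak, shared PCI peak, warath's RAID benchmark), not measurements:

```python
# Achievable throughput is capped by the slowest stage in the path.
# Figures are the thread's numbers, in Mbytes/sec.
stages = {
    "GigE link peak": 125,
    "32-bit/33MHz PCI bus (shared)": 133,
    "striped RAID array": 100,
}
bottleneck = min(stages, key=stages.get)
print(f"ceiling ~{stages[bottleneck]} Mbyte/s, set by the {bottleneck}")
```

And since the PCI bus is shared among all its devices, in practice it delivers well under its 133 figure, so on a PCI NIC the bus slot usually ends up the real limiter.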
 

warath

Member
Mar 3, 2002
31
0
66
Thanks OmegaXero...

What gigabit switch would you recommend (5-port is all I need, if a good one is available; I have a TRENDnet one atm)? Or should I even bother updating it, since two of the three machines are connected via PCI Linksys cards? The third has an Intel integrated Gigabit port.

Also, what is the PCI bus bandwidth? I thought it was 133Mbytes/s, or is it Mbits?
 

OmegaXero

Senior member
Apr 11, 2001
248
0
0
Good question on PCI bandwidth; I'm 90% certain it's rated in mbits (it is something like 13 years old now). Honestly, most switches are about the same for lower end (16 port and under) applications. Just make sure that if you do buy another gigabit switch, it supports jumbo framing; this is supposed to give the greatest benefit (provided that you have systems capable of maxing a gigabit link).

Most likely you're limited by your PCI cards. The trick here is to figure out whether you could squeeze more performance out of your current setup; I wouldn't look at getting more hardware if you've already invested the time (and cash) into what you have. Technically, even with PCI based GigE cards, you should be able to go faster than 100mbit ethernet. It's not uncommon for PCI GigE cards to hit about 230-280Mbits/second, which translates into about 31.2 megaBYTES of usable bandwidth (hey, that's still over twice as fast as 100mb switched ethernet, not bad eh?).

Just for testing purposes I would try the following. I'm assuming you probably have one of your RAID arrays on the system with the onboard Intel NIC (the really good NIC in this case). Try copying a large file (200MB or more) from the system with the Intel NIC to one of your systems with a Linksys NIC. This should easily max 100mbps ethernet and, if the extra bandwidth is available on the PCI bus, should start to approach something like 230mbits/sec if you're using your gigabit switch. Good luck. =)

EDIT: to correct what I stated at the beginning of this post: PCI is NOT rated in megabits but rather megabytes. However, take into consideration that this is all shared bandwidth (much like using a network hub vs. a switch). You will rarely be able to use all of PCI's available bandwidth for just a single device, which will in turn limit the transfer performance of devices capable of using all of it.
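As a rough sanity check on that suggested test, here is how long the 200MB copy should take at each of the rates mentioned (simple arithmetic, overhead ignored):

```python
# Expected duration of a 200 Mbyte copy at various link rates.
FILE_MBYTES = 200

def copy_seconds(file_mbytes, link_mbps):
    """Seconds to move file_mbytes at a rate of link_mbps (no overhead)."""
    return file_mbytes / (link_mbps / 8.0)

for mbps in (100, 230, 1000):
    print(f"{mbps:4d} Mbit/s -> {copy_seconds(FILE_MBYTES, mbps):5.1f} s")
# 100 Mbit/s -> 16 s; 230 Mbit/s -> ~7 s; 1000 Mbit/s -> 1.6 s
```

So if the copy finishes in well under 16 seconds, the link is doing better than 100mbit ethernet ever could.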
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,471
387
126
Configured and set correctly Windows computer would yield functional "Speed" of 60-80 Mb/sec.

Which is about 7 to 10MB/sec. of actual "Speed" File transfer.

Giga is a little more complicated you can see the numbers here: Peer to Peer Giga Networks.

:sun:
 

freebsdrules

Member
Feb 20, 2005
137
0
0
Originally posted by: JackMDS
Configured and set correctly Windows computer would yield functional "Speed" of 60-80 Mb/sec.

Which is about 7 to 10MB/sec. of actual "Speed" File transfer.

Giga is a little more complicated you can see the numbers here: Peer to Peer Giga Networks.

:sun:

I would have to disagree with you here...speeds of 80-95Mbps are easily attainable.

 

OmegaXero

Senior member
Apr 11, 2001
248
0
0
Then again, if you follow the link that he left, you'll also notice that he claims notable speed improvements on servers with dual xenon processors. Can you imagine how bright that would be? Jack, proof your pages before you post, and for God's sake link to something other than ezlan.net. ;)

"If you install Giga on Double Xenon Computers with fast SCSI RAID, and Server Software you might get 400% (x4) improvement."

He does mention jumbo framing.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,471
387
126
Originally posted by: OmegaXero
Then again, if you follow the link that he left, you'll also notice that he claims notable speed improvements on servers with dual xenon processors. Can you imagine how bright that would be? Jack, proof your pages before you post, and for God's sake link to something other than ezlan.net. ;)

"If you install Giga on Double Xenon Computers with fast SCSI RAID, and Server Software you might get 400% (x4) improvement."

He does mention jumbo framing.
May be you can try spending some time researching and Publishing.

Hmm. Nah! it is much easier to criticize others.

:sun:
 

OmegaXero

Senior member
Apr 11, 2001
248
0
0
Based on your grammar and proofing skills I'm sure all your published content is highly successful. Hmm. Nah!
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,471
387
126
Originally posted by: OmegaXero
Based on your grammar and proofing skills I'm sure all your published content is highly successful. Hmm. Nah!
Hmm. Based on more than one hundred publications in main stream Scientific Peer Reviewed Journals? Yeah, it is highly successful.

:sun:
 

OmegaXero

Senior member
Apr 11, 2001
248
0
0
ROTFLMAO, for the sake of leaving your ego inflated and fully intact I'll stop arguing with you. Besides, this is warath's thread about networking, not a flame war. Try not to take a little constructive criticism too personally.
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
I've gotten over 35MB/sec out of my gigabit through a Trendware switch. I get the same throughput using an SMC switch with jumbo frames enabled. I also get the same throughput using 2 Intel NICs or an Intel and Broadcom or an Intel and Marvell NIC. Network hardware is likely not your gigabit transfer rate issue.

When using FTP I get about 10MB/sec faster transfer rates than using a drag and drop in Windows (~35 MB/sec FTP, ~25 MB/sec drag and drop). Part of the transfer rate issues you have are likely protocol issues.
Investigate the options you have for your network card. I got significant speed gains by increasing the buffers and maximum interrupts in the options. Likely your combination of MaxMTU and max interrupts is forcing your throughput below your practical maximum.

In most cases gigabit transfer rates for normal home users will be limited by your HARD DRIVES and not the networking equipment, because hard drives are not capable of writing at anywhere close to 100 MB/sec. Unless you have a significant RAID array at both ends of your transfer, you don't need to worry at all about network hardware. Jumbo frames, CPU usage, etc. are all likely to be a non-issue for any significant transfer, as the HD will be the bottleneck (not a bad thing in my mind).

If you can manipulate the NIC options to get 20-25MB/sec with a drag and drop, you should be pretty close to the max you're going to get without making some serious investments in your storage subsystem and/or your networking software.

Unlike the link, I do not believe there will be significant improvements over this level with CPU or Jumbo frames. In my case Jumbo frames made ZERO difference and CPU usage was well under 100% during the transfer.

The 10 MB/sec difference between Samba (normal Windows drag/drop) and FTP indicates there is definitely something to his point that server software makes a difference. In my case I am using a Linux box as the fileserver, but I verified that drag/drop transfers between Windows boxes are similar to the Linux --> Windows drag/drop speed.

Overall I disagree with the linked article's assessment that:
In other words, the current state of Giga for regular Home systems is mainly an attempt to make some money by catering to wishful thinking rather than real useful technology.

Going from 8-10 MB/sec to ~25-35 MB/sec is a tangible improvement. If you often transfer large files across the connection, it is quite a noticeable improvement for only a modest investment (motherboards have built-in NICs, and switches are only about $70-100 more expensive for an 8-port). I would agree that most home users will be perfectly fine with 10/100, though.
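For anyone wanting to separate network speed from disk speed, a toy iperf-style probe (a sketch, not production code) can measure raw socket throughput with the disks out of the picture. This one runs over loopback; point the client at another host to test a real link:

```python
# A toy iperf-style probe: stream bytes over a TCP socket and count
# what arrives per second, taking disks out of the picture entirely.
import socket
import threading
import time

CHUNK = 64 * 1024
DURATION = 0.5  # seconds to stream

def sink(conn, counter):
    """Drain the socket, tallying received bytes."""
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        counter[0] += len(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # any free port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()

received = [0]
t = threading.Thread(target=sink, args=(conn, received))
t.start()

payload = b"\x00" * CHUNK
deadline = time.time() + DURATION
while time.time() < deadline:
    cli.sendall(payload)
cli.close()                        # sender done; recv() sees EOF
t.join()
conn.close()
srv.close()

print(f"~{received[0] / DURATION / 1e6:.0f} Mbyte/s over loopback")
```

If a probe like this runs much faster than your file copies, the gap is disks or server software, not the network hardware.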
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
92
91
Originally posted by: OmegaXero
ROTFLMAO, for the sake of leaving your ego inflated and fully intact I'll stop arguing with you. Besides, this is warath's thread about networking, not a flame war. Try not to take a little constructive criticism too personally.

You call that constructive criticism? That was ignorant flaming and trolling (you, not Jack). Whatever information you could offer on this subject, Jack will know more; he has been around way longer and is very well known for his knowledge. You are just an idiot with no arguing skills, so you resort to low blows and pick on his grammar when you can't find a flaw with his content. Nice try, lamer :roll:

 

yoda291

Diamond Member
Aug 11, 2001
5,079
0
0
What's the hurry? networks are too fast nowadays. I remember back in my day when you could fire up an ftp transfer, go outside, get a slice of pie or so, meet with some good people, share a few laughs, come back home and your transfer is done. Nowadays, you kids and your crazy jumbo frames and windows, you start up a transfer and now you've got to wait a solid 4 minutes doing absolutely nothing. Wasting your lives away I tell ya.

Honestly, I wouldn't even worry about the gigabit utilization. If you ever managed to get a significant percentage of the maximum throughput spec'd for gigabit ethernet on a traditional PCI bus, you'd likely run into issues with your motherboard and I/O controllers.

As far as fast ethernet utilization, you can drop in a line sniffer or run any number of network utilities that will tell you your actual bandwidth utilization. The indicators on most file transfer mechanisms (SMB, FTP, etc.) really only measure the payload, not actual throughput; they don't much account for all that wonderful overhead associated with 3-way handshakes, SMB signing, frame scaling, and so on.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
I'd like to point out that networks operate at their rated speed. 100 megabit ethernet is 100 megabits/sec.

1000Base-T is 1000 megabits/sec.

There is no theory behind it; that's what the speed is. If the computer can't keep up, well, that's a whole other story. With today's computers, filling 100Base-T is not hard at all. With gigabit, it's a matter of "how fast can I get data to the network card?"

The only other theory part comes from the interframe gap required by ethernet, there is a pause between each frame. The math has already been done to factor in the interframe gap yielding about 93% max utilization with 64 byte frames and 98.7% max with 1514 byte frames. This applies to 10 and 100 Base-T. Gig goes to 99.4% with jumbo frames.

If you want the most out of your LAN then max out the TCP window or use UDP for higher throughput.
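The framing-overhead math spidey07 cites can be reproduced with a simple model: every frame drags a fixed 20 extra bytes along on the wire (8-byte preamble plus 12-byte inter-frame gap). This sketch counts only that overhead; different accounting of headers explains differing small-frame figures:

```python
# Per-frame wire overhead: 8-byte preamble + 12-byte inter-frame gap.
PREAMBLE = 8
IFG = 12

def wire_efficiency(frame_bytes):
    """Fraction of wire time occupied by the frame itself."""
    return frame_bytes / (frame_bytes + PREAMBLE + IFG)

for size in (64, 1514, 9014):
    print(f"{size:5d}-byte frames: {wire_efficiency(size):.1%}")
# A 1514-byte frame comes out at ~98.7%, matching the figure above;
# jumbo (9014-byte) frames push past 99%.
```

The takeaway is the same as the post's: bigger frames amortize the fixed per-frame cost, which is why jumbo frames help at gigabit speeds.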
 

warath

Member
Mar 3, 2002
31
0
66
Originally posted by: spidey07
I'd like to point out that networks operate at their rated speed. 100 megabit ethernet is 100 megabits/sec.

1000 Base-T is 1000 megabits sec.

There is no theory behind it, that's what the speed is. If the computer can't keep up, well that's a whole other story. With today's computers filling a 100 base-T is not hard at all. With gigabit its a matter of "how fast can I get data to the network card?"

The only other theory part comes from the interframe gap required by ethernet, there is a pause between each frame. The math has already been done to factor in the interframe gap yielding about 93% max utilization with 64 byte frames and 98.7% max with 1514 byte frames. This applies to 10 and 100 Base-T. Gig goes to 99.4% with jumbo frames.

If you want the most out of your LAN then max out the TCP window or use UDP for higher throughput.

Ok, so I should say Marketing Ploy numbers :p Yes, I understand that the network can actually transfer 1000mbits/s, but there is all kinds of overhead, etc. However, no matter the overhead costs, you should still get more than 10% utilization out of it. And some people here have pointed out some very good things as to what else could be causing the slowdown. Thanks for the info guys, it makes much more sense now.

Btw, anyone know for sure what the bandwidth of the PCI bus is in Mbits/s?


 

OmegaXero

Senior member
Apr 11, 2001
248
0
0
Originally posted by: warath

Btw, anyone know for sure what the bandwidth of the PCI bus is in MBits/s ?

Bandwidth of a standard 33MHz, 32-bit PCI bus is 133Mbytes/sec. I looked it up today after you asked about it in your previous post. :)
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
The fastest real world transfer I've seen was between two servers directly connected to each other.
Both are HPaq ProLiants with Broadcom BCM5700 NICs, getting about 40-50 MB/sec sending data from a Win2K box to a Linux box running Samba, no tweaking.
Of course, this measurement is HIGHLY unscientific, seeing as I just looked at the files transferring, somewhat easy though since the files were all 40-50 MB in size and took about 1 sec/file :)

And yes, PCI is 133 MB/Sec.
PCI-X goes from 66 - 133 MHz, ~ 532 MB/Sec - 1064 MB/Sec.
Then there's PCI-X 2.0, which increases this to a max of 533 MHz, ~4 GB/Sec.
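Those bus figures all fall out of width times clock; a quick arithmetic check (clock values are nominal, e.g. PCI's "33MHz" is really 33.33MHz, which is where the 133 comes from):

```python
# Parallel bus peak bandwidth = width (bytes) x clock (MHz) = Mbytes/sec.
def bus_peak_mbytes(width_bits, clock_mhz):
    """Theoretical peak in Mbytes/sec for a parallel bus."""
    return width_bits / 8 * clock_mhz

print(bus_peak_mbytes(32, 33.33))  # PCI:            ~133 Mbyte/s
print(bus_peak_mbytes(64, 66))     # PCI-X 66 MHz:    528 (~the 532 quoted)
print(bus_peak_mbytes(64, 133))    # PCI-X 133 MHz:  1064 Mbyte/s
```

These are theoretical peaks for the whole shared bus, so a single NIC sees less in practice.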
 

randal

Golden Member
Jun 3, 2001
1,890
0
71
Originally posted by: Sunner
The fastest real world transfer I've seen was between two servers directly connected to each other.
Both are HPaq Proliants with Broadcom bcm5700 NIC's, getting about 40-50 MB/sec, this is sending data from a Win2K box to a Linux box running Samba, no tweaking.
Of course, this measurement is HIGHLY unscientific, seeing as I just looked at the files transferring, somewhat easy though since the files were all 40-50 MB in size and took about 1 sec/file :)

And yes, PCI is 133 MB/Sec.
PCI-X goes from 66 - 133 MHz, ~ 532 MB/Sec - 1064 MB/Sec.
Then there's PCI-X 2.0, which increases this to a max of 533 MHz, ~4 GB/Sec.

Considering that a standard 1000mbps NIC is PCI @ 33MHz/32-bit, it becomes nearly impossible for the machine to saturate a GigE network; there simply isn't enough bandwidth on the machine's bus to accommodate moving tons of data from HD/memory to the NIC and back without everything having to compromise/slow down.

Hence why big iron servers have ridiculous amounts of I/O & bus availability - going so far as to have completely separate busses for different devices.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: randal
Originally posted by: Sunner
The fastest real world transfer I've seen was between two servers directly connected to each other.
Both are HPaq Proliants with Broadcom bcm5700 NIC's, getting about 40-50 MB/sec, this is sending data from a Win2K box to a Linux box running Samba, no tweaking.
Of course, this measurement is HIGHLY unscientific, seeing as I just looked at the files transferring, somewhat easy though since the files were all 40-50 MB in size and took about 1 sec/file :)

And yes, PCI is 133 MB/Sec.
PCI-X goes from 66 - 133 MHz, ~ 532 MB/Sec - 1064 MB/Sec.
Then there's PCI-X 2.0, which increases this to a max of 533 MHz, ~4 GB/Sec.

Considering that a standard 1000mbps NIC is PCI@33Mhz/32bits, it becomes nearly impossible for the machine to saturate a gigE network; there simply isn't enough bandwidth on the machine's bus to accomodate moving tons of data from a HD/memory to the NIC and back without everything having to compromise/slow down.

Hence why big iron servers have ridiculous amounts of I/O & bus availability - going so far as to have completely separate busses for different devices.

Most any kind of half decent server will have separate PCI buses these days, including <$2000 2-ways, but yes, I agree.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Originally posted by: randal
Originally posted by: Sunner
The fastest real world transfer I've seen was between two servers directly connected to each other.
Both are HPaq Proliants with Broadcom bcm5700 NIC's, getting about 40-50 MB/sec, this is sending data from a Win2K box to a Linux box running Samba, no tweaking.
Of course, this measurement is HIGHLY unscientific, seeing as I just looked at the files transferring, somewhat easy though since the files were all 40-50 MB in size and took about 1 sec/file :)

And yes, PCI is 133 MB/Sec.
PCI-X goes from 66 - 133 MHz, ~ 532 MB/Sec - 1064 MB/Sec.
Then there's PCI-X 2.0, which increases this to a max of 533 MHz, ~4 GB/Sec.

Considering that a standard 1000mbps NIC is PCI@33Mhz/32bits, it becomes nearly impossible for the machine to saturate a gigE network; there simply isn't enough bandwidth on the machine's bus to accomodate moving tons of data from a HD/memory to the NIC and back without everything having to compromise/slow down.

Hence why big iron servers have ridiculous amounts of I/O & bus availability - going so far as to have completely separate busses for different devices.

hence why "real" servers (not the puny wintel crap) have no trouble filling multiple 1000Base-T ports.

;)
 

yoda291

Diamond Member
Aug 11, 2001
5,079
0
0
Originally posted by: spidey07
Originally posted by: randal
Originally posted by: Sunner
The fastest real world transfer I've seen was between two servers directly connected to each other.
Both are HPaq Proliants with Broadcom bcm5700 NIC's, getting about 40-50 MB/sec, this is sending data from a Win2K box to a Linux box running Samba, no tweaking.
Of course, this measurement is HIGHLY unscientific, seeing as I just looked at the files transferring, somewhat easy though since the files were all 40-50 MB in size and took about 1 sec/file :)

And yes, PCI is 133 MB/Sec.
PCI-X goes from 66 - 133 MHz, ~ 532 MB/Sec - 1064 MB/Sec.
Then there's PCI-X 2.0, which increases this to a max of 533 MHz, ~4 GB/Sec.

Considering that a standard 1000mbps NIC is PCI@33Mhz/32bits, it becomes nearly impossible for the machine to saturate a gigE network; there simply isn't enough bandwidth on the machine's bus to accomodate moving tons of data from a HD/memory to the NIC and back without everything having to compromise/slow down.

Hence why big iron servers have ridiculous amounts of I/O & bus availability - going so far as to have completely separate busses for different devices.

hence why "real" servers (not the puny wintel crap) have no trouble filling multiple 1000
Base-t ports.

;)

I fail to see how a server running an Intel proc or Windows as an operating system affects bus speeds. Last time I checked, most servers sold nowadays use an Intel proc.
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: yoda291
Originally posted by: spidey07
Originally posted by: randal
Originally posted by: Sunner
The fastest real world transfer I've seen was between two servers directly connected to each other.
Both are HPaq Proliants with Broadcom bcm5700 NIC's, getting about 40-50 MB/sec, this is sending data from a Win2K box to a Linux box running Samba, no tweaking.
Of course, this measurement is HIGHLY unscientific, seeing as I just looked at the files transferring, somewhat easy though since the files were all 40-50 MB in size and took about 1 sec/file :)

And yes, PCI is 133 MB/Sec.
PCI-X goes from 66 - 133 MHz, ~ 532 MB/Sec - 1064 MB/Sec.
Then there's PCI-X 2.0, which increases this to a max of 533 MHz, ~4 GB/Sec.

Considering that a standard 1000mbps NIC is PCI@33Mhz/32bits, it becomes nearly impossible for the machine to saturate a gigE network; there simply isn't enough bandwidth on the machine's bus to accomodate moving tons of data from a HD/memory to the NIC and back without everything having to compromise/slow down.

Hence why big iron servers have ridiculous amounts of I/O & bus availability - going so far as to have completely separate busses for different devices.

hence why "real" servers (not the puny wintel crap) have no trouble filling multiple 1000
Base-t ports.

;)

I fail to see how a server running an intel proc or windows as an operating system affects bus speeds. Last time I checked, most servers sold nowadays use an intel proc.

Well, an IBM pSeries certainly has a lot more I/O bandwidth than any Wintel server around.
But again, even lowly 2-way $2K servers use PCI-X buses, oftentimes several of them, so they shouldn't have much of a problem, so long as you're not talking loads of GigE ports.