PCI gigabit NIC limited by processor?

mc866

Golden Member
Dec 15, 2005
1,410
0
0
I purchased a couple of different cheap ($15-$20) gigabit NIC cards to put into my WHS box; one was a Zonet and the other a Rosewill. For the longest time I was under the impression that the cable the previous owner ran to the room my WHS box is in was inferior and unable to support gigabit transfers; I know it's Cat5, not Cat5e, and it has punch-down receptacles on both ends. When I transfer a large amount of files, 10 GB or more, the fastest I was able to achieve was 12 MBytes/second. I also did some testing with the Jperf GUI tool to see if I could get better speeds, but 12 seemed to be the max.

I cut out the lame cable and am now running a direct Cat6 patch cable (bought from Monoprice) from my router to the back of the WHS box, into the gigabit NIC. I did some real file transfers and some more Jperf tests; again, the fastest I seem to be able to achieve is 13 MBytes/second. One thing I noticed this time around is that the CPU would max out when I ran the Jperf tests.

Am I unable to achieve better transfer speeds because I'm running a socket 754 based AMD 2800+? Can that limit transfer speeds, or is it just a matter of getting what you pay for in a cheap gigabit NIC?

Just for reference, to make sure it wasn't my router, I also ran the test between my HTPC and my regular desktop and averaged 60 MBytes/second, which I thought was more realistic for gigabit speeds.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
You "could" be running into bus/processor limitations, or possibly a driver issue. If you've truly taken the cabling/physical layer out of the picture (I don't know how long your patch cable is), then move on up the OSI model.

The fact that your processor is maxed out makes me think interrupts/bus. But again, I am no PC expert, so I really can't offer comment on that.

Also make sure speed/duplex is set to auto on the network card.
 

ImDonly1

Platinum Member
Dec 17, 2004
2,357
0
76
I remember reading that gigabit NICs are limited by the PCI bus. If you look here at the 33 MHz/32-bit PCI bus, you can see the maximum speed it offers is about 1000 Mbit, or 1 gigabit. http://en.wikipedia.org/wiki/L...dwidths#Computer_buses

Notice this is the maximum and does not reflect real-world performance. Also remember the PCI bus' bandwidth is shared with other things in your computer. So yes, it is a possibility that the PCI bus is a limitation for your gigabit network.

A PCI-e NIC will not be a limitation, but I don't know how much those cost; I never had to buy one since my motherboards usually have onboard gigabit. Also, FYI, some motherboards did (and maybe still do) use onboard gigabit LAN that is bridged through the PCI bus, while others avoid this and skip the PCI bus limitation.
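For anyone who wants to sanity-check the bandwidth claim, here's a quick back-of-envelope calculation (my own figures for conventional 33 MHz/32-bit PCI, not taken from the Wikipedia table):

```python
# Theoretical peak of a conventional 33 MHz / 32-bit PCI bus,
# shared by every device sitting on that bus.
clock_hz = 33_000_000   # nominal 33 MHz PCI clock (the spec says 33.33 MHz)
bus_width_bits = 32     # 32-bit data path

peak_bits_per_sec = clock_hz * bus_width_bits       # ~1.056e9 bits/s
peak_mbit = peak_bits_per_sec / 1_000_000           # ~1056 Mbit/s
peak_mbytes = peak_bits_per_sec / 8 / 1_000_000     # ~132 MB/s

print(f"PCI peak: ~{peak_mbit:.0f} Mbit/s (~{peak_mbytes:.0f} MB/s), shared")
```

So on paper the bus barely covers a single gigabit link, and in practice, with protocol overhead and other devices sharing it, real throughput lands well below that.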


I would first try messing with TCP window size optimizations and such to see if you can get faster speeds. (I never had to adjust them; my gigabit LAN transfers at about 40-50 MB/sec, I think, without any setting adjustments.) You can also try the NIC settings for flow control and offloading to see what they do for the CPU usage.

Not too sure about the CPU usage, though.

Then I would try a PCI-e NIC if you have one available.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
The cheap PCI gigabit NICs are often Realtek based, and these have very high CPU utilization at high speed. Combine that with a drive controller also on the PCI bus, and you'd probably have slow speed, but not necessarily high CPU utilization.

However, 12 MB/s is low even for a socket 754 with a somewhat overloaded PCI bus -- I think I've hit 40 MB/s sustained under such conditions even with socket A.

iperf/jperf can give misleading results if you don't use good parameters. Try e.g.

server: iperf -s
client: iperf -c server -l 64k -t 15 -i 3 -r
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I need to find something to support this, but I have heard lots of comments about WHS generally being "slow" because of the way it handles the file system. The NIC only buffers so much before it is stuck waiting on the WHS system to push it all to disk. This should not affect iperf, though.


-edit-

OK, I have no experience with WHS; however, the recurring theme I have found on the web is that WHS can get very slow when disks in the disk pool are above 80% full. They mentioned doing a test by copying a file to the system disk; if that works at a decent clip, then you need to add more disk and rebalance to get the disk usage down.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,552
429
126
All of the stories above, while they have a grain of truth to them, do not explain 13 MB/sec.

It is something else that is not functioning or configured correctly.

I would try another PCI card that is known to run gigabit well on another computer.
 

ccbadd

Senior member
Jan 19, 2004
456
0
76
Your problem might be the fixed disk and not the NIC. What type of drive are you using, and what is the setup (SATA, EIDE, RAID)? I don't remember what WHS calls it, but are you using their form of shadow copy for redundancy?
 

Eeqmcsq

Senior member
Jan 6, 2009
407
1
0
Man, 13 MB/s is weirdly low. If you want to rule out your cheapo NIC, try running an Ubuntu live CD and see if you can transfer files faster to your other PCs.
 

mc866

Golden Member
Dec 15, 2005
1,410
0
0
Hmm, it looks like I have a few things to look into here. I know my disk utilization is over 80% right now, and I do have redundancy enabled on some files. I think I may try a few other PCI slots to see if that has something to do with it. I'm also running "green" TB drives for all of my drives except the OS drive, which is a 74 GB Raptor; could that slow things down?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: mc866
my disc utilization is up over 80% right now and I do have some files that I have redundancy enabled on. I think I may try a few other PCI slots to see if maybe that has something to do with it also. I'm also running "green" TB drives on all of my drives except the OS drive which is a 74GB raptor, could that slow things down?

There's no point tweaking the HDs / file system when iperf (using good parameters such as mine) gives poor results. In that case, there's something going on at the network level, and you need to improve that if you can before HD-level tweaks will give an improvement. OTOH, if you used default parameters with iperf, the results could be misleading and tweaks at the HD level might be just what you need to improve file transfer performance.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Make sure you do not have flow control on. IIRC, Windows Vista liked to try to turn that on.

Are you saying your disk copy is 60 MEGABYTES/sec, or 60 megabits/sec?

Quite a bit of difference there.
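To make the units concrete (this is just the standard 8-bits-per-byte conversion, not anything specific to the thread):

```python
# Megabytes/sec vs. megabits/sec: a factor of 8.
def mbytes_to_mbits(mbytes_per_sec: float) -> float:
    return mbytes_per_sec * 8.0

print(mbytes_to_mbits(60))  # 60 MB/s is 480 Mbit/s: plausible for gigabit
print(mbytes_to_mbits(12))  # 12 MB/s is 96 Mbit/s: suspiciously close to a 100 Mbit link's ceiling
```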

 

mc866

Golden Member
Dec 15, 2005
1,410
0
0
Just to confirm: flow control is turned off on all of my NICs, and this is megabytes. Jperf gives the option of Mbytes or Mbits, and I've been doing my testing with results in Mbytes, which I believe means megabytes.
 

betaflame

Member
Jul 28, 2009
81
0
0
All the PCI bus gigabit cards I've seen cap out around 700 Mbit. IIRC, the theoretical limit for the entire PCI bus was 800 Mbit.

Edit: My Athlon64 3700+ is CPU limited to between 400-600 Mbit, depending on how much offloading the card is capable of (100% CPU load).
 

mc866

Golden Member
Dec 15, 2005
1,410
0
0
Here's an odd question: would it matter that I haven't disabled the onboard NIC? I'm testing now with both ports plugged in (the gig NIC and the onboard 10/100) and I'm getting Jperf results in the 200 Megabit range, which would be fine if I had two 10/100 cards, but one is gig. Am I limiting myself by leaving the onboard NIC enabled, by any chance?
 

ImDonly1

Platinum Member
Dec 17, 2004
2,357
0
76
See above where I said the PCI bus is shared. So if both are used and both go through the PCI bus, then yes?
 

chuck2002

Senior member
Feb 18, 2002
467
0
0
Tom's did a review of gigabit and found that hard drives can become the bottleneck on transfers:
http://www.tomshardware.com/re...it-ethernet,800-3.html

Not sure how your WHS is configured, but if it is doing software RAID, plus slow drives, plus a high-CPU-utilization NIC, plus your CPU reaching 80% utilization on transfers, you could have a combination of issues causing the slowdown.
WHS is based on Server 2003, which has a proven, solid network stack.
Is your WHS doing any RAID rebuilding or other processes while you are transferring? What add-ins are you running?
I don't personally think it is a PCI bus limitation. I run PCI gigabit NICs that transfer without high CPU load.
Disabling the onboard NIC probably won't do anything more than release the IRQ it is given. It isn't doing any work, so there's probably no harm in leaving it enabled; then again, disabling it can't hurt either.

 

azev

Golden Member
Jan 27, 2001
1,003
0
76
Stick with Intel-based NICs for servers as much as possible. They have proven compatibility and mature drivers, and they're also the most expensive (other than the Killer NIC :)).
PCI bus bandwidth is also your enemy, but 12 MB/s is still very slow for a gigabit connection. My experience with consumer-grade networking gear and most built-in NICs is around 20-30 MB/s.
Check that you're actually connected at gigabit speed and not 100 Mbps; 12 MB/s is about the max for a 100 Mbps link.
 

mc866

Golden Member
Dec 15, 2005
1,410
0
0
So I've disabled the onboard NIC. I did more testing and am consistently getting ~200 Megabits/sec, which is the same as when both were enabled and plugged in. I disabled Avast and the numbers went up a bit, but not much. I've disabled flow control and have the link speed set to auto-negotiate.

Here are the Add-in's I'm running:
WHS disc management
Avast!
Webguide4

Like I mentioned earlier, all of my disks are over 90% full, which I know is bad. I'm doing some HD encoding to try to reduce that number, but currently I have three 1 TB green drives, the 74 GB Raptor for the OS, and a 500 GB USB drive connected in the WHS drive space "pool".

Madwand1, Jperf doesn't allow me to enter custom commands; the reason I'm using Jperf is that I was unsure how to run iperf. Would you mind providing a link or an explanation of how to run iperf from the command line?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: mc866
Jperf doesn't allow me to enter custom commands; the reason I'm using Jperf is that I was unsure how to run iperf. Would you mind providing a link or an explanation of how to run iperf from the command line?

Jperf is essentially a shell around iperf, so there are equivalent parameters in it, and iperf sits underneath it, so you can also run iperf directly. For that, just open a command prompt, cd to the directory where iperf is installed, and type the commands as I wrote them: "iperf -s" on the server side, and "iperf -c server -l 64k -t 15 -i 3 -r" on the client side, where "server" is the name or IP of the machine running "iperf -s".

The important parameter in iperf is -l 64k on the client side.

Unfortunately, I haven't gotten consistent results with Jperf and other such flavours beyond iperf version 1.7 under Windows, so I only recommend running iperf 1.7. Of course, it is possible to get valid results with other versions, but there's also more that can potentially go wrong with them.

E.g.

F:\tools\bench\iperf>iperf -c intel-vista -l 64k -t 15 -i 3 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to intel-vista, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[600] local 192.168.0.100 port 2419 connected with 192.168.0.107 port 5001
[ ID] Interval Transfer Bandwidth
[600] 0.0- 3.0 sec 350 MBytes 980 Mbits/sec
[600] 3.0- 6.0 sec 350 MBytes 978 Mbits/sec
[600] 6.0- 9.0 sec 351 MBytes 981 Mbits/sec
[600] 9.0-12.0 sec 350 MBytes 979 Mbits/sec
[600] 12.0-15.0 sec 351 MBytes 981 Mbits/sec
[600] 0.0-15.0 sec 1.71 GBytes 979 Mbits/sec
[576] local 192.168.0.100 port 5001 connected with 192.168.0.107 port 49316
[ ID] Interval Transfer Bandwidth
[576] 0.0- 3.0 sec 335 MBytes 938 Mbits/sec
[576] 3.0- 6.0 sec 331 MBytes 926 Mbits/sec
[576] 6.0- 9.0 sec 331 MBytes 927 Mbits/sec
[576] 9.0-12.0 sec 334 MBytes 934 Mbits/sec
[576] 0.0-15.0 sec 1.62 GBytes 929 Mbits/sec
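As a side note on why buffer/window sizes matter for these tests: the 8 KB default TCP window shown in the output above can itself cap throughput. A rough illustration (my own hypothetical RTT figures, using the simple throughput <= window/RTT model):

```python
# Simple bandwidth cap from the TCP window: throughput <= window / RTT.
# RTT values below are made-up LAN-ish figures for illustration only.
window_bytes = 8 * 1024  # the "8.00 KByte (default)" shown in the iperf output

for rtt_ms in (0.1, 0.5, 1.0):
    cap_bits_per_sec = window_bytes * 8 / (rtt_ms / 1000.0)
    print(f"RTT {rtt_ms:.1f} ms -> cap ~{cap_bits_per_sec / 1e6:.0f} Mbit/s")
```

Even at sub-millisecond LAN round-trip times, an 8 KB window can sit below gigabit line rate, which is part of why larger buffers (like -l 64k) and window tuning make a visible difference.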
 

mc866

Golden Member
Dec 15, 2005
1,410
0
0
Alright, thanks for the syntax and the info; it was very helpful. I ran a few tests, and the results below look to be about what I'm getting on average.


C:\Users\Mike>iperf.exe -c 192.168.200.131 -l 64k -t 15 -i 3 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.200.131, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[380] local 192.168.200.102 port 55328 connected with 192.168.200.131 port 5001
[ ID] Interval Transfer Bandwidth
[380] 0.0- 3.0 sec 95.5 MBytes 267 Mbits/sec
[380] 3.0- 6.0 sec 95.1 MBytes 266 Mbits/sec
[380] 6.0- 9.0 sec 93.1 MBytes 260 Mbits/sec
[380] 9.0-12.0 sec 98.3 MBytes 275 Mbits/sec
[380] 12.0-15.0 sec 93.9 MBytes 262 Mbits/sec
[380] 0.0-15.0 sec 476 MBytes 266 Mbits/sec
[136] local 192.168.200.102 port 5001 connected with 192.168.200.131 port 1859
[ ID] Interval Transfer Bandwidth
[136] 0.0- 3.0 sec 283 MBytes 790 Mbits/sec
[136] 0.0- 3.0 sec 2.37 Gbits 790 Mbits/sec

 

azev

Golden Member
Jan 27, 2001
1,003
0
76
Originally posted by: mc866
Alright, thanks for the syntax and the info; it was very helpful. I ran a few tests, and the results below look to be about what I'm getting on average.


C:\Users\Mike>iperf.exe -c 192.168.200.131 -l 64k -t 15 -i 3 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.200.131, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[380] local 192.168.200.102 port 55328 connected with 192.168.200.131 port 5001
[ ID] Interval Transfer Bandwidth
[380] 0.0- 3.0 sec 95.5 MBytes 267 Mbits/sec
[380] 3.0- 6.0 sec 95.1 MBytes 266 Mbits/sec
[380] 6.0- 9.0 sec 93.1 MBytes 260 Mbits/sec
[380] 9.0-12.0 sec 98.3 MBytes 275 Mbits/sec
[380] 12.0-15.0 sec 93.9 MBytes 262 Mbits/sec
[380] 0.0-15.0 sec 476 MBytes 266 Mbits/sec
[136] local 192.168.200.102 port 5001 connected with 192.168.200.131 port 1859
[ ID] Interval Transfer Bandwidth
[136] 0.0- 3.0 sec 283 MBytes 790 Mbits/sec
[136] 0.0- 3.0 sec 2.37 Gbits 790 Mbits/sec

Hmmm, could it be a duplex mismatch? I wonder...
 

mc866

Golden Member
Dec 15, 2005
1,410
0
0
I've been running the test a few more times and seem to get timeout errors every once in a while. I wonder if that has anything to do with it; would that mean it's the cable?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
That's a very large asymmetry you're getting. I've seen something like that in the past with a socket A PCI bus (or earlier? probably a VIA chipset), but still not that large, IIRC. Just for another test, I suggest running the same test but reversing the client and server roles.

Gigabit NICs should generally be on auto-negotiate.

To eliminate cabling issues, try switching to short factory terminated cables, moving one of the boxes closer temporarily if needed.
 

mc866

Golden Member
Dec 15, 2005
1,410
0
0
I moved the server up into the same room as my router and my main PC; it's now connected directly to the back of the router using a 7 ft factory-terminated Cat6 cable.

Here are my results with my WHS box as the server:

C:\Users\Mike>iperf -c 192.168.200.131 -l 64k -t 15 -i 3 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.200.131, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[328] local 192.168.200.102 port 56800 connected with 192.168.200.131 port 5001
[ ID] Interval Transfer Bandwidth
[328] 0.0- 3.0 sec 102 MBytes 286 Mbits/sec
[328] 3.0- 6.0 sec 99.3 MBytes 278 Mbits/sec
[328] 6.0- 9.0 sec 107 MBytes 298 Mbits/sec
[328] 9.0-12.0 sec 103 MBytes 288 Mbits/sec
[328] 12.0-15.0 sec 103 MBytes 288 Mbits/sec
[328] 0.0-15.0 sec 514 MBytes 287 Mbits/sec
[408] local 192.168.200.102 port 5001 connected with 192.168.200.131 port 1056
[ ID] Interval Transfer Bandwidth
[408] 0.0- 3.0 sec 225 MBytes 628 Mbits/sec
[408] 0.0- 3.0 sec 1.89 Gbits 628 Mbits/sec

Here is the result in reverse for two runs:
------------------------------------------------------------
Client connecting to 192.168.200.102, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1884] local 192.168.200.131 port 1057 connected with 192.168.200.102 port 5001
[ ID] Interval Transfer Bandwidth
[1884] 0.0- 3.0 sec 202 MBytes 565 Mbits/sec
[1884] 3.0- 6.0 sec 205 MBytes 574 Mbits/sec
[1884] 6.0- 9.0 sec 208 MBytes 582 Mbits/sec
[1884] 9.0-12.0 sec 188 MBytes 526 Mbits/sec
[1884] 12.0-15.0 sec 216 MBytes 604 Mbits/sec
[1884] 0.0-15.0 sec 1020 MBytes 570 Mbits/sec

C:\Documents and Settings\Administrator>iperf -c 192.168.200.102 -l 64k -t 15 -i
3-r
------------------------------------------------------------
Client connecting to 192.168.200.102, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1884] local 192.168.200.131 port 1058 connected with 192.168.200.102 port 5001
[ ID] Interval Transfer Bandwidth
[1884] 0.0- 3.0 sec 221 MBytes 617 Mbits/sec
[1884] 3.0- 6.0 sec 197 MBytes 550 Mbits/sec
[1884] 6.0- 9.0 sec 198 MBytes 554 Mbits/sec
[1884] 9.0-12.0 sec 229 MBytes 639 Mbits/sec
[1884] 12.0-15.0 sec 227 MBytes 634 Mbits/sec
[1884] 0.0-15.0 sec 1.05 GBytes 598 Mbits/sec


 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
The three results above are roughly consistent. The space seems to be missing between the 3 and the -r in the final two tests -- please retry that if you have a chance.