
Very slow gigabit network

Originally posted by: Nexworks
Using Sisoft Sandra network benchmark I get 20MB/sec with a 900 latency.

I just tried Sandra, and got "6 MB/s" between the same two machines. It's a dated version, but not older than iperf, which gives much more useful results in this case.
 
Originally posted by: Nexworks
How about forcing duplex to full and speed to 1000 mbit/s?


Was gonna suggest this myself. I've seen lots of instances where gigabit cards auto-negotiate down to 100, even when two computers are connected directly with a crossover cable.



 
Originally posted by: MikeShunt
Was gonna suggest this myself. I've seen lots of instances where gigabit cards auto-negotiate down to 100, even when two computers are connected directly with a crossover cable.

In theory, gigabit should never be connected with a crossover cable anyway -- earlier cards couldn't detect that the pairs were swapped and would fall back to 100 Mbit/s regardless.

 
I have similar problems with my gigabit network. The NICs are Intel PRO/1000 GTs. Using iperf, 9000-byte jumbo frames, and a TCP window of 128K, I get 300-310 Mbit/s, but file copies etc. are only a little quicker than 100 Mbit Ethernet. The common factor in these networks seems to be the OS, i.e. XP SP2.

I still honestly think that there is something in the OS that's the problem. I might well be wrong about this, but a test speed of 200-300 Mbit/s seems to be the figure that comes up time and time again, and it's nearly always desktop OS to desktop OS, using XP SP2. The tweaks you google for on the net rarely match the settings in XP SP2, even though they are listed for it.

I did try a D-Link PCI NIC, the D528, and instead of 300 Mbit/s it gave 200 Mbit/s. In the UK you can get the OEM version of the Intel PRO/1000 GT NIC for less than the D-Link, or other name-brand cards based on the Realtek 8169 chipset. I prefer the Intel NIC over the D-Link.

Rob Murphy
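A side note on the 128K window figure above: a window-limited TCP connection tops out at roughly window / RTT, which is consistent with ceilings in the 300 Mbit/s range. A rough sketch (the RTT values are assumptions for illustration, not measurements from this thread):

```shell
# Max TCP throughput when limited by the receive window:
#   throughput (bit/s) = window (bits) / RTT (s)
# 128 KB window, a few assumed round-trip times
window_bits=$(( 128 * 1024 * 8 ))   # 1048576 bits in flight per round trip
for rtt_ms in 1 2 3 4; do
  # integer Mbit/s: bits per RTT, scaled from ms to s, down to Mbit
  mbps=$(( window_bits * 1000 / rtt_ms / 1000000 ))
  echo "RTT ${rtt_ms} ms -> ceiling ~${mbps} Mbit/s"
done
```

At an effective RTT around 3 ms, a 128 KB window caps out near the 300-310 Mbit/s reported above; a 64 KB window at ~1 ms lands similarly close to the ~520 Mbit/s figures that show up later in the thread.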
 
Originally posted by: MikeShunt
Originally posted by: Nexworks
How about forcing duplex to full and speed to 1000 mbit/s?


Was gonna suggest this myself. I've seen lots of instances where gigabit cards auto-negotiate down to 100, even when two computers are connected directly with a crossover cable.





I can't force 1000 Mbit. On the nForce 430 I get the option to negotiate 1000FD, but the force settings only go up to 100FD. On my eVGA i680 both NICs have force and negotiate options only up to 100FD.
 
Originally posted by: robmurphy
I have similar problems with my gigabit network. The NICs are Intel PRO/1000 GTs. Using iperf, 9000-byte jumbo frames, and a TCP window of 128K, I get 300-310 Mbit/s, but file copies etc. are only a little quicker than 100 Mbit Ethernet. The common factor in these networks seems to be the OS, i.e. XP SP2.

I still honestly think that there is something in the OS that's the problem. I might well be wrong about this, but a test speed of 200-300 Mbit/s seems to be the figure that comes up time and time again, and it's nearly always desktop OS to desktop OS, using XP SP2. The tweaks you google for on the net rarely match the settings in XP SP2, even though they are listed for it.

While there are significant performance differences between OSes for SMB file transfers, I haven't found XP to be crippled at the underlying network level as you suggest.

E.g. W2K to XP Home SP2, using same hardware as before (nForce 3 to nForce 430, no jumbo frames):

F:\tools\bench\iperf>iperf -c 192.168.0.125 -l 1M -t 30 -i 5 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.125, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[816] local 192.168.0.141 port 5706 connected with 192.168.0.125 port 5001
[ ID] Interval Transfer Bandwidth
[816] 0.0- 5.0 sec 429 MBytes 720 Mbits/sec
[816] 5.0-10.0 sec 565 MBytes 948 Mbits/sec
[816] 10.0-15.0 sec 566 MBytes 950 Mbits/sec
[816] 15.0-20.0 sec 566 MBytes 950 Mbits/sec
[816] 20.0-25.0 sec 566 MBytes 950 Mbits/sec
[816] 25.0-30.0 sec 566 MBytes 950 Mbits/sec
[816] 0.0-30.0 sec 3.18 GBytes 910 Mbits/sec
[860] local 192.168.0.141 port 5001 connected with 192.168.0.125 port 1123
[ ID] Interval Transfer Bandwidth
[860] 0.0- 5.0 sec 562 MBytes 943 Mbits/sec
[860] 5.0-10.0 sec 550 MBytes 923 Mbits/sec
[860] 10.0-15.0 sec 560 MBytes 939 Mbits/sec
[860] 15.0-20.0 sec 566 MBytes 949 Mbits/sec
[860] 20.0-25.0 sec 550 MBytes 923 Mbits/sec
[860] 0.0-30.0 sec 3.27 GBytes 936 Mbits/sec

Here's what I get, for example, with a PCI NIC (Intel PRO/1000 MT Server) in the XP machine. Note: no jumbo frames:

F:\tools\bench\iperf>iperf -c 192.168.0.191 -l 1M -t 30 -i 5 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.191, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[816] local 192.168.0.141 port 5710 connected with 192.168.0.191 port 5001
[ ID] Interval Transfer Bandwidth
[816] 0.0- 5.0 sec 493 MBytes 827 Mbits/sec
[816] 5.0-10.0 sec 561 MBytes 941 Mbits/sec
[816] 10.0-15.0 sec 561 MBytes 941 Mbits/sec
[816] 15.0-20.0 sec 562 MBytes 943 Mbits/sec
[816] 20.0-25.0 sec 562 MBytes 943 Mbits/sec
[816] 25.0-30.0 sec 561 MBytes 941 Mbits/sec
[816] 0.0-30.0 sec 3.22 GBytes 922 Mbits/sec
[860] local 192.168.0.141 port 5001 connected with 192.168.0.191 port 1155
[ ID] Interval Transfer Bandwidth
[860] 0.0- 5.0 sec 363 MBytes 608 Mbits/sec
[860] 5.0-10.0 sec 363 MBytes 609 Mbits/sec
[860] 10.0-15.0 sec 362 MBytes 608 Mbits/sec
[860] 15.0-20.0 sec 362 MBytes 607 Mbits/sec
[860] 20.0-25.0 sec 362 MBytes 608 Mbits/sec
[860] 0.0-30.0 sec 2.12 GBytes 608 Mbits/sec

The transmit performance of the Intel PCI NIC is relatively hobbled here. To be fair, though, it improves quite a lot once jumbo frames are enabled:

F:\tools\bench\iperf>iperf -c 192.168.0.191 -l 1M -t 30 -i 5 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.0.191, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[816] local 192.168.0.141 port 5713 connected with 192.168.0.191 port 5001
[ ID] Interval Transfer Bandwidth
[816] 0.0- 5.0 sec 580 MBytes 973 Mbits/sec
[816] 5.0-10.0 sec 589 MBytes 988 Mbits/sec
[816] 10.0-15.0 sec 589 MBytes 988 Mbits/sec
[816] 15.0-20.0 sec 589 MBytes 988 Mbits/sec
[816] 20.0-25.0 sec 591 MBytes 992 Mbits/sec
[816] 25.0-30.0 sec 589 MBytes 988 Mbits/sec
[816] 0.0-30.0 sec 3.45 GBytes 985 Mbits/sec
[900] local 192.168.0.141 port 5001 connected with 192.168.0.191 port 1173
[ ID] Interval Transfer Bandwidth
[900] 0.0- 5.0 sec 547 MBytes 918 Mbits/sec
[900] 5.0-10.0 sec 546 MBytes 916 Mbits/sec
[900] 10.0-15.0 sec 545 MBytes 914 Mbits/sec
[900] 15.0-20.0 sec 545 MBytes 914 Mbits/sec
[900] 20.0-25.0 sec 545 MBytes 914 Mbits/sec
[900] 0.0-30.0 sec 3.20 GBytes 915 Mbits/sec
 
Well, here is some info:

I picked up an Intel PRO/1000 GT card and tested it in all three systems. It had zero effect on my network performance. I used its built-in cable tester to verify that all my cables are good.

I have also added a third system to the mix, an XP MCE 2005 box with an onboard 3Com gigabit NIC. With that present, I performed the following copy tests using a 715 MB file; here are the speeds I get in MB/sec:

XP Professional (nForce 430 Gigabit)
XP to XP MCE: 24
XP to Vista: 12 to 24

Vista Ultimate 32-bit (nForce 680 Gigabit)
Vista to XP: 6 to 24
Vista to XP MCE: 3.25

XP Media Center 2005 (3com Gigabit)
XP MCE to XP: 24
XP MCE to Vista: 12
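A quick sanity check on converting these MB/s copy rates to Mbit/s, assuming binary megabytes as Windows reports them:

```shell
# Convert a copy rate in MB/s (2^20 bytes each) to Mbit/s (10^6 bits)
mb_per_sec=24
bits_per_sec=$(( mb_per_sec * 1024 * 1024 * 8 ))
echo "${mb_per_sec} MB/s = $(( bits_per_sec / 1000000 )) Mbit/s"
```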

So, 24 MB/sec ≈ 200 Mbit/s. That is the absolute best I can get. I have also gotten hold of iperf and ran the following tests, the first with Vista as the server and XP as the client, and the second the reverse.

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.4, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1876] local 192.168.1.3 port 2304 connected with 192.168.1.4 port 5001
[ ID] Interval Transfer Bandwidth
[1876] 0.0- 3.0 sec 187 MBytes 523 Mbits/sec
[1876] 3.0- 6.0 sec 186 MBytes 520 Mbits/sec
[1876] 6.0- 9.0 sec 184 MBytes 515 Mbits/sec
[1876] 9.0-12.0 sec 186 MBytes 520 Mbits/sec
[1876] 12.0-15.0 sec 185 MBytes 517 Mbits/sec
[1876] 0.0-15.0 sec 929 MBytes 518 Mbits/sec
[1948] local 192.168.1.3 port 5001 connected with 192.168.1.4 port 49226
[ ID] Interval Transfer Bandwidth
[1948] 0.0- 3.0 sec 336 MBytes 940 Mbits/sec
[1948] 3.0- 6.0 sec 336 MBytes 940 Mbits/sec
[1948] 6.0- 9.0 sec 337 MBytes 942 Mbits/sec
[1948] 9.0-12.0 sec 337 MBytes 943 Mbits/sec
[1948] 0.0-15.0 sec 1.64 GBytes 941 Mbits/sec



------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.3, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[128] local 192.168.1.4 port 49229 connected with 192.168.1.3 port 5001
[ ID] Interval Transfer Bandwidth
[128] 0.0- 3.0 sec 325 MBytes 909 Mbits/sec
[128] 3.0- 6.0 sec 331 MBytes 926 Mbits/sec
[128] 6.0- 9.0 sec 329 MBytes 920 Mbits/sec
[128] 9.0-12.0 sec 334 MBytes 934 Mbits/sec
[128] 12.0-15.0 sec 332 MBytes 928 Mbits/sec
[128] 0.0-15.0 sec 1.61 GBytes 924 Mbits/sec
[140] local 192.168.1.4 port 5001 connected with 192.168.1.3 port 2321
[ ID] Interval Transfer Bandwidth
[140] 0.0- 3.0 sec 188 MBytes 525 Mbits/sec
[140] 0.0- 3.0 sec 1.58 Gbits 525 Mbits/sec


Can someone explain to me what this possibly means?

 
Your underlying network benches very well, at least going from XP to Vista, so for that path you can pretty much forget about tweaking at the network level and focus on other parts. The reverse direction is not as good, but still much higher than your effective throughput, so you can apply the same logic and leave it alone, or try to tweak it further. Either way, tweaking the underlying network at this point is unlikely to help much with the overall problem. So you have a good outcome from the iperf results -- don't spend more money on NICs or networking gear.

To tweak the Vista networking, you could try the netsh command that I suggested earlier.
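For reference, since the earlier post isn't quoted here, the usual Vista-era netsh tweak (which may or may not be the exact command suggested earlier) targets TCP receive-window auto-tuning. From an elevated Vista command prompt:

```shell
# Show the current global TCP settings (Vista and later)
netsh interface tcp show global

# Disable receive-window auto-tuning, a commonly suggested fix for slow Vista transfers
netsh interface tcp set global autotuninglevel=disabled

# Put it back to the default if it makes no difference
netsh interface tcp set global autotuninglevel=normal
```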

There are other OS tweaks possible, and I'll come back to them when I have some more time. As a next step, I suggest jumping to ATTO to test the drive performance directly, focusing on just one or two paths -- e.g. the path that works best and the one that works worst.

Run ATTO locally, then map a network drive and run it against the mapped drive to compare local and remote performance. This formally verifies how fast the local drives are in practice, but be aware that there is often a big gap between local and network performance. The subsequent steps would try to reduce that gap or improve the drives, as warranted.

E.g. a pic of the ATTO configuration (and test results) -- this is to show the desired configuration, not the results:

http://i89.photobucket.com/alb...tto-random-network.png

You can save a bit of testing time by dropping the very small and very large access size tests.

ATTO is not the ultimate in drive/network performance testing, and has some test-to-test variability, but it's very easy to use, and gives useful information without involving multiple drives simultaneously.
 