
Very slow gigabit network

Nexworks

Member
Two computers, both using onboard gigabit NICs: one on the nForce 430 chipset, the other on the nForce 680. Latest drivers are installed. XP Pro on the 430, Vista Ultimate on the 680.

When copying files between the two systems, I get about 9-10 MB/sec. Using the SiSoft Sandra network benchmark I get 20 MB/sec with a latency of 900; I should be getting 50 MB/sec+ with a latency around 150. Both systems have 300 GB SATA drives, fully defragged, so drive performance isn't the problem.

Between the two systems is a gigabit switch. I have also tested without it by linking the two systems with a crossover cable; performance is the same.

In Windows, when I set both NICs to 100 Mbit full duplex, I get 99% network utilization and a 9-10 MB/sec transfer. When I set both cards to full autonegotiation, both show they are connected at 1 Gb, but network utilization hovers at around 12%. I have tested with flow control on and off without any difference.

Any ideas?
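For reference, the throughput numbers above can be sanity-checked with quick shell arithmetic; the 0.94 overhead factor below is a rough allowance for Ethernet/IP/TCP headers at a 1500-byte MTU, an assumption rather than a measured value:

```shell
# Does 9-10 MB/s look like a 100 Mbit link or a 1 Gbit link?
# Usable MB/s ~= link_mbit / 8 * 0.94 (rough header-overhead allowance).
# Integer arithmetic: multiply by 94, then divide by 8 and by 100.
echo "100 Mbit usable:  $((100 * 94 / 8 / 100)) MB/s (approx)"
echo "1000 Mbit usable: $((1000 * 94 / 8 / 100)) MB/s (approx)"
```

A healthy 100 Mbit link tops out around 11-12 MB/s, which matches the 9-10 MB/sec observed here, while a real gigabit link should deliver on the order of 100+ MB/s.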
 
Originally posted by: Nexworks
Two computers, both using onboard gigabit NICs: one on the nForce 430 chipset, the other on the nForce 680. Latest drivers are installed. XP Pro on the 430, Vista Ultimate on the 680.

When copying files between the two systems, I get about 9-10 MB/sec. Using the SiSoft Sandra network benchmark I get 20 MB/sec with a latency of 900; I should be getting 50 MB/sec+ with a latency around 150. Both systems have 300 GB SATA drives, fully defragged, so drive performance isn't the problem.

Between the two systems is a gigabit switch. I have also tested without it by linking the two systems with a crossover cable; performance is the same.

In Windows, when I set both NICs to 100 Mbit full duplex, I get 99% network utilization and a 9-10 MB/sec transfer. When I set both cards to full autonegotiation, both show they are connected at 1 Gb, but network utilization hovers at around 12%. I have tested with flow control on and off without any difference.

Any ideas?

Try adjusting your Ethernet frame size upward (4K+). nVidia gigabit isn't exactly well known for (a) driver stability or (b) performance.
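On Vista the MTU can also be checked and raised from the command line; the adapter name "Local Area Connection" below is just the common default, so verify yours first. On XP, jumbo frames are instead set per adapter in Device Manager under the driver's Advanced tab. A sketch:

```shell
:: List current MTU per interface (Vista and later)
netsh interface ipv4 show subinterfaces

:: Raise the MTU on the named adapter. The NIC driver and the switch
:: must both support jumbo frames at this size for it to work end to end.
netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent
```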
 
After I set the jumbo frame size from 1500 to 9000 on both systems, the following image shows the result of my transfer. It started off great at around 30-35% utilization, then after a while dropped back to 5-10%.

The results for other jumbo packet sizes are the same: it starts off great, then all of a sudden plummets.
 
Originally posted by: Nexworks
After I set the jumbo frame size from 1500 to 9000 on both systems, the following image shows the result of my transfer. It started off great at around 30-35% utilization, then after a while dropped back to 5-10%.

The results for other jumbo packet sizes are the same: it starts off great, then all of a sudden plummets.

Back it down to 4000-byte Ethernet frames. Connect the two units together without the gigabit switch, but use a straight-through (interconnect) Cat5e cable, not a crossover. Then try again.
 
Alright, both systems are connected together with a Cat5e cable. Flow control is disabled on both. Jumbo packets on both are set to 4500. Speed is set to auto-negotiate, and both NICs detected the speed as 1 Gb. I performed a network transfer and got a steady network utilization of 12%, like before. If I set both cards to 100 Mbit, network utilization is 99%.
 
Originally posted by: Nexworks
Originally posted by: nweaver
Are you using homemade cables?

No, brand new cables. Also tested with older cables. No difference.

Yes, but were you standing on your head with one hand on your stomach and the other touching your nose while no fewer than 8 planets in our solar system were aligned? If so, I think there could be something fishy going on.:shocked:

I'm extremely skeptical of those nVidia drivers, to be quite frank.
 
Originally posted by: p0lar
Originally posted by: Nexworks
Originally posted by: nweaver
Are you using homemade cables?

No, brand new cables. Also tested with older cables. No difference.

Yes, but were you standing on your head with one hand on your stomach and the other touching your nose while no fewer than 8 planets in our solar system were aligned? If so, I think there could be something fishy going on.:shocked:

I'm extremely skeptical of those nVidia drivers, to be quite frank.

Did all that; I was even rubbing a rabbit's foot with my toes.

I was going to pick up a dedicated gigabit NIC for additional testing, but reading the reviews of the D-Link 530 makes me think otherwise. The alternatives are considerably more expensive. Hrm...
 
Originally posted by: Nexworks
Did all that; I was even rubbing a rabbit's foot with my toes.
:shocked:

Damn.

I was going to pick up a dedicated gigabit NIC for additional testing, but reading the reviews of the D-Link 530 makes me think otherwise. The alternatives are considerably more expensive. Hrm...
How about forcing duplex to full and speed to 1000 Mbit/s?
Turn off hardware checksumming?
Max IRQ/s?
Interrupt mitigation?
Increase receive/transmit buffers?

That's all I see on this Marvell Yukon driver in Windows. Good luck past that. :/
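Several of those knobs (checksum offload, interrupt moderation, buffers) live in Device Manager under the adapter's Advanced tab and their names vary by driver, but Vista also exposes a few global TCP settings through netsh that are worth toggling while testing. A sketch using the Vista-era parameter names:

```shell
:: Show the current global TCP parameters (Vista)
netsh interface tcp show global

:: Disable TCP chimney offload, which hands TCP processing to the NIC
:: and is a frequent suspect with flaky gigabit drivers
netsh interface tcp set global chimney=disabled
```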
 
When it comes to Giga today:

If you really need it, then you have to pay.

If you just want to be fashionable, use whatever you get. You do not have to tell your friends the actual bandwidth number; just saying "I use Giga" makes you look cool and gives you the status.
 
Originally posted by: p0lar
Originally posted by: Nexworks
Did all that, was even rubbing a rabbits foot with my toes.
:shocked:

Damn.

Was going to pick up a dedicated gigabit nic for additional testing, but reading the reviews of the D-Link 530 makes me think otherwise. The alternatives are considerably more expensive. Hrm...
How about forcing duplex to full and speed to 1000 Mbit/s?
Turn off hardware checksumming?
Max IRQ/s?
Interrupt mitigation?
Increase receive/transmit buffers?

That's all I see on this Marvell Yukon driver in Windows. Good luck past that. :/

Well, on the nForce 430 system I can set it to Negotiate 1000FD, Auto Negotiate, or force up to 100FD. On the nForce 680 system I can also force up to 100FD or Auto Negotiate, but there is no option to Negotiate 1000FD. The nForce 680 system has two gigabit NICs; I have enabled and tested with both, same results. Under Status, both systems show the NIC connected at 1 Gb, but I'm clearly getting 100 Mbit performance.
 
sounds like crappy gear.

Great Intel PRO/1000 adapters can be had fairly cheaply on eBay or Newegg. When it comes to my network, I always spend the money for an Intel NIC; onboard or card, it doesn't matter. If a price quote for a server comes back without one, I make them redo it. My network is too important to trust to those craptastic vendors.
 
Originally posted by: nweaver
sounds like crappy gear.

Great Intel PRO/1000 adapters can be had fairly cheaply on eBay or Newegg.
The whole "boring" issue comes from motherboard producers discovering that for 5¢ more they can install onboard Giga chips and claim Giga fame for the board.

In most cases the end users do not really understand the issues, but they cannot let go.

A side effect of this is the second NIC as well. It causes real psychological distress to many enthusiasts to have a second NIC on the motherboard and nothing they can do with it.

Now imagine a person who has a second NIC that is good for nothing, and a Giga that does not do real Giga; it is almost the end of human life as we know it. ;)

 
But I can use that 2nd card in super SLI mode and get twice the performance! I'm so much better! I can frag more! Bow before my two network cards!

Gamers will buy anything. 😉
 
Originally posted by: spidey07
But I can use that 2nd card in super SLI mode and get twice the performance! I'm so much better! I can frag more! Bow before my two network cards!

Gamers will buy anything. 😉
That is very true; provided you create the SLI bond with Gorilla Glue.

Gorilla Glue dries fast and expands to about three times its volume, thus preventing a bottleneck from occurring.😉

Do not try this at home; we are pros, we do it for a living.😀
 
Originally posted by: JackMDS
Originally posted by: spidey07
But I can use that 2nd card in super SLI mode and get twice the performance! I'm so much better! I can frag more! Bow before my two network cards!

Gamers will buy anything. 😉
That is very true; provided you create the SLI bond with Gorilla Glue.

Gorilla Glue dries fast and expands to about three times its volume, thus preventing a bottleneck from occurring.😉

Do not try this at home; we are pros, we do it for a living.😀

Yeah! All the reviews say it increases your frag ratio. If you want to kill 'em good, you MUST have dual network ports. Don't be the loser who only has a single NIC.
 
Try factoring out the drives and file transfer protocol, etc., to start, and do just network benchmarks.

E.g. using iperf 1.7:

server: iperf -s
client: iperf -c server -l 1M -t 15 -i 3 -r

E.g. results:

F:\tools\bench\iperf>iperf -c amd-vista -l 1M -t 15 -i 3 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to amd-vista, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[656] local 192.168.0.141 port 2332 connected with 192.168.0.125 port 5001
[ ID] Interval Transfer Bandwidth
[656] 0.0- 3.0 sec 329 MBytes 920 Mbits/sec
[656] 3.0- 6.0 sec 329 MBytes 920 Mbits/sec
[656] 6.0- 9.0 sec 329 MBytes 920 Mbits/sec
[656] 9.0-12.0 sec 329 MBytes 920 Mbits/sec
[656] 12.0-15.0 sec 328 MBytes 917 Mbits/sec
[656] 0.0-15.0 sec 1.61 GBytes 918 Mbits/sec
[900] local 192.168.0.141 port 5001 connected with 192.168.0.125 port 49212
[ ID] Interval Transfer Bandwidth
[900] 0.0- 3.0 sec 331 MBytes 926 Mbits/sec
[900] 3.0- 6.0 sec 332 MBytes 928 Mbits/sec
[900] 6.0- 9.0 sec 331 MBytes 927 Mbits/sec
[900] 9.0-12.0 sec 330 MBytes 923 Mbits/sec
[900] 0.0-15.0 sec 1.62 GBytes 927 Mbits/sec

That was nVIDIA nForce 3 to nForce 430, Win2K to Vista x86, jumbo frames not used.

I disabled autotuning on Vista, as this gives a bit of a performance boost at present:

netsh interface tcp set global autotuninglevel=disable

To undo, you can run:

netsh interface tcp set global autotuninglevel=normal
 
Originally posted by: spidey07
Yeah! All the reviews say it increases your frag ratio. If you want to kill 'em good, you MUST have dual network ports. Don't be the loser who only has a single NIC.

You guys are giving me an inferiority complex.. how to hack the iMac.. hmnn... <scratching head> .. I do have several USB ports.. I could add gigabit USB adapterzz!
 
Originally posted by: p0lar
I do have several USB ports.. I could add gigabit USB adapterzz!
No, No, No!

Big mistake: USB is too slow.

Use FireWire. If you use propane in gold cylinders for the Fire part, it really Rockz.😉
 
Originally posted by: JackMDS
Originally posted by: p0lar
I do have several USB ports.. I could add gigabit USB adapterzz!
No, No, No!

Big mistake: USB is too slow.

Use FireWire. If you use propane in gold cylinders for the Fire part, it really Rockz.😉

I was told I also need stripes; is this true, or should I go with stickers instead?
 
Originally posted by: p0lar
Originally posted by: JackMDS
Originally posted by: p0lar
I do have several USB ports.. I could add gigabit USB adapterzz!
No, No, No!

Big mistake: USB is too slow.

Use FireWire. If you use propane in gold cylinders for the Fire part, it really Rockz.😉

I was told I also need stripes; is this true, or should I go with stickers instead?

Remember, if you read it on the Internet then it simply MUST be true.

 