Gigabit ethernet very fast...in one direction

fuzzymath10

Senior member
Feb 17, 2010
520
2
81
This is a variant of a common problem. In the past I've noticed that pushing a file from a local machine to a remote machine was always faster than pulling a file from a remote machine to a local drive, regardless of which two machines were used and even when disks were not the bottleneck. I also noticed that it was less of an issue if the remote machine was faster.

My issue now is that from my i5 to my Core 2, regardless of which machine initiates the copy, I get about 700 Mbps, or 85 MB/s, which is awesome; I think that might be limited by the hard drive on the Core 2, since it's reading from an SSD on the i5 machine.

However, from the Core 2 to the i5, I'm stuck at 300 Mbps, or about 35 MB/s. I'm 100% sure it's not an I/O problem: the fast direction reads from an SSD and writes to a hard drive, and the slow direction reads from that same hard drive and writes to an SSD. A drive that can take 85 MB/s of sequential writes can certainly deliver 85 MB/s of sequential reads, and the sequential write speed on my X25-M array is about 120 MB/s, so neither end of the slow direction should be the limit.

Any thoughts? This is just a test copying a 5 GB movie back and forth. I noticed that during the fast transfers my Q8200 was at about 30% CPU, with the "System" process at 25%. Maybe it was dropping to 300 Mbps to limit CPU usage.
 

SirGCal

Member
May 11, 2005
122
1
0
www.sirgcal.com
You might have some write caching going on with the drives that doesn't happen when reading. For example, even writing to my slow green 5400 RPM conventional platter drives (normally ~80 MB/s sustained write), I can move a few-GB file from my SSD to one of those drives and it appears to move at 500 MB/s, or 4,000 Mbps. No joke... but it's not really writing that fast. I have so much memory that the system just caches the huge files I move: the SSD reads the file super fast and dumps it to RAM waiting for the HDD to catch up, and as far as the system is concerned the copy is done. Reversing it, though, the copy has to read from the HDD, which is limited to that same ~80 MB/s in my case, so writing the files back to the SSD is a much slower job overall.
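
One way to take that cache effect out of the picture, assuming you're on Win7 or Server 2008 R2 (the /J flag doesn't exist on older versions), is to copy with unbuffered I/O; the file name and destination here are just placeholders:

xcopy /J bigfile.bin D:\test\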

However, ~35 MB/s read is slow even for green-type drives... So while it might not be your exact problem, I'm basically just thinking out loud, which might or might not help you out...

Other things that come to mind: are you doing this across two physical locations, going through an ISP of some type? That could have something to do with it too. 35 MB/s might be all the ISP at the other end allows as an upload speed... heck, it's tons more than mine lets me (darn TWC). If the upload speed at one end doesn't match the download speed at the other, that could be exactly what you're seeing. But I don't know how 'remote' and 'local' your machines really are. I have my local LAN, which is extremely fast as mentioned above, but going through the WAN to work changes the story tremendously. If both machines are on the same local network, though, none of that would be a problem. Again, just thinking out loud...
 

LiuKangBakinPie

Diamond Member
Jan 31, 2011
3,903
0
0
Are you saying your upload is slower than your download?

ANYWAYS

You can do a "quick and dirty" test for file server throughput by creating a large temporary file on your client computer with the fsutil command, and then timing the transfer to the OTHER computer:

fsutil file createnew temp-file-name 209715200

That creates a 200 MB temporary file (209715200 bytes = 200 × 1024 × 1024). You can do a quick timed copy using the following script (run from the directory where you created the temporary file, and assuming you have rights to copy to some share on the server computer):

@echo off
rem Piping an empty echo into TIME prints the current time and answers
rem the "Enter the new time:" prompt so the script doesn't stall.
echo.|time
copy temp-file-name \\server-computer-name\share-name
echo.|time

Subtract the starting time from the ending time, convert to seconds, and divide 209715200 by the number of seconds elapsed to get bytes per second.
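
If you'd rather not do the subtraction by hand, here's a rough sketch of a batch file that does the timing and the arithmetic for you. Same placeholders as above; it assumes an HH:MM:SS.cc %time% format, and it's only good to centisecond resolution and 32-bit integer precision:

@echo off
setlocal
rem temp-file-name and the share path are placeholders, as above.
set SIZE=209715200
set START=%time: =0%
copy /y temp-file-name \\server-computer-name\share-name >nul
set END=%time: =0%
rem Convert each HH:MM:SS.cc timestamp to centiseconds; the 1xx-100 trick
rem keeps values like "08" from being parsed as octal.
for /f "tokens=1-4 delims=:.," %%a in ("%START%") do set /a S=((1%%a-100)*3600+(1%%b-100)*60+(1%%c-100))*100+(1%%d-100)
for /f "tokens=1-4 delims=:.," %%a in ("%END%") do set /a E=((1%%a-100)*3600+(1%%b-100)*60+(1%%c-100))*100+(1%%d-100)
rem Allow for a copy that runs across midnight
if %E% lss %S% set /a E+=8640000
set /a ELAPSED=E-S
if %ELAPSED% leq 0 set ELAPSED=1
set /a BPS=(SIZE/ELAPSED)*100
echo Copied %SIZE% bytes in %ELAPSED% centiseconds: about %BPS% bytes/sec
endlocal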

You should see upwards of 7,000,000 bytes per second (roughly 56Mbps) on a 100Base-TX LAN. Anything below that and I'd begin to suspect that something is up. Assuming that the server computer is reasonably modern, it should be able to fill a 100Mbps pipe with no problem. If you're seeing transfer speeds slower than that, I'd start to look at the error counters in the administration interface of the switch that the server and client are connected to. You could have faulty cabling, a duplex mismatch, or NIC driver problems. It's all just a matter of tracking the problem down methodically.
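
If the switch is unmanaged and has no counters to read, you can at least watch the Windows-side Ethernet counters; run this on each machine before and after a transfer and compare the Errors and Discards lines:

netstat -e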

Edit: The file copy test is a nice test because you can conduct it w/o any third-party software. Since you've found that a bottleneck does exist, the next step is to identify the bottleneck's cause.

The WSTTCP utility (available at http://www.pcausa.com/Utilities/pcattcp.htm) is a quick and dirty test of your NIC driver and network infrastructure hardware. It sends data that's not coming off disk or being written to disk, so the disk subsystems on the client and server end up being factored out of the equation.



On one machine, execute the following (after you've unpacked WSTTCP!) to "listen" for a connection:

wsttcp -r

On the other machine, execute the following to transmit a test to the remote machine:

wsttcp -t <hostname>

On 100Mbps Ethernet, you might want to modify the transmit command (re-running the receive command on the receiver before you start the transmitter again) to send more buffers, because you'll get slightly more accurate numbers with a longer test:

wsttcp -t -n8192 <hostname>

That will move 64MB of traffic (8192 buffers times the default 8KB buffer size). Increase the "8192" number to send more traffic.

You'll need to either allow the listener thru your firewall software on the listening computer (TCP port 5001, by default) or disable the firewall temporarily.
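
On Win7/Server 2008 R2 you can open that port from an elevated prompt (the rule name here is arbitrary):

netsh advfirewall firewall add rule name="ttcp test" dir=in action=allow protocol=TCP localport=5001

and remove the rule again when you're done:

netsh advfirewall firewall delete rule name="ttcp test"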

If you're seeing good transfer speeds with WSTTCP but slow transfers with the file copy, start looking at your disk subsystem (and consider running a hard disk drive benchmark). If the network transfers are still cruddy w/ WSTTCP, keep investigating the network infrastructure, cabling, NIC drivers, or NIC hardware.
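
For the disk benchmark, Win7 at least has a crude one built in (I'm not sure the Server SKUs include winsat); for example, to measure sequential reads on drive C:

winsat disk -seq -read -drive c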
 

fuzzymath10

Senior member
Feb 17, 2010
520
2
81
The Core 2 Quad is running Server 2008 R2; the i5 is running Win7.

I don't think the storage subsystem is the issue, since I can hit 85 MB/s writing to the hard drive from the SSD but only 35 MB/s writing to the SSD from the hard drive.
 

SirGCal

Member
May 11, 2005
122
1
0
www.sirgcal.com
Did you see my thoughts above? Also, have you tried this with any other system or HDD on the slow end? It's possible the read speed of that particular drive is simply failing altogether, or that that particular system has an issue of its own.