
Windows XP: measure throughput node-to-node

Markbnj

Elite Member
Moderator Emeritus
Hey guys, aside from getting out a stopwatch and calculator, is there a way to measure the throughput of a file copy between two Windows XP boxes on a LAN? I don't have an FTP server on either machine, so it would have to measure a copy between shares, or use some other mechanism. I've checked out the various command line tools and so far haven't found what I need. Thanks in advance for any tips!
 
QCheck is commonly used to measure throughput on a LAN. Pay attention to your firewall permissions, though, as they may well interfere with QCheck.
 
Thanks guys! Appreciate the tips. I'm going to try those out this afternoon.
 
Originally posted by: Markbnj
Hey guys, aside from getting out a stopwatch and calculator, is there a way to measure the throughput of a file copy between two Windows XP boxes on a LAN?

qcheck and iperf can give you an idea of the "raw networking" speed, which is useful, but that's not the same as file transfer speed, which includes many other factors. For file transfer speed, nothing is as reliable as actual file transfers.

xxcopy reports some performance stats as part of the transfer, and at least before Vista, generally performs as well as or better than standard Windows file transfers.

A quick and dirty substitute for actual file transfers is ATTO diskbench -- it can also measure transfers across mapped drives. Such utilities give you read & write performance focusing on one side of the transfer (i.e. local drive performance is factored out) -- this can also be helpful for analysis. I use 256 MB and Direct I/O "Neither".

Task Manager / Networking can also give you a good idea of the moment-to-moment transfer rate. You can add counters there to show outbound and inbound transfer rates per interval, which is per second by default. These read a bit higher than the actual file data transfer rates because they're measured at the network level, including protocol overhead, but in practice the difference between this rate and the actual data rate is not large. The stopwatch and other techniques can give you an idea of the difference in your case.
 
I looked at 'xcopy', the standard cli tool for bulk transfers under Windows, but I didn't see any options for throughput information. Is this what you meant by "xxcopy" or is there another tool?

I didn't think about perfcounters. That's worth a try. Thanks.
 
I meant a nagware tool called xxcopy:

http://www.xxcopy.com/index.htm#download

E.g. output:

F:\tools>xxcopy /y m:\test\test0\10.gb \\intel-vista\m\test\test9

XXCOPY == Freeware == Ver 2.93.1 (c)1995-2006 Pixelab, Inc.

[...]
-------------------------------------------------------------------------------
M:\test\test0\10.gb 10,000,000,000
-------------------------------------------------------------------------------
Directories processed = 1
Total data in bytes = 10,000,000,000
Elapsed time in sec. = 162
Action speed (MB/min) = 3703
Files copied = 1
Exit code = 0 (No error, Successful operation)

You have to divide by 60 to convert MB/min to MB/s.
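That conversion, as a quick sanity check in Python (using the Action speed figure from the run above):

```python
# xxcopy's "Action speed" is in MB/min; divide by 60 to get MB/s
def mb_per_min_to_mb_per_s(rate_mb_min):
    return rate_mb_min / 60

speed = mb_per_min_to_mb_per_s(3703)  # Action speed from the 10 GB copy above
print(f"{speed:.1f} MB/s")            # prints 61.7 MB/s
```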

BTW, PerfMon counters for networking can give you the same info as Task Manager, and these can also be logged if you're interested in tracking very long or scheduled tasks.
 
I've used several of these tools, but when in doubt I either pull out XXcopy or simply copy a large file, and then divide. XXcopy is beneficial because it works in RAW vs core mode, which is a little more efficient and predictable than a Windows GUI/Explorer copy.

Or, I just install NetStat Live and watch the pretty graph during a file transfer.

Remember that copying one large file produces entirely different results than copying a slew of much smaller files because of file system overhead, so stick with the same test target and total size between runs.

FTP is really not a good way to judge network performance on a peer-to-peer network. Decent for a relative test, I guess.

If I'm really picky and want to check specific platform performance on one end of the stream, such as an SQL or Web server that needs to be tuned for optimum performance, I'll mount a RAM drive on the client box and test from that. This way the file system of the client box is not interfering with the test and introducing unknown variables. You then have bragging rights when the networking admins rant about how fast their SAN is, because your file system is WAY faster on a laptop.

If you really want to have some fun and cause the network crew to freak out, mount a RAM drive on both the client and server, and use a batch file to keep copying data back and forth. If it's a managed switch with any utilities monitoring traffic, you'll eventually hear a scream from a cube down in IT as the fans kick in on the ol' Cisco. I've 'smoked' many a NIC this way 🙂
 
I upgraded my backbone yesterday after the Comcast guys were here to clean up our five year-old cable install (very successful, new drop + new modem + replaced bad port + new connectors and splitters == 30-40% increase in throughput both ways). Installed a DIR-655 and new CAT-6 cabling to the wired machines. Also put a gigabit NIC in the DNS/squid server and confirmed its link speed at 1000 mbps.

Everything seemed to be cranking right along. I fired up Filezilla and dropped a 200 meg file from the Windows box over to the Debian system... 750 KBps. Actually, it seemed to fluctuate from 680 to 760 KBps.

I don't have another XP machine on the wired network, and have to haul one in here. So I haven't done that yet, but damn, there's a gigabit path between these two machines. What the heck could be holding it up? Here's the ifconfig from the Debian system (the gb Intel NIC is eth1):

eth1 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:192.168.0.105 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::21b:21ff:fe13:e429/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:98166 errors:0 dropped:0 overruns:0 frame:0
TX packets:65761 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:62420741 (59.5 MiB) TX bytes:43451255 (41.4 MiB)
Base address:0xc000 Memory:ed020000-ed040000

Does that MTU look big enough for the link speed?
 
Ok, well there's no problem with the network. I took Madwand1's suggestion and downloaded xxcopy. I used it to move a 2.6 gigabyte file from my Windows XP system to a Samba share on the Debian machine. Here's the output:

D:\MS installs\en_windows_vista_x86_dvd_X12-34293.iso 2,678,614,016
-------------------------------------------------------------------------------
Directories processed = 1
Total data in bytes = 2,678,614,016
Elapsed time in sec. = 76.23
Action speed (MB/min) = 2,108
Files copied = 1
Exit code = 0 (No error, Successful operation)

I can't quite make sense of their "Action speed" number. Obviously that doesn't mean 2,108 _MB_ per second, and it doesn't make sense as Mb per second either. But it's easy to work out if the elapsed time is correct.

(2,678,614,016 * 8) / 76.23 = 281,108,646 bps = 35,138,581 Bps

Ignoring overhead of error correction and packets of course. I did a couple of tests and the calculations came out roughly the same. That is probably the limit of the IDE drives in that machine, so I'm plenty happy with that transfer rate.
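For what it's worth, the same arithmetic as a quick Python check (the size and elapsed time are taken straight from the xxcopy output above):

```python
# Figures reported by xxcopy for the XP -> Debian copy
total_bytes = 2_678_614_016
elapsed_sec = 76.23

bytes_per_sec = total_bytes / elapsed_sec
bits_per_sec = bytes_per_sec * 8

print(f"{bytes_per_sec / 1e6:.1f} MB/s")   # prints 35.1 MB/s
print(f"{bits_per_sec / 1e6:.0f} Mbit/s")  # prints 281 Mbit/s
```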

Then I pulled the same file back from the Debian server to the XP, with the receiving disk this time being a WD Raptor with probably a 60-70 MB/sec. transfer rate. Both NICs are linked up at 1000 mbps. Should be faster, right, since the XP box can accept the data much faster? It was almost 5x slower:

\\mambazo\allusers\en_windows_vista_x86_dvd_X12-34293.iso 2,678,614,016
-------------------------------------------------------------------------------
Directories processed = 1
Total data in bytes = 2,678,614,016
Elapsed time in sec. = 337.6
Action speed (MB/min) = 476
Files copied = 1
Exit code = 0 (No error, Successful operation)

(2,678,614,016 * 8) / 337.6 = 63,474,266 bps = 7,934,283 Bps

So what the heck could make the transfer so much faster going from XP->Debian vs. Debian->XP?
 
Originally posted by: Markbnj
I can't quite make sense of their "Action speed" number. Obviously that doesn't mean 2,108 _MB_ per second, and it doesn't make sense as Mb per second either.

That's MB/min.

2108 MB/min / 60 s/min =~ 35.1 MB/s.
 
XXcopy's speed report can be trusted.

2,108 / 60 = 35.13 MB per sec. That's solid gig speed for non-file-server hardware.

Given the Debian box is much quicker to receive than transmit I highly suspect it's a quirk with Samba on that end, or the NIC driver on the Debian box. I'm really suspecting the NIC driver.
 
For burst tests I use QCheck, for sustained tests I use iperf, and sometimes I fall back to MS Robocopy, since I can choose any file and copy it between the two.
 
Originally posted by: Madwand1
Originally posted by: Markbnj
I can't quite make sense of their "Action speed" number. Obviously that doesn't mean 2,108 _MB_ per second, and it doesn't make sense as Mb per second either.

That's MB/min.

2108 MB/min / 60 s/min =~ 35.1 MB/s.

Damn, completely misread that, thanks. That's right in line with my calculations, and I agree with the other poster who said it's what would be expected for IDE non-file server disks.

What I still can't make sense of is the really poor performance going the other direction.

Given the Debian box is much quicker to receive than transmit I highly suspect it's a quirk with Samba on that end, or the NIC driver on the Debian box. I'm really suspecting the NIC driver.

The NIC driver is e1000 for the Intel gigabit cards, which was compiled into the kernel. Do you know anything good or bad about it? I'll look over the Samba config as well, maybe something arcane in there is throttling things.
 
Originally posted by: Markbnj
What I still can't make sense of is the really poor performance going the other direction.

The NIC driver is e1000 for the Intel gigabit cards, which was compiled into the kernel. Do you know anything good or bad about it? I'll look over the Samba config as well, maybe something arcane in there is throttling things.

It's quite common for performance to be asymmetrical, even Windows to Windows, where pushes generally perform better than pulls. I suppose this has something to do with the SMB protocol and internal tuning and caching, whereby local requests may be treated differently from remote requests.

There can be other causes as well -- I recall one case where the PCI bus was apparently introducing a limitation in one direction but not the other. For this part, you could use a network-only benchmark such as iperf version 1.7.

E.g.:

server: iperf -s
client: iperf -c server -l 64k -t 15 -i 3 -r

The key parameters here are -l 64k, which uses a common decently-sized message buffer, and -r, which tests performance in both directions -- first outgoing from the client to the server, and then in reverse.

If iperf doesn't show you a significant asymmetry, then you can probably forget about concerns about the bus or the NIC options and focus on OS and protocol issues such as read-ahead caching, and perhaps retry FTP.
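If iperf isn't handy, the core idea — time a known number of bytes pushed through a TCP socket — can be sketched in a few lines of Python. This is just an illustration over loopback, with a made-up 16 MB transfer size; for a real test you'd run the server half on the remote machine, the way iperf does:

```python
import socket
import threading
import time

PAYLOAD = b"x" * 65536          # 64 KB message buffer, like iperf's -l 64k
TOTAL_BYTES = 16 * 1024 * 1024  # push 16 MB for a quick demo

def server(listener, result):
    conn, _ = listener.accept()
    received = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:            # client closed the connection
            break
        received += len(chunk)
    conn.close()
    result["received"] = received

# Loopback server on an ephemeral port, so the sketch is self-contained
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

result = {}
t = threading.Thread(target=server, args=(listener, result))
t.start()

client = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
sent = 0
while sent < TOTAL_BYTES:
    client.sendall(PAYLOAD)
    sent += len(PAYLOAD)
client.close()
t.join()                         # wait until the server has drained the stream
elapsed = time.perf_counter() - start
listener.close()

print(f"{result['received'] * 8 / 1e6 / elapsed:.0f} Mbits/sec")
```

Run the accept/recv half on one box and the connect/sendall half on the other, then swap them, and you get the same push-vs-pull comparison that iperf's -r flag automates.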
 
Thanks, Madwand1. That's a great tool. Once I figured out which package to get under Debian it was a snap. I ran it with the server on Debian and the client on Windows, so the first block of results is XP->Debian, and the second is Debian->XP. Here's the output:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to mambazo, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1856] local 192.168.0.100 port 1123 connected with 192.168.0.105 port 5001
[ ID] Interval Transfer Bandwidth
[1856] 0.0- 3.0 sec 317 MBytes 885 Mbits/sec
[1856] 3.0- 6.0 sec 316 MBytes 884 Mbits/sec
[1856] 6.0- 9.0 sec 316 MBytes 884 Mbits/sec
[1856] 9.0-12.0 sec 316 MBytes 882 Mbits/sec
[1856] 12.0-15.0 sec 315 MBytes 882 Mbits/sec
[1856] 0.0-15.0 sec 1.54 GBytes 883 Mbits/sec
[1952] local 192.168.0.100 port 5001 connected with 192.168.0.105 port 48619
[ ID] Interval Transfer Bandwidth
[1952] 0.0- 3.0 sec 286 MBytes 801 Mbits/sec
[1952] 3.0- 6.0 sec 287 MBytes 802 Mbits/sec
[1952] 6.0- 9.0 sec 286 MBytes 800 Mbits/sec
[1952] 9.0-12.0 sec 287 MBytes 803 Mbits/sec
[1952] 12.0-15.0 sec 288 MBytes 804 Mbits/sec
[1952] 0.0-15.0 sec 1.40 GBytes 802 Mbits/sec

This was actually the second run. The first time through, the second block of bandwidth numbers was in the 700s. So the difference is somewhere from 80 to 100 Mbps. Flipping the config and running the server on XP and the client on Debian didn't make a huge difference.

Curious, but nowhere near enough to explain the discrepancy in throughput when copying with xxcopy, so I think you're right, and the operating system/protocol level is where I'll have to look.
 