10GbE - iperf Results Different From Windows Transfer Rates

Collider

Senior member
Jan 20, 2008
In testing my newly set up 10GbE home network, I'm able to transfer files at around 1.12 GB/s (based on the Windows file-transfer dialog) between Windows 10 and Windows Server 2016, which is what you'd expect from 10GbE. Apparently Windows Server does RAM caching, so I didn't have to set up a RAM disk or anything. Copying a 4GB file takes about 2-3 seconds.

However, when I run iperf it reports a max of only 4.7 Gb/s.

I'm a bit puzzled by the results.

Should I believe the Windows-reported transfer rate or iperf? Are there additional flags I need to set in iperf?
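For reference, the two tools use different units: Windows shows gigabytes per second, while iperf shows gigabits per second. A quick shell sketch converting my Windows figure for comparison:

```shell
# Windows reports GB/s (gigaBYTES); iperf reports Gb/s (gigaBITS).
# Multiply by 8 to compare: 1.12 GB/s should correspond to ~8.96 Gb/s.
win_rate_GBps=1.12
awk -v r="$win_rate_GBps" 'BEGIN { printf "%.2f Gb/s\n", r * 8 }'
```

So the Windows number is right at 10GbE line rate, which makes the 4.7 Gb/s iperf reading look even lower by comparison.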
 

Genx87

Lifer
Apr 8, 2002
I'm impressed with your results. I just set up a 10GbE network at home as well. My storage is also 2016, with clustered Hyper-V hosts running 2019. I'm testing with 2x 10GbE ports on each device, but I'm only able to max out around 1.4 GB/sec using SMB 3.0 multichannel, when the theoretical maximum should be 2.4 GB/sec.

Are you using a DAC cable or transceiver with fiber?

I'd trust what you are seeing in Windows. If a 4GB file transfers in 2-3 seconds, it is moving at over 1 GB/sec.
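That back-of-envelope works out like this (taking the slow and fast ends of the 2-3 second range):

```shell
# 4 GB moved in 3 seconds (worst case) vs. 2 seconds (best case).
awk 'BEGIN { printf "%.2f GB/s\n", 4 / 3 }'   # slow end
awk 'BEGIN { printf "%.2f GB/s\n", 4 / 2 }'   # fast end
```

Either way it's above 1 GB/sec, consistent with what the Windows dialog is showing.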
 

Collider

Senior member
Jan 20, 2008
I'm using DAC cables since they have slightly lower latency than fiber and are more energy efficient.

After some late-night troubleshooting I found an answer to my own question. It turns out iperf needs multiple parallel streams to saturate 10GbE. I added -P 5 to the parameters and was able to hit 8-9.2 Gb/s. It isn't consistent for some reason, but it stays in that range.
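For anyone repeating this, the test looked roughly like the following (iperf3 syntax; the address is a placeholder for the server box):

```shell
# Server side: listen for incoming tests.
iperf3 -s

# Client side: -P 5 runs 5 parallel TCP streams, -t 30 runs for 30 seconds.
# A single TCP stream often can't fill a 10GbE link on its own.
iperf3 -c 192.168.1.20 -P 5 -t 30
```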

Another thing that helped was setting the MTU to 9000 and enabling jumbo frames on my switch. That gave me more consistent results, closer to 9 Gb/s. I'm able to hit 9.5 sometimes, but not always.
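For reference, this is roughly what the Windows side of the jumbo-frame change looks like ("Ethernet" is a placeholder for your adapter name; the NIC driver's jumbo packet property and the switch both have to allow it as well):

```shell
:: Set the interface MTU to 9000 (run from an elevated prompt).
netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent

:: Confirm the change took effect.
netsh interface ipv4 show subinterfaces

:: Verify jumbo frames survive end to end: 8972 = 9000 minus the 20-byte IP
:: header minus the 8-byte ICMP header; -f forbids fragmentation, so this
:: fails loudly if any hop in the path drops jumbo frames.
ping -f -l 8972 192.168.1.20
```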

Wondering what else I could tweak to get better results.

Sent from my Nokia 7.1 using Tapatalk
 

thecoolnessrune

Diamond Member
Jun 8, 2005
Collider said:
I'm using DAC cables since they have slightly lower latency than fiber and are more energy efficient.

After some late-night troubleshooting I found an answer to my own question. It turns out iperf needs multiple parallel streams to saturate 10GbE. I added -P 5 to the parameters and was able to hit 8-9.2 Gb/s. It isn't consistent for some reason, but it stays in that range.

Another thing that helped was setting the MTU to 9000 and enabling jumbo frames on my switch. That gave me more consistent results, closer to 9 Gb/s. I'm able to hit 9.5 sometimes, but not always.

Wondering what else I could tweak to get better results.

The MTU improvement could be more related to your systems on either side hitting high CPU usage due to a lack of offload for the traffic, or to poor NIC drivers. Any modern 10GbE switch gear shouldn't have any problem running line rate with L2 traffic, especially in a single port-to-port test.
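If you want to chase the CPU/offload angle, Windows exposes the relevant NIC settings through PowerShell. A sketch ("Ethernet" is a placeholder adapter name):

```shell
# Check whether Receive Side Scaling is spreading receive work across
# cores, and whether checksum and large-send offloads are enabled.
Get-NetAdapterRss -Name "Ethernet"
Get-NetAdapterChecksumOffload -Name "Ethernet"
Get-NetAdapterLso -Name "Ethernet"

# Enable RSS if it is off, then re-run the iperf test.
Enable-NetAdapterRss -Name "Ethernet"
```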