
GbE - traffic in one direction only half the speed of the other direction

velis

Senior member
Jul 28, 2005
600
14
81
Home network, two PCs. Connected through ASUS AC56U.

When loading files to the server, I get ~700 Mbit/s. Intel PRO/1000 GT PCI NIC here.
When loading FROM the server, I get ~350 Mbit/s. Onboard Realtek 8111F here (P8Z77-M Pro).

SSDs on both ends, and NetIO reports the same numbers, so this is a pure Ethernet issue.

The router says both NICs connect at GbE full duplex.
I also replaced the router with a business-grade switch known to work at gigabit. It increased the high figure to ~800 Mbit/s, but didn't touch the low one.
I also tried pretty much every setting on the Intel adapter (I assumed it was the culprit, since this kind of asymmetry is often associated with a NIC connecting at half duplex only and then having trouble transmitting).

Why do I only get roughly a third of line rate in one direction? Is there a simple procedure to diagnose where the issue might be?
 
Feb 25, 2011
16,992
1,621
126
So your server has an Intel NIC and your PC has the Realtek?

700Mb/sec is actually pretty good for a file transfer, when you account for overhead and all.

Not uncommon to see performance issues with Realtek NICs. Rarely as fast as advertised in all conditions. I'd pick up another Intel NIC and try it out in the client PC.
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
I know you have an SSD on both sides, but have you tried iperf just to take storage out of the equation? Also, another vote for Realtek being total crap.
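
For the same purpose, a minimal raw-TCP throughput test (a rough stand-in for iperf/NetIO, not a replacement) could look like the sketch below. The port, chunk size, and 10-second duration are arbitrary choices for illustration; run the server mode on one machine and the client mode on the other.

Code:
# throughput_test.py - minimal raw-TCP throughput test that takes storage
# out of the equation. Run "python throughput_test.py server" on one end
# and "python throughput_test.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001            # hypothetical port, pick any free one
CHUNK = 64 * 1024      # 64 KiB send/receive buffer
DURATION = 10          # seconds the client keeps sending

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"received {total * 8 / elapsed / 1e6:.1f} Mbit/s from {addr[0]}")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, PORT))
        total, start = 0, time.time()
        while time.time() - start < DURATION:
            cli.sendall(payload)
            total += len(payload)
        elapsed = time.time() - start
        print(f"sent {total * 8 / elapsed / 1e6:.1f} Mbit/s")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])

Running it in both directions (swap which machine is server and which is client) would show whether the asymmetry follows the direction of transfer or the hardware.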
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,552
429
126
I have a few computers with on-board Realtek gigabit NICs and I get 80 MB/sec (B = Byte) LAN transfer.

Check the NIC Properties (set to max performance).

This can measure LAN transfer regardless of the storage.

http://www.totusoft.com/downloads.html (download the free version, it is portable).



:cool:
 

velis

Senior member
Jul 28, 2005
600
14
81
I used NetIO to take the disks out of the equation.
Both NICs are set to offload everything (the Realtek especially has a humongous number of settings for that).
Is it really possible that the Realtek would be the culprit here? It's on the receiving side in this scenario. Receive buffers are set to max (512, I think).


I suppose I should get a good GbE card and just try it in one computer and then the other. That would definitely identify the culprit. Also, I might try the onboard NIC in the server (Qualcomm Atheros, ASUS P8H67-V). Initially I used the separate Intel card exactly because I measured better performance on it.
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
Let's also consider the fact that it is a VERY old PCI NIC, not even an old PCIe NIC.

700-800Mbps is not very good performance.

Even my Realtek NICs get in the range of 880-920 Mbps, generally at the higher end of the range. My old Intel PCIe (Gigabit CT) NICs get 960 Mbps with 9k jumbo frames (around 930 Mbps without).

With overhead included, around 950-960 Mbps is roughly the max you can get over gigabit once TCP, IP, and L7 (application) level overhead are accounted for; that is roughly what you have left with 9k jumbo frames. With 1500 MTU frames you are stuck with around 930 Mbps as the max once all the overhead is accounted for.
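
As a back-of-the-envelope check, the per-frame overhead can be worked out directly. The sketch below assumes IPv4/TCP headers without options and gives the theoretical wire-level maxima, which sit a little above the real-world figures quoted above because application-level overhead is not included.

Code:
# Theoretical TCP goodput over gigabit Ethernet, ignoring application-level
# overhead (so real-world numbers land a bit below these).
LINE_RATE = 1000e6                 # 1 Gbit/s
ETH_OVERHEAD = 14 + 4 + 8 + 12     # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20           # IPv4 + TCP headers, no options

def goodput(mtu):
    payload = mtu - IP_TCP_HEADERS     # TCP payload carried per frame
    on_wire = mtu + ETH_OVERHEAD       # bytes actually occupying the wire
    return LINE_RATE * payload / on_wire

print(f"1500 MTU: {goodput(1500) / 1e6:.0f} Mbit/s")   # ~949 Mbit/s
print(f"9000 MTU: {goodput(9000) / 1e6:.0f} Mbit/s")   # ~991 Mbit/s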

Anything lower means there is a "problem" with the adapter. It cannot keep up for whatever reason. Either it is generating excessive interrupts on the CPU, is sending/receiving malformed packets, or the interface cannot actually handle full speed (PCI is ~1064 Mbps max IIRC, which means you SHOULD be able to get 1 Gbps in one direction at a time, half duplex, not full duplex; but that doesn't account for PCI bus overhead, which does exist, so 800 Mbps or so is likely the best you can manage with a PCI interface).
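
The PCI ceiling mentioned here can be sanity-checked the same way: conventional 32-bit / 33 MHz PCI peaks at roughly 133 MB/s shared across the whole bus, before arbitration and burst-length overhead.

Code:
# Conventional 32-bit / 33 MHz PCI bus ceiling (shared, effectively half-duplex).
bus_width_bits = 32
clock_hz = 33.33e6
peak_bps = bus_width_bits * clock_hz              # theoretical peak
print(f"PCI peak: {peak_bps / 1e6:.0f} Mbit/s")   # ~1067 Mbit/s
# Bus arbitration, burst limits, and sharing with other PCI devices eat into
# this, which is why ~800 Mbit/s is a realistic ceiling for a PCI gigabit NIC.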
 

velis

Senior member
Jul 28, 2005
600
14
81
Update & solution:
I tried the onboard Atheros NIC today. I immediately got > 100 MB/s traffic in both directions, 110 with packets >=4 KB.
Then a funny thing happened: I went to NIC properties and set all the usual suspects, like offloading, interrupt moderation, jumbo frames, etc.
Traffic immediately went back to abysmal levels, even worse than on the Intel NIC.

Turns out it was the Jumbo frames that killed my performance.

I didn't test the Intel NIC since the Atheros gives me so much better performance (110 MB/s peak vs 75 MB/s peak), but I suspect performance might improve on the Intel as well if I disabled jumbo frames there too. Seems there was some incompatibility in the jumbo frames implementation among these NICs.
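
One quick way to spot a jumbo-frame mismatch like this is to compare the MTU each interface reports on both machines. A minimal sketch, assuming the psutil package is installed (pip install psutil):

Code:
# List each interface's link state, speed, and MTU to spot jumbo-frame
# mismatches between machines - a 9000-byte MTU on one end talking to a
# 1500-byte MTU on the other commonly causes exactly this kind of
# throughput collapse. Run on both ends and compare.
import psutil

for name, stats in psutil.net_if_stats().items():
    print(f"{name}: up={stats.isup} speed={stats.speed} Mbit/s mtu={stats.mtu}")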