Jumbo frames, worth it, yea or nay?

VirtualLarry

No Lifer
Aug 25, 2001
56,348
10,048
126
I thought that I had read previously, from an authoritative source, that with modern networking equipment, jumbo frames on a home LAN were unnecessary to get maximum performance out of it, and that they could even result in poorer latency when mixing bulk-transfer loads with something like latency-sensitive gaming loads.

Though, I haven't seen results for a 5GbE-T or 10GbE-T LAN. I suppose it might be beneficial there.

Does anyone have any somewhat conclusive personal experience?

My understanding (having never used "Jumbo Frames") is that one must manually configure each desktop PC and other device for the jumbo frame length, as well as have an internet router that can understand jumbo frames on the LAN and still translate them to the "Internet WAN MTU" properly. That is, there is no auto-configuration mechanism for Jumbo Frame support (other than switches automatically supporting it).
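For anyone who wants to check what their machines are actually set to, here's a minimal audit sketch (assuming a Linux box with sysfs; interface names and the 1500-byte threshold are only illustrative). It just prints each interface's configured MTU so mismatches between hosts are easy to spot by hand:

```python
# Minimal MTU audit sketch (Linux only, reads sysfs at /sys/class/net):
# prints the configured MTU of every interface. Interface names vary per host.
from pathlib import Path

def interface_mtus() -> dict[str, int]:
    mtus = {}
    for iface in sorted(Path("/sys/class/net").iterdir()):
        try:
            mtus[iface.name] = int((iface / "mtu").read_text().strip())
        except (OSError, ValueError):
            continue  # virtual or vanished interface; skip it
    return mtus

for name, mtu in interface_mtus().items():
    label = "jumbo" if mtu > 1500 else "standard"
    print(f"{name:12s} MTU {mtu:5d} ({label})")
```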

Has anyone here seen any benefit from Jumbo Frames on a 1GbE LAN, with mixed network usage (bulk NAS file-transfers, as well as internet streaming, gaming, and web browsing, with multiple PCs behind a SOHO router)?

I could see the case for using Jumbo Frames on a LAN segment serving an iSCSI server or something, making the Jumbo Frame size some multiple of the disk sector size.
 

cmdrdredd

Lifer
Dec 12, 2001
27,052
357
126
I experimented with this a while back and found that I had really slow transfers to a NAS using them. Other than that little test I haven't bothered to research more. I have a wired 1GbE network.
 

mnewsham

Lifer
Oct 2, 2010
14,539
428
136
Jumbo frames, when used properly for LAN transfers, CAN provide a healthy improvement in bandwidth.

Normal 1500 MTU leaves you with ~5% overhead, so 1000 Mbps really ends up being closer to 945 Mbps.
With an MTU of 9000, you can bring that overhead down to around 1%.
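For a rough sense of where those numbers come from, here's a back-of-the-envelope calculation in Python (assuming 20-byte IP and 20-byte TCP headers with no options, plus 38 bytes of Ethernet framing per frame; real-world figures shift a little with TCP options like timestamps):

```python
# Back-of-the-envelope goodput estimate for TCP over Ethernet at two MTUs.
# Assumed per-frame costs (illustrative): 20-byte IP + 20-byte TCP headers
# with no options, plus 38 bytes of Ethernet framing on the wire
# (preamble 8 + header 14 + FCS 4 + inter-frame gap 12).

LINE_RATE_MBPS = 1000            # nominal 1GbE line rate
ETH_FRAMING = 8 + 14 + 4 + 12    # bytes on the wire per frame, outside the MTU
IP_TCP_HEADERS = 20 + 20         # header bytes inside each frame

def goodput_mbps(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS   # useful TCP payload per frame
    wire = mtu + ETH_FRAMING         # total bytes the frame occupies on the wire
    return LINE_RATE_MBPS * payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{goodput_mbps(mtu):.0f} Mbit/s of payload")
# -> roughly 949 Mbit/s at MTU 1500 and 991 Mbit/s at MTU 9000;
#    TCP options such as timestamps shave off a little more in practice.
```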

Obviously, the difference between 5% and 1% isn't significant, which is why most people never bother with it; in most situations it won't be noticeable. But if you're doing CONSTANT heavy-bandwidth LAN transfers, that ~4% savings will add up significantly over time.

And yes, as you pointed out, jumbo frames are generally used for LAN-to-LAN traffic only; most WAN interfaces will be using the normal MTU of 1500.

As for actually CONFIGURING a working LAN with jumbo frames, my experience is next to nothing; I have never been in a situation where I felt it would benefit me.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,348
10,048
126
Thanks, @mxnerd, good read. I thought that I had heard that "Jumbo Frames" was more efficient for 10GbE networks.
I was personally planning on enabling it, when I go 10GbE, but I was curious for others' opinions on the matter.

Edit: According to the graph in that article, latency actually went down / got better when JF was enabled. That seems counter-intuitive to me, but it might be explained by the lower CPU demand for packet processing when JF is enabled. And maybe in this relatively simple direct test, latency on the wire didn't matter so much.
 
Feb 25, 2011
16,790
1,472
126
My experience with it has been "mixed" - in the sense that if not everything on the network is set to use jumbo frames, random stuff just stops working. And if you've got one device that's got a dodgy implementation, performance will crater.
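A rough way to see that failure mode on your own network is a size-sweep probe like the sketch below (assuming the far host runs some UDP echo responder; the address, port, and payload sizes are placeholders). Non-jumbo gear usually drops oversized frames silently, so the symptom is simply "no reply" above a certain size:

```python
# Rough jumbo-frame reachability probe (a sketch, not a tool). Assumes the
# far host runs a UDP echo responder on ECHO_PORT; host/port/sizes below
# are placeholders. Set the local NIC's MTU to 9000 first, otherwise the
# OS just fragments large datagrams into standard frames and the probe
# proves nothing about jumbo support on the path.
import socket

ECHO_HOST, ECHO_PORT = "192.168.1.50", 4000   # hypothetical echo target
SIZES = [1400, 1472, 2000, 4000, 8000, 8972]  # UDP payload sizes to try

def probe(size: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        payload = bytes(size)
        try:
            s.sendto(payload, (ECHO_HOST, ECHO_PORT))
            reply, _ = s.recvfrom(65535)
            return reply == payload
        except socket.timeout:
            return False  # likely dropped somewhere along the path

for size in SIZES:
    print(f"{size:5d}-byte payload: {'reply' if probe(size) else 'no reply'}")
```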

So I wouldn't recommend it at all for a home network. Also probably not worth bothering for a NAS, because those are relatively high-overhead to begin with.

If you had a SAN setup you were working with, had a discrete physical network for storage, knew exactly what was there, and knew nobody's cell phone would randomly start tossing 1500MTU packets onto your segment, then it might be worth a go-round.

But for an extra couple percentage points of performance, I also have to observe that if you're at the point where +5% matters, you're probably close to your limit anyway.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,471
387
126
If you want to have a real trial:

Disconnect the network from the Internet and try the LAN on its own.

Then connect back to the Internet and see what happens to the Internet and the LAN transfers.

When the whole network was based on Win 7, I used MTU=9000 and it served me well.

With Win 10 I find it useless.


:cool:
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
I use jumbo frames on my 10GbE connection direct from my desktop to my server, and it improved performance in the 5-20% range. I do not use it on my 1GbE network; I tried, and it had a negative effect on network reliability for certain devices.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,348
10,048
126
Interesting data points, thanks guys!

Edit: I'm thinking of setting up a "parallel" 5GbE LAN for storage purposes. Different switches, an extra 5GbE PCI-E NIC in each desktop, an 8-port multi-gig switch, and my NAS units. Isolate them physically from the internet, although I'd want to provide an NTP server for them.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,672
578
126
I think there's a justification that can be made for Jumbo Frames on the smaller gear, for instance the small NAS units with ARM CPUs, especially if you're doing iSCSI. When they are already CPU-bound, jumbo frames can squeeze out a bit more performance.

For larger gear, I don't see it as worthwhile unless your workload is composed entirely of large, sequential data transfers. Even Nutanix, running hyperconverged traffic across 40Gb links, recommends sticking to 1500 MTUs, because in 99% of day-to-day operations jumbo frames won't provide any meaningful benefit in that sort of workload, while drastically increasing the administrative overhead of implementation as well as the ongoing overhead needed to make sure existing and future workloads remain compliant.

For the most part, Layer 2 switching and modern NICs / CPUs have more than enough resources to cope with 1500 MTU, and the 5% vs. 1% overhead just isn't worth chasing after (because if you're consistently bumping into this limit, you've already got major issues).

But for a point-to-point link where you're just setting up a device on each end, especially if your data mover is small and not necessarily powerful, the quick setup for jumbo frames makes sense, because it's easy to implement, and it's more likely to encounter those large, bulk data transfers where lessening the overhead on those tiny ARM CPUs can help!
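For that two-device case on Linux, the setup really is about one command per end. A minimal sketch shelling out to iproute2 (the interface name is a placeholder, it needs root, and any switch sitting between the two NICs must also pass jumbo frames or it will silently drop the large packets):

```python
# Set a jumbo MTU on a point-to-point interface via iproute2, then read it
# back from sysfs to confirm. Interface name is a hypothetical placeholder.
import subprocess
from pathlib import Path

IFACE = "enp3s0"    # hypothetical 10GbE point-to-point interface
JUMBO_MTU = "9000"

subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", JUMBO_MTU], check=True)
print(IFACE, "MTU is now", Path(f"/sys/class/net/{IFACE}/mtu").read_text().strip())
```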
 

abufrejoval

Member
Jun 24, 2017
39
5
41
Simple answers can be misleading.

Short version:
At 1-10Gbit on Linux with current hardware: Makes no difference.
At 100Gbit on Linux I get the best transfer rates on Mellanox CX5 at 4KB frames (not 1.5, not 9K)

Long version:
Windows 2012R2 at least (~2016 version of Windows 10 for servers) seems to have been challenged by 10Gbit on desktop-class hardware.
And there I managed to double throughput using 9k jumbo frames, alas still far shy of the theoretical maximum.
Everything helps on full backups, because that's a dozen Terabytes or so in my case.

Typically, once you jump from 1Gbit to 10Gbit (or more) you'll discover plenty of brand-new bottlenecks...

When I first played with jumbo frames a decade ago, it was far too easy to mess things up completely, because everybody has to agree on jumbo frames for them to work: a lot of walking, plugging, and head scratching.

These days the good news is that everybody will negotiate frame sizes, so you won't actually get jumbo frames until everybody agrees to give them to you: hosts, switches, and routers.

Might take you a while, but at least you won't lose the network entirely trying...
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126

I recently installed a 10GbE network at home and, to your point, Windows Server and certain network adapters introduce odd issues. For example, I was able to saturate the connection transferring files between my desktop computers (Windows 10 1809 update), each running an NVMe drive. I tested both ways, multiple times, with large 30-40GB movies. However, desktop to server (Windows Server 2019) was a completely different story: it would sometimes get good throughput, then other times it would be under 1Mb/sec. However, logging into the server and pushing files to the desktops would completely saturate the connection without issue. The server has 4 x NVMe, 6 x SSD, and 20 x 7200RPM 8TB drives, 2 CPUs, and plenty of PCIe lanes, so storage speed is pretty fast.

During all of this, the 1903 update hit my desktop hard and corrupted the install. My ISO was the original version of Windows 10. Oddly enough, it didn't have any problems with transfers, but running Windows updates caused some other issues. I did a clean install of 1809 and once again the transfers had issues. After a lot of digging, I found that disabling a variety of the offload features for the adapter (all Large Send Offload, Receive Side Scaling, all Recv Segment Coalescing) fixed the problems. The server runs an Intel X540-T2 and the desktops run Aquantia 10GbE cards.
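For reference, a hedged sketch of that same offload-disabling step, driven from Python via the Windows NetAdapter PowerShell cmdlets (Disable-NetAdapterLso, Disable-NetAdapterRss, Disable-NetAdapterRsc). The adapter name is a placeholder (check Get-NetAdapter), it needs an elevated prompt, and turning offloads off shifts work back onto the CPU, so treat it as a diagnostic step rather than a default:

```python
# Disable common NIC offloads on Windows via PowerShell, as a diagnostic.
# Adapter name is a hypothetical placeholder; run from an elevated prompt.
import subprocess

ADAPTER = "Ethernet 2"   # hypothetical adapter name

for cmdlet in ("Disable-NetAdapterLso",   # Large Send Offload
               "Disable-NetAdapterRss",   # Receive Side Scaling
               "Disable-NetAdapterRsc"):  # Receive Segment Coalescing
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", f'{cmdlet} -Name "{ADAPTER}"'],
        check=True,
    )
print(f"Offloads disabled on {ADAPTER}; re-test the transfer.")
```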
