AnandTech Forums > Hardware and Technology > Networking
07-30-2012, 05:44 PM   #1
BTRY B 529th FA BN
Lifer
Join Date: Nov 2005
Posts: 12,748
Jumbo Frames, Interrupt Moderation

If one PC on the LAN is using jumbo frames, do all the others have to use them as well to avoid problems with the router?

EDIT:

I am also curious how to reduce my interrupt moderation as much as possible without increasing the latency in sending packets.

I've got an Intel Pro PT dual NIC, and an Intel Gigabit CT Desktop adapter as a backup.

Thanks
__________________
Rig1: Xeon X5660 6core 32nm @ 4GHz, Asrock X58, GTX 670 FTW, Samsung 128Gb 840 Pro, Vertex2 100Gb, Cherry MX Reds, 2009 Seasonic X 650w, Aluminum Full tower
Rig2: i7 970 6core 32nm @ Stock, EVGA 760 Classified,
Nvidia 0db 210, Elpida Hyper CL6.6.6.15, Samsung Evo 120Gb, Vertex LE 100Gb, Cherry MX Blacks, Sidewinder X3, Enermax Revo 950w, Obsidian 800D - F@H machine

Last edited by BTRY B 529th FA BN; 07-30-2012 at 06:03 PM. Reason: Adding another question instead of opening another thread
07-30-2012, 07:05 PM   #2
spidey07
No Lifer
Join Date: Aug 2000
Posts: 65,052

TCP will be fine, since the maximum segment size will be agreed upon. With UDP or any other protocol, all bets are off.

It's generally a "bad idea" to have all machines on a segment with different MTUs.
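The TCP side of this comes from the MSS option exchanged during the handshake: each end advertises a maximum segment size derived from its own MTU, and the connection effectively uses the smaller of the two. A minimal sketch of the arithmetic, assuming IPv4 with no IP or TCP options:

```python
def tcp_mss(mtu: int) -> int:
    """Maximum TCP segment size for a given link MTU (IPv4, no options).

    Each segment rides in one IP packet: 20 bytes IPv4 header plus
    20 bytes TCP header; the rest is payload.
    """
    return mtu - 20 - 20

# Each side advertises an MSS based on its own MTU; the connection uses
# the smaller value, so a 1500-byte host talking to a 9000-byte jumbo
# host still ends up exchanging standard-sized segments.
standard = tcp_mss(1500)   # 1460
jumbo = tcp_mss(9000)      # 8960
negotiated = min(standard, jumbo)
print(standard, jumbo, negotiated)
```

This is why mixed-MTU segments mostly "just work" for TCP, while UDP has no equivalent negotiation.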
__________________
___
(\__/)
(='.'=)
(")_(")
07-30-2012, 08:32 PM   #3
BTRY B 529th FA BN
Lifer
Join Date: Nov 2005
Posts: 12,748

Do I have to set the MTU to 9k manually, or does setting 9k (Jumbo Frames) in the Advanced adapter properties take care of this?

Is there a way to adjust UDP? I thought I read something about FastSendDatagramThreshold.

Last edited by BTRY B 529th FA BN; 07-30-2012 at 08:55 PM.
07-30-2012, 09:07 PM   #4
imagoon
Diamond Member
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,321

Jumbo frames are pointless except in a SAN situation, and even then the benefit today is minimal. You have to set every device to MTU 9000, 9015, 9018, or 9218, depending on how the device represents jumbo frames. UDP is not MTU-aware and will get "lost" or fragmented at the MTU transition, which typically wrecks UDP performance. If you're doing this for "gaming," then stop, because you are doing nothing other than hurting yourself.
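The cost of that MTU transition is easy to put numbers on. A rough sketch, assuming IPv4 (20-byte header, fragment offsets counted in 8-byte units):

```python
def ipv4_fragments(payload_len: int, mtu: int) -> int:
    """Number of IPv4 fragments needed to carry payload_len bytes of
    IP payload over a link with the given MTU.

    Every fragment except the last must carry a multiple of 8 payload
    bytes, because the fragment offset field counts 8-byte units.
    """
    per_fragment = (mtu - 20) // 8 * 8   # usable payload per fragment
    return -(-payload_len // per_fragment)  # ceiling division

# The 8980-byte IP payload of a full 9000-byte jumbo frame shatters
# into 7 fragments when it hits a standard 1500-byte link -- and if
# any one fragment is lost, the whole datagram is gone.
print(ipv4_fragments(8980, 1500))
```

For UDP that multiplies the loss probability per datagram, which is where the performance "wreckage" comes from.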
07-30-2012, 10:03 PM   #5
BTRY B 529th FA BN
Lifer
Join Date: Nov 2005
Posts: 12,748

Then how do you set everything up for maximum throughput plus low system latency (not internet latency/ping)? Are you a network guru?

Last edited by BTRY B 529th FA BN; 07-30-2012 at 10:06 PM.
07-30-2012, 10:29 PM   #6
imagoon
Diamond Member
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,321

Quote:
Originally Posted by BTRY B 529th FA BN View Post
Then how do you set everything up for maximum throughput + low system latency (not internet latency/ping) ? Are you a network guru?
Your question is pretty broad.

For most home users there is little you can do to improve latency, because in most cases your connection to the router is already in the sub-1-millisecond range. There are occasionally tweaks specific to certain games, such as WoW and TCP ACK frequency; it is very dependent on how the app is built. In most cases, having a halfway decent NIC and a halfway decent router is really all you need. The halfway decent NIC will handle stacking frames into interrupt groups (vs. one interrupt per frame), and the halfway decent router keeps interface-to-interface latency down.

I am still not clear on this system latency you seem to be a bit obsessed with lately here in the networking forum. We have servers handling multiple 10GigE connections without going above 2-3% CPU for network-layer interrupt handling on a single core. Those servers see more than an order of magnitude more interrupt activity than your home computer ever will.

Short of SANs, we don't bother with jumbo frames, because the CPU savings isn't worth the time it takes to configure and tweak it all. Add in the varying level of support across NICs, routers, and switches, and it isn't worth it to gain 1% CPU on one core of one system.

As for the guru thing, I'll leave that up to you. I have built and deployed multinational networks for a Fortune 500 company. Take that as you will.
07-31-2012, 11:07 AM   #7
dawks
Diamond Member
Join Date: Oct 1999
Posts: 5,019

Yeah, all these little tweaks aren't going to have much of an impact on ping, if any at all. Many of the changes you are looking at simply reduce CPU utilization. That might be beneficial if you were running large internet servers with thousands of clients and millions of packets per second, but if you're running any dual-core 2GHz CPU from the last five years, it's really inconsequential in terms of gaming performance. If you want a different ping, choose a different server or try a different ISP. Jumbo frames won't improve your gaming experience; if anything they will degrade it, as your router has to break the packets down before pushing them over the internet (since jumbo frames aren't really supported on the public internet).
07-31-2012, 11:35 AM   #8
imagoon
Diamond Member
Join Date: Feb 2003
Location: Chicagoland, IL
Posts: 4,321

To add on to dawks's point:

Most major game providers also send fragmented packets from jumbo frames to /dev/null.
07-31-2012, 01:25 PM   #9
JackMDS
Super Moderator
Elite Member
Join Date: Oct 1999
Posts: 26,062

Quote:
Originally Posted by dawks View Post
It might be beneficial if you were running large internet servers with thousands of clients, and millions of packets per second.
+1
----------

Most users (including enthusiasts) are basically ignorant about networking (posting the words "stable" and "DD-WRT" twice a day does not make one network savvy).

Users carry the mundane knowledge of CPU overclocking and pushing video frames into networking, which causes more harm than good.

Unless there is a special situation similar to what is described in the quote above, leave the NIC configuration and other networking parameters at their defaults.


__________________
Jack
Microsoft, MVP - Networking.
07-31-2012, 07:06 PM   #10
Jamsan
Senior Member
Join Date: Sep 2003
Posts: 786

Deleted.

-Jack
Moderator.
__________________
My Heat
My Ebay

Last edited by JackMDS; 07-31-2012 at 07:38 PM.
07-31-2012, 07:13 PM   #11
BTRY B 529th FA BN
Lifer
Join Date: Nov 2005
Posts: 12,748

Aww, he's not!? This is BS!


07-31-2012, 07:44 PM   #12
cmetz
Platinum Member
Join Date: Nov 2001
Posts: 2,288

BTRY, as a general rule, you must configure all stations in an L2 domain with the same MTU and MRU. (Yes, this is an advanced topic; you can build a mixed network and make it work with the right configuration. Don't. Just don't.)

With the cards you have, make sure MSI or MSI-X is in use. If you want the lowest latency, disable interrupt coalescing; if you want the highest throughput, leave it on or adaptive. These are all receive-side issues; as far as I know there is nothing you can or need to tweak on the send side.
Old 07-31-2012, 09:34 PM   #13
BTRY B 529th FA BN
Lifer
 
BTRY B 529th FA BN's Avatar
 
Join Date: Nov 2005
Posts: 12,748
Default

Quote:
Originally Posted by cmetz View Post
BTRY, as a general rule, you must configure all L2 domains with the same value for MTU and MRU for all stations. (Yes, advanced topic, you can build a mixed network and you can make it work with the right configuration. Don't. Just don't.)
How is the MRU edited? Just curious

Quote:
Originally Posted by cmetz View Post
With the cards you have, make sure you have MSI or MSI-X in use. If you want lowest latency, you should disable interrupt coalescing, while if you want highest throughput, you should leave it on or adaptive. These are all receive-side issues; as far as I know there is nothing you can or should need to tweak on the send-side.

OK. I've tried doing a little Googling to get familiar with some of the terminology. I'm not sure what MSI or MSI-X means exactly. I'm guessing it refers to Interrupt Moderation, not Interrupt Moderation Rate.


There are 2 different Interrupt Moderation settings.

Interrupt Moderation = Enabled, or Disabled
Interrupt Moderation Rate = Off, Minimal, Low, Medium, High, Extreme, Adaptive.
08-01-2012, 07:49 PM   #14
cmetz
Platinum Member
Join Date: Nov 2001
Posts: 2,288

BTRY, as far as I know all modern OSs set the MRU to be the same as the MTU of the interface. While technically possible for the values to be different, that would cause only badness in practice, so in practice MRU==MTU.

MSI/MSI-X - see:

https://en.wikipedia.org/wiki/Messag...led_Interrupts

Not really well covered in this article - old-fashioned out-of-band PCI interrupts had terribly high latency and were just plain bad for performance. MSI helps with that. Also, PCI-E is just plain better for performance than the old PCI bus.

Interrupt coalescing (interrupt moderation) is a trick whereby the receiver side of the controller only interrupts the system when it has received a certain number of packets or a certain amount of time has gone by since the first receive. It increases throughput but also increases latency. It was a bigger deal back in the PCI days, when the interrupt latency was high and there was only one bus and it had to be totally interrupted to service a device interrupt. In modern PCI-E systems, while it does help, it's not such a big deal.
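The mechanism described above can be sketched as a toy simulation (hypothetical names, not any real driver API): the controller raises an interrupt when either a packet-count threshold or a time threshold since the first buffered packet is reached.

```python
def coalesced_interrupts(arrival_times, max_packets=8, max_wait_us=100):
    """Toy model of receive-side interrupt coalescing.

    Packets arrive at the given times (microseconds). The controller
    interrupts the host when it has buffered max_packets packets, or
    when max_wait_us has elapsed since the first buffered packet.
    Returns the number of interrupts raised.
    """
    interrupts = 0
    buffered = 0
    first_arrival = None
    for t in arrival_times:
        if buffered and t - first_arrival >= max_wait_us:
            interrupts += 1          # timer expired: flush the batch
            buffered = 0
        if buffered == 0:
            first_arrival = t
        buffered += 1
        if buffered == max_packets:
            interrupts += 1          # batch full: interrupt now
            buffered = 0
    if buffered:
        interrupts += 1              # final flush
    return interrupts

# 32 packets arriving 1 us apart: coalescing cuts 32 interrupts to 4,
# at the cost of the first packet in each batch waiting on its peers.
burst = list(range(32))
print(coalesced_interrupts(burst))                  # batched
print(coalesced_interrupts(burst, max_packets=1))   # one interrupt per packet
```

The throughput/latency trade-off falls straight out of the model: bigger batches mean fewer interrupts but a longer wait before the first packet in a batch is delivered.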

You can play with the values, but I would suggest enabled/adaptive. If the driver is well written (I believe Intel's NIC drivers are), it should do a good job of automatically flipping you between lower latency and higher throughput configurations based on current need.