Jumbo frames in a mixed 100/1000Mbps Cisco network

azev

Golden Member
Jan 27, 2001
1,003
0
76
Is there a side effect to enabling jumbo frames if some of the current infrastructure is still in the 100Mbps realm? The server farm was recently upgraded to an all-gigabit infrastructure on new gigabit blades for the 6509 switches, but most of our access switches are still 100Mbps (3550s), with some special circumstances where certain departments actually have 3560G switches.

Currently there are no network speed issues, because we design above and beyond the requirements. I think most of our bottlenecks are on the servers themselves, but I'd like to know whether enabling jumbo frames will improve the performance of both the network and the servers?

Thx
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
You shouldn't mix them in the same broadcast domain. It can work, but at that point you're relying on upper-layer protocols to determine segment size. The potential for problems is there.

Your servers should be in a separate network from the clients anyway, so you probably already have a router that will take care of fragmenting the packets.
 

m1ldslide1

Platinum Member
Feb 20, 2006
2,321
0
0
I thought that TCP used windowing to determine packet sizes? So when you say that a router fragments the packets, I'm a little confused. Doesn't that conflict with saying "upper layer protocols to determine segment size"??

Perpetually confused on this subject, I still feel like SAN (Fibre Channel) is one of the only applications that encapsulates using jumbo frames and the rest is marketing, even though others vehemently deny this. "Others" doesn't include the Cisco website, which, last I searched, kinda confirmed my suspicion.

Any truly technical links on the subject Spidey? I'd really like to understand this better. Now that I'm thinking about it I'll do some googling on my own.

Edit: Sorry to hijack this thread a little bit, but I did a little reading and the following became clear: packet size can be determined either by switch/router fragmentation or by TCP. If the hardware supports fragmentation of jumbo frames, then it will do just that and TCP will be oblivious. However, if it's older equipment it will probably drop the frames, and then maybe TCP will window down. At least that's the windowing theory in 1500 land; I'm not sure if it holds true with jumbo. I guess I'm also not sure whether TCP determines the jumbo size or the NIC waits to collect a whole bunch of packets before encapsulating them at layer 2 in a single jumbo. :confused:
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Originally posted by: m1ldslide1
I thought that TCP used windowing to determine packet sizes? So when you say that a router fragments the packets, I'm a little confused. Doesn't that conflict with saying "upper layer protocols to determine segment size"??

Perpetually confused on this subject, I still feel like SAN (Fibre Channel) is one of the only applications that encapsulates using jumbo frames and the rest is marketing, even though others vehemently deny this. "Others" doesn't include the Cisco website, which, last I searched, kinda confirmed my suspicion.

Any truly technical links on the subject Spidey? I'd really like to understand this better. Now that I'm thinking about it I'll do some googling on my own.

You have to really understand the OSI model and how it works to comprehend this. TCP does use a sliding window, but that has nothing to do with packet or frame size; it only governs how much data can be sent on a TCP session before an ACK is needed.

I'd need a whiteboard to explain it. Just look up TCP maximum segment size (MSS) for starters; it's negotiated during the initial TCP three-way handshake. TCP has no problems with mixed frame sizes. UDP and other protocols "can" and "it depends".
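
To make the MSS point concrete, here's a minimal Python sketch (Linux-only; the address is a placeholder) that reads the MSS the kernel settled on for a connection:

import socket

# Minimal sketch: read the negotiated TCP maximum segment size (MSS).
# Linux-specific; 192.0.2.10 is a placeholder (TEST-NET) address.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.0.2.10", 80))
mss = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print("MSS for this connection:", mss)  # ~1460 with a 1500-byte MTU,
s.close()                               # ~8960 with a 9000-byte jumbo MTU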

On the fragmentation thing - this happens when a router has to send a packet that is too big for its outgoing interface, so it fragments the packet. This happens at layer 3 and can be identified by the "don't fragment" and "more fragments" flags in the IP header; look at a protocol breakout of IP and TCP for more clarity. For example, let's say you had a packet that was 4000 bytes long and the router's outgoing interface has a 1500-byte frame size. If the layer-3 protocol is IP, then the router chops this packet up into three packets: the first is 1500 bytes, the second is 1500 bytes, and the third is 1000 bytes. This is a very simplified explanation - my byte counts are off because each fragment gets its own IP header added, and I've muddied the waters by talking frames and packets at the same time.
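
If you want the byte counts exact, the IPv4 fragmentation arithmetic is easy to sketch in Python (assuming a 20-byte header with no options; every fragment payload except the last must be a multiple of 8 bytes):

IP_HDR = 20  # IPv4 header without options

def fragment_sizes(packet_len, mtu):
    """On-wire size of each fragment of an IPv4 packet sent out an
    interface with the given MTU."""
    payload = packet_len - IP_HDR          # data in the original packet
    per_frag = (mtu - IP_HDR) // 8 * 8     # non-final fragments: multiple of 8
    sizes = []
    while payload > per_frag:
        sizes.append(IP_HDR + per_frag)
        payload -= per_frag
    sizes.append(IP_HDR + payload)
    return sizes

print(fragment_sizes(4000, 1500))  # [1500, 1500, 1040]

So the 4000-byte packet really becomes 1500 + 1500 + 1040 once the two extra IP headers are counted.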

Layer 2 = frame
Layer 3 = packet
Layer 4 = segment (TCP) or datagram (UDP)
Layers 5-7 = data
 

azev

Golden Member
Jan 27, 2001
1,003
0
76
This is getting quite interesting; doing some googling on this topic myself.
Keep it coming, guys
 

m1ldslide1

Platinum Member
Feb 20, 2006
2,321
0
0
OK, I'm getting you now. The one thing, though, that I can't seem to find is what layer and corresponding hardware decides to send a 9000-byte frame.

If we're working our way down the OSI model, then if it isn't the TCP handshake, we get to IP. In order for IP to prepend headers to each packet, there has to be just that: a packet with a determined size, in order to fill in the IP header checksum.

Ethernet is at layer 2, and layer 2 should be receiving neat little packets from layer 3 with headers and checksums and all that, and so if layer 2 were responsible for generating a jumbo frame, it would have to buffer multiple packets in order to encapsulate them behind a single Ethernet header. I can handle that if that's the case; I just can't find any data confirming or denying it.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
As Spidey says, TCP doesn't really have a problem with mixed frame sizes, but there are some exceptions -- e.g., if both ends accept jumbo frames but a midway switch doesn't. Then both sides will negotiate just fine (using small frames to start), and then send the big frames, which will be dropped by the midway gear that doesn't support them.

Path MTU Discovery is supposed to handle cases like this, but it has a bunch of issues and cannot be generally relied upon.

If you can meet this condition -- no bridges, small-frame-only switches, etc., in the path between nodes -- and use TCP only for significant data transfers, then you might be OK.
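
For what it's worth, here's a rough Python sketch of the machinery PMTUD depends on (Linux-only; the address is a placeholder, and the socket constants are hard-coded in case this Python build doesn't export them):

import errno, socket

IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_MTU = getattr(socket, "IP_MTU", 14)
IP_PMTUDISC_DO = 2                        # always set DF; never fragment locally

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(("192.0.2.10", 9))              # placeholder address, discard port
for _ in range(3):                        # the first send may go out; a retry
    try:                                  # fails once the router's ICMP
        s.send(b"x" * 8972)               # "fragmentation needed" arrives
    except OSError as e:                  # (8972 = 9000 - 20 IP - 8 UDP)
        if e.errno == errno.EMSGSIZE:
            print("path MTU:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))
        break

The failure mode I mean is exactly when that ICMP never comes back: a jumbo-ignorant switch just eats the frame, EMSGSIZE never fires, and the transfer silently stalls.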

On the other hand, what are the benefits from jumbo frames in your environment that justify the potential risk of downtime / mysterious problems / etc.? If you're not really stressing the network (or servers due to networking load), then of course the odds are that the benefits will be nil. IME some performance benefits can be seen at the high end of data throughput when using old/PCI gear, but even that varies, and non-jumbo-frame performance can be great these days.

I'd take the second approach toward justifying the effort before risking jumbo frames in a heterogeneous environment for unknown benefits -- you should have something interesting that you can use as a benchmark; try that in isolation with and without jumbo frames to see the performance impact for yourself.
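
A bare-bones benchmark along those lines in Python (host and port are placeholders; run the receiver on a server and the sender on a client, once with jumbo frames and once without):

import socket, sys, time

HOST, PORT = "192.0.2.10", 5001           # placeholders
CHUNK = b"x" * 65536
TOTAL = 1 << 30                           # push 1 GiB

def receiver():                           # run first, on the server
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    got, t0 = 0, time.time()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        got += len(data)
    print("%.1f MB/s" % (got / (time.time() - t0) / 1e6))

def sender():                             # then run this on a client
    s = socket.socket()
    s.connect((HOST, PORT))
    sent = 0
    while sent < TOTAL:
        s.sendall(CHUNK)
        sent += len(CHUNK)
    s.close()

receiver() if sys.argv[1:] == ["recv"] else sender()

If the two runs land within the noise of each other, the jumbo change isn't buying you anything.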
 

cmetz

Platinum Member
Nov 13, 2001
2,296
0
0
Path MTU discovery does not handle this case.

In IP, all nodes on a broadcast network must agree on their MTU. I'm not sure that's written down in any one standard, but it's definitely part of the design. Many protocols, such as OSPF and IS-IS, actively enforce this rule. In practice, if you attempt to mix MTUs on a broadcast domain, it won't work.

It's theoretically possible to run IP on a single L2 broadcast network with some jumbo-capable nodes and some jumbo-incapable nodes, if every jumbo-capable node has a static route with an MTU/MSS override configured for each and every jumbo-incapable node. All it takes is one error, however, and you have two nodes that can't talk to each other reliably. Don't try this in any production network; it's a nightmare to debug.

If you want to mix jumbo-capable and jumbo-incapable nodes, you need to put them into separate L2 domains (e.g., VLANs and IP subnets) and stick a forwarding device between them (an L3 switch or a real router). Then that device can return ICMP "too big" messages / fragment as necessary so that the nodes can talk.

Oh, and two gotchas to consider.

First, there's no standard for exactly what frame size "jumbo frames" means. On some equipment it's an on/off switch, and you are stuck with whatever value the vendor picked. On other equipment, you can configure any number greater than 1500 and below some limit the vendor picked. So you need to configure everything explicitly to ensure that it's all using the same jumbo frame size, or you end up in the mixed-capability problem again. (Oh, and exactly what number you pick can wildly affect the performance of some of your equipment, for extra fun.) There's a quick host-side audit sketch after the second gotcha.

Second, Windows's path MTU discovery implementation is broken. If there's any Windows in the mix you're going to get to learn about this the hard way.
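
Back to the first gotcha: on the Linux hosts, at least, auditing the configured MTUs is a few lines of Python (a sketch; it only covers the hosts -- the switch ports still have to be checked by hand):

import glob, os

# Print every interface's MTU from the kernel's sysfs tree, to catch
# mismatched jumbo sizes on the host side.
for path in sorted(glob.glob("/sys/class/net/*/mtu")):
    iface = os.path.basename(os.path.dirname(path))
    with open(path) as f:
        print("%-12s %s" % (iface, f.read().strip()))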

Now, don't get me wrong. Jumbo frames are a very good thing. But the IEEE really let us down by taking a philosophical stance against standardizing them in 802.3z or somewhere else in 802. Much of the pain involved with using jumbo frames wouldn't be a problem if the IEEE reps had done the right thing. Perhaps someday they will. I hope that somewhere in the 20G/40G/100G development, more vendors get with the program and bring PPS rate requirements under control.