- Feb 20, 2006
- 2,321
- 0
- 0
WARNING: really long verbose post that will bore you to tears. It includes rambling, incomplete information, etc. Only intended for the brave:
I'm developing a QoS config for voice/video/scavenger on our WAN, which is composed of 1841 routers at all of the branch locations. So I set up my lab just to get my feet wet: I have an 1841 connected to a 2960 switch off one 10/100 port and to a 2950 off the other 10/100 port. I have a pair of Fluke devices that I'm using to push 100Mbps through the link - one connected to the 2950 and the other to the 2960, so they're in effect straddling the 1841. I have an IP phone hooked up on the 2960.
When I run a 100Mbps throughput test at 256B, 512B, and larger packet sizes, I get 100% bidirectional throughput. When I go to 128B and 64B, this drops to around 60% and 30%, respectively. I see thousands of input errors incrementing on the 1841's 10/100 interfaces when this happens, and zero errors on the switches. I had a feeling this had to do with inferior ingress memory/buffer resources on the 1841 (versus switches and higher-grade routers), and a CCIE I know seemed to agree. I'm alright with that limitation if that's the case - our WAN connections max out at 10Mbps for these particular sites. I thought I could still proceed with my QoS test config, though, given that 1841s support priority queuing on their 10/100 interfaces - meaning that no matter how jacked up the buffers get for my best-effort traffic, my voice traffic should be fine if properly classified and policed into the priority queue. That isn't turning out to be the case, and I'm wondering if somebody can glance at my config and offer some input? I feel like there must be something relatively obvious that I'm missing.
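For context, the drop-off at small packet sizes lines up with simple packets-per-second arithmetic: at a fixed bit rate, shrinking the frame multiplies the frame rate the router has to handle. A quick sketch (the 20-byte per-frame figure is standard Ethernet preamble plus inter-frame gap; the link speed is the 100Mbps from my test):

```python
# Rough line-rate math: why small frames are much harder than large ones.
# Frame sizes below mirror the test; the per-frame overhead is standard
# Ethernet preamble (8B) + inter-frame gap (12B).

LINK_BPS = 100_000_000          # 100 Mbps link
PER_FRAME_OVERHEAD = 20         # preamble + inter-frame gap, in bytes

def line_rate_pps(frame_bytes: int, link_bps: int = LINK_BPS) -> int:
    """Maximum frames per second at line rate for a given Ethernet frame size."""
    bits_per_frame = (frame_bytes + PER_FRAME_OVERHEAD) * 8
    return link_bps // bits_per_frame

for size in (64, 128, 256, 512):
    print(f"{size:4d}B frames -> {line_rate_pps(size):7d} pps")
```

64B frames at 100Mbps work out to roughly 148,800 pps per direction, versus about 23,500 pps at 512B - a better-than-6x difference in forwarding load for the same bit rate, which is far beyond what a small branch router is typically rated for (check the 1841 data sheet for the exact pps figure).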
Now for the configuration:
I used the auto qos configuration on the 2960 interface that the IP phone is connected to, and I was surprised at how comprehensive the generated configuration actually is. It essentially maps CoS 5 to egress queue #1 with SRR shaped at 10Mbps or something (can't remember the exact shaping value - not in the lab right now). The CoS-to-DSCP map was in effect with CoS 5 = DSCP 46 (auto qos changes this from the factory default of DSCP 40). On the 1841 I configured a simple class-map that matches an ACL for our centralized call manager cluster subnet:
1841(config)#access-list 100 permit ip any 192.168.1.0 0.0.0.255
access-list 100 permit ip 192.168.1.0 0.0.0.255 any
class-map voice
match access-group 100
policy-map voice-policy
class voice
set dscp ef
priority 1000
(the priority value is in kbps; the policy-map is attached outbound on the egress interface with "service-policy output voice-policy")
When I make a phone call, I can see the counters incrementing on the policy-map, and see that all voice packets are in fact receiving priority. So I feel confident that it's working.
So anyway, when I'm on the phone and start pushing 100Mbps from the Flukes at 512B packet sizes, the call quality isn't impacted at all. When I cut the packet size down to 64B, however, I experience severe packet loss and call quality degradation. The policy-map appears to still be matching all of the appropriate packets and placing them in the priority queue, but I'm not getting the results I was hoping for.
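For what it's worth, the priority allocation itself has plenty of headroom for a single call, so the 1000 kbps figure shouldn't be the constraint. A quick sanity check using standard G.711 numbers at 20ms packetization (this counts only IP-layer bandwidth and ignores L2 overhead, so treat it as an approximation):

```python
# Back-of-the-envelope check that a 1000 kbps priority allocation is not
# the bottleneck for one call. Standard G.711 / 20 ms figures; IP layer only.

RTP_PAYLOAD = 160               # bytes of G.711 audio per packet (20 ms)
RTP, UDP, IP = 12, 8, 20        # header sizes in bytes
PPS = 50                        # packets per second at 20 ms packetization

def g711_l3_kbps() -> float:
    """IP-layer bandwidth of one G.711 stream."""
    packet = RTP_PAYLOAD + RTP + UDP + IP   # 200-byte IP packet
    return packet * 8 * PPS / 1000

print(f"one G.711 call: {g711_l3_kbps():.1f} kbps at the IP layer")
print(f"calls fitting in a 1000 kbps priority queue: {int(1000 // g711_l3_kbps())}")
```

One call is about 80 kbps, so the priority queue could carry a dozen of them - which points the finger away from the LLQ sizing and back at whatever is dropping packets before they ever reach the egress queue.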
Can anybody attest to the 1841 having a specific drawback here? I know it's not made for high-speed LAN switching, but it is doing CEF, and I'm only initiating a single flow (one src/dst IP pair and protocol) for my throughput testing, so I'd have figured that once the FIB is built the router would forward the traffic without incident - unless, of course, the buffers are getting hosed. Which is probably the case.
But why is this impacting my priority queue?