For my intended application... Why Cisco?


drebo

Diamond Member
Feb 24, 2006
7,034
1
81
So you go on a physical walkthrough of every DC for every "cloud" service and make sure they're using "approved" brand name hardware?

If I'm colocating there, yes.

If I'm not colocating, but rather using a hosted service, then I just make sure my contract has an SLA. If it does and the terms of the SLA are appropriate, then I don't care what kind of hardware they run.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
The router is going to be switching packets in hardware.

Shorter hops? More throughput?

That is not a distinct advantage unless it gets the job done better.

According to other sources in this thread, a single 6-core Xeon can handle the forwarding for two 10GbE connections with less than 50% utilization in OpenBSD.

I'm just trying to find the hole between theory and practice.
 

theevilsharpie

Platinum Member
Nov 2, 2009
2,322
14
81
Shorter hops? More throughput?

That is not a distinct advantage unless it gets the job done better.

According to other sources in this thread, a single 6-core Xeon can handle the forwarding for two 10GbE connections with less than 50% utilization in OpenBSD.

I'm just trying to find the hole between theory and practice.

The compute resources required to route a packet depend on its size, and this can cause throughput to vary quite a bit. This is why routing performance is measured in packets per second, not bandwidth.

A modern high-end x86 processor has enough compute power and memory bandwidth to handle several million packets per second. That's plenty for <10Gb routing, but at 40Gb/s you're going to have a very difficult time keeping up with the traffic, particularly if you have to handle a mix of packet sizes. Best case, smaller packets degrade your throughput; worst case (e.g., a DDoS), your router locks up completely. And that's with extremely simple routing activity; you can forget about things like ACLs, traffic shaping, VRRP, etc.
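To put rough numbers on that, here's a back-of-the-envelope sketch (Python; it counts the 20 bytes of preamble, start delimiter, and inter-frame gap that every Ethernet frame carries on the wire):

Code:
# Packet rates needed to saturate a link at a given frame size.
WIRE_OVERHEAD = 20  # preamble (7) + start delimiter (1) + inter-frame gap (12), bytes

def packets_per_second(line_rate_gbps, frame_bytes):
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD) * 8
    return line_rate_gbps * 1e9 / bits_per_frame

for rate in (10, 40):
    for size in (64, 512, 1500):
        print(f"{rate} Gb/s @ {size:>4}-byte frames: "
              f"{packets_per_second(rate, size) / 1e6:6.2f} Mpps")

# Selected output:
#   40 Gb/s @   64-byte frames: 59.52 Mpps (far beyond "several million pps")
#   40 Gb/s @ 1500-byte frames:  3.29 Mpps (manageable for a fast x86 box)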

Before you say "throw more processors at it!": Linux's (and I assume BSD's) TCP/IP stack can scale with additional processors, but only to a point. NICs have a limited number of queues, and that necessarily limits how many cores can be assigned to a particular NIC. Also, a single fast multi-core CPU is the ideal case for software routing, as multiple physical processors add NUMA-related headaches that can easily decrease performance if things aren't tuned just right.
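A crude way to picture that ceiling (hypothetical numbers; real tuning with IRQ affinity and NUMA pinning is much messier):

Code:
# Crude RSS model: the NIC hashes flows across its hardware RX queues, and
# each queue's interrupt lands on one core. Once every queue has a core,
# extra cores on that NIC add nothing.
def usable_pps(cores, nic_queues, pps_per_core):
    return min(cores, nic_queues) * pps_per_core

PPS_PER_CORE = 1.5e6  # hypothetical per-core forwarding rate

for cores in (4, 8, 16, 32):
    print(f"{cores:2d} cores, 8-queue NIC: "
          f"{usable_pps(cores, 8, PPS_PER_CORE) / 1e6:4.1f} Mpps")
# Scaling stops at 8 cores: a 32-core box forwards no faster than an 8-core one.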

A few years ago, a Linux kernel developer gave a presentation about using Linux as a bi-directional 10GbE router, and while it worked in that role for larger packet sizes, performance didn't scale when adding 10GbE links, and it tanked with smaller packet sizes. Granted, server hardware has improved since then, but not enough to ensure line-rate routing performance at >10GbE.
 

alkemyst

No Lifer
Feb 13, 2001
83,769
19
81
For routing, a PC can usually handle many needs (hence the real reason why Cisco doesn't like Dynamips/GNS3).

For switching needs, ASICs > CPU.
 

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
For routing, a PC can usually handle many needs (hence the real reason why Cisco doesn't like Dynamips/GNS3).

For switching needs, ASICs > CPU.

Dynamips tops out at about 1k PPS, even on high-end PCs. It'll never be a threat to Cisco equipment.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Dynamips/GNS3 can't give you the routing performance of the native Cisco router it emulates. Hell, it chugs just doing router# ?

The main issue is going to be CPU limitations. From what I have seen, something like pfSense (BSD-based) starts to have difficulty getting much above 1Gb/s of routing doing open /24 to /24. Once you add ACLs and other rules, the performance starts to fall off quickly. That said, it is still a good little app for lower needs.

Also "Cisco doesn't like Dynamips/GNS3" is pretty funny because they often use it for their classes and most of the CCIE's I have dealt with are happy to use it to do testing and will even slap their CCIE # on it. Dynamips at the moment is a dead project looking for a keeper. I am pretty sure that Cisco will be happy to ignore it since it can't touch the performance of a 1721 yet.

--edit--

On topic, I would go Cisco / Juniper / any other dedicated vendor before I would do a server-based router. ASIC-based >>>>> generic CPU once you start getting above 1Gb/s of routing (or lower if you like ACLs, etc.).
 

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
Mikrotik is such garbage.

I helped a guy set up a WISP using Mikrotik at his towers and I wanted to shoot myself in the face.

Also, in their own performance metrics, note how performance tanks when you start adding in any kind of QoS or ACLs. That's going to be the case with any non-ASIC-based routing platform.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
The compute resources required to route a packet depend on its size, and this can cause throughput to vary quite a bit. This is why routing performance is measured in packets per second, not bandwidth.

A modern high-end x86 processor has enough compute power and memory bandwidth to handle several million packets per second. That's plenty for <10Gb routing, but at 40Gb/s you're going to have a very difficult time keeping up with the traffic, particularly if you have to handle a mix of packet sizes. Best case, smaller packets degrade your throughput; worst case (e.g., a DDoS), your router locks up completely. And that's with extremely simple routing activity; you can forget about things like ACLs, traffic shaping, VRRP, etc.

Before you say "throw more processors at it!": Linux's (and I assume BSD's) TCP/IP stack can scale with additional processors, but only to a point. NICs have a limited number of queues, and that necessarily limits how many cores can be assigned to a particular NIC. Also, a single fast multi-core CPU is the ideal case for software routing, as multiple physical processors add NUMA-related headaches that can easily decrease performance if things aren't tuned just right.

A few years ago, a Linux kernel developer gave a presentation about using Linux as a bi-directional 10GbE router, and while it worked in that role for larger packet sizes, performance didn't scale when adding 10GbE links, and it tanked with smaller packet sizes. Granted, server hardware has improved since then, but not enough to ensure line-rate routing performance at >10GbE.

I could use jumbo packets, and the edge topology is very flexible. There would be nothing stopping me from implementing one PC per 10GbE link as a "router".
 

theevilsharpie

Platinum Member
Nov 2, 2009
2,322
14
81
I could use jumbo packets, and the edge topology is very flexible. There would be nothing stopping me from implementing one PC per 10GbE link as a "router".

For jumbo frames to work, they have to be fully supported from one end of the network to the other. If the purpose of your "router" is to connect to the Internet, you're not going to be able to use jumbo frames.
 

TechBoyJK

Lifer
Oct 17, 2002
16,699
60
91
I am toying with the idea of starting a small datacenter featuring 4 x 10Gb bonded leased lines, and I'm eyeing the price tags on some of the Cisco routers that could handle this kind of switching load...

The price is prohibitive.

Is there a reason a home-built 12-core Windows server with 4 dual-port 10Gbit cards couldn't do everything that a Cisco router would be able to do? I can't imagine that performance would be an issue with that much processing power on hand; is this underpowered or overpowered for this application? Would a properly configured Windows / CentOS / FreeBSD gateway perform as well as a high-end Cisco router?

A server isn't going to have the kind of backplane and integrated switching capacity to actually push 40Gbps through it. It takes some serious switching hardware to handle that, along with the bonding/BGP'ing of the NICs. Add in QoS or any kind of deep packet inspection and the idea of pushing 40Gbps becomes a pipe dream.

Our Juniper switch/router in our datacenter that we got for 2 of our 10G links was like $200K+
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I could use jumbo packets, and the edge topology is very flexible. There would be nothing stopping me from implementing one PC per 10gbe as a "router".

Even if you managed to get a 9218-byte MTU to the Internet core, everyone else in the world won't have that, so you will be left dealing with fragmentation issues that add considerable overhead your router would need to handle. Those fragments would effectively lower the MTU for you and then add CPU time to reassemble them.

Jumbo frames also only matter when the traffic pretty consistently fills the 9218-byte MTU. Most Internet TCP/UDP frames won't.
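For a rough sense of the cost, a sketch assuming a 9000-byte IPv4 packet (a common jumbo size) with a 20-byte header and no options:

Code:
import math

# How many fragments does one jumbo IP packet become on a standard
# 1500-byte MTU path?
IP_HEADER = 20

def fragment_count(packet_bytes, path_mtu=1500):
    payload = packet_bytes - IP_HEADER              # data carried by the packet
    per_fragment = (path_mtu - IP_HEADER) // 8 * 8  # offsets are 8-byte aligned
    return math.ceil(payload / per_fragment)

print(fragment_count(9000))  # 7: one jumbo packet becomes seven on the wire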
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Even if you managed to get a 9218-byte MTU to the Internet core, everyone else in the world won't have that, so you will be left dealing with fragmentation issues that add considerable overhead your router would need to handle. Those fragments would effectively lower the MTU for you and then add CPU time to reassemble them.

Jumbo frames also only matter when the traffic pretty consistently fills the 9218-byte MTU. Most Internet TCP/UDP frames won't.

I see. I didn't take into account that fragmentation would become an issue. I was thinking that jumbo frames could be used on the internal network to the edge, and then traffic would be transmitted with a normal 1500 MTU out to the Internet.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I see. I didn't take into account that fragmentation would become an issue. I was thinking that jumbo frames could be used on the internal network to the edge, and then traffic would be transmitted with a normal 1500 MTU out to the Internet.

That's correct. What he's saying is that the fragmentation and reassembly that will have to happen when packets leave and come into your network will be a significant bottleneck at those speeds.
 

freegeeks

Diamond Member
May 7, 2001
5,460
1
81
While that is a neat looking device, it doesn't work that way.

Future Mikrotik cloud routers will support SFP+, so I don't see a problem with using 10Gbit optics in one of their future products. I doubt that the OP really needs 10Gbit connectivity right away; some bonded Gbit Ethernet ports will do to get started.
 

freegeeks

Diamond Member
May 7, 2001
5,460
1
81
Mikrotik is such garbage.

I helped a guy set up a WISP using Mikrotik at his towers and I wanted to shoot myself in the face.

Also, in their own performance metrics, note how performance tanks when you start adding in any kind of QoS or ACLs. That's going to be the case with any non-ASIC-based routing platform.

They are a PITA to configure, but most stuff just works once you get it going.
I just configured a Mikrotik RB2011 a couple of hours ago and was able to push 500 Mbit with a bridged configuration (so all in CPU). Not bad for an $80 device.

Try that with a Cisco; for what you pay, Mikrotik offers a lot of value.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Future Mikrotik cloud routers will support SFP+, so I don't see a problem with using 10Gbit optics in one of their future products. I doubt that the OP really needs 10Gbit connectivity right away; some bonded Gbit Ethernet ports will do to get started.

Not when he has 4 x 10Gb lines. The SFP itself also needs to support 10Gb for a 10Gb link to work.
 

jumpncrash

Senior member
Feb 11, 2010
555
1
81
I think we have 160Gbit coming in here, and we have ten 6500s, so there must be a reason for that.
 

azev

Golden Member
Jan 27, 2001
1,003
0
76
In my opinion, most home-brewed computers with that kind of processor will be more than sufficient to support 40Gbps of traffic. Most of the time, the limiting factor for a computer used as a router is the number of ports it can support, which is bounded by the number of PCIe slots available. There are lots of open-source Linux-based routing OSes that use commands very similar to Cisco IOS.
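As a rough sanity check on the slot math, a sketch assuming PCIe 2.0 at roughly 500 MB/s per lane per direction after encoding overhead:

Code:
# Can one PCIe slot feed a dual-port 10GbE card at line rate?
LANE_GBPS = 0.5 * 8  # ~4 Gb/s per lane, per direction, PCIe 2.0

def slot_gbps(lanes):
    return lanes * LANE_GBPS

need = 2 * 10  # two 10GbE ports, one direction
print(slot_gbps(8), "Gb/s per direction vs", need, "Gb/s needed")
# 32.0 vs 20: an x8 Gen2 slot keeps up, so the real bound is how many
# such slots (and cards) the motherboard can take.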

The only issue is that most of those are open source, and their support is forum-based. If you are pushing that kind of bandwidth, downtime = $$$ lost, and you want the best support you can get to help you bring the network up.