
critique a spidey network

spidey07

No Lifer
Ok, here's your chance to tear my ideas to shreds and tell me I'm a dumb a$$ (otherwise known as constructive criticism).

Got a new building going up and need to provide a LAN for it. There are 6 floors with occupancy for about 50 people on each floor. Each floor also has a centrally located wiring closet with terminated cat5 and 12 pairs of 50-micron multimode fiber that run to a distribution panel on the 3rd floor. All closets contain a POTS phone.

network requirements:
high performance
multicast video and audio a must
used for a full-fledged voice over IP implementation
stub site, most resources located at a corporate campus
SONET connectivity to LEC (BellSouth)
fault tolerant (power supplies on ups to separate PDU circuit)
remotely supportable via network management systems and ops center

Initially only floors 2, 3, and 4 will have the 50 folks per floor, but could grow to include 1, 5, 6.

here's the plan: use cisco 4006 switch as core (8 ports gig, 96 ports 10/100, 2 slots free) on third floor. two servers gig attached to core 4006. 3rd floor also contains cisco router with OC-3 POS/ATM capability...probably a 3640.

FL2 and FL4 closets will have cisco 4003 switches (two ports gig, 80 ports 10/100). The access 4003s will connect to the core 4006 via two gig links in an EtherChannel. The EtherChannel is not so much for performance as to take spanning tree out of the equation and avoid blocked ports. If other floors are opened for occupancy then identically configured 4003s will be purchased and connected in the same fashion.
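For reference, bundling the two gig uplinks on a CatOS-based 4003 is only a couple of lines. This is a rough sketch, and the module/port numbers are assumptions, not from the thread:

```
! On each access 4003 (CatOS) -- assuming the gig uplinks are ports 2/1-2
set port channel 2/1-2 on
set trunk 2/1 on dot1q
set trunk 2/2 on dot1q
```

With both links in one channel, spanning tree sees a single logical uplink, so neither physical port gets blocked.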

No layer three switching is required at this time. All three floors (some 256 user ports) will trunk with two VLANs: all IP phones will belong to one VLAN and data to another. If layer three switching becomes a requirement for security or other purposes, a L3 line card will be purchased for the core 4006.
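The two-VLAN split might look something like this in CatOS. VLAN numbers and port ranges here are made up for illustration:

```
! CatOS sketch -- VLAN IDs and port assignments are assumptions
set vlan 10 name data
set vlan 110 name voice
set vlan 10 2/1-24       ! data drops
set vlan 110 2/25-48     ! IP phone drops
```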

Well? Does it SCALE? One of the new guys told me I should build it entirely out of 24 port switches. What do you think? What can be improved?

thanks,
spidey
 
ummm... I have nothing to add except you have a kickass job! That must be so rewarding when you complete projects like that!

hmm... I still don't know whether I want to be a programmer or a network admin... 🙂
 
I haven't drawn it out, but it looks good off the top of my head (assuming that redundancy is not an issue?).

and ...does it have to be Cisco? does it ALL have to be Cisco? Have you played with or researched Cisco's VoIP?

Do you have adequate HVAC in the closet? How long does it take a swallow to fly ....never mind.

Just curious.

Scott
 
does it have to be cisco?

well, yes. That is mainly from a support/relationship standpoint: we have 8 guys pretty good with cisco gear, plus tremendous discounts. We don't want a NOC to have to support many different comm platforms. Fewer phone calls for me, don't ya know.

environmentals for the closets are good.

on the redundancy issue...I push for four hours of downtime a year: two hours for hard failure plus two one-hour maintenance windows. Four hours is acceptable for this application. There is a good amount of resiliency with power supplies and dual interswitch links. A hard failure would knock a section out for two hours though. Spares are on site.
 
From my experience, if your 3640 has the OC3 interface that's about all it can do. Have you tested this configuration in a lab? As for your 4000s, the 24 Gbps backplane looks good for your design, and 6 Mpps is more than adequate. Why does the FNG recommend 24-port switches?

How about a console router? A 2600 or 3600 with a couple of async cards to hook up to the console ports of your switches for OOB management. I find it very useful.
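A console router for OOB access boils down to reverse telnet on the async lines. A minimal IOS sketch, where the line numbers, IP address, and host names are all assumptions for illustration:

```
! Hypothetical 2600 console-server config
! Reverse telnet reaches a line at TCP port 2000 + line number
ip host sw-fl2-4003 2033 10.1.1.1
ip host sw-fl4-4003 2034 10.1.1.1
!
line 33 34
 no exec
 transport input telnet
```

Then `telnet sw-fl2-4003` from the router drops you on that switch's console port even when the production network is down.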
 
2980G should save you a pretty penny. If layer 3 is ever required, you have the ability to do MLS up to the distribution switch. For a distribution switch you can use a 4908G-L3 with its ports configured for bridging rather than routing. While not a modular solution, it will probably be significantly cheaper.

 
I think I threw out the 2980s because that line is getting phased out. Is it true or am I misinformed? I've got a bunch of 2948s on campus and the 2980 is simply an extension of the product line.

good idea.

My choice on the chassis at the core is based on "what do you do when the other three floors open up?" If/when that happens I'll need 10 ports of gig for interswitch plus the two for servers (that and I just like chassis...so easy to support and troubleshoot). Now I could just add another 4908G or I could add a blade to a 4006. Judgment call there.

Cost is not much of a factor. We've learned the hard way that capital costs in a network are generally less than 30% of the total cost of ownership. The other 70% being support and maintenance and upgrades and salaries.
 
I think your overall design is very good, but here are a few things to think about..

A decision that you need to make for IP telephony is if you're going to want to use inline power for your IP phones (assuming they will be Cisco) or if you want to use external power adaptors. Some of the switches have inline power, some don't. If you don't use inline power, assume $55 (retail) for transformers for each IP phone.

If inline power isn't required, you might consider the new 2980 switches - 80 FE ports in a 2U chassis with two gig uplinks. They aren't as redundant (no real redundant power beyond the external RPSU, which isn't really all that great) and there's no hot-swappability for failures. But for the price it absolutely destroys everything else in its class - $9K retail.

The 4003 is a great switch and has some great features, but it's very pricey compared to some of the stackables. You'd need a chassis, a redundant power supply, a supervisor card (which doesn't have gigabit ports, BTW), a 48-port 10/100 card and a combo card with 2 gig ports and 32 10/100's. The combo card isn't available with inline power, by the way.. Just the 48-port flavor. All this will run you about $19K retail for 80 FE ports and 2 gig ports - Same thing you get with the 2980 for 1/3rd the price. Even a couple of 3548's (at about $5K each) would be cheaper. If you really want to go 400x in the wiring closet, a 4006 chassis would be a better purchase, assuming you've got more than 80 cables in the closet.

One thing that I've learned when designing networks is that things always seem to grow more than you think they will. For this reason I think your 4006 in the wiring closet is a good choice. I'd also recommend going L3 in the initial rollout. By the time you have 50 people per floor plus all their printers, devices, IP phones, etc. you're going to be at around 125 IPs per floor, 800 devices eventually once all 6 floors are in place. Even three floors gives you a lot of users, far beyond what you'd want on one VLAN. You might as well plan for it going in and get the L3 in place up front so you don't have to rebuild later. You could try to set up L3 switching via your router one-armed, but you're going to lose a LOT of performance that way. The only time I would see you not needing L3 switching here would be if there were NO servers and all traffic left via the ATM. Then routing speeds wouldn't be an issue, as the WAN speed would be the slow link. I'd still go L3, however. Worth its weight in gold. And don't believe the NT/Unix guys when they say "Nooo. We'll NEVER put any servers in there! I Promise!!!"
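A quick sanity check on those numbers, assuming one data subnet per floor (the per-floor estimate is from this thread; the subnet choices are just illustrative):

```python
# Rough capacity check for per-floor subnets.
# ~50 users plus printers, phones, and other devices per floor.

def usable_hosts(prefix_len: int) -> int:
    """Usable host addresses in an IPv4 subnet (total minus network + broadcast)."""
    return 2 ** (32 - prefix_len) - 2

per_floor_estimate = 125   # estimate of IPs per floor from the thread
floors = 6

print(usable_hosts(25))              # /25 -> 126 usable hosts, a tight fit per floor
print(usable_hosts(24))              # /24 -> 254, comfortable headroom
print(per_floor_estimate * floors)   # ~750 devices building-wide
```

Which is why a single flat VLAN gets uncomfortable well before all six floors fill in.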

You'll need two 6-port gig cards in your 4006, the supervisor card and the L3 card (which gives you 32 FE's and 2 more gigs). That leaves you with two open slots for future growth, plenty of expansion.

As to your router, yes, a 3640 is a good choice - It should handle an OC3 pretty easily. If you're concerned about redundancy the 3660 is a bit better but quite a bit pricier. Only catch with the 3600 series is that they don't support gigabit Ethernet - You'll need to channel a couple of FE's together to provide full access to your OC3.

At my last job I put in about 50 6509's, most filled to the gills with 48-port cards. This was before the 4006 was really available (Too bad - It's a great cheap high-density switch compared to the 6509's). One of the things we did with our wiring closet might be of use to you (and others doing enterprise design). We wired up EVERY jack in the closet to a switch and kept a very detailed table of what jack went to what switch port. When a user had a problem we could easily identify their port and make whatever changes needed to be done at the network level. Of course, we had a large variety of systems - Unix, PC's, etc. that needed a lot of VLAN and port speed/duplex maintenance, which might not be applicable for you, but it certainly saved us a lot of headaches. I had our development team write us a web app which allowed the technicians to go in and view (and change) port settings for end users without needing to talk to my team. It saved us at least one FTE's worth of admin.

With the 2980's you could do this fairly inexpensively, provided you can convince your boss to spend the cash. I had much more fast talking to do at $70K/switch. *grin*

If you've got the cash (a lot of it!) one other option comes to mind... You could use a 6500-series switch in the MDF and use one of the WAN blades within the 6509. You wouldn't have to mess with two devices and everything could be consolidated into one. I've used this with a HSSI card (a couple of DS3's) and really liked it. Definitely not a cheap option; it could be about $80K or higher for that one box.

Best of luck!

- G


 
You say programming is hard! It seems like you guys are speaking German to me 😛

oh and I think you should use the 2112 and then plug that into the 5150 so the 316 will be running at optimal levels 😛
 
garion

Well to tell you the truth, 50 people per floor really is about the most they could put in. So add at most 15 printers.

hmmmm...4006 in every wiring closet. hmmmm, me likes open slots (they ALWAYS get filled in the end anyway).
hmmm...6500 with WAN card (OVERKILL) me likes anyway...it does address CTRs concern of routing power in the 3640.
me wonders if it is too much overkill...hmmm.

mucman, yeah I'd like to do your suggestion but the 5150 doesn't mate well with the 316. male-to-male don't ya know. 🙂
 


<< garion

Well to tell you the truth, 50 people per floor really is about the most they could put in. So add at most 15 printers.

hmmmm...4006 in every wiring closet. hmmmm, me likes open slots (they ALWAYS get filled in the end anyway).
hmmm...6500 with WAN card (OVERKILL) me likes anyway...it does address CTRs concern of routing power in the 3640.
me wonders if it is too much overkill...hmmm.
>>



50 people - Depends on who they are. If they are engineers they might have 2-3 computers per cube. If they are more generic business folks then only one or two. Add in capacity for IP telephones, etc. The number gets big quickly.

The 6509's rock - They are incredible with their monster (256Gb/s!) backplanes and total redundancy. You will NEVER regret putting one of these in with a WAN blade. It's very cool - It just looks like another switch port in the L3 config. Plays nicely with other routers, however.

Just remember.. If you build it, they will come!

- G
 
yeah, love the 6500s. have about 80 of them.

But can I really live with myself for putting in a 160K network for 150 people? That's over 1000 per port!!!!!
 


<< yeah, love the 6500s. have about 80 of them.

But can I really live with myself for putting in a 160K network for 150 people? That's over 1000 per port!!!!!
>>



Don't think of it as for 150 people.. Think about it as supporting a whole building and nearly a thousand end devices in the long run. In reality, your 6509 would probably run you about $80K and three 2980's about $22K. Throw in cables and GBICs and you'd be in it for about $110K. Yes, still a lot of cash, but hey - You get what you pay for.
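Running the per-port math on those figures (the $80K and $22K are from this thread; the cable/GBIC figure is an assumption chosen to land near the quoted ~$110K total):

```python
# Ballpark cost-per-port for the proposed build.
core = 80_000              # 6509 with blades (figure quoted in the thread)
access = 22_000            # three 2980s (figure quoted in the thread)
cables_and_gbics = 8_000   # assumption, rounding to the ~$110K total quoted
total = core + access + cables_and_gbics

fe_ports = 3 * 80          # 80 10/100 ports per 2980
print(total)                    # 110000
print(round(total / fe_ports))  # ~458 per FE port
```

So counting only the access ports it's under $500 each, well below the "$1000 per port" sticker shock, and the denominator only grows as floors fill in.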

I guess I assumed by VOIP you meant Cisco IP phones. If that's not in the picture then you wouldn't need as many ports or as much density.

- G
 
I like Garion's suggestions. I would get a 6500 with the Supervisor Engine and boost the RAM. With the ability to do Layer 3, I would hook it up to the 2980G for complete Layer 3 throughout. I believe with the 6509 you can connect up to an OC-48 line, either SONET or ATM. I know that I will be corrected. 🙂

Since you have a complete Cisco network, have you guys made the move to Windows 2000 at the client and server level? If you haven't and want a value-add, try looking at Cisco Networking Services 1.0 for Windows 2000 Active Directory. If you want to truly provide QoS, that is the product for you. With Windows 2000 Active Directory's built-in support for QoS, you can not only provide QoS from the client to the server, but also client to switch to switch to router to router to switch to server. Also, you can manage every piece of Cisco equipment from Active Directory. Try doing that in NDS (just for you CTR! 🙂) See the point? A truly great product.
 
The 6509 will handle up to OC12 for LAN networking, but I believe to use SONET you need the FlexWAN blade and the appropriate line card, OC3 in this case. It also depends on what interface your telco is giving you for your OC3.

- G
 
Hmmm this is quite amazing. You guys always impress me. My CCNA and CCDA don't weigh much when compared to experience 🙂. It seems like we have a new kid in town: GARION. CTR is here as usual, and don't forget Xanathar and Shadow who always succeed in giving some tips about win2k and AD 🙂. Now where the hell is damaged? 😀
 
I can't justify to myself L3 switching with 150 people. this place is gonna hold at max 300 which is fine for a single VLAN.

Now, if I decide to use a 6500 with flex WAN with OC-3 int then L3 is required.

shadow07,
We don't use microsoft. 🙂 primarily sun. nice not having things crash all the time. That product you talked about sounds pretty cool, although the V1.0 scares me. not to mention MS is dropping 2000 anyway.

maybe I should just chain a bunch of 24 port switches together?
 


<< We don't use microsoft. 🙂 primarily sun. nice not having things crash all the time. That product you talked about sounds pretty cool, although the V1.0 scares me. not to mention MS is dropping 2000 anyway. >>



300 people in one vlan? You guys must supernet or something. I'm always of the "smaller is better" philosophy in VLANs - Just a different style of networking. I know that with switching it really doesn't matter too much - Just a different way of doing things.

Actually, the way that Microsoft consultants have described XP Server & Pro to us is that it's "everything we wanted to make in 2000 but didn't have the time to get it all in". It's going to take even fuller advantage of Active Directory than 2000. Think of it this way.. Windows 2000=Windows 95. Windows XP=Windows 98. It's definitely not going away! Believe me, we're rolling it out to 45,000 people - It better not go away!

- G
 
I heard they were dropping development and support for win2K in Q4 2001?

I thought MS considered 2K a flop and are cutting their losses? Then again, I'm not Northwest like you, garion.

One last thing. 300 people on one VLAN is fine. Just don't use a broadcasting-to-heck OS like windows. Set the ARP timeouts on hosts to like 1 hour and you'll have very little broadcasting going on.

Otherwise, if you run windows then try to keep it below 200. I generally recommend a 25-bit mask to stay even smaller. Then again, with L3 who cares how small/big the subnet is. bwahahahahaa
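On the router side, the one-hour ARP timeout is a single line under the interface. A sketch, with the interface name purely illustrative:

```
interface Vlan10
 arp timeout 3600    ! one hour, down from the IOS default of 14400 seconds (4 hours)
```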

<edit> aww screw it. I'll price it out tomorrow. core=6509/msfc/pfc/flexwan/OC3, access closets=2980s. If I need more ports in the access I can always throw in another 2980, although I can't think of a more expensive option. I can see the look on the face of the dude whose budget I'm hitting now..."what do you mean it will cost 130K for a network? With as cheap as network gear is these days? Why, just the other day I saw a 16 port ethernet switch for 80 dollars!!! We'd only need ten of those and that's still below 1000???!!! I only budgeted 5000." Guess IT wasn't involved in the budget process for this project, huh? You guessed wrong, sir.
 
You'll be happy you did.. Also, if this is a remote network think hard about pre-wiring all the ports. You'll be happy for it in the future.

One caveat - If you think you're going to add additional 2980's in the IDF's, you'll either have to buy the 16-port gig blades for extra capacity or chain the 2980's together and use trunking rather than channeling. IE..

6509 3/1 -> 2980 #1 GE1
2980 #1 GE2 -> 2980 #2 GE1
2980 #2 GE2 -> 6509

If you lose a switch or a GE port you're still alive via the other path. I've done this a few times and spantree takes care of it nicely. Yes, it's cleaner to uplink all of them down to the 6509, but you need twice the port density. Performance really isn't an issue - Your uplink speed is 155 Mbps, so a single gigabit without channeling is MORE than enough for 80 (or even 160 in case of a failure) users.

Although.. If money is at all tight, you'd be JUST fine with a 4006. Might as well buy the L3 card up front, however - It gives you some gig ports and some 10/100's that you'd otherwise have to buy additional cards for.

- G
 
I think you are getting good advice here. Sounds like you know your needs and have covered the bases.

One comment on the structured wire. I would pull some single mode along with the 50/125. I do not think multimode is going to have good distance at 10Gbps or above. Remember, fiber is nearly free; it's the labor to pull it (and terminate it) that costs money.

I would like to address the 24-port vs. chassis issue.

Bluntly, both have advantages and disadvantages, and if you and your staff have a preference for one or the other, stick to your guns.

Also, the Cisco phones should have a separate VLAN, and pay attention to QoS issues end to end.

Also, multicast pruning is very important if lots of multicast traffic is generated. The multicast experts at the Interop show seemed to like PIM sparse mode whenever possible.

Good Luck;
 
l3guy,

good idea on the single mode. Jeesh, I never thought I would see the day when you'd need single mode for intrabuilding runs. But I do remember that we're very close to the limits of what MM can do.

On the multicast thing:
Definitely sparse mode only. I wouldn't dare do dense in a network of this size. Actually I run sparse-dense with auto-RP. mcast tables look great!!!! Works like a champ. Actually got some training from the inventor of PIM and modern multicast.
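For anyone following along, sparse-dense with Auto-RP is only a few lines of IOS. A sketch, with the interface and loopback names purely illustrative:

```
ip multicast-routing
!
interface Vlan10
 ip pim sparse-dense-mode
!
! Auto-RP: announce this box as a candidate RP and act as the mapping agent
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16
```

Sparse-dense mode lets the Auto-RP groups themselves flood dense while everything else runs sparse once the RP mapping is learned.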
 