Let's talk Cisco Nexus and unified LAN/SAN fabric

spidey07

No Lifer
I haven't really been exposed to it much, but it's where we're headed. Been getting ready to do a large Cisco UCS implementation; it's pretty awesome technology that lets data centers scale virtual machines quickly.

The unified fabric approach is really just about combining storage and network traffic on a single link or a single data center switch. The UCS solution is blade chassis with two 10-gig Ethernet links that carry both LAN and FCoE (Fibre Channel over Ethernet) traffic. That's the only connection they have, so cabling and overall management are a breeze compared to traditional blade centers and all their associated switches.
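
For anyone curious what that looks like on the switch side, here's a minimal sketch of carving out an FCoE VLAN, mapping it to a VSAN, and binding a virtual Fibre Channel interface on a Nexus 5000. The interface, VLAN, and VSAN numbers are made up for illustration:

feature fcoe
! map a dedicated FCoE VLAN to a VSAN
vlan 100
  fcoe vsan 100
vsan database
  vsan 100
! the converged Ethernet link down to the server's CNA
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 10,100
! the virtual Fibre Channel interface rides on that Ethernet port
interface vfc1
  bind interface Ethernet1/1
  no shutdown
vsan database
  vsan 100 interface vfc1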

Anybody ever mess with Nexus or UCS?

http://www.cisco.com/en/US/products/ps10265/index.html
 
I have several HP c7000 blade chassis running and I use a lot of VMware (ESX). I use pass-through GigE or 10-gig Flex connect modules on the blade systems. Not a big fan of FCoE; I primarily use iSCSI because I have not found a real-world performance advantage to FCoE (plus the added disadvantage that it's not routable, so it's less flexible). I still also use a lot of native HBAs to connect to my SAN fabric. The blade systems are a godsend, not only for simplified cabling but also for iLO management and the ability to quickly install an OS on the blades.
 
I use it quite a bit. Remember, the Nexus line is not a replacement for the 6500 series; they are leaving features like NAM and VSS out in an attempt to keep the lines separate.

The rollback feature is nice; the lack of a 1-gig fiber blade (to my knowledge there isn't one) is not. copy run start vs. wr takes some getting used to. Being able to type ip route 192.168.1.0/24 instead of ip route 192.168.1.0 255.255.255.0 is another neat little addition.
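
To put the NX-OS conveniences side by side, here's a quick sketch; the next-hop address and checkpoint name are just placeholders:

! IOS style
ip route 192.168.1.0 255.255.255.0 10.0.0.1
write memory
! NX-OS style: CIDR masks, and no wr alias
ip route 192.168.1.0/24 10.0.0.1
copy running-config startup-config
! NX-OS rollback: snapshot the config, revert later if the change goes bad
checkpoint pre-change
rollback running-config checkpoint pre-change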
 
A lot of the inside guys at Cisco tell me the opposite. The 6500 line is severely dated for 10-gig performance and is best served as an end-of-row solution for LAN access and services only. It's over 10 years old, and it shows.

Cisco will never come out and admit it, but the line is dead, IMHO.
 
We're rolling out Nexus 5Ks & 7Ks in our new datacenter. Our server guys, though, aren't open-minded enough to accept FCoE, so we're keeping SAN on the MDS 9500s for now.

They're very nice switches. I like the fact that they do front-to-back airflow to support cold/hot aisles, but their 7018 is still side-to-side to save chassis space.
 
I was told they are not wanting to abandon it, and that's the reason some feature blades are not available. This was from the trainer who taught my Nexus class, who also works for Cisco, so who knows.
 
I've designed and implemented a few Nexus solutions so far, as well as attended a training seminar on their capabilities. They are a little flaky from time to time and could probably stand to mature a bit, but for the most part I have no gripes about what they're capable of. As for them replacing 6500s... not until they build out more features and modules for them, but I can see where they're going with it, and they definitely have a lot of potential to replace 6500s. The instructor in the class I took never came out and said they were looking to replace 6500s, yet every design model happened to have 7Ks in the core with 6500s off to the side for "service" only.

Of the implementations I've done, no one has taken advantage of running FCoE alongside their typical LAN connection. People are still a little hesitant to jump on that boat, IMO. We'll see if that changes when multi-hop FCoE hits the streets.
 
I'm deploying Nexus at the moment (5Ks and 2Ks). Eventually we will add a pair of 7Ks to replace our existing DC solution, which happens to include 6500s. I really like the UCS concept from top to bottom, and our server guys are exploring the Cisco servers and evaluating them against HP.

Our storage team plans to start testing the FCoE capabilities soon as well. There could be potential for big savings in terms of NICs if that pans out for them.

WRT the 6k line, I don't believe they are going away anytime soon. The Sup2T supervisors are on the way to release (if not already) and will support dramatically increased speeds in the E-series chassis. Not Nexus 7K speeds, but plenty for the core/distribution layer in a lot of environments. Feature support includes just about everything one would need, including emerging technologies like OTV and LISP. So it's a matter of comparing options at price points, and I think the 6k will do well in many cases, especially where people already have the chassis in a rack.
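
For reference, OTV (one of the emerging features mentioned above) looks roughly like this on a Nexus 7000; the interface, multicast groups, and VLAN ranges here are invented for illustration:

feature otv
otv site-vlan 99
! the overlay interface extends the chosen VLANs across the DCI
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 10-20
  no shutdown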
 
HP kept trying to say FCoE was too new and that there were no standards for congestion control: BCN vs. QCN, and what happens with the head-of-line blocking that can occur when you hop more than a single switch. I don't know the merit of it, as I'm not a storage guy and haven't really dug deep into FCoE, but the general consensus is that it WILL replace Fibre Channel in the data center.
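
On the congestion-control point, the piece that ships today is priority flow control with a lossless (no-drop) class for FCoE. A minimal Nexus 5000-style sketch, with the policy name as a placeholder:

! give FCoE lossless treatment via a no-drop class with FC-sized frames
policy-map type network-qos fcoe-nq
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
system qos
  service-policy type network-qos fcoe-nq
! negotiate PFC with the attached CNA
interface Ethernet1/1
  priority-flow-control mode auto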
 
I guess I don't see the point of FCoE... just use iSCSI. At the end of the day it's still Ethernet and iSCSI is cheap and easy. If you are going to do FC, do it right from the get go. Just my two cents.

I'm a SAN/NAS engineer and work on NetApp hardware, pretty much their entire range.
 
The line of thinking is that switches can be hardware-optimized for FCoE.
 
We recently had a Cisco SE/AM come in to do their UCS presentation, and the technology and direction Cisco is going with this is quite amazing.

The most impressive things about the Cisco UCS platform are:

1) A proprietary memory controller which allows for the highest-density blade/rack server memory capacities (a big plus for virtualization in the large SP space).

2) A unified single management interface with delegated access for network/system/storage admins.

3) A unified interface for KVM, iLO-style management, network (including vNICs), and storage (Fibre Channel encapsulated over Ethernet).

4) The ability to template your blade hardware deployments (profiles including BIOS, NIC, VLAN, and FCoE settings); see the sketch after this list.

5) UCS will eventually become available for their rack-mount servers as well.
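
On point 4, a service-profile template from the UCS Manager CLI looks roughly like the sketch below. This is written from memory and the policy/object names are placeholders, so treat the exact syntax as an approximation rather than gospel:

scope org /
! an initial template that individual blades get instantiated from
create service-profile esx-host-template initial-template
  set bios-policy default
  ! vNIC pinned to fabric A; VLAN membership hangs off it as an eth-if
  create vnic eth0 fabric a
    create eth-if production
  ! vHBA for the FCoE side of the converged link
  create vhba fc0 fabric a
commit-buffer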

I see many concerns over FCoE issues, but our SE regards it as something that improves the capability of FC, as it now includes control methods for traffic (which traditional FC does not have, other than buffering), and is totally transparent at the system and storage levels.

Cisco UCS is mainly targeted at large SPs, but eventually this will be technology more geared towards enterprise/medium businesses which have not already heavily invested, or which are looking for more "unified" hosting environments for their push towards virtualization.

Chime in.
 
Careful about a vendor saying there are no concerns with FCoE; they're trying to sell you something. I'm still unclear about the congestion management standards vs. what Cisco is doing. Cisco has a way of doing things their own way when a standard isn't in place and then trying to get the standards passed the way they're addressing the "problem" or feature.
 
True, I always take information from vendors with a grain of salt, but I am not clear what you mean by congestion standards. Our SE claims everything is configurable via adaptive QoS: if nothing else is using the bandwidth, it can give one traffic class everything, or it can cap how high each class tops out.
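
What the SE is describing sounds like the ETS-style bandwidth sharing you would configure on a Nexus; a minimal sketch, with the percentages and policy name made up:

! guarantee each class a minimum, but let either borrow idle bandwidth
policy-map type queuing lan-san-queuing
  class type queuing class-fcoe
    bandwidth percent 50
  class type queuing class-default
    bandwidth percent 50
system qos
  service-policy type queuing output lan-san-queuing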

I am not a storage admin, so I am not certain of the standards in the FC area, but I don't think Cisco would market, produce, or sell a product that did not follow/work with storage FC standards.
 
😀

Cisco has a hard enough time following standards in the IP networking world. What makes you think their storage products are any different?
 
Cisco has a big interest in IBM servers since their own are piddly. But with the four-channel (lol) 8-core MP chips out, a few people (ahem, Dell) are stealing DMI links to add more memory banks. It comes at a price, of course: the #1 feature of Nehalem/Westmere is the direct memory bus, and slowing it down or burdening it with features it doesn't want, like more memory channels than natural, frequently costs 25% in memory speed and even more in latency. What we're about to see is a big change now that 8 GB RDIMMs have reached price parity with 4 GB DIMMs. With the next-gen MLC/SLC 100/200/400/800 GB drives (!!) due in February, with controllers to back them up, the amount of RAM won't need to be as much, given the time to access your entire database on DAS (or tmp/log). This dynamic is going to change the needs of storage permanently. If you can have eight 800 GB drives with nuclear speed, the amount of RAM needed will be less, and of course the FCoE/iSCSI/FC/etc. infrastructure will change dramatically. If you can have a single database server at DDR3-1333 with 96 GB and 2-4 TB of DAS (drive-slot based) server-grade MLC (and SLC too), traditional storage structures like iSCSI will have a lot more life in them. FCoE? It's too new. But many admins will see no problem filling a server with a crapton of SSDs in a DAS formation to reduce SAN traffic, since that $5M EMC is getting crusty slow.

Seems they have a lot of iPads on PRP lately. Between that and Bose, those seem to have the highest point-to-dollar value ratio lately, huh 🙂
 
Cisco makes up their own standards. While they are ubiquitous in the datacenter, they are not in servers, storage, etc.

Nobody wants the extra overhead of FCoE. FC works as-is, and if I want less (oh my god) you can do 10 GbE iSCSI, which works quite fine already, and has been.

So why rock the boat? FCoE is VERY expensive to tie in if you need to add new gear. HP's Virtual Connect Ethernet for BladeSystem AND ProLiant (!!) makes it easy to consolidate iSCSI (again, 1-10 gig speed) without adding the expensive joo-joo.

I am curious what other players there are in the CNA arena. It's going to take a real storage company going ALL-FCoE-only (3PAR?) to force people to jump ship from something that works really well, 8 Gbps FC or 10 Gbps iSCSI. Hell, most people don't even saturate 1-gig iSCSI (MPIO) in real-world apps that are not backups or streaming media (SQL).
 
Emulex, you're seeing the HP/Cisco divorce. Each started dipping into the other's respective pie. I'm now focusing my energy on data center LAN/SAN architecture. The merge is coming. It did me well to merge my voice into LAN.

The shift is coming. How long has the concept of the CNA been around? Adapt or be left behind.
 
I'm seeing people being frugal and careful in this economy. Most people are using their FC SAN and do not wish to add another point of failure or two (Nexus; two at least, to prevent a single point of failure) to gain what? Nothing.

Cisco needs modern storage partners that are willing to jump to FCoE-only or it's all in vain. Quite honestly, iSCSI isn't as popular as it should be. Nor is AoE. I'd like to see the share of the core storage market shift.

Tunneling a protocol through a gateway is just bound to add more troubles than we can afford right now. If it was dot-com 3.0, then maybe, but you are talking a lot of money for near-zero gain. 10-gig iSCSI could easily replace your EMC at 4 gig, yeah? But at what cost? At what gain? Likewise, what is FCoE going to buy you?

Most folks right now don't have the cash to remove their $5-10M EMC and replace it. Just sayin' what I see. New opportunities, maybe, but it is truly unwise to lock yourself into a single-vendor solution.

Not sayin' it is bad; it is just not for everyone, and you definitely must weigh the unknowns/consequences of the choice against what works flawlessly. You know why they sell Emulex/QLogic from every server distributor: some SANs work better with some HBAs. That itself speaks volumes still.
 
VTP vs. GVRP
ISL vs. 802.1q
PAgP vs. LACP
TACACS+ vs. RADIUS
EIGRP vs. OSPF
HSRP vs. VRRP

I could probably go on.

No, I want examples of how "they have a hard time following standards". All of the protocols you list are supported on all Cisco gear and work very well in a mixed-vendor environment. Just because Cisco develops their own standards in parallel with (or usually in advance of) industry standards doesn't mean that they have a hard time following them. If you can show me an example of a standard that they have eschewed in favor of a proprietary protocol, then you will have answered my question.

Edit: Except GVRP; I'd never heard of that one and just googled it. So I guess we have one example, since I assume it's not supported on Cisco gear. My point still stands on the other five.

Also, EIGRP is a different protocol than OSPF with different strengths and weaknesses.
 
As long as you realize most of the Cisco proprietary stuff came before there was any standard like these...
ISL vs. 802.1q (ISL isn't even supported on modern Cisco gear)
PAgP vs. LACP (PAgP is better anyway)
HSRP vs. VRRP (HSRP is better anyway)
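
For what it's worth, the two first-hop redundancy protocols configure almost identically; a quick IOS-style sketch with made-up addressing:

interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 ! HSRP (Cisco): virtual gateway at 10.0.10.1; preemption must be enabled
 standby 1 ip 10.0.10.1
 standby 1 priority 110
 standby 1 preempt
!
interface Vlan20
 ip address 10.0.20.2 255.255.255.0
 ! VRRP (standard): same idea, different keyword; preempts by default
 vrrp 1 ip 10.0.20.1
 vrrp 1 priority 110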
 
But the big, huge "virtualize everything, throw 100s of VMs on blades" push is pushing the envelope of Fibre Channel and networking, which now seem to be the bottleneck. And vMotion is truly changing the way LAN/SAN are being built where bleeding-edge performance is required.

And don't get me started on this data center bridging stuff.
 