Design & Build

Data Center Infrastructure, Storage-Deployment Guide: SAN Core Building Block with Cisco NPV

Published 07-25-2012; edited 08-06-2014.

Synopsis: Detailed deployment configurations for connecting a Brocade SAN Core to Cisco switches using NPV mode.

 

Contents

 

Preface

Overview

Brocade DCX Backbone switches are acknowledged as the industry's leading chassis for Fibre Channel storage area networks. In environments that include Cisco Fibre Channel or converged networking, reliable interconnections with the Brocade DCX Backbone are required. Both Cisco and Brocade provide implementations of the ANSI T-11 N_Port ID Virtualization protocol. In addition, Brocade's Access Gateway (AG) mode and Cisco's N_Port Virtualization (NPV) reduce the number of domains (switches) in a fabric, which is beneficial when many top-of-rack Fibre Channel or FCoE switches are used. This document covers configuration of Cisco Nexus 5000 and UCS 6120 Fabric Interconnect switches for NPV when connecting them to a SAN Core Building Block built on the Brocade DCX Backbone Fibre Channel switch with Fibre Channel storage. See "Related Documents" for more information about SAN Core Building Blocks.

 

Purpose of This Document

This document provides detailed configuration of Cisco N_Port Virtualization (NPV) with Brocade DCX Backbone Fibre Channel switches. Configurations include the Cisco Nexus 5000 and the Cisco UCS 6120 Fabric Interconnect.

 

NOTE:

In this document, “Cisco Nexus 5000” or simply “Nexus 5000” is used to reference the Cisco Nexus 5010 or 5020 models.

 

Audience

Network architects, designers and administrators will find useful information about configuration and deployment of Brocade DCX Backbone Fibre Channel switches with Cisco’s NPV.

 

Objectives

The SAN edge connects servers to the SAN core, where storage is managed. The SAN edge is subject to several changes, including server virtualization, 10 GE adoption (at the switch and in the server), and converged networking (IP + storage traffic). Consequently, the SAN core, which relies on Fibre Channel, has to scale as more server racks are deployed with top-of-rack Fibre Channel switches and server virtualization. A complication comes from the limit on the total number of switches in a fabric, due to the fact that the Domain ID address space has a maximum of 239 Domain IDs per fabric as specified in the ANSI T-11 standards. For operational reasons, the total number of Domain IDs (switches) per fabric is often much less than this maximum.

 

To address this, Brocade introduced Access Gateway mode, which eliminates the use of a domain ID for top-of-rack Fibre Channel switches, including the Brocade 8000 FCoE switch. Later, Cisco introduced a similar capability called N_Port Virtualization (NPV) for the Nexus 5000 and UCS 6120 Fabric Interconnect switches.

When a switch is configured for NPV, it must connect to a full function Fibre Channel switch so all Fibre Channel services are available to it by proxy. It is common to configure NPV on switches at the SAN edge so the SAN core switch(es) can provide full Fibre Channel services.

 

For customers with Brocade DCX Backbone Fibre Channel switches, this Deployment Guide provides validated procedures for Cisco NPV configuration.

 

Related Documents

 

References

  • Fibre Channel Core, Core Backbone Building Block

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Lead Engineer: Michael O'Conner, Strategic Solutions Lab

 

Document History

Date                  Version        Description

2012-07-25         1.0                Initial Release

 

Technical Architecture

The following configurations are provided in this guide:

 

Configuring the SAN Core with Cisco Nexus 5000 NPV

Configuring the SAN Core with Cisco UCS 6120 Fabric Interconnect NPV

 


 

NPV Overview

 

The Cisco feature, N_Port Virtualization (NPV), is similar to the Brocade® Access Gateway feature. While NPV is similar to the standard N_Port ID Virtualization (NPIV), it does not offer exactly the same functionality. NPIV provides a means to assign multiple Fibre Channel (FC) IDs to a single N_Port and allows multiple applications on the N_Port to use different identifiers. NPIV also allows access control, zoning, and port security to be implemented at the application level. NPV makes use of NPIV to get multiple FC IDs allocated from the core switch on the NP port (see definition below).

 

NPV Mode

A switch is in NPV mode after a user has enabled NPV and the switch has successfully rebooted.

 

NOTE:

Enabling NPV mode on a Cisco switch is disruptive to all traffic on the switch since the switch must be rebooted. Ensure you have adequately prepared for this.

 

NPV mode applies to an entire switch. All end devices connected to a switch in NPV mode must log in as an N_Port to use this feature (loop-attached devices are not supported). All links from edge switches (in NPV mode) to NPV core switches are established as NP ports, not as the E_Ports typically used for Inter-Switch Links (ISLs). NPIV is used by the switches in NPV mode to log in multiple end devices that share a link to the NPV core switch.

 

NP Ports

An NP port (proxy N port) is a port on a device that is in NPV mode and connected to the NPV core switch. The Core switch in this Deployment Guide is the Brocade DCX® Backbone. Ports on the DCX Backbone connected to the Cisco switch in NPV mode act as Fibre Channel F_Ports. NP ports behave like N_Ports except that in addition to providing N_Port behavior, they also function as proxies for multiple physical N_Ports.

 

NP Links

An NP link is basically an NPIV uplink to a specific end device. NP links are established when the uplink to the NPV core switch comes up, and the links are terminated when the uplink goes down. Once the uplink is established, the NPV switch performs an internal FLOGI to the NPV core switch, and then (if the FLOGI is successful) registers itself with the NPV core switch's name server. Subsequent FLOGIs from end devices in this NP link are converted to FDISCs.
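
 

The effect of an NP link can be observed from the core switch: on the Brocade DCX Backbone, the link appears as a single F_Port carrying multiple NPIV logins. A minimal check from the FOS CLI follows; the parenthesized lines are descriptive annotations, not literal output:

----

DCX-1:admin> switchshow

(the port connected to the NPV switch reports as an F-Port with NPIV logins)

DCX-1:admin> nsshow

(the name server lists every device logged in through the NP link)

----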

 

Best Practices

The following are best practices for the configurations included in this Deployment Guide.

 

  • Configure at least two NP connections from the Nexus 5000 to the Brocade DCX Backbone for High Availability (HA).
  • Put NP connections on different blades in the Brocade DCX chassis.
  • If using Cisco switches with a single supervisor, consider deploying two Cisco switches to avoid an outage in the event of a Cisco switch failure. In this case, deploy storage multi-path software device drivers in the Windows server.
  • For highest availability and resiliency, deploy two physically separate SAN fabrics between servers and storage. In this case, deploy storage multi-path software device drivers in the Windows server.
  • The Nexus 5000 can take anywhere from 7–10 minutes to reboot. You can decrease reboot time with the diagnostic bootup level bypass command (see the example after this list).
  • With NX-OS releases after NX-OS 4.0(0)N on the Cisco Nexus 5000 switch, VLANs need to be dedicated to FCoE traffic. Plan VLAN use accordingly.
  • Make sure that the NP ports on the Cisco Nexus 5000 are in the correct VSAN.
  • Be sure Priority Flow Control (PFC) is enabled on the Nexus 5000 switch, which is the default configuration.
  • Link-Level Flow Control is disabled by default on the Ethernet interface on the Nexus 5000; it can be enabled only if PFC is disabled on the interface.
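
 

As an illustration of the reboot-time and flow-control items above, the following Nexus 5000 commands can be used. This is a minimal sketch; the parenthesized lines are descriptive annotations, not literal output:

----

Nexus5k# conf t

Nexus5K(config)# diagnostic bootup level bypass

(skips the complete power-on diagnostics to shorten reboot time)

Nexus5K(config)# exit

Nexus5k# show interface priority-flow-control

(confirms the PFC state per interface; enabled is the default)

----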

Configuring the SAN Core with Cisco Nexus 5000 NPV

This configuration uses a Cisco Nexus 5000 switch connected to a Brocade DCX Backbone switch. A native Fibre Channel storage array is connected to the Brocade DCX Backbone and a Windows server with a CNA configured for FCoE is connected to the Nexus 5000 switch as shown below.

 

Topology

The diagram below shows the deployment topology.

 

[Figure: Deployment Topology]

 

Pre-requisites

1.       Back up the running Nexus 5000 configuration and verify it has been saved to boot flash, as the running configuration is deleted when the switch reboots.

[Figure: Saving Nexus 5000 Configuration to Boot Flash]
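
 

A minimal CLI sketch of this backup step; the backup file name is an illustrative assumption:

----

Nexus5k# copy running-config bootflash:pre-npv-backup.cfg

Nexus5k# dir bootflash: | include pre-npv-backup

(confirms the backup file is present in boot flash)

----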

 

Bill of Materials

The following products are used in this deployment.

 

Identifier       Vendor      Model                   Notes

UCS Chassis      Cisco       Blade Server Chassis    (4) Blade Servers: B250-M1; Windows Server instance running on each blade server.

Nexus5K          Cisco       Nexus 5010

DCX-1            Brocade     DCX                     48 port 8 Gbps blades

 

Task 1: Configure NPV on Cisco Nexus 5000

Description

The Cisco Nexus 5000 is configured for NPV mode and Fibre Channel ports are created by enabling FCoE.

 

Assumptions

1.       The Cisco Nexus 5000 is fully operational.

 

Step 1:  Enable NPV Mode on Cisco Nexus 5000 Switch

Enter the following commands on the Nexus 5000 switch.

----

Nexus5k# conf t

Nexus5K(config)# npv enable

----

The Nexus 5000 will reboot and come back up in NPV mode.
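
After the switch returns, NPV operation can be confirmed from the CLI; the parenthesized line is a descriptive annotation:

----

Nexus5k# show npv status

(displays the NPV state and the external NP uplink interfaces)

----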

 

NOTE:

This could also be done from the Nexus 5000 Device Manager GUI.

 

Step 2:  Enable FCoE on Cisco Nexus 5000 Switch

By default, there are no Fibre Channel ports on a Nexus 5000. In order to enable ports for Fibre Channel, enable FCoE with the feature fcoe command. Then save the configuration and reboot the Nexus 5000.
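
A minimal sketch of this step from the CLI:

----

Nexus5k# conf t

Nexus5K(config)# feature fcoe

(enables FCoE; Fibre Channel ports become available after the reboot)

Nexus5K(config)# exit

Nexus5k# copy running-config startup-config

Nexus5k# reload

----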

 

Task 2: Configure CNA on Server

The Nexus 5000 uses FCoE for server connections, so traffic from the server must use a Converged Network Adapter (CNA) for transport.

 

Description

Consult your server and CNA documentation for how to configure the CNA and enable FCoE traffic on it.

 

Assumptions

  1. The Cisco Nexus 5000 is fully operational.
  2. Server Operating System supports FCoE device drivers
  3. Server Operating System supports CNA hardware

 

Task 3: Connect Cisco Nexus 5000 to Brocade DCX Backbone

Connect a cable between the Cisco Nexus 5000 and Brocade DCX Backbone. It is best practice to have at least two links for resiliency and availability, but more links can be used to provide greater bandwidth as required.

 

Description

Connect NP ports on the Cisco Nexus 5000 to F_Ports on the Brocade DCX Backbone, and configure the Nexus 5000 ports as NP ports in the correct VSAN.

 

Assumptions

  1. Proper optics and cables are available.
  2. Cable distances do not exceed maximum for cable type.

Step 1:  Configure Nexus 5000 Ports

1.   Display the Device Manager, right-click the FC Port, and select Configure as shown in the following screen.

 

[Figure: Nexus 5000 Device Manager, Configure NP Ports]

 

2.   Select the correct VSAN from the PortVSAN drop-down menu, and click the radio button for NP.

3.   Select the speed for the port. Make sure the port is in Service and the admin state is up. Then, click Apply and then Close to complete the operation.

4.   Repeat steps 2 and 3 for each of the FC ports on the Cisco Nexus 5000 that you wish to configure as NP ports attached to the Brocade DCX Backbone. When you are finished, the Device Manager will look similar to the following, where ports 1, 2 and 4 are configured as NP ports.

 

[Figure: Nexus 5000 Device Manager, Confirm NP Ports Configured]
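
 

If you prefer the CLI to Device Manager, an equivalent sketch follows; the interface number and VSAN ID are assumptions for illustration, and the parenthesized lines are descriptive annotations:

----

Nexus5k# conf t

Nexus5K(config)# vsan database

Nexus5K(config-vsan-db)# vsan 10 interface fc2/1

(places the FC port in the desired VSAN)

Nexus5K(config-vsan-db)# exit

Nexus5K(config)# interface fc2/1

Nexus5K(config-if)# switchport mode NP

(configures the port as an NP proxy port)

Nexus5K(config-if)# no shutdown

----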

 

Task 4: Configure Fibre Channel Zoning on Brocade DCX Backbone

 

Description

Enable device connectivity using the Fibre Channel zoning service. To create a zone on the Brocade DCX Backbone for devices connected to the Cisco Nexus 5000, the port World Wide Name (pWWN) of the server CNA connected to the Nexus 5000 FCoE port is required.

 

Assumptions

  1. All cables are connected between NP ports on Cisco Nexus 5000 and F_Ports on Brocade DCX Backbone.
  2. Links are up and operational with no errors.
  3. Server CNA is connected to Cisco Nexus 5000 switch and storage ports are connected to Brocade DCX Backbone.

Step 1:  Discover Port WWN of CNAs

Display the Cisco Nexus Device Manager and choose Interface -> Virtual Interfaces -> Fibre Channel. In the display, click the FLOGI tab to display the pWWN list as shown below.

 

[Figure: Nexus 5000 Device Manager, Display CNA Port World Wide Name]

 

You can also use the Nexus 5000 CLI to display the pWWN as shown below.

[Figure: Nexus 5000 CLI, Display CNA Port World Wide Name]
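
 

On a switch in NPV mode, the command behind this output is the NPV FLOGI table, which lists each logged-in device with its pWWN; the parenthesized line is a descriptive annotation:

----

Nexus5k# show npv flogi-table

(shows the server interface, VSAN, FC ID, port name/pWWN, and node name for each login)

----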

 

Step 2:  Create Zone(s) and Assign to Active Zone Set on Brocade DCX Backbone

Enter the pWWN of the CNA and the pWWN of the port on the storage array connected to the Brocade DCX Backbone into a zone. Add the zone to the current zone set and activate the zone set.
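
 

A minimal sketch of these steps from the Brocade FOS CLI; the zone name, configuration name, and WWNs below are illustrative assumptions:

----

DCX-1:admin> zonecreate "zone_n5k_server1", "10:00:00:00:c9:xx:xx:xx; 50:06:01:60:xx:xx:xx:xx"

DCX-1:admin> cfgadd "current_cfg", "zone_n5k_server1"

DCX-1:admin> cfgsave

DCX-1:admin> cfgenable "current_cfg"

----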

 

NOTE:

It is necessary to use the Brocade DCX Backbone to update and edit the SAN Fabric zones.

 

Confirm Correct NPV and Zoning Configuration

The following shows the storage manager display on the Windows server. In this configuration, the Windows server has a CNA configured for FCoE, and the Nexus 5000 is configured for FCoE and NPV. The storage LUNs are available via the Brocade DCX Backbone.

 

[Figure: Verify Windows Server Can Access Storage LUNs]

 

Configuring the SAN Core with Cisco UCS 6120 Fabric Interconnect NPV

This configuration uses a Cisco UCS 6120 Fabric Interconnect (FI) switch connected to a Brocade DCX Backbone switch. A native Fibre Channel storage array is connected to the Brocade DCX Backbone and a Windows server with a CNA configured for FCoE is connected to the UCS 6120 FI switch as shown below.

 

Topology

The diagram below shows the deployment topology.

 

[Figure: Deployment Topology]

 

Pre-requisites

  1. The Cisco UCS 6120 Fabric Interconnect is operational.

Bill of Materials

The following products are used in this deployment.

 

Identifier          Vendor      Model                           Notes

UCS Chassis         Cisco       Blade Server Chassis            (4) Blade Servers: B250-M1; Windows Server instance running on each blade server.

UCS-FI              Cisco       UCS 6120 Fabric Interconnect

DCX-1               Brocade     DCX                             48 port 8 Gbps blades

FC Storage Array    EMC         AX4                             Not limited to EMC Storage

 

NOTE:

In the following procedure, links are provided where a task or step was previously described. Click the link to display the step or procedure.

 

Task 1: Configure Cisco UCS 6120 Fabric Interconnect for NPV Mode

 

Description

The Cisco UCS 6120 FI switch should be set to End Host Mode (EHM); NPV operation is the default in End Host Mode.

 

Assumptions

  1. Cisco UCS 6120 FI is in default configuration.

Step 1: Boot the UCS 6120 FI Switch

Boot the UCS 6120 FI switch. It will come up in End Host Mode (EHM). In EHM, the UCS 6120 FI is configured for NPV.
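
NPV operation can be confirmed from the UCS CLI by dropping into the Fabric Interconnect's underlying NX-OS shell. A minimal sketch, assuming fabric A; the parenthesized line is a descriptive annotation:

----

UCS-FI-A# connect nxos a

UCS-FI-A(nxos)# show npv status

(confirms NPV is active and lists the NP uplink interfaces)

----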

 

Task 2:  Configure CNA on Server

(see "Task 2: Configure CNA on Server" under the Nexus 5000 configuration above for procedure details)

 

Task 3:  Connect Cisco UCS 6120 Fabric Interconnect to DCX Backbone

Connect a cable between the Cisco UCS 6120 Fabric Interconnect and the Brocade DCX Backbone. It is best practice to have at least two links for resiliency and availability, but more links can be used to provide greater bandwidth as required.

 

Description

Connect ports on the Cisco UCS 6120 Fabric Interconnect to F_Ports on the Brocade DCX Backbone.

 

Assumptions

  1. Proper optics and cables are available.
  2. Cable distances do not exceed maximum for cable type.

Step 1: Connect UCS 6120 FI to DCX Backbone

Connect cables between ports on the UCS 6120 and the DCX Backbone.
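
 

Once cabled, a quick check from the DCX Backbone FOS CLI confirms the links; a minimal sketch, with a descriptive annotation in parentheses:

----

DCX-1:admin> switchshow

(ports connected to the UCS 6120 FI should be Online and report as F-Ports)

----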

 

Task 4: Configure Fibre Channel Zoning on Brocade DCX Backbone

(see "Task 4: Configure Fibre Channel Zoning on Brocade DCX Backbone" under the Nexus 5000 configuration above for procedure details)

Comments
by Antonio on 07-25-2012 10:17 AM

Brook,

you did a good job.

I would suggest creating a Best Practices Guide from this procedure.

Antonio

by Brook on 07-25-2012 10:31 AM

Antonio,

Thanks for the compliment. However, I have a great team of knowledgeable folks who do all the hard work, and that really helps the content.

I'm curious about what you would want to add to create a "Best Practices" guide?  The goal of a Deployment Guide is to provide Best Practices.  And there is a section in this Deployment Guide listing them.

What do you think we should include that is missing?

Thanks for your thoughts.

by Antonio on 07-25-2012 11:55 AM

Brook,

Thanks for replying to my comment.

--->>> However, I have a great team of knowledgeable folks who do all the hard work, and that really helps the content.

I know, and that is the big difference between Brocade and Cisco

I love Brocade Documentation, why:

1) is simple
2) is clear about its content
3) and finally, is easy to understand

the user only needs to read it, that's all. However, I'm of the opinion that some Best Practices guides are more useful than a User or Admin guide.

I noted that this Guide contains a Best Practices section, but from my point of view it does not build its own "Best Practices Guide" as we know from other Brocade BPs.

"again, this is my opinion"

In reality I do not miss anything, but what would I add?

-Planning
-Implementing
-SAN Design deployment
-Implementing in Brocade Virtual Fabric Topology(!) <- not sure if any such config is supported

-Unsupported configurations or hardware restrictions(!), if any are present

and finally, maybe a section with

-Supported or Tested FOS and NX-OS Releases.

by Brook on 07-25-2012 02:16 PM

Antonio,

Thanks for identifying some additional information you would like to see.

You asked about Brocade Virtual Fabric (VF) support for this deployment. Since VF supports F_Ports and essentially that's all that's required for the DCX Backbone to connect to an NP port on the Nexus/UCS switches, there should be no issue when using VF.

We are going to publish some "Validation Testing" documents. These would cover the details for testing a deployment. The same procedures can be used by customers to ensure their configuration is working, and when creating internal test plans for evaluating Brocade products or features before actually deploying them. Stay tuned ...

Best.
Brook.