Design & Build

Data Center Solution, Storage-Deployment Guide: Brocade Gen 5 Fibre Channel with Pure Storage Flash Array

 

 

Preface

 

Overview

 

This document provides guidance for deploying Brocade Gen 5 Fibre Channel SAN products with Pure Storage FA-400 series flash storage arrays. The deployments shown have been validated and tested; a separate document provides information about the validation testing (see References).

 

References

 

Two deployments are presented: a native Fibre Channel SAN deployment, and an extension of it in which the hosts use FCoE to connect to the Pure Storage flash array.

 

Audience

Engineers responsible for design, deployment and operations who want to successfully add Pure Storage flash arrays to a Brocade Gen 5 Fibre Channel SAN fabric.

 

Objectives

This guide provides a base deployment using dual Fibre Channel SAN A/B fabrics and an alternative deployment in which hosts use FCoE to connect to the SAN while the flash storage arrays continue to use native Fibre Channel. The content in this deployment guide assumes a Fibre Channel SAN has been designed and installed in accordance with the Brocade SAN Design Guide (see Related Documents). The procedures in this deployment guide cover only the specific settings or configuration changes that ensure the best performance, reliability and management of a Gen 5 Fibre Channel SAN fabric when using Pure Storage FA-400 series flash storage arrays.

 

Related Documents

The following publications are useful when deploying Brocade Gen 5 Fibre Channel fabrics, Brocade VCS Fabrics and Pure Storage flash arrays.

 

References

 

Document History

Date                  Version        Description

2014-07-31         1.0                Initial Release

2014-08-14         1.1                Updated "SAN A/B Core-Edge Deployment Template"

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Test Architects: Mike Astry, Patrick Stander
  • Test Engineer: Randy Lodes
  • Publication Editor: Brook Reams

About Brocade

Brocade networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is realized through the Brocade One® strategy, which is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

 

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

 

To learn more, visit www.brocade.com.

 

About Pure Storage

Pure Storage has a simple mission: to enable the widespread adoption of flash in the enterprise data center. We're a team of some of the brightest minds in storage, flash memory and related technical industries. Founded on principles of teamwork and knowledge sharing, we focus on producing an exceptional result while we transform the landscape of the enterprise storage market (and have some fun along the way).

 

 

Technical Architecture

Fibre Channel SAN storage is commonly found in data centers, particularly where low and consistent latency, high availability and simplicity are required. For this reason, existing Fibre Channel SAN fabrics are being upgraded with storage arrays that use flash storage instead of rotating magnetic disks. For some applications, including databases, virtual desktop infrastructure, CRM and ERP systems, the addition of flash storage arrays can boost the return on investment of the application software (often much more expensive than the server and storage infrastructure it runs on) by significantly increasing transactions per second. Application license costs often increase with server cores and size, but typically do not change with the type of storage system. For this reason, adding high-speed flash storage to an existing Fibre Channel SAN is economically attractive compared with adding more servers to improve performance. The high IOPS and low latency of flash arrays far exceed those of disk storage; therefore, deployment requires eliminating IO bottlenecks in the Fibre Channel SAN, or the economic benefit of flash storage is not realized.

 

This deployment guide shows how to add Pure Storage flash arrays to existing Brocade Gen 5 Fibre Channel SAN fabrics, and to Brocade VDX switches supporting Fibre Channel and FCoE in a VCS Fabric.

 

Building Blocks

The Brocade Data Center Base Reference Architecture (see References below) includes building blocks for designing a Gen 5 Fibre Channel SAN and a VCS Fabric. An Edge block, Core block and VCS Fabric Edge block are included in this deployment example as shown below.

 

Fibre Channel Core Block

This block can include one, or as shown more than one, DCX Backbone. If multiple DCX Backbones are used, ISL connections between them can connect the chassis without consuming ports on the port cards.

 

 DataCenter_BlockSAN_CoreBackbone.JPG

 

   Fibre Channel Core Building Block

 

Fibre Channel Edge Block

This block shows the hosts connected to two Edge switches. Each Edge switch is in a physically separate SAN fabric, denoted "Fabric A" and "Fabric B". The Edge switch in Fabric A connects to the DCX(s) in Fabric A shown in the Core Block above.

 

DataCenter_BlockSAN_EdgeSwitch.JPG 

 

   Gen 5 Fibre Channel SAN-Edge Building Block

 

VCS Fabric Leaf Block

This block can be used as a leaf in a VCS Fabric leaf-spine topology. It uses the VDX 6730 Switch, which includes 8 Gbps Fibre Channel ports and 10 GE converged enhanced Ethernet (CEE) ports. For this deployment, the red Brocade ISL Trunk to the VCS Fabric spine switch is not used.

 

The Fibre Channel ports connect to the DCX Backbone switches in either Fabric A or Fabric B. The hosts connect to one or more VCS Fabric leaf switches using NIC Teaming on the host and vLAG on the VDX 6730 switches for high availability and resiliency.
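For reference, the sketch below shows a minimal NOS configuration for one vLAG member port on a VDX 6730. The port-channel number (10), interface Te 1/0/10 and the Layer 2 settings are assumptions for illustration only; the partner port on the second VDX 6730 is given the same channel-group so the host NIC team sees a single logical link, and FCoE provisioning of the CNA-facing ports is not shown here.

<==========>
VDX6730_066_075# configure terminal
VDX6730_066_075(config)# interface Port-channel 10
VDX6730_066_075(config-Port-channel-10)# vlag ignore-split
VDX6730_066_075(config-Port-channel-10)# switchport
VDX6730_066_075(config-Port-channel-10)# no shutdown
VDX6730_066_075(config-Port-channel-10)# exit
VDX6730_066_075(config)# interface TenGigabitEthernet 1/0/10
VDX6730_066_075(conf-if-te-1/0/10)# channel-group 10 mode active type standard
VDX6730_066_075(conf-if-te-1/0/10)# no shutdown
<==========>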

 

DataCenter_BlockVCSFabric_LeafConvergedDevices.JPG

 

   Brocade VCS Fabric-Fibre Channel SAN Building Block

 

References

 

Design Template

These building blocks are combined into a design template as shown below.

 

Template_SANA-BCoreEdge.jpg 

   SAN A/B Core-Edge Deployment Template

 

The topology is a dual core-edge SAN (A/B fabrics). The edge has both Fibre Channel attached hosts via Fibre Channel switches and Fibre Channel over Ethernet (FCoE) attached hosts via the Brocade VDX 6730 Switches. Two independent VCS Fabrics can be used at the edge if desired, each attaching to the Core block so that each VCS Fabric participates in either the Fabric A or Fabric B SAN. These switches have 10 GE ports with converged enhanced Ethernet (CEE) supporting FCoE and 8 Gbps Fibre Channel ports. The VDX 6730 Fibre Channel ports that connect to the core DCX Fibre Channel Backbone are configured as E_Ports, while the corresponding DCX Backbone ports are configured as EX_Ports to provide Fibre Channel routing.

 

The Core block connects existing disk storage arrays using 8 or 16 Gbps Fibre Channel ports and the Pure Storage flash array using 8 Gbps Fibre Channel ports. ISL trunks with 16 Gbps links connect the Edge Fibre Channel switches to the core DCX Backbone switches.

 

The template supports hosts with either Fibre Channel host bus adaptors (HBA) for native Fibre Channel or 10 GE converged network adaptors (CNA) for Fibre Channel over Ethernet (FCoE). The FCoE traffic terminates in the VDX 6730 switch and is forwarded to the core DCX Backbone switch(es) as native 8 Gbps Fibre Channel. The References below provide information about configuring Brocade Network OS (NOS) for VDX switches and a VCS Fabric, and Brocade Fabric OS (FOS) for Fibre Channel switches and the DCX Backbone switches.

 

References

 

Base Configuration: Dual SAN Fabrics with Pure Storage Flash Array

 

Deployment Topology

The diagram below shows the deployment topology. There are two independent SAN fabrics (light blue and grey). The hosts/servers and Pure Storage flash array(s) connect to both fabrics for high availability and resiliency. This is a best practice for SAN design, because IO failures between hosts and storage can cause application failures that require long data resynchronization and recovery times, resulting in extended application outages.

 

SANA-B.jpg

 

   Deployment Topology

 

This is a common topology used today although in larger data centers there can be many edge switches and multiple core switches hosting hundreds of servers and petabytes of storage. Regardless of the Gen 5 Fibre Channel fabric size, these deployment procedures can be used to add one or more Pure Storage flash arrays.

 

Pre-requisites

  1. An existing Brocade Gen 5 Fibre Channel SAN designed and deployed in accordance with the Brocade SAN Design Guide.
  2. Sufficient rack space, power and cooling for the Pure Storage flash array(s).
  3. Correct firmware releases for Brocade switches and Pure Storage flash arrays (a quick firmware check is sketched after this list).
  4. Supported servers/hosts with supported Fibre Channel host bus adaptors (HBA) and/or converged network adaptors (CNA).
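Before adding the array, the firmware on each Brocade switch can be confirmed from the CLI. A minimal check is shown below; the prompt and version output are illustrative and should match the releases listed in the Bill of Materials.

<==========>
B6510_01:root> firmwareshow
Appl     Primary/Secondary Versions
------------------------------------------
FOS      v7.3.0
         v7.3.0
<==========>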

 

Bill of Materials

The following table shows the bill of materials used for this deployment. The references contain links to product data sheets. The DCX 8510 Backbone and Brocade 6510 SAN switches are Gen 5 Fibre Channel capable products with up to 16 Gbps per port. Only a single core and single edge switch are shown in this procedure, but it is applicable when more DCX 8510 and/or Brocade 6510 switches are installed in the fabric.

 

Identifier         | Vendor       | Model / Release                                  | Notes
FA-420             | Pure Storage | FA-420, Purity 3.4.0                             | All-flash storage array with 11-35 TB raw capacity. Each controller supports 4x 8Gb Fibre Channel, 2x 10Gb iSCSI and 2x InfiniBand connections.
6510-1             | Brocade      | BR-6510, FOS 7.3.0                               | 48-port Gen 5 16Gb FC switch
6510-2             | Brocade      | BR-6510, FOS 7.2.1                               | 48-port Gen 5 16Gb FC switch
DCX-1              | Brocade      | DCX 8510-8, FOS 7.3.0                            | 8-slot Gen 5 16Gb FC chassis
DCX-2              | Brocade      | DCX 8510-4, FOS 7.2.1                            | 4-slot Gen 5 16Gb FC chassis
Hosts/Servers      | Various      | Various                                          | Hosts and servers supporting Brocade Gen 5 Fibre Channel switches, with supported Fibre Channel host bus adaptors (HBA) and converged network adaptors (CNA)
Host Bus Adaptors  | Brocade      | Brocade 1860, Drvr 3.2.4.0, Frmw 3.2.4.0         | 2-port 16Gb FC HBA
                   | QLogic       | QLE2672, Drvr 8.06.00.10.06.0-k, Frmw 6.06.03    | 2-port 16Gb FC HBA
                   | Emulex       | LPE 12002, Drvr 10.0.100.1, Frmw 1.00A9          | 2-port 8Gb FC HBA
                   | Brocade      | Brocade 1020, Drvr 3.2.4.0, Frmw 3.2.4.0         | 2-port CNA

All listed Fibre Channel HBAs and CNAs are supported with Brocade Gen 5 Fibre Channel switches.

 

References

 

Task 1: Create Zones for Each HBA

 

Description

Fibre Channel zones are used to logically and securely isolate traffic between each HBA (initiator) in a host and all storage ports it connects to. New zones can be created for each existing HBA and the new Pure Storage array ports. Alternatively, existing zones for each HBA that will connect to the Pure Storage array can be updated by adding the World Wide Port Names (WWPN) of the Pure Storage array port(s).

 

Assumptions

  1. A new host, or a new HBA in an existing host, is being connected to the Pure Storage flash array.

 

Step 1: Create / Modify zones

Use the zoneCreate command to create new zones or the zoneAdd command to add devices (Pure Storage array port targets) to an existing zone. The example below shows how to use the zoneCreate, cfgAdd and cfgEnable commands to create a new zone (hb067168_pure), add the zone to an existing zoneset (SSR) and enable the updated zone set.

 

<==========>

root> zonecreate hb067168_pure, "10:00:8c:7c:ff:24:a0:00; 10:00:8c:7c:ff:24:a0:01; 52:4a:93:7d:f3:5f:61:00; 52:4a:93:7d:f3:5f:61:01"

root> cfgadd SSR, hb067168_pure

root> cfgenable SSR

<==========>
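If a zone for this HBA already exists, the alternative described above is to add the Pure Storage target WWPNs to that zone with zoneAdd rather than creating a new zone. A short sketch follows; the WWPNs shown are illustrative.

<==========>
root> zoneadd "hb067168_pure", "52:4a:93:7d:f3:5f:61:00; 52:4a:93:7d:f3:5f:61:01"
root> cfgenable SSR
<==========>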

 

References

 

Confirm Zoning

Use the zoneShow command to display the zones connecting the Pure Storage array to the host.

 

<==========>

B6510_01:root> zoneshow hb067168_pure

 zone:  hb067168_pure

                10:00:8c:7c:ff:24:a0:00; 10:00:8c:7c:ff:24:a0:01;

                52:4a:93:7d:f3:5f:61:00; 52:4a:93:7d:f3:5f:61:01;

<==========>

 

Task 2: Enable Bottleneck Detection on Fibre Channel Switches

 

Description

This step enables reporting of latency and congestion alerts on each switch in the SAN fabric. This aids monitoring and troubleshooting of SAN device performance problems.

 

Step 1: Turn on Bottleneck Monitoring

Enter the bottleneckmon commands to enable alerting and set the alert thresholds.

 

<==========>

root>  bottleneckmon --enable -alert

root>  bottleneckmon --config -alert -time 150 -qtime 150 -cthresh 0.7 -lthresh 0.2

<==========>

 

Confirm Bottleneck Monitoring

Use the bottleneckmon --status command to verify the configuration settings for bottleneck monitoring.

 

<==========>

root> bottleneckmon --status

Bottleneck detection - Enabled

==============================

 

Switch-wide sub-second latency bottleneck criterion:

====================================================

Time threshold                 - 0.800

Severity threshold             - 50.000

 

Switch-wide alerting parameters:

================================

Alerts                         - Yes

Latency threshold for alert    - 0.200

Congestion threshold for alert - 0.700

Averaging time for alert       - 150 seconds

Quiet time for alert           - 150 seconds

<==========>

 

Task 3: Present Pure Storage LUNs

 

Description

The Pure Storage flash array needs to present LUNs (Logical Unit Numbers) to a specific host. The LUNs are presented to both HBAs on the host when it is configured with dual HBAs and dual SAN A/B fabrics for high availability. The HBA multipath failover feature ensures traffic continues between the host and the Pure Storage array should any component fail in one of the SAN fabrics.

 

Assumptions

  1. Dual HBA installed in the host.

 

Step 1: Configure Pure Storage LUNs for Each Host

Configure LUNs using the Pure Storage management GUI.  
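If CLI access to the array is preferred over the GUI, Purity provides equivalent commands. The sketch below is a minimal example only; the host name (hb067168), volume name, size and WWPNs are illustrative, and the exact syntax should be confirmed in the Purity CLI guide for your release.

<==========>
pureuser@pure01> purehost create --wwnlist 10:00:8c:7c:ff:24:a0:00,10:00:8c:7c:ff:24:a0:01 hb067168
pureuser@pure01> purevol create --size 500G hb067168-vol1
pureuser@pure01> purevol connect --host hb067168 hb067168-vol1
<==========>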

 

References

 

Confirm LUNs Configured

The Pure Storage management GUI shows LUNs and their configuration.

 

PureStorage-LUNConfiguration.jpg 

   Pure Storage LUN Configuration

 

Task 4: Configure Multipath Driver on Each Host

 

Description

Host HBAs include multipath drivers so that multiple paths exist between the host and the storage array for high availability and resiliency. As shown in the deployment topology, dual SAN A/B fabrics are used to ensure high availability and continued IO when any component in one SAN fabric fails.

 

Assumptions

  1. Dual HBA installed in the host.
  2. HBA device driver supports multipath option.

 

Step 1: Configure Multipath Support on Linux Hosts.

This configuration allows all paths to be used in a round-robin fashion. This provides superior performance to the default Linux settings which would only use a single active path per LUN.

The recommended /etc/multipath.conf entry on a Linux system is shown below.

 

<==========>

devices {

    device {

        vendor                "PURE"

        path_selector         "round-robin 0"

        path_grouping_policy  multibus

        rr_min_io             1

        path_checker          tur

        fast_io_fail_tmo      10

        dev_loss_tmo          30

    }

}

<==========>
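After adding the device stanza, the multipath maps on a running host usually need to be reloaded before the new policy takes effect. A minimal sketch for a RHEL-style host is shown below; systemd-based distributions use systemctl reload multipathd instead of the service command.

<==========>
# service multipathd reload
# multipath -r
<==========>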

 

Confirm Linux Multipath Support

Enter the multipath command to display the Linux host multipath configuration.

 

<==========>

# multipath -l

mpathe (3624a9370a15a66e949f7d1440001003d) dm-3 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:13 sdac 65:192 active undef running

  |- 1:0:1:13 sdm  8:192  active undef running

  |- 1:0:2:13 sdu  65:64  active undef running

  |- 1:0:3:13 sde  8:64   active undef running

  |- 2:0:0:13 sdas 66:192 active undef running

  |- 2:0:1:13 sdbi 67:192 active undef running

  |- 2:0:2:13 sdba 67:64  active undef running

  `- 2:0:3:13 sdak 66:64  active undef running

mpathd (3624a9370a15a66e949f7d1440001003c) dm-2 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:12 sdab 65:176 active undef running

  |- 1:0:1:12 sdl  8:176  active undef running

  |- 1:0:2:12 sdt  65:48  active undef running

  |- 1:0:3:12 sdd  8:48   active undef running

  |- 2:0:0:12 sdar 66:176 active undef running

  |- 2:0:1:12 sdbh 67:176 active undef running

  |- 2:0:2:12 sdaz 67:48  active undef running

  `- 2:0:3:12 sdaj 66:48  active undef running

<==========>

 

Step 2: Configure Multipath Support on VMware Hosts.

This configuration allows all paths to be used in a round-robin fashion, providing superior performance to the default VMware 'Most Recently Used' setting, which uses only a single active path per LUN.

 

Pure Storage-Multi-pathSelectionPolicy.jpg

   Pure Storage SSD Array Multi-path Selection Policy
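The screenshot above changes the path selection policy for a single LUN in the vSphere Client. To make Round Robin the default for every Pure Storage LUN the host discovers, an NMP claim rule can be added from the ESXi shell. The sketch below is an assumption-laden example for ESXi 5.x: the SATP name should be verified against the recommended SATP/PSP pairing for your Purity and vSphere releases, and the rule only affects LUNs claimed after it is added.

<==========>
~ # esxcli storage nmp satp rule add --satp "VMW_SATP_DEFAULT_AA" --vendor "PURE" --model "FlashArray" --psp "VMW_PSP_RR"
~ # esxcli storage nmp device list
<==========>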

 

Task 5: Configure Linux Host Performance Tuning

Linux hosts can benefit from additional performance tuning options when using Pure Storage flash arrays.

 

Step 1: Configure the Linux noop I/O Scheduler.

The first Linux tuning choice for the Pure Storage FA-420 array selects the 'noop' I/O scheduler, which has been shown to deliver better performance with lower CPU overhead than the default schedulers (usually 'deadline' or 'cfq').

 

Add this rule to the /etc/udev/rules.d/99-pure-storage.rules file.

 

<==========>

# Recommended settings for Pure Storage FlashArray.

#

# Use noop scheduler for high-performance solid-state storage

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

#

<==========>

 

Step 2: Disable Linux Entropy Collection for the Random Number Generator.

The second change eliminates the collection of entropy for the kernel random number generator, which has high CPU overhead when enabled for devices supporting high IOPS.

 

Add this rule to the /etc/udev/rules.d/99-pure-storage.rules file.

 

<==========>

# Reduce CPU overhead due to entropy collection

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

<==========>

 

Step 3: Change Linux CPU Affinity.

This reduces CPU load by redirecting I/O completions to the originating CPU.

 

Add this rule to the /etc/udev/rules.d/99-pure-storage.rules file.

 

<==========>

# Spread CPU load by redirecting completions to originating CPU

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

<==========>
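The three rules above are applied when devices are added or changed. To apply them to LUNs that are already present without rebooting, the udev rules can be reloaded and re-triggered, then spot-checked on one of the PURE paths. The device name (sdac) is taken from the multipath output shown earlier, and the scheduler list in the output varies by kernel; the brackets should move to noop for PURE devices.

<==========>
# udevadm control --reload-rules
# udevadm trigger --subsystem-match=block --action=change
# cat /sys/block/sdac/queue/scheduler
[noop] deadline cfq
<==========>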

 

Alternate #1: FCoE Hosts with Pure Storage Flash Array

 

Hosts can use Fibre Channel over Ethernet (FCoE) to connect to Pure Storage flash arrays. This option is often used to minimize cabling and cost of a rack of servers. Brocade recommends terminating the FCoE traffic at the top of rack switch at the edge and forwarding the traffic as native Fibre Channel to the core of the existing Fibre Channel SAN fabric.

 

Deployment Topology

The diagram below shows the deployment topology when adding hosts using FCoE. There are two independent SAN fabrics (light blue and grey). The host/servers and Pure Storage flash array are connected to both fabrics.

 

SANA-B+FCoE.jpg

 

   Deployment Topology

 

At the edge, Brocade VDX 6730 switches are added in a VCS Fabric. The VCS Fabric can include other VDX switches using FCoE to connect a larger number of hosts to the Pure Storage flash array, but the VDX 6730 switch is required to terminate the FCoE traffic within the VCS Fabric and forward it as native Fibre Channel traffic to the DCX Backbone at the core of the SAN Fabric. The VDX Fibre Channel ports connected to the DCX Backbone are configured as E_Ports, while the ports on the DCX Backbone are configured as EX_Ports, or Fibre Channel routing ports.

 

Pre-requisites

  1. An existing Brocade Gen 5 Fibre Channel SAN designed and deployed in accordance with the Brocade SAN Design Guide.
  2. Sufficient rack space, power and cooling for the Pure Storage flash array(s).
  3. Correct firmware releases for Brocade switches and Pure Storage flash arrays.
  4. Supported servers/hosts with supported converged network adaptors (CNA).
  5. An FCoE license for the VDX switches.
  6. A Fibre Channel routing license for the DCX Backbone.
  7. [Optional] A Virtual Fabric license for the DCX Backbone.

 

Bill of Materials

The following table shows the bill of materials used for this deployment. The references contain links to product data sheets. The DCX 8510 Backbone and Brocade 6510 SAN switches are Gen 5 Fibre Channel capable products with up to 16 Gbps per port. Only a single core and single edge switch per fabric are shown in this procedure, but it is applicable when more DCX 8510, Brocade 6510 and/or VDX 6730 switches are installed in the fabric.

 

 

Identifier         | Vendor       | Model / Release                                  | Notes
FA-420             | Pure Storage | FA-420, Purity 3.4.0                             | All-flash storage array with 11-35 TB raw capacity. Each controller supports 4x 8Gb Fibre Channel, 2x 10Gb iSCSI and 2x InfiniBand connections.
6510-1             | Brocade      | BR-6510, FOS 7.3.0                               | 48-port Gen 5 16Gb FC switch
6510-2             | Brocade      | BR-6510, FOS 7.2.1                               | 48-port Gen 5 16Gb FC switch
DCX-1              | Brocade      | DCX 8510-8, FOS 7.3.0                            | 8-slot Gen 5 16Gb FC chassis
DCX-2              | Brocade      | DCX 8510-4, FOS 7.2.1                            | 4-slot Gen 5 16Gb FC chassis
VDX-1              | Brocade      | VDX 6730, NOS 4.1.1                              | 60x 10GbE ports and 16x 8Gb FC ports
VDX-2              | Brocade      | VDX 6730, NOS 4.1.1                              | 60x 10GbE ports and 16x 8Gb FC ports
Hosts/Servers      | Various      | Various                                          | Hosts and servers supporting Brocade Gen 5 Fibre Channel switches, with supported Fibre Channel host bus adaptors (HBA) and converged network adaptors (CNA)
Host Bus Adaptors  | Brocade      | Brocade 1860, Drvr 3.2.4.0, Frmw 3.2.4.0         | 2-port 16Gb FC HBA
                   | QLogic       | QLE2672, Drvr 8.06.00.10.06.0-k, Frmw 6.06.03    | 2-port 16Gb FC HBA
                   | Emulex       | LPE 12002, Drvr 10.0.100.1, Frmw 1.00A9          | 2-port 8Gb FC HBA
                   | Brocade      | Brocade 1020, Drvr 3.2.4.0, Frmw 3.2.4.0         | 2-port CNA

All listed Fibre Channel HBAs and CNAs are supported with Brocade Gen 5 Fibre Channel switches.

 

References

 

Task 1: Configure Zones for FCoE Initiators on VDX Switches

 

Description

Fibre Channel zones are added to the VCS Fabric on the VDX 6730 switches. These zones are exactly like the Fibre Channel zones used in the SAN A/B fabrics; however, the VCS Fabric acts as a separate Fibre Channel fabric from the perspective of the existing SAN A/B fabrics, so it has its own independent zones. Connecting devices in the VCS Fabric to devices in the SAN A or SAN B fabrics requires Fibre Channel routing. Device connections where each device is in a separate fabric are defined using a special type of zone, an LSAN zone.

 

Assumptions

  1. FCoE license is installed in the VCS Fabric
  2. Supported CNA are installed in the hosts using FCoE to connect to the Pure Storage flash array.

 

Step 1: Configure Zones on a VDX Switch

Refer to the NOS Administrator’s Guide (see References below) for configuring zones on the VDX 6730 Switch. The following is an example of configuring Fibre Channel zones on the VDX 6730 switch.

 

<==========>

#

#Create Zoning Configuration

#

VDX6730_066_075# config t

VDX6730_066_075(config)# zoning defined-configuration cfg NOS_SSR

VDX6730_066_075(config-cfg-NOS_SSR)#

#

#Create New Zone

#

VDX6730_066_075(config-cfg-NOS_SSR)# member-zone lsan_hb067166_pure

#

#Add Device WWN to New Zone

#

VDX6730_066_075(config-cfg-NOS_SSR)# zoning defined-configuration zone lsan_hb067166_pure

VDX6730_066_075(config-zone-lsan_hb067166_pure)# member-entry 10:00:8c:7c:ff:1f:7b:00

VDX6730_066_075(config-zone-lsan_hb067166_pure)# member-entry 10:00:8c:7c:ff:1f:7b:01

VDX6730_066_075(config-zone-lsan_hb067166_pure)# member-entry 52:4a:93:7d:f3:5f:61:00

VDX6730_066_075(config-zone-lsan_hb067166_pure)# member-entry 52:4a:93:7d:f3:5f:61:01

#

#Save Zoning Changes

#

VDX6730_066_075(config-zone-lsan_hb067166_pure)# zoning enabled-configuration cfg-action cfg-save

#

#Enable Zoning Configuration

#

VDX6730_066_075(config)# zoning enabled-configuration cfg-name NOS_SSR

VDX6730_066_075(config)# exit

<==========>

 

 

References

 

Confirm FCoE Zones Configuration

Use the show zoning command to confirm the FCoE zoning configuration.

<==========>

# show zoning enabled-configuration

zoning enabled-configuration cfg-name NOS_SSR

zoning enabled-configuration enabled-zone lsan_hb067166_pure

 member-entry 10:00:8c:7c:ff:1f:7b:00

 member-entry 10:00:8c:7c:ff:1f:7b:01

 member-entry 52:4a:93:7d:f3:5f:61:00

 member-entry 52:4a:93:7d:f3:5f:61:01

<==========>
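Because the zone name carries the lsan_ prefix, the FC router on the DCX Backbone imports it as an LSAN zone once Fibre Channel routing is in place (see Task 2). A spot check from the DCX is sketched below; the fabric ID and trimmed output are illustrative.

<==========>
DCX8510_01:root> lsanzoneshow -s
Fabric ID: 10 Zone Name: lsan_hb067166_pure
        10:00:8c:7c:ff:1f:7b:00 Imported
        52:4a:93:7d:f3:5f:61:00 EXIST
<==========>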

 

Task 2: Configure Fibre Channel Routing

 

Description

Fibre Channel routing is configured on the DCX Backbone.

 

Assumptions

  1. The VDX switch connects to a Brocade Gen 5 Fibre Channel switch that supports Fibre Channel routing.
  2. Fibre Channel routing license deployed.

 

Step 1: Configure Fibre Channel Routing

There are various options and topologies that can be used to configure Fibre Channel routing on a DCX Backbone. Refer to the references below to understand how to configure Fibre Channel routing that conforms to your requirements.
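As a point of reference only, the sketch below shows the main FOS steps involved on a DCX Backbone: enabling the FC routing service, converting the port that faces the VDX 6730 into an EX_Port, and verifying the routed fabrics and LSAN zones. The port number (1/0) and fabric ID (10) are assumptions; follow the Fabric OS documentation referenced below for the complete procedure and for any disruption these steps may cause on a production switch.

<==========>
DCX8510_01:root> switchdisable
DCX8510_01:root> fosconfig --enable fcr
DCX8510_01:root> switchenable
DCX8510_01:root> portdisable 1/0
DCX8510_01:root> portcfgexport 1/0 -a 1 -f 10
DCX8510_01:root> portenable 1/0
DCX8510_01:root> fcrfabricshow
DCX8510_01:root> lsanzoneshow -s
<==========>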

 

References