Design & Build

Data Center Infrastructure, Storage Design Guide: SAN Distance Extension Using ISLs

Published 10-31-2012 11:28 AM; edited 10-14-2014 12:47 PM


Synopsis: Design and best practices for a two-site data center disaster recovery solution using Brocade Fibre Channel ISL connections over extended distance.





The most common reason for extending a Fibre Channel (FC) storage area network (SAN) over extended distances is to safeguard critical business data and provide near-continuous access to applications and services in the event of a localized disaster. Designing a distance extension solution involves a number of considerations, both business and technical.


From the business perspective, applications and their data need to be classified by how critical they are for business operation, how often data must be backed up, and how quickly it needs to be recovered in the event of failure. Two key metrics are the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO). The RPO is the time period between backup points and describes the acceptable loss of data after a failure has occurred. For example, if a remote backup occurs every day at midnight and a site failure occurs at 11 pm, changes to data made within the last 23 hours will be lost. RTO describes the time to restore the data after the disaster. RTO determines the maximum outage that can occur with an acceptable impact to the business.
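As a back-of-the-envelope illustration of RPO exposure, the midnight-backup scenario from the text can be sketched in a few lines of Python (the dates and schedule here are hypothetical examples):

```python
from datetime import datetime, timedelta

def data_loss_window(last_backup: datetime, failure: datetime) -> timedelta:
    """RPO exposure: changes made after the last successful backup are lost."""
    if failure < last_backup:
        raise ValueError("failure time precedes last backup")
    return failure - last_backup

# The scenario from the text: remote backup nightly at midnight, site failure
# at 11 pm the same day -> up to 23 hours of changes are lost.
loss = data_loss_window(datetime(2012, 10, 30, 0, 0),
                        datetime(2012, 10, 30, 23, 0))
print(loss)  # 23:00:00
```

The worst case for a given schedule is a failure an instant before the next scheduled backup, so the RPO must be set to at least the backup interval.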

From a technology perspective, there are several choices for the optical transport network and configuration options for the FC SAN when it is extended over distance. Applications with strict RTO and RPO require high-speed synchronous or near-synchronous replication between sites with application clustering over distance for immediate service recovery. Less critical applications may only require high-speed replication that could be asynchronous to meet the RPO/RTO metrics. Lower priority applications that don’t need immediate recovery after a failure can be restored from backup tapes from remote vaults.


Brocade is a leader in Fibre Channel SAN switching, providing a broad product portfolio with unique features the designer can leverage for cost-effective and efficient SAN distance extension. Inter-Switch Links (ISLs) are used to connect two SAN switches together. By stretching ISLs over extended distances (a few kilometers to more than 100 km), data replication traffic can use Fibre Channel as the transport to a remote data center. For this reason, SAN distance extension over ISLs is a common method of transporting replicated storage data for disaster recovery.


Purpose of This Document

This guide describes how to design SAN distance extension for a disaster recovery (DR) solution using Brocade Fibre Channel SAN products and the Brocade Fabric Operating System (FOS). FOS has a number of features designed to optimize SAN extension using ISL connections.


Design best practices are included for SAN extension. The design topology shown has been configured and validated in Brocade’s Strategic Solution Validation Lab.


The design can be used with array-based replication and/or tape backup systems due to its excellent scalability, high performance and low latency.



This document is intended for disaster recovery planners and SAN architects who are evaluating and deploying DR solutions that use SAN distance extension for storage data.



This design guide is intended to provide guidance and recommendations based on best practices for a two-site data center disaster recovery solution using Fibre Channel ISL connections over extended distance.


Restrictions and Limitations

This design guide only addresses SAN distance extension using ISL connections. An alternative solution that uses Fibre Channel over IP (FCIP) will be covered in a separate design guide.


Related Documents

The following documents are valuable resources for the designer. This design is based on the Data Center Infrastructure Base Reference Architecture which includes SAN building blocks and templates.




About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.


Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.


Key Contributors

The content in this guide was developed by the following key contributors.

Lead Architects: Jeffrey Rametta and Haim Gaby, Strategic Solutions Lab


Document History






Initial Release


Reference Architecture

This design guide is based on the Data Center Infrastructure Base Reference Architecture building blocks. Shown below is the SAN Template used for this design.



   SAN Core/Edge Template with Building Blocks


This template illustrates a common SAN topology, Core/Edge, with two edge blocks (Edge Switch and Edge Access Gateway) and a Core Backbone block. Both edge blocks connect to the Core Backbone block using ISL Trunks, which provide automatic frame-based flow balancing over multiple ISL links for the highest utilization with mixed traffic flows. The Edge Access Gateway block is commonly used with the embedded FC switches found in blade servers; Access Gateway can also be used with rack-mount servers connected to Top-of-Rack SAN switches. When a switch is configured for Access Gateway mode, it does not consume a fabric Domain ID, simplifying the fabric design. Similar to ISL Trunks, Access Gateway provides Access Gateway Trunks for excellent link utilization and automatic failover should a link in the trunk fail.


Note: Fabric A and Fabric B are shown, indicating the use of two physically independent SAN fabrics to connect servers to storage arrays. This is a SAN best practice for high availability and resiliency. Each server and storage array uses dual connections, one going to Fabric A and one to Fabric B. Servers are configured with I/O adapters and multipath I/O device drivers for active/active I/O from both adapters to both fabrics. Should a path in one fabric fail for any reason (HBA, cable, FC switch port, FC switch, array port, configuration error, power outage, and so on), I/O continues on the remaining path.


Note: Fabric C and Fabric D are shown in the Core Backbone block. The Brocade DCX and DCX 8510 Backbone switches support virtual fabrics, allowing ports from the same switch to be allocated to logically isolated fabrics. Again, dual physically independent fabrics are connected to the long-distance optical transport network for high availability and resiliency.


The Core Backbone block uses ISL links with a long-distance optical network, as shown by the cloud labeled “SAN Distance Extension with ISLs”. Server I/O at the edge blocks flows to the Core Backbone block and then to storage arrays. The storage arrays replicate changes to the data blocks to arrays in a remote data center. The replication traffic flows over the ISL links in Fabric C and Fabric D that are attached to the long-distance optical network.


Note that “Backbone ICL” links connect the core switches in each fabric. This is an innovative feature available on Brocade DCX and DCX 8510 Backbone switches, providing very high-bandwidth trunks between core switches without consuming ports on port cards, which leaves all port-card ports free for connecting arrays and for ISL Trunks to edge switches.




Business Requirements

As more applications drive business value, and the associated data becomes key to competitive advantage, cost-effective protection of applications and data from site disasters and extended outages has become the norm. Modern storage arrays provide synchronous as well as asynchronous array-to-array replication over extended distances. When the array provides block-level storage for applications, Fibre Channel is the primary network technology used to connect the storage arrays to servers, both physical and virtual. For this reason, cost-effective disaster recovery designs leverage Fibre Channel to transport replicated data between arrays in different data centers over distances spanning from a few kilometers to more than 100 kilometers. SAN distance extension using Fibre Channel is therefore an important part of a comprehensive and cost-effective disaster recovery design.


Special Considerations

It is helpful to review the following special considerations that apply to Fibre Channel SAN distance extension. It is important to understand the Fibre Channel protocol and the optical transport technology and how they interact.


Optical Fiber Cabling

There are two basic types of optical fiber, Multimode Fiber (MMF) and Single-Mode Fiber (SMF). Multimode fiber is generally used for short-distance spans and is common for interconnecting SAN equipment within the data center. Single-mode fiber has a smaller core diameter of 9 µm and carries only a single mode of light through the waveguide. It is better at retaining the fidelity of each light pulse over long distances, resulting in lower attenuation. Single-mode fiber is always used for long-distance extension over optical networks and is often used even within the data center for FICON installations.


There are several types of single-mode fiber, each with different characteristics that should be taken into consideration when deploying a SAN extension solution. Non-Dispersion Shifted Fiber (NDSF) is the oldest type of fiber and was optimized for wavelengths operating at 1310 nm, but it performed poorly in the 1550 nm range, limiting maximum transmission rate and distance. To address this problem, Dispersion Shifted Fiber (DSF) was introduced. DSF was optimized for 1550 nm, but it introduced additional problems when deployed in Dense Wavelength Division Multiplexing (DWDM) environments. The most recent type of single-mode fiber, Non-Zero Dispersion Shifted Fiber (NZ-DSF), addresses the problems associated with the previous types and is the fiber of choice in new deployments.


As light travels through fiber, the intensity of the signal degrades; this loss is called attenuation. The three main transmission windows in which loss is minimal are in the 850, 1310, and 1550 nm ranges. The table below lists common fiber types and the average optical loss incurred by distance for both multimode (MM) and single-mode (SM) fiber.



   Average attenuation of optical fiber due to distance


Optical Power Budget, Fiber Loss

A key part of designing SANs over long-distance optical networks involves analyzing fiber loss and optical power budgets. The decibel (dB) is the unit of measure for signal power in a fiber link. The dB loss can be determined by comparing the launch power of a device to the receive power. Launch and receive power are expressed in decibel-milliwatt (dBm) units, which is the ratio of measured signal power in milliwatts (mW) to 1 mW.




The optical power budget identifies how much attenuation can occur across a fiber span while still maintaining sufficient output power for the receiver. It is determined by finding the difference between “worst-case” launch power and receiver sensitivity. Transceiver and other optical equipment vendors typically provide these specifications for their equipment. A loss value of 0.5 dB can be used to approximate attenuation caused by a connector/patch panel. It is useful to subtract an additional 2 dB for safety margin.


Optical Power Budget = (Worst-Case Launch Power) – (Worst-Case Receiver Sensitivity)


Signal loss is the total sum of all losses due to attenuation across the fiber span. This value should be within the power budget to maintain a valid connection between devices. To calculate the maximum signal loss across an existing fiber segment, use the following equation:


Signal Loss = (Fiber Attenuation/km * Distance in km) + (Connector Attenuation) + (Safety Margin)


The previous table showed average optical loss characteristics of various fiber types that can be used in this equation, although loss may vary depending on fiber type and quality.  It is always better to measure the actual optical loss of the fiber with an optical power meter.
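The two equations above can be combined into a quick feasibility check. Below is a minimal Python sketch; the launch power, sensitivity, and attenuation figures are hypothetical examples, not specifications for any particular transceiver:

```python
def optical_power_budget(launch_dbm: float, sensitivity_dbm: float) -> float:
    """Worst-case launch power minus worst-case receiver sensitivity, in dB."""
    return launch_dbm - sensitivity_dbm

def signal_loss(atten_db_per_km: float, km: float,
                connectors: int = 2, safety_margin_db: float = 2.0) -> float:
    """Total expected loss across the span, per the equation above.
    0.5 dB per connector/patch panel is the approximation used in the text."""
    return atten_db_per_km * km + 0.5 * connectors + safety_margin_db

# Hypothetical 1550 nm link: -5 dBm launch, -20 dBm sensitivity,
# 0.25 dB/km fiber over 40 km with two patch panels.
budget = optical_power_budget(-5.0, -20.0)  # 15.0 dB
loss = signal_loss(0.25, 40.0)              # 10.0 + 1.0 + 2.0 = 13.0 dB
print(loss <= budget)                       # link closes with margin to spare
```

Measured attenuation from an optical power meter should replace the per-km table value whenever it is available.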


Some receivers have a maximum receiver sensitivity that should not be exceeded. If the optical signal is greater than the maximum receiver sensitivity, the receiver may become oversaturated and unable to decode the signal, causing link errors or even total failure of the connection. Fiber attenuators can be used to resolve the problem. This is often necessary when connecting FC switches to DWDM equipment using single-mode FC transceivers.


FC Transceivers for Extended Distances

Optical Small Form-factor Pluggable (SFP) transceivers are available in short- and long-wavelength types. Short wavelength transceivers transmit at 850 nm and are used with 50 or 62.5 µm multimode fiber cabling. For fiber spans greater than several hundred meters without regeneration, use long-wavelength transceivers with 9 µm single-mode fiber. Long-wavelength SFP transceivers typically operate in the 1310 or 1550 nm range.

Optical transceivers often provide monitoring capabilities that can be viewed through FC switch management tools, allowing some level of diagnostics of the actual optical transceiver itself.



Brocade 8 and 16 Gbps products enforce the use of Brocade-branded optics, plus a restricted list of specialist third-party options, to meet requirements for extended-distance or CWDM/DWDM optics. Other Brocade products do not enforce optics rules, but only qualified or certified optics should be used, as shown in the latest Brocade Compatibility Matrix, Transceivers Quick Reference section (see the References below).




FC Protocol over Extended Distance Considerations


Flow Control

Brocade switches can support two methods of flow control over an ISL:


  • Virtual Channel (VC_RDY) – VC_RDY is the default method and uses multiple lanes or channels, each with different buffer credit allocations, to prioritize traffic types and prevent head-of-line blocking. VC_RDY flow control differentiates traffic across an ISL. It serves two main purposes:
    • To differentiate fabric internal traffic from end-to-end device traffic. Fabric internal traffic is generated by switches that communicate with each other to exchange state information (such as link state information for routing and device information for Name Service). This type of traffic is given a higher priority so that switches can distribute the most up-to-date information across the fabric even under heavy device traffic.
    • To differentiate different data flows of end-to-end device traffic to avoid head-of-line blocking. Multiple IOs are multiplexed over a single ISL by assigning different VCs to different IOs and giving them the same priority (unless QoS is enabled). Each IO gets a fair share of the bandwidth, so a large-size IO does not consume the whole bandwidth and starve a small-size IO, balancing the performance of different devices communicating across the ISL.
  • Receiver Ready (R_RDY) – R_RDY is defined in the ANSI T11 standards and uses a single lane or channel for all frame types.


When Brocade switches are configured to use R_RDY flow control, other mechanisms are used to enable QoS and prevent head-of-line blocking.



When connecting switches across dark fiber or wavelength division multiplexing (WDM) optical links, VC_RDY is the preferred method, but some distance extension devices require that the E_Port use R_RDY. To configure R_RDY flow control on Brocade switches, use the portCfgISLMode command.




Quality of Service

Starting with FOS release 6.0, Brocade Virtual Channel technology can be used to prioritize traffic between initiator/target pairs by mapping traffic flows to High, Medium, or Low priority queues. QoS support with Virtual Channels is enabled with the Adaptive Networking license. QoS is supported over long-distance ISLs that utilize up to 255 buffers. When an E_Port is allocated more than 255 buffers, the remaining buffers are allocated to the medium priority queue.




Buffer Allocation

Before considering FC-level buffer allocation, note that the availability of sufficient FC-level buffering is not itself sufficient to guarantee bandwidth utilization. Other limitations, particularly at the SCSI level of the storage initiator and/or target, are often the limiting factor. The I/O size, I/O per Second (IOPS) limit, and concurrent or outstanding I/O capability at the SCSI level of the initiators/targets can be and often are gating factors.


While exact calculations are possible, a simple rule of thumb is often used to calculate the BB credit requirement of a given link. Based on the speed of light in an optical cable, a full-size FC frame spans approximately 4 km at 1 Gbps, 2 km at 2 Gbps, 1 km at 4 Gbps, 500 m at 8 Gbps, 200 m at 16 Gbps, or 400 m at 10 Gbps. The rule of thumb is this: 1 credit is required for every kilometer at 2 Gbps; therefore, half a credit is required for every kilometer at 1 Gbps, and 2 credits are required for every kilometer at 4 Gbps. With this simple set of guidelines, it is easy to estimate the number of credits required per link to maintain line speed.


Having insufficient BB credits will not cause link failure, but it will reduce the maximum throughput.

For example, a 100 km link running at 4 Gbps requires approximately 200 BB credits to sustain line rate; with only 100 BB credits available, it can achieve a maximum throughput of approximately 2 Gbps.
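Because the rule of thumb scales linearly with link speed, it is easy to capture in a helper. A minimal Python sketch follows; the 100 km / 4 Gbps figures mirror the worst case discussed in the text:

```python
def bb_credits_required(distance_km: float, speed_gbps: float) -> float:
    """Rule of thumb from the text: 1 credit per km at 2 Gbps,
    scaling linearly with link speed."""
    return distance_km * speed_gbps / 2.0

def max_throughput_gbps(distance_km: float, speed_gbps: float,
                        credits: int) -> float:
    """Throughput is capped by the fraction of required credits available."""
    needed = bb_credits_required(distance_km, speed_gbps)
    return speed_gbps if credits >= needed else speed_gbps * credits / needed

# 100 km at 4 Gbps needs ~200 credits; with only 100, throughput caps near 2 Gbps.
print(bb_credits_required(100, 4))    # 200.0
print(max_throughput_gbps(100, 4, 100))  # 2.0
```

Remember that this only bounds the FC transport; SCSI-level limits at the initiator or target (I/O size, IOPS, outstanding I/O) may cap throughput lower still.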


Using the LS option, the portCfgLongDistance command can be used to allocate the required buffers for the link distance.
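Putting the two commands together, the configuration flow might look like the following sketch. This is illustrative only: the port number is hypothetical, and the exact argument syntax varies by FOS release, so confirm against the Fabric OS Command Reference before use:

```shell
# Take the port offline before changing distance or flow-control settings
# (hypothetical port 2/3)
portdisable 2/3

# Static long-distance (LS) mode with a desired distance of 100 km
# (argument form varies across FOS releases)
portcfglongdistance 2/3 LS 1 -distance 100

# Only if the extension equipment requires R_RDY flow control on this E_Port
portcfgislmode 2/3, 1

# Bring the port back online
portenable 2/3
```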




Frame-Based Trunking

Long-distance links using VC_RDY flow control can be part of an ISL trunk group if they are configured for the same speed and distance and the distances of all links are nearly equal. Within a frame-based trunk, the maximum allowed difference between the shortest and longest links is approximately 400 meters.
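This length constraint is easy to verify when planning a trunk group. A minimal Python sketch (the link lengths are hypothetical, and links must also share the same speed and distance configuration):

```python
MAX_DESKEW_DIFF_M = 400  # approximate limit from the text

def within_trunk_length_limit(link_lengths_m: list) -> bool:
    """True if all links are within the allowed length spread
    for a frame-based trunk."""
    return max(link_lengths_m) - min(link_lengths_m) <= MAX_DESKEW_DIFF_M

print(within_trunk_length_limit([10_000, 10_150, 10_350]))  # True: 350 m spread
print(within_trunk_length_limit([10_000, 10_600]))          # False: 600 m spread
```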


When R_RDY flow control is used, frame-based trunking is disabled. The exchange-based routing policy, used to interleave FC exchanges across multiple ISLs, can be used with either type of flow control.




Dynamic Path Selection

Dynamic Path Selection (DPS), also called Exchange-Based Routing, is a feature first available on 4 Gbps and later products. DPS applies at a fabric level and has no restrictions on co-location of ports on a given switch, or even on their taking the same route through the fabric. However, as with some other configuration options, DPS is not supported in certain limited cases, specifically FICON and HP EVA/CA. Where frame-based ISL Trunks cannot be used, DPS is a good alternative for high availability with multiple ISL connections.




D-port Advanced Diagnostics for Brocade 16G SFP+


A Brocade D_Port is used to diagnose optics and cables. It does not carry any FC control or data traffic and is supported on E_Ports, and also on F_Ports if a Brocade 1860 adapter is used in the server. When a port is in D_Port mode, the following diagnostic tests can be conducted (refer to the diagram below; “C3 ASIC” refers to 16 Gbps products):


  • Performs electrical loopback
  • Performs optical loopback
  • Measures link distance
  • Performs a link traffic test


   D_Port Diagnostic Test Paths




In-flight Encryption and Compression over 16 Gbps ISLs

With 16 Gbps products such as the DCX 8510 Backbone switch, in-flight encryption and compression can be applied at the egress E_Port of an ISL between two Brocade switches. The E_Port on the receiving side of the ISL decrypts and decompresses the traffic. A maximum of two ports per ASIC can have in-flight encryption and compression enabled.




Forward Error Correction (FEC) on 16 Gbps Ports

FEC can recover bit errors for 10 Gbps and 16 Gbps ports for both FC frames and FC primitives. FEC on 16 Gbps ports has the following capabilities.


  • Can correct up to 11 error bits in every 2112-bit frame transmission
  • Enhances reliability of transmission and thus performance
  • Enabled by default on back-end links for 16 Gbps blades in 8510-8/8510-4 chassis
  • Supported on E_Ports/EX_Ports between 16 Gbps ports at either 16 Gbps or 10 Gbps link speed
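The first bullet implies a hard ceiling on the fraction of bits FEC can repair in any one block, which a quick arithmetic check makes concrete:

```python
# FEC on 16 Gbps ports corrects up to 11 error bits per 2112-bit transmission.
CORRECTABLE_BITS = 11
BLOCK_BITS = 2112

# Worst-case fraction of bits in a single block that FEC can repair: ~0.52%.
max_correctable_fraction = CORRECTABLE_BITS / BLOCK_BITS
print(f"{max_correctable_fraction:.4%}")
```

Bursts of errors beyond this ceiling within one block are not correctable, which is why FEC complements, rather than replaces, a clean optical link.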




Distance Connectivity Options

There are a number of methods in which FC SANs can be extended over long-distance optical networks. Any of the following technologies can provide a viable long-distance connectivity solution, but choosing the appropriate one can depend on a number of variables—including technological, cost, or scalability needs.


It is important to note that many terms are misused or used in a very generic way. In addition, many products can be configured and used in a variety of different ways, as discussed in the following sections, so take care that there is no confusion or uncertainty as to the type of equipment being used. Finally, if connectivity is being provided by a service provider in addition to any customer premises equipment, it is important to understand all devices in the network.


Native FC over Dark Fiber

The term “dark fiber” typically refers to fiber-optic cabling that has been laid but remains unlit or unused. The simplest, though not necessarily most cost-effective or scalable, method for extending SANs over distance is to connect FC switches directly to the dark fiber using long-wavelength SFP transceivers. An optional Brocade Extended Fabrics license can provide additional buffer credits to long-distance E_Ports in order to maintain FC performance across the network.


Brocade and its partners maintain a list of fully tested and qualified optics for various distances. In addition, a wider selection of parts has been certified through the Brocade Data Centre Ready group.


Wave Division Multiplexing

Dense Wavelength Division Multiplexing (DWDM) - DWDM is optimized for high-speed, high-capacity networks and long distances. DWDM is suitable for large enterprises and service providers who lease wavelengths to customers. Most equipment vendors can support 32, 64, or more channels over a fiber pair with each running at speeds up to 10 Gbps or more. Fiber distances between nodes can generally extend up to 100 km or farther. DWDM equipment can be configured to provide a path protection scheme in case of link failure or in ring topologies that also provide protection. Switching from the active path to the protected path typically occurs in less than 50 ms.


Coarse Wavelength Division Multiplexing (CWDM) – CWDM provides the same optical transport and features as DWDM, but at a lower capacity, which allows for lower cost. CWDM is generally designed for shorter distances (typically 50 to 80 km) and thus does not require specialized amplifiers and high-precision lasers. Most CWDM devices support up to 8 or 16 channels, and CWDM generally operates at a lower bit rate than higher-end DWDM systems, typically up to 4 Gbps.


There are two basic types of Wavelength Division Multiplexing (WDM) solutions – both are available for CWDM and DWDM implementations depending on customer requirements:


  • Transponder-Based Solutions:  Allows connectivity to switches with standard 850 or 1310 nm optical SFP transceivers. A transponder is used to convert these signals using Optical-to-Electrical-to-Optical (O-E-O) conversion to WDM frequencies for transport across a single fiber. By converting each input to a different frequency, multiple signals can be carried over the same fiber.
  • SFP-Based Solutions:  These eliminate the need for transponders by requiring switch equipment to utilize special WDM transceivers (also known as colored optics), reducing the overall cost. Coarse or Dense WDM SFPs are like any standard transceiver used in Fibre Channel switches, except that they transmit on a particular frequency within a WDM band. Each wavelength is then placed onto a single fiber through the use of a passive multiplexer.

Traditionally, SFP-based solutions were used as a low-cost option and so were mostly CWDM based. Due to a number of compliance requirements, some customers use these solutions to minimize the number of active or powered components in the infrastructure. With the need for increasing bandwidth, and the use of such solutions to support Ethernet as well as FC connectivity, a number of customers are now using DWDM SFP-based implementations, which require DWDM colored optics rather than CWDM colored optics to allow sufficient connections through a single fiber.


Time Division Multiplexing

Time Division Multiplexing (TDM) takes multiple client-side data channels, such as FC, and maps them onto a single higher-bit-rate channel for transmission on a single wavelength. TDM can be used in conjunction with a WDM solution to provide additional scalability and bandwidth utilization. Because TDM sometimes relies on certain FC primitives to maintain synchronization, it may require special configuration on Brocade switches. By default, Brocade E_Ports utilize ARB primitives as fill words between frames. Most TDM devices require Idle primitives as fill words. Specific configuration modes are used on Brocade switches to support the use of Idle as fill words.


Additionally, it should be noted that TDM-based systems can introduce a level of jitter or variable latency. As such, it is not possible to make broad statements about the ability to use frame-based trunking; in general, best practice is to avoid frame-based trunking in a TDM-based configuration.


The need to use Idle primitives may impact the availability of other Brocade-specific features. The specific details depend on FOS levels and on which configuration mode is used for compatibility.



SONET/SDH

Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH) are standards for the transmission of digital information over optical networks and are often the underlying transport protocols that carry enterprise voice, video, data, and storage traffic across metropolitan and wide area networks. SONET/SDH is particularly well suited to carrying enterprise mission-critical storage traffic because it is connection-oriented, and latency is deterministic and consistent. FC-SONET/SDH is the protocol that provides the means for transporting FC frames over SONET/SDH networks. FC frames are commonly mapped onto a SONET or SDH payload using an International Telecommunication Union (ITU) standard called Generic Framing Procedure (GFP).

Like TDM, FC-SONET devices typically require special switch configuration to ensure the use of Idle rather than ARB primitives for compatibility.


Additionally, it should be noted again that FC-SONET/SDH-based systems can introduce a level of jitter or variable latency. As such, it is not possible to make broad statements about the ability to use frame-based trunking; in general, best practice is to avoid frame-based trunking in an FC-SONET/SDH-based configuration.


High Availability in the Optical Transport Network

Many distance extension devices have several options for providing fault tolerance and high availability of ISL connections. The options provide different levels of availability and can be used alone or combined for increased availability requirements.


Fibre Channel ISL Redundancy

The simplest form of protection comes from the FC switch itself. Multiple ISLs traverse the optical transport network, providing both additional bandwidth and high availability in the case of a port failure on the FC switch or in the distance extension equipment.



   Multiple FC ISLs for High Availability


Optical Y-Cable

Another method is the use of an optical Y-Cable. This provides protection from a port or line card failure in the distance extension product, but does not provide any protection from port failure on the FC switch.



   Optical Y-Cable for Line Card High Availability


Optical Ring with Protection Switching

Optical extension products can be connected in a ring topology with two paths available. They provide protection switching in the case of optical fiber failure in either ring. Most WDM and SONET/SDH equipment has the ability to perform a protection switch from the active path to the standby path in less than 50 ms should a path in one ring fail.



  Optical Dual Ring with Protection Switching High Availability


Other Considerations

Early-generation Brocade switch platforms (1 and 2 Gbps) were designed with a short loss-of-sync timer that could cause a SAN fabric to reinitialize whenever an optical ring protection switch occurred. This caused an unnecessary, but non-disruptive, reconfiguration of the fabric. In later generations of Brocade switches (4, 8, and 16 Gbps), the loss-of-sync timer has been increased to 100 ms, which is longer than the 50 ms optical ring protection switching time, avoiding unnecessary fabric reconfiguration. Therefore, for most distance extension devices, the ISL remains online during an optical ring protection switch.



Any frames in flight during a protection switch will be lost and must be retried by the initiating end device.


It is also important to note that if a frame-based ISL Trunk is used, the physical-link de-skew value is calculated when the trunk forms. De-skew is necessary to ensure in-order delivery of frames across all physical links in the ISL trunk. An optical network protection switch can change the link latency, making the de-skew values incorrect and causing unpredictable behavior of the ISL Trunk. Best practice is to ensure that if optical transport rerouting can occur, all physical links in the ISL trunk are rerouted, not just a few.


As distances increase, frame-based trunking may not be advisable, particularly when it is necessary to assign extended-distance ports to different port groups to maximize buffer credit availability. Since Fibre Channel has its own link redundancy and rerouting capabilities, it is often better not to use optical transport ring protection and instead rely on the Fibre Channel protocol for protection against ISL link failures.


Finally, Fibre Channel SAN best practice uses two physically independent SAN fabrics for high availability. Therefore, the design of an optical transport network should not compromise this best practice and should ensure physically separate paths (cables, ports, line cards, optical transport devices) for the ISLs in each fabric so that no single point of failure is introduced. Optical network paths should have diverse routes so damage to optical fiber on one side of the ring does not damage fiber on the other side of the ring.




The diagram below shows the topology used to validate this design. Only one of the SAN distance extension fabrics, Fabric C, is shown, but best practice is to use two fabrics and to route them via separate paths and service providers for high availability. Several options are shown for the optical transport network and are described in more detail in this section.



   SAN Distance Extension Topology


There are a number of options for the optical transport network as discussed earlier and as shown in the figure above. Any of them provide a viable solution. Choosing which is appropriate depends on a number of variables including availability (service provider or privately owned), cost, total distance and scalability.



Many terms are misused or used in a generic way, so care is needed when evaluating optical transport technologies. Also, products can be configured and used in a variety of different ways, some of which are discussed in the following sections. Take care that there is no confusion or uncertainty as to the type of equipment being used, its true capabilities, and its ability to interoperate with Brocade products. See the most recent Brocade Compatibility Matrix in the References below.


In addition, if connectivity is provided by a service provider, it is important to understand all of the devices in the network, including any customer premises equipment, and where the line of demarcation is for service and support.




Base Design

The base design uses the Core Backbone block of the SAN Core/Edge Template. The edge blocks in this template are not affected.



   SAN Core/Edge Template


Core Backbone Block


The Core Backbone block is used to connect storage arrays and provide ISL links to optical transport networks. Common products used for the core are the DCX and DCX 8510 Backbone switches. These chassis switches provide port cards supporting high density 8 Gbps and 16 Gbps Fibre Channel using the Brocade Fabric Operating System (FOS). FOS provides many advanced features including virtual fabrics. This feature enables multiple logically isolated SAN switches, called logical switches, to be configured on a single physical switch. Multiple logical switches and physical switches can be connected together into independent SAN fabrics. The base design uses the virtual fabric feature to create a logical switch dedicated to array replication traffic between data centers.
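As a sketch of how a dedicated logical switch for replication traffic can be created with the FOS Virtual Fabrics feature (the fabric ID, slot, and port numbers below are illustrative only; verify the exact syntax in the Fabric OS Command Reference for your release):

```shell
# Enable Virtual Fabrics on the chassis (disruptive; requires a reboot).
fosconfig --enable vf

# Create a logical switch with fabric ID 10, dedicated to array
# replication traffic (FID 10 is an example only).
lscfg --create 10

# Move the replication array ports and the long-distance E_Ports
# into the new logical switch (slot 2, ports 30-35 are examples).
lscfg --config 10 -slot 2 -port 30-35
```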


Block Diagram



   Core Backbone Block with ISL Distance Extension


Typically, when using SAN distance extension with ISLs, the distance between data centers is short enough (120 km or less) to keep latency low enough for synchronous replication. This enables active/active storage replication, so applications at each data center have very small RPO/RTO metrics. It is also a cost-effective design for the highest availability, since the infrastructure at each site is in use rather than sitting idle waiting for an event requiring fail-over to the alternate site.
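To illustrate why distance is bounded for synchronous replication, consider that light in optical fiber propagates at roughly 5 microseconds per kilometer one way, and every synchronous write must wait for at least one round trip. A minimal sketch (the 5 µs/km figure is an approximation, not a Brocade specification):

```python
# Approximate round-trip fiber propagation delay for a synchronous
# replication link. Light in fiber travels at roughly 2/3 the speed
# of light in a vacuum, i.e. about 5 microseconds per km one way.

US_PER_KM = 5  # one-way propagation delay, microseconds per km

def round_trip_delay_us(distance_km: float) -> float:
    """Round-trip fiber propagation delay in microseconds."""
    return 2 * distance_km * US_PER_KM

# At 120 km, each synchronous write waits about 1.2 ms for the
# remote acknowledgment, on top of array service time.
print(round_trip_delay_us(120))  # -> 1200
```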


Configuration requirements for Brocade SAN switches may vary depending on the type of distance extension device used. The following information provides generic guidelines for determining how a switch should be configured. Many extension products, including WDM, can support a number of technologies within the same chassis through the use of removable blades or line cards. It is important to take note of the model and type of line card connected to the FC switch in order to determine the proper configuration, as incorrect configuration may result in errors or link failure.


Extended Fabrics Feature

Brocade Extended Fabrics is a feature that allocates additional buffering to E_Ports for increased performance over long distances. For some distances, Extended Fabrics requires a software license.


For FOS releases earlier than FOS 6.1.1, the VC Translation Link Init option is set to (1) on long-distance links configured for VC_RDY flow control, which is the default flow control setting. For an ISL configured for R_RDY mode, it should be set to (0).


For FOS release 6.1.1 and higher, this parameter is used to specify if the long distance link should use Idle or ARB primitives.  If VC Translation Link Init is enabled (1), the link uses ARB primitives. If it is disabled (0), the link will use Idle primitives.



QoS and credit recovery are not supported on links that use idle primitives. These options must be disabled.


The Desired Distance is a required parameter to configure a port as an LS or LD mode link. For an LD mode link, the desired distance is used as the upper limit of the link distance when calculating buffer availability for other ports in the same port group. When the distance measured by the switch round-trip timer is greater than the desired distance, the desired distance is used to allocate the buffers. The port then operates in degraded mode instead of being disabled due to insufficient buffers. For an LS mode link, the actual distance is not measured; instead, the desired distance is used to calculate the buffers required for the port.


Fibre Channel Buffer Credits

SAN distance extension with ISLs requires products that actively participate in FC buffer-to-buffer credit (BB credit) management to extend the distance between switches further than that which is supportable by internal ASIC buffering. Many Brocade products have sufficient BB credits for distances of 1,000 km or more at full FC link rate.
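A common rule of thumb for sizing BB credits is that one full-size FC frame occupies about 1 km of fiber for every 2 Gbps of link speed, so the credits needed to keep the link full are roughly distance (km) × speed (Gbps) / 2. A minimal sketch of that estimate (an approximation for full-size 2112-byte-payload frames; smaller frames need proportionally more credits):

```python
# Rough estimate of the buffer-to-buffer credits needed to sustain
# full bandwidth on a long-distance ISL with full-size FC frames.
# Rule of thumb: credits ~= distance_km * speed_gbps / 2, because a
# full frame spans ~1 km of fiber per 2 Gbps of link speed.

import math

def bb_credits_needed(distance_km: float, speed_gbps: float) -> int:
    """Minimum BB credits for full bandwidth (rule-of-thumb estimate)."""
    return math.ceil(distance_km * speed_gbps / 2)

# A 100 km link at 8 Gbps needs roughly 400 credits; a 1,000 km
# link at 16 Gbps needs roughly 8,000.
print(bb_credits_needed(100, 8))    # -> 400
print(bb_credits_needed(1000, 16))  # -> 8000
```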


Choosing the Extended Fabric Mode


The following modes are available for the Extended Fabric feature.

  • Static Mode (L0) - L0 is the default mode for an E_Port and is used for interconnecting switches in the same data center. If the distance between two switches is greater than several kilometers, performance may be degraded because of insufficient buffer credits.
  • Static Mode (LE) - LE supports distances up to 10 km and does not require an Extended Fabrics license. Enough buffers are allocated to the port to support full bandwidth up to 10 km regardless of port speed.
  • Dynamic Mode (LD) - LD provides dynamic distance discovery. A round-trip timer determines the latency between the two connected switches, and the switch automatically allocates the number of buffer credits needed to sustain full bandwidth on the ISL with full-size 2112-byte FC frames. The switch never allocates more buffers than required for the maximum desired distance specified by the administrator.
  • Static Long-Distance Mode (LS) - LS is used for static allocation of buffer credits. The administrator specifies the distance of the ISL in kilometers, and the switch allocates the correct number of full-size frame buffers based on the currently configured port speed.


             Extended Fabrics Mode vs Supported Distance



FC over Dark Fiber

Connecting FC switches directly to dark fiber requires the use of long-wavelength SFP transceivers. An Extended Fabrics license may be needed to allocate sufficient buffers to the long-distance E_Ports. In addition, where multiple E_Ports are required, it may be necessary to spread them across different port groups to maximize the number of buffer credits available.


Coarse and Dense WDM Devices

The optics requirements depend on the nature of the WDM device. The concerns for FC over dark fiber also apply here; however, because CWDM and DWDM products are protocol and bit-rate transparent, FOS configuration is otherwise identical to connecting FC switches directly to dark fiber. You should still hard set the FC port speed to the desired rate to ensure that the WDM transponders can lock onto the bit rate of the ISL.
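A sketch of hard setting the port speed with the FOS portcfgspeed command (slot 2, port 35 are illustrative only; verify the syntax for your FOS release in the Fabric OS Command Reference):

```shell
# Lock the E_Port at a fixed 8 Gbps, rather than autonegotiating,
# so the WDM transponder can lock onto the ISL bit rate.
portcfgspeed 2/35 8

# Display the port configuration to verify the fixed speed setting.
portcfgshow 2/35
```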


TDM and FC-SONET/SDH Devices

The optics requirements depend on the nature of the TDM or FC-SONET/SDH device. In addition to the concerns for FC over dark fiber, a number of other factors may apply. TDM and FC-SONET/SDH devices that do not actively participate in buffer credit management may also require the Brocade Extended Fabrics license for optimal configuration. It is usually necessary to configure the E_Port to operate with Idle rather than ARB primitives to maintain synchronization. Some devices may also require ports to be configured in G_Port mode so that loop initialization is not attempted. Note that credit recovery and QoS are not activated on the ISL in these configurations.
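These settings can be sketched with the following FOS commands (slot 2, port 35 are illustrative only; command availability and fill-word modes vary by platform and FOS release, so consult the Fabric OS Command Reference and the device vendor's documentation):

```shell
# Disable QoS on the long-distance port, since links using Idle
# fill words do not support it.
portcfgqos --disable 2/35

# Lock the port as a G_Port so loop initialization is not attempted.
portcfggport 2/35 1

# On 8 Gbps platforms, select Idle primitives as the fill word
# (mode 0 is Idle/Idle on many platforms; verify for your release).
portcfgfillword 2/35 0
```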


For FOS versions prior to 6.1.1, the port should be configured for R_RDY mode in order to interoperate with most TDM or SONET devices. The following example shows how a port can be configured for a 100 km link.
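A sketch of such a configuration (slot 2, port 35 are illustrative only; the parameter order of portcfglongdistance varies by FOS release, so verify against the Fabric OS Command Reference before use):

```shell
# Enable R_RDY flow control on the ISL, as required by most
# TDM/SONET devices that terminate BB credits (1 = R_RDY mode).
portcfgislmode 2/35 1

# Configure static long-distance (LS) mode with a desired distance
# of 100 km. The third argument (0) disables VC Translation Link
# Init, as required for R_RDY mode.
portcfglongdistance 2/35 LS 0 100

# Display the port configuration to confirm the settings.
portcfgshow 2/35
```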

Starting with FOS release 6.1.1, there is an option for Extended Fabrics to operate in VC_RDY mode using Idle as the fill word. This allows Brocade Frame Trunking to be used with TDM and SONET/SDH devices, assuming total link latency does not exceed the maximum supported for Frame Trunking.


Optical Transport Network Options and FC Switch Configuration

The following table describes the recommended guidelines for configuring a Brocade switch for various optical transport options. Vendor caveats may apply, so contact the Brocade partner for full details about their products and configurations supported with Brocade switches.


The table compares the applicable switch settings for each optical transport option: Extended Fabrics, Idle (not ARB) primitives, R_RDY mode, hard-set port speed, and G_Port mode.



   Optical Transport Network Options with FC Switch Configuration



Key Features



  • Virtual Fabrics - A cost-effective way to use a single chassis to provide multiple logical switches, with only as many ports dedicated to array replication traffic as needed.
  • Large BB_Credit Pool Allocations to E_Ports - Supports ISL connections of over 1,000 km at full link rate, providing high bandwidth over extended distances.
  • Extended Fabrics License - Enables more BB_Credits for long distances, only when required.
  • Partner WDM Optical Transport - Ensures product and protocol compatibility between Brocade and partner products.






The following lists typical components that can be used in the design templates for this solution.


Core Backbone Components



  • Brocade DCX Switch - A modular Backbone switch with an internal 8 Gbps switching fabric, available in 4- and 8-slot chassis configurations.
  • Brocade DCX 8510 Switch - A modular Backbone switch with an internal 16 Gbps switching fabric, available in 4- and 8-slot chassis configurations.
  • FOS Extended Fabrics License - An optional license used for longer distances to enable a larger BB_Credit pool for extended-distance E_Ports.
  • FOS Adaptive Networking License - An optional license required for QoS support on ISLs.
  • CWDM SFP Optics - See the Brocade Compatibility Matrix for a list of supported CWDM SFP optics.


Partner Components



Partner WDM Optical Transport

See the Network Solutions (including MAN/WAN) section of the latest Brocade Compatibility Matrix for a list of supported partner products.