

Data Center Solution-Design Guide: Using CloudPlex for Scalable Virtualization

Published 10-11-2012 09:33 AM; edited 04-17-2014 01:38 AM by pmadduru


Synopsis: Design best practices for Brocade VCS Fabric with Brocade VDX Switches for building a modular, scalable virtual infrastructure suitable for cloud computing.






CloudPlex is a scale-out architecture to simplify virtual data centers and cloud computing services. Virtualization removes the physical link between applications and the server, storage and network equipment. This improves utilization, simplifies disaster recovery and increases availability. Both traditional client/server applications as well as the growing number of Web 2.0 applications with execution modules that dynamically bind to their resources at run time benefit from dynamic resource allocation provided by the CloudPlex architecture. As shown in the CloudPlex architecture diagram, CloudPlex creates virtual pools of compute, network and storage resources using an open architecture with server and storage choice.


Clustering is fundamental to the CloudPlex architecture. Clustering is an essential tool for cost-effective scalability with built-in high availability. It connects physical nodes together into a resource pool, simplifying how resources are allocated to applications and simplifying life-cycle management. High availability is achieved from multiple nodes and multiple paths between nodes. Scale-out achieves near-linear price/performance via parallelism and virtualization. Virtualization removes physical addresses and physical associations between applications and their resources, substituting virtual addresses so physical resources are no longer captive to a single application. All resources in the cluster are available to any application with no manual reconfiguration of the physical infrastructure. A cluster needs a network to interconnect its nodes and to connect applications to its resources. The network needs to be a high-speed transport layer with uniform bandwidth and latency between all cluster nodes, commonly known as a fabric.


An Ethernet fabric ensures resources are delivered uniformly from the cluster by keeping bandwidth and latency constant no matter which node(s) in a cluster is supplying them. It is isotropic in this respect. Brocade’s VCS Fabric technology combines an Ethernet fabric with clustering technology. VCS Fabric technology makes all equal cost layer 2 paths in the network available for transport with equal-cost multi-path (ECMP) forwarding at layer 2. The distributed control plane in a VCS Fabric provides high availability and resiliency with linear scalability and extensibility without traffic disruption. A VCS Fabric is ideal as the transport layer between cluster nodes and it uses the same scale-out architecture as server and storage clusters simplifying configuration, operations and management.
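Layer 2 ECMP keeps the frames of one flow in order by hashing a flow identifier to select one of the equal-cost paths, while different flows spread across all of them. A minimal sketch in Python; the path names and hashed fields are illustrative, not Brocade's actual hash inputs:

```python
import hashlib

def pick_path(flow, paths):
    """Hash the flow identifier so every frame of a flow takes the
    same path, while different flows spread across all paths."""
    key = "|".join(str(field) for field in flow).encode()
    digest = hashlib.sha256(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

# Four equal-cost inter-switch links between a pair of fabric switches.
paths = ["isl-1", "isl-2", "isl-3", "isl-4"]
flow = ("00:11:22:33:44:55", "66:77:88:99:aa:bb", 10)  # src MAC, dst MAC, VLAN

chosen = pick_path(flow, paths)
```

Because the selection is a pure function of the flow identifier, a flow stays on one path (preserving frame order) while the aggregate load balances across all equal-cost links.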


Purpose of this Document

This document provides design guidance for Brocade's CloudPlex architecture for scalable virtualization and cloud computing. CloudPlex provides a wide range of scalability: from a single rack of virtualized servers and storage through multiple data centers with full data center virtualization. The scalability of CloudPlex is demonstrated via a series of use cases:


  • Single rack
  • Multirack
  • Multifabric
  • Virtual Data Center
  • Dual Active/Active Data Centers

CloudPlex is a modular architecture based on Brocade's VCS Fabric technology available in the VDX family of data center switches. It includes the Brocade MLX router series and the ADX Application Delivery Switch series to achieve larger configurations, including the dual active/active data center use case.



This document is intended for data center architects and network designers responsible for deployment of virtual data centers and private cloud computing architectures.



This design guide provides guidance and recommendations for best practices to use in designing a modular, scalable virtual infrastructure suitable for cloud computing.


Related Documents

The following Brocade publications provide information about the Brocade Data Center Infrastructure Base Reference Architecture and the features and capabilities of the NOS, NetIron MLX and ServerIron ADX platforms. Any Brocade release notes that have been published for NOS, NetIron MLX and ServerIron ADX should be reviewed as well.




About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.


Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.


Key Contributors

The content in this guide was provided by the following key contributors.

  • Lead Architect: Marcus Thordal, Strategic Solutions Lab


Document History

Date                  Version        Description

2012-10-23         1.0                Initial Version



Reference Architecture

A challenge for data center architects and designers is how to meet the mixed performance demands of a diverse set of business applications using virtual infrastructure. Disparate workloads, from business analytics with large data sets ("Big Data") to back office applications with high rates of transaction performance (e.g., CRM, ERP, financial OLTP systems), create a wide range of network performance requirements. Server virtualization clusters running tens of workloads per node and multiple hundreds of workloads per cluster can move any workload across server, storage and network resource pools as resource demands dictate. At scale, this requires a network providing uniform latency, use of all least-cost paths, high resiliency and configuration simplicity. Storage and network management have become much more complex as a result, and data centers commonly include disparate types of storage, storage protocols and network topologies. On top of this, cluster technology, the key to scale-out solutions for compute and storage resources, places much more stringent demands on network bandwidth, latency, scalability and availability, often exceeding the capabilities of static hierarchical networks based on classic Ethernet. Network management with repeated entry of commands on multiple switches and ports is required to implement a policy change, adjust bandwidth in a link aggregation group (LAG) or trunk, or tune around a hot spot on a physical link in a LAG. This manual management and operation model fails to keep pace with a virtual data center, where workload migration changes network traffic flows, policies have to move when the workload migrates, and dynamic storage pools rapidly change the latency and bandwidth of storage traffic.


The solution requires a data center architecture that leverages the same design principles for servers, storage and networking: scale-out resource pools, automatic dynamic load balancing, and policy-defined configuration. In addition, an architecture based on modularity simplifies design and lowers the cost of scaling while reducing design and management complexity. This reduces both first cost and total cost of ownership, with rapid deployment of new applications and non-disruptive scale-up/scale-down of application resources as workloads vary over the application life-cycle.



The high-level reference architecture is shown below.



   CloudPlex Architecture



Resource Pools

Resource pools aggregate physical devices, removing the limitation of a single device being dedicated to an application. Instead of being bound to a specific physical server or storage port, the application uses a virtual container which can access server, network and storage resources anywhere within the physical resource pool.


Fabric Transport Layer

When applications connect to their resources, they expect no difference in resource availability or performance no matter where the physical resource resides in the pool. This means the transport has to have uniform latency, bandwidth and availability at scale. This is precisely what a fabric transport such as Brocade's VCS Fabric provides. A Brocade VCS Fabric is designed for uniform, low latency across the fabric, use of all least-cost paths in the fabric and full utilization of all bandwidth with Brocade ISL Trunks between switches. Links or switches are added or removed dynamically without disruption to traffic flowing elsewhere in the fabric, providing seamless scalability and uninterrupted operation.


Virtual Compute

Above the resource pools is the virtual compute block. This relies on server virtualization or a workload manager such as the one found in Hadoop to abstract the operating system and runtime environment from the physical hardware in a resource pool. A virtual compute block runs on standard x86 hardware so customers can choose their favorite hardware vendor. For virtualization software stacks, customers can choose their hypervisor, including VMware vSphere, Microsoft Hyper-V and Xen.


Virtual Network

Network virtualization abstracts the physical network into many logical networks. Each application, or application tier, can use its own network with dedicated resources (network addresses, bandwidth, latency, etc.). A virtual network block includes technologies that can virtualize the switch, its resources and the path used for forwarding frames. For layer 2 networks, multi-chassis trunking virtualizes switches, virtual LANs virtualize resources (addresses and bandwidth) and trunks virtualize the path. With Brocade VCS Fabric technology, switches are virtualized, acting as a single switch, Virtual LANs (VLANs) provide resource virtualization and Brocade ISL Trunks provide path virtualization.


Virtual Storage

Storage virtualization hides the physical location of storage blocks from the application, allowing blocks to be moved as needed to meet application requirements. Multiple tiers of storage are provided within the storage pool, covering a range of storage media including solid state disk, Fibre Channel disk, serial attached SCSI (SAS) disk and serial ATA (SATA) disk. The virtual storage block ensures data is stored on the correct tier to meet the performance requirements of the application. Customers can choose their preferred storage vendor (EMC, HDS, HP, IBM, NetApp) and any storage protocol (Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI and NAS) to meet their individual requirements.



Management of a CloudPlex stack includes physical and virtual management as well as orchestration, monitoring and alerting. Commonly, the virtualization software vendor includes a management platform. Today, CloudPlex works with VMware vCenter and Microsoft System Center.  Brocade is also actively participating in OpenStack which is a management platform designed for open management of virtualization and cloud computing resources.


CloudPlex and the Future Software Defined Datacenter

Uniform use of cluster architectures (compute, storage and fabrics) enables transitioning to a datacenter where the application developer can specify the level of service needed from the resources of the infrastructure without explicit reference to where the resource comes from. This emerging architecture has been called a software defined datacenter (see figure below). At run time, resources are allocated from the resource pools to meet the specifications of the application SLA. To scale, more instances of a module are started in new virtual machines. In this model resource allocation occurs at run time and de-allocation occurs by shutting down unneeded virtual machines. Resource allocation is automatic and built into the application development process. Performance, capacity and life-cycle management of the physical infrastructure is accomplished non-disruptively by adding/removing physical components (server, storage, network switch) to/from the resource pools.


Network Virtualization Innovation

The software defined datacenter requires new network virtualization tools. Today, VLANs are used to create logical isolation of traffic over shared physical links between network switches and edge devices (servers, storage, load balancers, etc.). However, VLANs have limited scalability. To address this, virtual eXtensible LAN (VXLAN) provides a way to define large quantities of virtual networks (more than 16 million) on top of the physical transport network. VXLAN scalability also relies on the proven scalability of routed IP networks rather than the limited scalability of Ethernet broadcast domains which underpin VLANs. Brocade is demonstrating a VXLAN gateway service for the Brocade ADX Application Delivery Switch family. This allows existing physical data center networks to seamlessly work with virtualized networks.
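The scalability difference comes down to identifier width: a VLAN tag carries a 12-bit ID, while the VXLAN header carries a 24-bit VXLAN Network Identifier (VNI). A small sketch of the arithmetic and of packing a VNI into the 8-byte VXLAN header as laid out in the VXLAN draft (later RFC 7348); this is an illustration, not vendor code:

```python
VLAN_ID_BITS = 12    # VLAN tag: 4,096 possible segments
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier: over 16 million

def encode_vxlan_header(vni: int) -> bytes:
    """Pack a 24-bit VNI into the 8-byte VXLAN header: a flags byte
    with the I bit (0x08) set, 3 reserved bytes, the 3-byte VNI,
    and a final reserved byte."""
    if not 0 <= vni < 2 ** VXLAN_VNI_BITS:
        raise ValueError("VNI must fit in 24 bits")
    return bytes([0x08, 0, 0, 0]) + vni.to_bytes(3, "big") + b"\x00"

print(2 ** VLAN_ID_BITS)    # 4096
print(2 ** VXLAN_VNI_BITS)  # 16777216
```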



   A Software Defined Datacenter Architecture


OpenFlow addresses the need to separate the control and data planes of the network so that programmatic control of the forwarding tables in routers and switches is an option. Brocade supports OpenFlow in the Brocade MLX router and announced plans to add an OpenFlow API to the Brocade VDX series of switches that power the VCS Fabric technology.


Open Orchestration and Management

OpenStack is an open platform for orchestration and management of data center infrastructure. Brocade is demonstrating OpenStack with the Brocade VDX family of switches.


In summary, for the software defined datacenter, scale-out architecture is beneficial even if you aren't Amazon or Google. Simplicity of operation and linear scaling of cost with performance are what every IT group wants. Brocade's CloudPlex architecture brings these benefits today, even at a fraction of a rack, and can scale all the way to multiple datacenters with continental separations, with a roadmap that includes VXLAN, OpenFlow and OpenStack.



CloudPlex partners include server and storage partners and virtualization software partners. Working with Brocade, these companies integrate their respective products and technology leveraging the CloudPlex architecture.




EMC VSPEX Proven Infrastructure

Accelerate your journey to the cloud with EMC VSPEX Proven Infrastructure. VSPEX is a set of complete virtualization solutions, proven by EMC and delivered to you by your trusted reseller. Designed for flexibility and validated to ensure interoperability and fast deployment, VSPEX gives you the power to choose the technology in your solution while removing the complexity and risk that typically comes with designing, integrating and deploying a best-of-breed solution. With VSPEX, private cloud computing is more accessible than ever.


Hitachi Data Systems Unified Compute Platform (UCP)

Hitachi Unified Compute Platform (UCP) is a family of completely integrated and flexible reference solutions. Each UCP solution has been specifically configured for immediate deployment and to run top tier infrastructure applications without over-purchasing or provisioning unnecessary equipment. Each custom-built solution has its entire solution stack certified. No more compatibility issues or accusations by different vendors’ technical support.


Fujitsu Dynamic Infrastructures

With the Dynamic Infrastructures portfolio, Fujitsu created a unique and comprehensive offering of IT products, datacenter- and office solutions, Infrastructure-as-a-Service and Managed Infrastructure services. This complete offering enables customers to make the most beneficial choices for their overall enterprise IT Infrastructure architecture and to select the most effective way to leverage alternative sourcing and delivery models at any time and around the globe. That is why Fujitsu calls it: Dynamic Infrastructures.


Design Use Cases

This section provides CloudPlex use cases showing the broad range of scale, from a single rack to dual active/active datacenter configurations, available with the CloudPlex architecture.


Single Rack Template

This use case integrates the infrastructure into resource pools and makes them available to virtualization hypervisors and management platforms. For availability and resiliency, two top-of-rack (ToR) VDX 67xx switches are used, connected together to form a two-node VCS Fabric cluster. This configuration can start small, with 24- or 60-port fixed switch configurations using ports-on-demand to license only the ports needed. As the rack scales up by adding more servers and storage, more VDX ports can be licensed for cost-effective scale-up of the VCS Fabric.


Building Blocks

A single rack VCS Fabric configuration uses two ToR VDX switches configured in a two-node VCS Fabric.  For in-rack IP storage (FCoE, iSCSI and/or NAS), the storage nodes can be connected to the same VCS Fabric as the servers. This building block can be configured with Brocade VDX 6710 (1 GbE servers or storage) or VDX 6720 (10 GbE servers or storage).



   Single Rack Building Block



For Fibre Channel storage, an optional configuration with VDX 6730 ToR switches can be used for in-rack FCoE to an out-of-rack Fibre Channel storage pool. The VDX 6730 provides a gateway from FCoE to SAN storage as shown in the building block below. Servers use FCoE and the VDX 6730 converts the FCoE traffic into Fibre Channel traffic used with existing SAN storage.


   Single Rack Building Block with SAN Storage Pool



The template shows CloudPlex single rack building blocks. These blocks can be used as leaf nodes in the following multi-rack template to scale out the CloudPlex architecture across one or more rows of racks.



   CloudPlex Single Rack Template


Multiple Rack Template

This use case scales out the single rack use case extending the two-node ToR VCS Fabric into a multi-node fabric so the VCS Fabric extends across multiple racks.


Building Blocks

There are three blocks that can extend the fabric across multiple racks. Each uses a different topology.

  • Leaf/Spine
  • Collapsed Spine
  • Stacked

Leaf/Spine Blocks

A new building block, the spine, is used to scale-out the VCS Fabric across multiple racks.


   Spine Building Block for Leaf-Spine Topology


Each spine switch connects to leaf switches such as the two VDX switches in each rack. This topology provides multiple equal cost paths with uniform bandwidth and latency between all leaf switches and the spine switches as shown in the diagram below.


   Leaf/Spine Topology


Each path can be a Brocade ISL Trunk with up to 80 Gbps of bandwidth. The spine switches can be either modular VDX 8770 switches (as shown) or fixed VDX 6720 switches, depending on the diameter of the fabric. The fabric diameter is determined by the total number of connections between the spine and leaf switches and by the maximum allowed number of switches in the VCS Fabric (see the release notes for the supported maximum number of switches per VCS Fabric). The spine switches are usually placed at the middle of row (MoR) or end of row (EoR).
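The sizing trade-off behind these choices is the over-subscription ratio: server-facing bandwidth divided by uplink bandwidth on each leaf. A small sketch of the arithmetic with hypothetical port counts (not a sizing recommendation):

```python
def oversubscription(edge_ports, edge_gbps, uplink_trunks, trunk_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on a leaf switch."""
    return (edge_ports * edge_gbps) / (uplink_trunks * trunk_gbps)

# Hypothetical leaf: 48 x 10 GbE server ports, one 80 Gbps ISL trunk
# to each of two spine switches.
ratio = oversubscription(edge_ports=48, edge_gbps=10, uplink_trunks=2, trunk_gbps=80)
print(f"{ratio:.0f}:1")  # 3:1
```

Lowering the ratio (more or fatter trunks, fewer edge ports per leaf) buys more east/west headroom at higher cost per port.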


Collapsed Spine Block

One use for this block is when the CloudPlex deployment requires multiple racks but growth will be limited to less than the total number of 1 GbE and 10 GbE ports available in the VDX 8770 switch.



   Collapsed Spine Block


The blue lines can be 1 GbE or 10 GbE links to servers and storage nodes. With a maximum of 384 10 GbE ports per VDX 8770, multiple racks of servers and storage can be connected, depending on the density of the server and storage nodes, as shown below.
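The rack count this block supports is simple port arithmetic; the server density below is a hypothetical example, not a stated limit:

```python
def racks_supported(ports_per_chassis, chassis_count, servers_per_rack, links_per_server):
    """How many racks a collapsed spine can absorb before running out
    of 10 GbE ports."""
    total_ports = ports_per_chassis * chassis_count
    ports_per_rack = servers_per_rack * links_per_server
    return total_ports // ports_per_rack

# Hypothetical density: 40 dual-homed servers per rack, two VDX 8770 chassis.
print(racks_supported(384, 2, 40, 2))  # 9
```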


   Collapsed Spine Topology


Note that this block can be extended using leaf switches should growth eventually exceed the port count in the VDX 8770.


Stacked Block

This topology uses ToR VDX 67xx switches and extends the VCS Fabric by connecting additional VDX 67xx switches together using a stacking, or ring, topology.


   ToR Stacking Block and Topology


As shown, the layer 3 boundary is outside the VCS Fabric, so virtual machine migration between racks is easily supported. The diameter of this topology is limited by the total number of servers per rack, the total ports available in the VDX 6720 switch and the over-subscription ratio between racks and on the uplinks to the core or aggregation block.



Either VDX 6720 fixed switches can be used for the spine, or the VDX 8770 chassis switch can be used depending on the connectivity and bandwidth requirements. Dual spine switches are used for higher availability, but single spine configurations could be used. The spine switches can be placed in middle-of-row (MoR) or end-of-row (EoR) racks.


The layer 2/layer 3 boundary can be within the VCS Fabric or outside the fabric. If inside the fabric, VRRP or VRRP-E can be configured on multiple spine switches (see the NOS release notes for the limit per fabric) to scale up traffic flowing to the core.


VDX 6730 switches can be used as the leaf switch in racks that use converged networks to access existing SAN Storage.



The following template shows CloudPlex at multi-rack scale. Note that the dotted red box labeled "VCS Fabric" is a single VCS Fabric. The leaf blocks use the single rack block and extend it by using Brocade ISL Trunks to connect each ToR switch to each spine VDX 8770 switch.


When the single rack block has scaled up to the limit of its available server, storage and network resources, CloudPlex can scale-out by adding additional racks of resources to the resource pools. This provides cost-effective, elastic scalability in a modular way.



   CloudPlex Multiple Rack Template



Multiple Fabric Template

This use case applies when multiple VCS Fabrics are deployed. Release notes for NOS define the maximum tested limits for a VCS Fabric, including the total number of switches per fabric. It may be desirable to limit the diameter of a single fabric to fewer switches than the maximum. Often, there are customer environmental factors that limit the scale of a single fabric, for example, the desire to give application tiers a separate fabric for easier administration and assurance of meeting SLAs. Whatever the reason, in larger environments, multiple fabrics will exist and traffic needs to be forwarded between fabrics efficiently without going through the core.


Building Blocks

This block extends the scalability of the previous Multi-Rack template to multiple VCS Fabrics via layer 3 routing. This block can be used for any of the following scenarios:


  • Intra-VCS Fabric routing of VLAN traffic
  • Inter-VCS Fabric routing between separate VCS Fabrics
  • Routing to an IP Services block (ADX Application Delivery Switch and/or IDS/IPS)

Intra-VCS Fabric Routing of VLAN Traffic

Layer 3 routing can be turned on in ToR switches for routing VLAN traffic within the server rack as shown below. Consider a three-tier web services configuration with web, application and database services. These services can run on any virtual machine in the rack and use VLANs for logical traffic isolation. For example, VLANs 10 and 20 are used for application and database traffic, which needs to be routed between the application VMs and the database VMs. The VDX ToR switches are configured for layer 3 with a VE port associated with each VLAN. This allows layer 2 switching at the top-of-rack switch to apply to any traffic within a VLAN and layer 3 routing at the top-of-rack switch for any inter-VLAN traffic within the rack. This localizes routing, eliminating traffic on uplinks to the aggregation layer or spine, improving latency and reducing the opportunity for congestion on the uplinks.



   Intra-VCS Fabric VLAN Routing


Inter-VCS Fabric Routing

When routing between VCS Fabrics, an aggregation VCS Fabric can be used to interconnect multiple leaf VCS Fabrics as shown below. With VDX 8770 switches, the uplinks between the spine and leaf fabrics can use 40 GbE links and can aggregate these in vLAG trunks providing low over-subscription ratios and high availability and resiliency. This topology is similar to classic aggregation / access layer topologies, but with a VCS Fabric, each fabric operates as if it’s a single large distributed switch simplifying configuration and operations.


The ToR VDX 6720/6710 leaf switches can be configured for VLAN routing while the fabric spine VDX 8770 switches forward traffic to the aggregation fabric VDX 8770 switches. The aggregation fabric routes traffic between the spine fabrics and the data center core. This configuration localizes traffic to improve performance and lower latency. For example, within a leaf fabric, layer 2 forwarding is used for east/west traffic within that fabric. As required, routing between VLANs within the rack happens at the ToR VDX switch, keeping this traffic within the rack and off the Brocade ISL Trunks between the spine and leaf switches. Client traffic that exits the fabric is routed through the VDX 8770 spine switches, which use VRRP-E to provide high availability, resiliency and full use of the uplink bandwidth of both VDX 8770 switches. The aggregation VCS Fabric routes traffic between leaf fabrics, to the IP Services block and to the core router.


Notice that the architecture uses a leaf/spine topology within the leaf VCS Fabrics and also between leaf fabrics and the aggregation fabric. The leaf spine topology provides cost-effective scalability and availability with consistent latency which is one reason it is used repeatedly in the CloudPlex architecture.



   Inter-VCS Fabric Routing


Routing to an IP Services Block

The spine VCS Fabric is a convenient place to connect to an IP Services block. An IP Services block includes load balancing, intrusion detection (IDS) and intrusion prevention (IPS) services. Traffic flows between clients and servers are transparently rerouted through the IP Services block. The Brocade ADX Application Delivery Switch provides load balancing, SSL termination and has a plug-in for VMware vCenter to coordinate VM creation / deletion based on client traffic requirements. An IP Services block can be used with multiple VCS Fabrics as shown below, or IP services can be deployed so there is one IP service block per fabric depending on scalability and management requirements.


   VCS Inter-Fabric Routing with IP Services Block



The IP Services block is shown connected to the spine fabric. An alternative configuration would use one IP Services block per leaf fabric. This could simplify configuration of IP services, since the scope of the IP services is limited to a single VCS Fabric. It also reduces the workload on the components of the IP Services block, which would improve scalability of the services and avoid potential bottlenecks by placing the IP service resources closer to the application stack consuming those services.


The ADX includes the Application Resource Broker (ARB) plug-in. ARB monitors network performance and resource utilization against SLA thresholds. If performance of a tier, such as the application tier, exceeds the threshold for response time, ARB signals the virtual resource orchestration function, for example, VMware vCenter, to create a new VM. When the VM is operational, vCenter tells ARB the new VM's IP address and ARB adds it to the ADX list of load balancing resources for the application tier. The ADX can then rebalance incoming connections to the application tier to lower response time.
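The feedback loop just described can be sketched as threshold-based scale-out. This is an illustration of the control flow only, not the ARB or vCenter API; the addresses and SLA value are hypothetical:

```python
def rebalance(pool, response_times_ms, sla_ms, create_vm):
    """If the tier's average response time breaches the SLA threshold,
    ask the orchestrator for a new VM and add its address to the
    load-balancing pool."""
    average = sum(response_times_ms) / len(response_times_ms)
    if average > sla_ms:
        pool.append(create_vm())  # orchestrator returns the new VM's IP
    return pool

pool = ["10.0.0.11", "10.0.0.12"]
pool = rebalance(pool, [240, 260, 250], sla_ms=200, create_vm=lambda: "10.0.0.13")
print(pool)  # ['10.0.0.11', '10.0.0.12', '10.0.0.13']
```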



The following template shows CloudPlex at multi-fabric scale. A non-VCS Fabric building block, the IP Services block with the ADX Application Delivery Switch, is easily added to the template.



   CloudPlex Multiple Fabric Template


Virtual Datacenter Template

This use case connects the datacenter core to the CloudPlex VCS Fabric templates. The datacenter core routes traffic between the datacenter network, the internet and the WAN. It could also connect to a campus/LAN environment as is commonly the case in healthcare, education, banking and retail.


Building Blocks

The Brocade MLX Router is used for the Datacenter Core. It uses OSPF routing to connect to the spine VCS Fabric of VDX 8770 switches. BGP is used for connections to the internet and/or WAN.



   Datacenter Core Block



The MLX can be configured with virtual routing and forwarding (VRF) to create a multi-tenant virtual data center configuration.


The MLX also provides the OpenFlow API with hybrid configuration whereby only certain traffic on a port has access to the OpenFlow controller while the remaining traffic does not. An OpenFlow controller can directly update the forwarding information tables of the router so customized forwarding can be applied to specific traffic flows without protocol or router reconfiguration.
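The hybrid model can be pictured as a flow table consulted before the conventional pipeline: matched traffic gets the controller-installed action, everything else falls through. A sketch with hypothetical rules (not the OpenFlow wire protocol or MLX configuration):

```python
# Hypothetical flow rules: (match predicate, action). First match wins;
# unmatched traffic falls through to the router's normal L2/L3
# forwarding pipeline, which is the essence of hybrid mode.
flow_table = [
    (lambda pkt: pkt["dst_ip"] == "192.0.2.10", "output:port7"),
    (lambda pkt: pkt["tcp_dport"] == 80, "output:port3"),
]

def lookup(pkt):
    for match, action in flow_table:
        if match(pkt):
            return action
    return "NORMAL"  # hand the packet back to conventional forwarding

print(lookup({"dst_ip": "198.51.100.5", "tcp_dport": 80}))   # output:port3
print(lookup({"dst_ip": "198.51.100.5", "tcp_dport": 443}))  # NORMAL
```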



The following template shows Cloudplex at virtual datacenter scale.



   CloudPlex Virtual Datacenter


Global Data Centers Template

Virtualization can be used in multiple data centers to improve resource utilization and simplify disaster recovery using an active/active architecture. This template extends the virtual data center template creating a dual active/active data center template. It provides active/active replication of application state and storage between two data centers. It supports live migration of running applications between servers in a stretched cluster, recovery of applications after an unplanned outage and rerouting of client connections to the lowest latency site with global server load balancing.


Building Blocks

Starting with the preceding CloudPlex Virtual Datacenter template, some of the building blocks are extended.


Datacenter Interconnect Block

The Datacenter Core Routing block is modified by adding MPLS Virtual Private LAN Service (VPLS) via a Brocade CER Router, creating a Datacenter Interconnect block. This block provides layer 2 connectivity over the WAN between data centers for stretched server clusters. The stretched cluster supports live virtual machine migration between data centers.



   Data Center Interconnect Block


Global Server Load Balancing Block

An IP Services block is added with Global Server Load Balancing (GSLB). The Brocade ADX Application Delivery Switch, when configured with GSLB, adds intelligence to authoritative Domain Name System (DNS) servers by serving as a proxy to these servers and providing optimal IP addresses to the querying clients. As a DNS proxy, the GSLB ServerIron ADX evaluates the IP addresses in the DNS replies from the authoritative DNS server for which the ServerIron ADX is a proxy and places the "best" host address for the client at the top of the DNS response.



   IP Services, GSLB Block


GSLB provides the following advantages:

  • No connection delay
  • Client geographic awareness based on DNS request origination
  • Distributed site performance awareness
  • Fair site selection
  • Statistical site performance measurements that minimize impact of traffic spikes
  • Best performing sites get fair proportion of traffic but are not overwhelmed
  • Protection against "best" site failure
  • Straight-forward configuration
  • All IP protocols are supported

In standard DNS, when a client wants to connect to a host and has the host name but not the IP address, the client can send a lookup request to its local DNS server. The DNS server checks its local database and, if the database contains an Address record for the requested host name, the DNS server sends the IP address for the host name back to the client. The client can then access the host.


If the local DNS server does not have an address record for the requested server, the local DNS server makes a recursive query. When a request reaches an authoritative DNS server, that DNS server responds to this DNS query. The client’s local DNS server then sends the reply to the client. The client now can access the requested host.
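The local resolver's behavior described above can be sketched in a few lines of Python. This is a simplified illustration, not real DNS code: the `local_cache` and `authoritative` dictionaries stand in for the resolver's local database and the recursive query to the authoritative server.

```python
def resolve(hostname, local_cache, authoritative):
    """Simulate a local DNS server: answer from the local database if
    possible, otherwise recurse to the authoritative server and cache
    the answer for subsequent queries."""
    if hostname in local_cache:
        return local_cache[hostname]      # local database hit
    answer = authoritative[hostname]      # recursive query upstream
    local_cache[hostname] = answer        # remember for next time
    return answer
```

The first lookup for a name triggers the recursive query; later lookups for the same name are answered locally, which is why DNS-based load balancing decisions persist for the lifetime of the cached record.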


With the introduction of redundant servers, a domain name can reside at multiple sites, with different IP addresses. When this is the case, the authoritative DNS server for the domain sends multiple IP addresses in its replies to DNS queries. To provide rudimentary load sharing for the IP addresses for domains, many DNS servers use a simple round robin algorithm to rotate the list of addresses in a given domain for each DNS query. Thus, the address that was first in the list in the last reply sent by the DNS server is the last in the list in the next reply sent by the DNS server.
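The round-robin rotation described above can be illustrated with a short simulation. This is a sketch of the generic DNS behavior, not any particular server's implementation: each reply returns the full address list, then rotates it so the previous head of the list moves to the back.

```python
from collections import deque

class RoundRobinDNS:
    """Simulate a DNS server rotating the address list for a domain
    on each query (simple round-robin load sharing)."""

    def __init__(self, addresses):
        self._addrs = deque(addresses)

    def query(self):
        reply = list(self._addrs)
        self._addrs.rotate(-1)  # head of this reply becomes last in the next
        return reply
```

Three successive queries against addresses A, B, C return [A, B, C], then [B, C, A], then [C, A, B], so no single site stays at the top of every reply.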


This mechanism can help ensure that a single site for the host does not receive all the requests for the host. However, this mechanism does not provide the host address that is “best” for the client. The best address for the client is the one that has the highest proximity to the client, in terms of being the closest topologically, or responding the most quickly, and so on. Moreover, if a site is down, the simple round robin mechanism used by the DNS server cannot tell that the site is down and still sends that site’s host address on the top of the list. Thus, the client receives an address for a site that is not available and cannot access the requested host.

The Brocade ADX GSLB feature solves this problem by intelligently using health checks and other methods to assess the availability and responsiveness of the host sites in the DNS reply and, if necessary, exchanging the address at the top of the list with another address selected from the list. GSLB ensures that a client always receives a DNS reply for a host site that is available and is the best choice among the available hosts.
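The reordering idea can be sketched as follows. This is a simplified illustration of the concept, not the ADX selection algorithm: the `health` and `rtt_ms` inputs stand in for the results of the proxy's health checks and site-performance measurements, and "best" here is just the healthy address with the lowest measured response time.

```python
def gslb_reorder(addresses, health, rtt_ms):
    """Move the 'best' healthy address to the top of a DNS reply.

    addresses -- address list from the authoritative server's reply
    health    -- dict addr -> bool, result of site health checks
    rtt_ms    -- dict addr -> measured response time (illustrative metric)
    """
    healthy = [a for a in addresses if health.get(a, False)]
    if not healthy:
        return addresses  # no known-good site; pass the reply through
    best = min(healthy, key=lambda a: rtt_ms.get(a, float("inf")))
    return [best] + [a for a in addresses if a != best]
```

Unlike plain round-robin, a site that fails its health check can never end up at the top of the reply, so clients are not handed the address of an unavailable site.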


SAN Storage Replication Block

For storage replication, iSCSI or NAS storage pools rely on synchronous replication provided by the storage arrays to ensure the storage is synchronized at both sites. For Fibre Channel storage pools, however, adding a SAN Storage Replication block ensures efficient block replication over the WAN. This block optimizes array replication over the WAN, reducing latency and bandwidth requirements while encrypting data for security. The block shown below assumes the SAN fabric is a core/edge topology with the Brocade DCX Backbone switch at the core. The Brocade FX8-24 Extension blade uses Fibre Channel over IP (FCIP) to send storage blocks over the WAN and encrypts the traffic for security.



   SAN Core with Distance Extension Block (click to enlarge)



An alternative block for SAN distance extension could use the Brocade 7800 Extension Switches rather than the FX8-24 Extension blades in the DCX Backbone switches. The 7800 Extension Switch connects to the Core Router, which forwards the FCIP traffic over the WAN.


The ADX Application Delivery Switch in the IP Services SLB-Firewall-IDS/IPS module can be configured for multi-tenancy, creating logical partitions that virtualize ADX resources and allocate them to specific virtual networks (VLANs). This reduces the amount of equipment and increases utilization, lowering both acquisition cost and operating cost.




Datacenter Interconnect Template

The diagram below shows the Datacenter Interconnect template and its building blocks.



   Datacenter Interconnect Template (click to enlarge)


Global Datacenter Template

Below is the CloudPlex global datacenter template for a single datacenter. Each data center would have the same template deployed.



   CloudPlex Active/Active Datacenter Template (click to enlarge)



The Global Data Centers template can be used to design dual active/active data centers for a highly available private or public cloud. Latency is the limiting factor for this option: the distance between data centers cannot exceed the maximum latency supported by the hypervisor for live migration of running workloads and for synchronous replication of storage. A design guide for this option is available in the references.