Design & Build

Data Center Infrastructure-Base Reference Architecture


Synopsis: Provides Brocade's reference architecture for data center networks. It is modular, flexible, scalable and extensible with design blocks for SAN, NAS, Ethernet and IP networking.

 


 

Preface

Overview

 

The data center is experiencing a number of important transitions. Among the most influential are:

    • Server virtualization
    • Converged networks
    • Unified Communications
    • Virtual Desktop Infrastructure
    • Private cloud data centers
    • Web 2.0 and social media applications with Service Oriented Architectures (SOA)

Many customers have deployed or are evaluating several of these transitions. Combined with a multi-year world-wide recession that constricted IT spending, this is causing existing network architectures to come under scrutiny.

 

Purpose of This Document

This document defines the Brocade Solution Lab reference architecture for data center network infrastructure. The scope includes both IP and storage traffic in the data center. Along with the companion data center design and implementation guides published by the Solution Lab, it shows customers how to apply Brocade products and technologies to their data center networking challenges.

 

Audience

This document is of interest to network architects, designers and administrators. Enterprise architects and server and storage administrators will also find this content informative.

 

Objectives

This document provides a modular reference architecture to guide the design of data center networks. It is flexible, scalable and extensible and can be applied to different data center network requirements such as:

      • Traditional three-tier network
      • Brocade VCS Fabric™ network
      • Blade servers with embedded switches (Ethernet, Fibre Channel, Converged Network)
      • Fibre Channel SAN
      • iSCSI and FCoE SAN
      • Data center interconnects

See Related Documents for supporting publications.

 

Related Documents

None.

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was developed by the following key contributors.

        • Lead Architects: Marcus Thordal, Strategic Solutions Lab; Jeffrey Rametta, Strategic Solutions Lab

 

Document History

Date          Version   Description
2012-06-20    1.0       Initial Release
2012-07-12    1.0a      Edited graphics for better display in PDF view/print
2012-09-12    1.1       Updated for VDX 8770 and Brocade Network Operating System (NOS) 3.0
2012-10-25    1.2       Added SAN Core Backbone with Extended Distance ISL block. Added About Brocade and Principal Contributors sections. Update SAN Block graphics.

 

Reference Architecture

The data center network is constantly influenced by new applications and new server and storage technologies. The architecture has to support a diverse set of external requirements, for example, mainframe computing, x86 server virtualization, n-tier web and client/server applications, and connection to the campus LAN, Internet, remote data centers and public cloud computing.

 

With the growth of digital data, the consolidation of data centers, virtual servers hosting multiple applications on x86 server clusters, and the introduction of storage networks for iSCSI and Fibre Channel over Ethernet (FCoE), bandwidth requirements have moved from 1 GE to 10 GE at the server/storage edge, with 40 GE and 100 GE being considered in the aggregation and core routers. As shown in the reference architecture, Fibre Channel storage area networks (SANs) are commonly used as more applications become mission critical and application uptime requirements approach continuous availability. The data center network therefore incorporates a diverse mix of networking protocols while facing increasing demands for performance, availability and security.

 

 

Figure: Reference Architecture, Logical Model

 

Design Requirements

Data center design requirements are divided into external and internal requirements as shown in the logical model. External requirements include users, applications, and devices (servers and storage).

Internal requirements include IP, storage and converged networks spanning a number of technologies including:

          • Server/storage Edge (adaptors such as NIC, CNA, HBA, etc.)
          • Layer-2 (Ethernet)
          • Layer-3 (IP Routing)
          • Data Center Fabrics (Ethernet Fabric, etc.)
          • Scale-out NAS for Big Data Analytics
          • SAN (Fibre Channel, iSCSI, FCoE, FCIP, etc.)
          • Data Center Edge (WAN, Campus, xWDM, etc.)
          • IP Network Services (access control policies, security, load balance, QoS, DHCP, etc.)
          • Network Management (configure, monitor, alert, etc.)

Design Modularity

 

Design modularity relies on templates that are used to control complexity while supporting a wide range of scalability and performance options. Templates are connected together to construct a scalable data center network as shown in the following diagram. Changes to one template do not affect other templates as they rely on standard interfaces and protocols.

 

  

Figure: Reference Architecture, Design Templates

 

The functional model of the reference architecture is shown below.

 

 

Figure: Reference Architecture, Functional Model

 

In a data center network, some or all of these functions are required depending on the scale of the network. For example, a small data center template may include the Access and IP Services functions but not the Aggregation function. A larger network may need a data center template with Access, Aggregation and IP Services. The flexibility of the architecture comes from building blocks that are connected together to create templates.

 

Multiple data center templates can be defined, tested and deployed from a common set of building blocks. This promotes reusability of building blocks (and technologies), reduces testing and simplifies support. The figure below shows the building blocks used to create a typical data center template. As shown, it connects to a core template and to multiple storage templates.

 

Figure: Template Showing Building Blocks

 

In this example, the access block connects the compute nodes to ToR switches. The ToR switches define the extent of the broadcast domain and are the L3/L2 boundary. Different access blocks can be designed to efficiently handle different compute node form factors (a blade server and a 1U server access block are shown). Scale-up is accomplished by deploying additional access blocks up to the design limit of the data center template.

An aggregation block is included in this data center template to increase connection scalability. It aggregates traffic from multiple access blocks, connects with IP Service blocks and provides the connection between this data center template and a core template.

 

Each building block includes particular technology categories as illustrated in the following diagram. More detailed breakdowns of each category can be defined (such as spanning tree or multi-chassis trunking for Layer-2, or OSPF and IS-IS for Layer-3) to create a catalog of building blocks that incorporate particular technologies and product features. As new technologies arrive, a block can be designed and tested and then plugged into new or existing templates to introduce the technology in a predictable way over time.

 

Figure: Template with Building Block Technology Mapping
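To make the catalog idea concrete, the following sketch models building blocks and their technology mappings as simple Python data structures and composes a template from them. The block and technology names are illustrative assumptions, not a complete inventory from this architecture.

```python
# Minimal sketch: a catalog of building blocks mapped to technology categories,
# and a data center template composed from catalog entries.
# Block and technology names are illustrative examples only.

CATALOG = {
    "access-tor-l2-mct":  {"layer2": ["LAG", "MCT", "VLAN"]},
    "access-tor-l2l3":    {"layer2": ["LAG", "VLAN"], "layer3": ["OSPF", "ECMP"]},
    "aggregation-mct":    {"layer2": ["MCT", "vLAG"], "layer3": ["OSPF", "VRRP-E"]},
    "ip-services-inline": {"services": ["load balancing", "SSL offload", "firewall"]},
}

def compose_template(name, block_names):
    """Combine catalog blocks and report the technologies the template pulls in."""
    technologies = {}
    for block in block_names:
        for category, features in CATALOG[block].items():
            technologies.setdefault(category, set()).update(features)
    return {"template": name, "blocks": block_names, "technologies": technologies}

if __name__ == "__main__":
    # A small data center template: access plus inline IP services, no aggregation.
    print(compose_template("small-dc", ["access-tor-l2l3", "ip-services-inline"]))
```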

 

External Requirements

 

Virtual Machines and Server Virtualization

One of the major transitions in the data center is the rapid adoption of x86 server virtualization. Following the path previously taken by mainframe and open systems UNIX platforms, x86 server virtualization can dramatically lower capital and operating costs by boosting server utilization from 10% to 60-80%. In addition, virtual machine mobility provides a low-cost method of achieving continuous uptime for applications by eliminating downtime during service windows. And, when deployed across geographically separated data centers, disaster recovery costs and complexity can also be reduced.

 

VMware, Microsoft, Citrix and other vendors provide x86 server virtualization software based on hypervisors that isolate the physical server resources from the operating system and application stacks using them. A logical container, commonly called a virtual machine (VM) or virtual server, exists in software. Its state is saved in a file, and copying the VM from one physical server to another migrates that VM in real time with no disruption to client connections. Application workloads can now move to another server when necessary to accommodate workload growth or contraction and to allow server upgrades without application downtime.

 

References

 

Virtual Desktop Infrastructure

An application of server virtualization that addresses the high cost of desktop computing operations is known as Virtual Desktop Infrastructure (VDI). VDI frequently employs a VM to isolate the configuration of desktop operating systems and their applications. Instead of running on a desktop or laptop PC, VDI moves the desktop operating system, office productivity and client application stack to servers in the data center. This architecture can dramatically simplify configuration and maintenance of desktop computing environments, centralize data storage and management, and improve security of corporate assets. However, it demands more performance and scalability from the data center network. It also impacts the campus network: because the user interacts with applications running on a server in the data center rather than on their PC, the latency and availability of the campus network must be up to the task.

 

References

 

Unified Communications

Unified communications consolidates what once were disparate devices and applications into fully integrated, real-time collaboration between employees, customers and suppliers. Telephone, video conferencing, instant messaging, shared applications and presence are all consolidated onto a single device with a simple, ubiquitous interface. Server virtualization is commonly used to deploy UC software components, and hardware application delivery controllers (ADCs) are commonly used in medium and large scale configurations to provide fine-grained scalability of these software components.

 

References

 

High Performance Applications

One application that is moving from the rarefied environments of research computing into the corporate data center is Big Data analytics. More data continues to flow between the data center and the Internet (video, instant messaging, blogs and Twitter), and more companies connect with customers and suppliers in real time. The quantities of data available now include publicly available content from sites such as Google, Facebook, Twitter, Salesforce.com and LinkedIn, to name but a few. This means many companies want to analyze data, discover correlations and secure assets in near real time. Tools pioneered by social networking companies and by data analytics startups are now becoming available to private companies, which are looking to deploy dedicated, low-latency compute clusters with access to block and file storage pools. The impact on the access layer of the data center network is 10 GE on the server, with 40 GE and higher moving into the aggregation and core.

 

References

 

Network Requirements

Server/Storage Edge

The data center network primarily connects servers and storage devices together. With the growth of x86 server virtualization and the adoption of blade server chassis with embedded switches, the network edge is extending into the server chassis. This domain covers selection of the network adaptor/NIC and planning for how to integrate network policies and configuration with the network components in the server. Server technology trends, such as the following, first affect the access layer of the network.

 

10 GE LAN on Motherboard

Intel recently announced 10 GE LAN on Motherboard (LOM) in its latest generation of motherboards. As server racks are deployed with 10 GE LOM, 10 GE access switches will be deployed at the top of rack and 40 GE uplinks will be used between the aggregation and access layers.

 

Multi-Protocol IO Adaptors

Directly supporting a strategy of private cloud computing in the data center, Brocade introduced AnyIO™ technology with the Brocade 1860 Fabric Adaptor family. An adaptor port can be software configured for either 10 GE or 16 Gbps Fibre Channel.

 

Key features of the Brocade 1860 Fabric Adaptor include:

            • Extends Fibre Channel and Ethernet fabric services to the server and applications
            • Consolidates I/O devices by partitioning physical adapters into virtual adapters through Brocade vFLink technology
            • Simplifies the transition to private cloud by supporting current and emerging virtualization workloads and technologies
            • Simplifies and unifies the management of adapter, SAN, and LAN resources with Brocade Network Advisor

Brocade Fabric Link (vFLink) technology partitions a single adapter into as many as 8 virtual adapters that can be configured as virtual HBAs (vHBAs, for Fibre Channel or FCoE) or virtual NICs (vNICs, for Ethernet). Bandwidth can be allocated to these virtual fabric links in 100 Mbps increments, up to the maximum of 16 Gbps for Fibre Channel or 10 Gbps for Ethernet. This helps overcome the proliferation of adapters in virtual environments while maintaining management isolation and fine-tuning for the different networks, including production, backup, management, or live migration as shown in the following figure.

 

Figure: Brocade 1860 Fabric Adaptor Deployment
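The arithmetic behind this partitioning can be sanity-checked with a short script. The sketch below assumes the limits quoted above (at most 8 virtual adapters, 100 Mbps increments, 10 Gbps Ethernet or 16 Gbps Fibre Channel per port); it is only an illustration, not Brocade's management interface.

```python
# Minimal sketch: validate a vFLink partitioning plan for one adapter port.
# Assumes the limits quoted in the text: at most 8 virtual adapters, bandwidth
# allocated in 100 Mbps steps, 10 Gbps (Ethernet) or 16 Gbps (Fibre Channel) total.

PORT_LIMITS_MBPS = {"ethernet": 10_000, "fibre_channel": 16_000}
MAX_VIRTUAL_ADAPTERS = 8
INCREMENT_MBPS = 100

def validate_plan(port_type, allocations_mbps):
    """allocations_mbps: per-vNIC/vHBA bandwidth reservations in Mbps."""
    limit = PORT_LIMITS_MBPS[port_type]
    if len(allocations_mbps) > MAX_VIRTUAL_ADAPTERS:
        return False, "too many virtual adapters"
    if any(a <= 0 or a % INCREMENT_MBPS for a in allocations_mbps):
        return False, "allocations must be positive multiples of 100 Mbps"
    if sum(allocations_mbps) > limit:
        return False, f"total {sum(allocations_mbps)} Mbps exceeds {limit} Mbps"
    return True, "ok"

# Example: production, live-migration, backup and management vNICs on a 10 GE port.
print(validate_plan("ethernet", [4000, 3000, 2000, 1000]))  # (True, 'ok')
print(validate_plan("ethernet", [4000, 4000, 4000]))        # exceeds the 10 Gbps port
```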

 

References

 

Blade Servers with Embedded Switching

Blade servers with embedded switching have attracted more attention with the growth of virtualization. These frequently create a multi-tier switching layer at the network access layer. In some configurations, converged networks (IP and storage traffic on the same adaptor and cable) have reduced cabling cost and complexity at the access layer. There are a number of options for block storage IO, including Fibre Channel, Fibre Channel over Ethernet (FCoE) and iSCSI, as well as file access via NAS and NFS. Blade servers are often the entry point for 10 GE with Data Center Bridging (DCB) to the server and for virtual IO channels using SR-IOV and other technologies.

 

References

 

Layer-2 Ethernet

The server/storage edge of the network commonly uses Ethernet to connect servers and IP-based storage to switches. Ethernet relies on broadcasts to discover paths and devices and broadcast domains have to be designed to provide scalability, performance, resiliency and high availability. New standards such as Data Center Bridging (DCB) and Transparent Interconnection of Lots of Links (TRILL) affect the architecture and are important when server virtualization and converged networks are deployed in the data center.

 

Spanning Tree Protocol

Spanning Tree Protocol (STP) provides loop-free frame forwarding between switches in a broadcast domain. Enhancements to STP include Multiple Spanning Tree (MSTP), Rapid Spanning Tree (RSTP), and Per VLAN Spanning Tree (PVST/PVST+).

 

Although Spanning Tree is widespread, it has limitations. For example, only one of the redundant links can be active at a time, and all traffic within the broadcast domain is halted whenever there is a change in the network (link added/removed, switch added/removed) so that STP can rebuild the topology. Link aggregation groups (LAG), multi-chassis trunking (MCT) and VCS Fabrics are additional capabilities that can be used in Ethernet networks. For these reasons, and to simplify network deployment and operation, this Reference Architecture avoids the use of Spanning Tree by relying on the following features. (See Access and VCS Fabric in the Building Blocks section for more details.)

 

Link Aggregation Group

Link aggregation groups (LAG), sometimes called EtherChannels, are commonly used to increase bandwidth and improve resiliency. As server and application performance increased, a single link could not provide sufficient bandwidth. LAG allows multiple links at the same link rate to be combined into a single logical link. Failure of a link within the LAG is detected and traffic continues to flow without requiring STP to rebuild the topology. LAG can be used between switches and also between servers and switches, provided the host operating system supports it. Brocade supports LAG on all data center switches.
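The sketch below illustrates, in simplified form, why a classical LAG behaves this way: a hash of the flow's addresses pins each flow to one member link, so a single flow never exceeds one link's bandwidth and uneven flow mixes can create hot spots. The hash inputs and member link names are illustrative.

```python
# Minimal sketch of static flow hashing over a LAG: each flow is pinned to a
# single member link. Hash inputs and member link names are illustrative.

import hashlib

LAG_MEMBERS = ["tenGigE-0/1", "tenGigE-0/2", "tenGigE-0/3", "tenGigE-0/4"]

def member_for_flow(src_mac, dst_mac, src_ip, dst_ip):
    """Hash the flow identifiers and map the flow to one member link."""
    key = f"{src_mac}|{dst_mac}|{src_ip}|{dst_ip}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return LAG_MEMBERS[digest % len(LAG_MEMBERS)]

flows = [
    ("00:00:5e:00:01:01", "00:00:5e:00:02:01", "10.1.1.10", "10.1.2.20"),
    ("00:00:5e:00:01:02", "00:00:5e:00:02:01", "10.1.1.11", "10.1.2.20"),
    ("00:00:5e:00:01:03", "00:00:5e:00:02:02", "10.1.1.12", "10.1.2.21"),
]
for f in flows:
    print(f"{f[2]} -> {f[3]} carried on {member_for_flow(*f)}")
```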

 

References

 

Multi-chassis Trunking

Where LAG provides greater bandwidth and improved resiliency within a link, multi-chassis trunking provides fault tolerance at the switch level. With MCT, two switches are connected using inter-chassis links (ICL) and present themselves to other switches and hosts as a single logical switch. Should one switch go off-line, traffic continues to flow to the other switch without requiring STP to rebuild the topology. Brocade supports MCT on the MLX and FastIron SX series of switches.

 

References

 

VCS Fabric

Transparent Interconnection of Lots of Links (TRILL) is a recent protocol developed to remove the need for STP. TRILL avoids loops using a link layer routing protocol adapted for use with Ethernet. Brocade provides an implementation of TRILL with VCS Fabric technology. With VCS fabrics, multiple links between switches automatically form trunks, called Brocade ISL Trunks. Within a Brocade ISL Trunk, all traffic flows are striped in hardware across all links, providing 95% utilization of all links without the hot spots found with classical LAG, where flow hashing statically assigns a flow to a single physical link in the trunk. Traffic within the fabric is automatically load balanced across multiple Brocade ISL Trunks using equal cost multi-path (ECMP) forwarding at Layer 2. A VCS Fabric connects to any other Ethernet switch using virtual LAG (vLAG) on the VCS Fabric side and LAG on the Ethernet switch side of the link. A vLAG can also be used with devices that support NIC teaming for resiliency and high availability. A vLAG is not limited to two switches; its member links can terminate on multiple VCS Fabric switches, providing excellent resiliency, scalability and availability.

 

With the release of NOS 3.0, layer 3 services are also available on any switch in a VCS Fabric. This allows a flatter network with multipath traffic forwarding at layer 1, 2 and 3.

 

References

 

Other important features in the Ethernet domain include Data Center Bridging (DCB) and virtual LAN (VLAN).

 

Data Center Bridging

DCB is a set of extensions to Ethernet that provide improved flow control and bandwidth management. DCB includes several standards: Data Center Bridging Capability Exchange Protocol (DCBX), Enhanced Transmission Selection (ETS) and Priority-based Flow Control (PFC). The goal is to provide a "lossless" Ethernet instead of relying on higher layer protocols to provide lossless service. DCB is required for Fibre Channel over Ethernet (FCoE) traffic and also benefits iSCSI or any other traffic where avoiding frame loss at a lower layer improves overall performance.
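As a worked example of ETS-style bandwidth management, the sketch below checks that a set of priority-group allocations sums to 100 percent and flags the lossless (PFC-enabled) class carrying FCoE. The class names and percentages are assumptions for illustration.

```python
# Minimal sketch: validate an ETS-style bandwidth allocation across traffic
# classes and note which class relies on PFC for lossless delivery.
# Class names and percentages are illustrative assumptions.

traffic_classes = {
    "fcoe":       {"bandwidth_pct": 40, "pfc": True},   # lossless storage class
    "lan":        {"bandwidth_pct": 50, "pfc": False},
    "management": {"bandwidth_pct": 10, "pfc": False},
}

total = sum(tc["bandwidth_pct"] for tc in traffic_classes.values())
assert total == 100, f"ETS allocations must sum to 100%, got {total}%"

for name, tc in traffic_classes.items():
    behavior = "lossless (PFC)" if tc["pfc"] else "lossy"
    print(f"{name:<11} {tc['bandwidth_pct']:>3}%  {behavior}")
```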

 

Brocade supports DCB on the Brocade 8000 Converged Switch and with the VDX switch series with VCS Fabric technology.

 

References

 

VLAN

A virtual LAN (VLAN) is used to improve utilization of the physical network, provide logical isolation of traffic and apply network policies to Ethernet traffic. Frames have a tag with a VLAN identifier added and removed based on policies set within the server, storage or switch. Class of Service (CoS) identifiers are included in the tag so switches can optimize frame forwarding when congestion occurs. VLANs are commonly used to segregate traffic for security reasons. For example, servers within a cluster may be assigned to one VLAN, management traffic to another and client traffic to yet another VLAN. Each VLAN's traffic is logically isolated from other VLAN traffic unless specifically routed at a Layer 3 router. Brocade supports VLANs on all data center switches.
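To show where the VLAN identifier and CoS bits actually live, the sketch below constructs the 4-byte 802.1Q tag (TPID 0x8100, 3-bit priority, 1-bit drop-eligible indicator, 12-bit VLAN ID). It is a protocol illustration only, not switch configuration.

```python
# Minimal sketch: build the 4-byte 802.1Q tag inserted into an Ethernet frame.
# The TPID is 0x8100; the Tag Control Information packs a 3-bit priority (CoS),
# a 1-bit drop-eligible indicator (DEI) and a 12-bit VLAN ID.

import struct

def dot1q_tag(vlan_id: int, cos: int = 0, dei: int = 0) -> bytes:
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    if not 0 <= cos <= 7:
        raise ValueError("CoS/priority must fit in 3 bits")
    tci = (cos << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# VLAN 100 for cluster traffic, CoS 3:
print(dot1q_tag(100, cos=3).hex())   # '81006064'
```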

 

Layer 3 IP Routing

To scale the network, IP routing services are required, and an important consideration is the size of the Ethernet domain (Layer 2 network) and the location of the IP routing domain (Layer 3 network). It is common to locate the Layer 2/Layer 3 boundary in a top-of-rack (ToR) switch in the server rack or in a middle-of-row (MoR) switch. But this is changing with server virtualization and live migration of virtual machines (VMs), which can eliminate downtime for maintenance windows and improve resource utilization. VM migration is limited to an IP subnet and a single VLAN. Today, with more racks of virtualized x86 servers being deployed, the architecture has some racks of servers with the L2/L3 boundary at the ToR switch while other racks move the boundary to a separate aggregation module or the core switching module. Brocade provides L2/L3 ToR with the Brocade FCX and TurboIron series of switches and L2/L3 middle-of-row (MoR) with the MLX and SX series.
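A quick way to visualize the constraint: live migration requires the source and destination hosts to share an IP subnet and VLAN. The sketch below checks that precondition with the standard-library ipaddress module; the host records are hypothetical.

```python
# Minimal sketch: live VM migration is limited to one IP subnet and one VLAN,
# so verify that two hypervisor hosts share both before allowing a move.
# Host records are hypothetical examples.

from ipaddress import ip_interface

hosts = {
    "hypervisor-rack1-01": {"iface": ip_interface("10.10.20.11/24"), "vlan": 120},
    "hypervisor-rack1-02": {"iface": ip_interface("10.10.20.12/24"), "vlan": 120},
    "hypervisor-rack9-01": {"iface": ip_interface("10.10.90.15/24"), "vlan": 190},
}

def can_live_migrate(src, dst):
    a, b = hosts[src], hosts[dst]
    return a["iface"].network == b["iface"].network and a["vlan"] == b["vlan"]

print(can_live_migrate("hypervisor-rack1-01", "hypervisor-rack1-02"))  # True
print(can_live_migrate("hypervisor-rack1-01", "hypervisor-rack9-01"))  # False: crosses the L2/L3 boundary
```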

 

References

 

In other environments, such as x86 server virtualization, a large physical extent for the broadcast domain is desired. In these configurations, the Layer 2 broadcast domain extends across multiple racks of switches in the access layer and the IP routing domain is provided at the aggregation layer switch. The Brocade 8000 Converged Switch does not support IP routing services; when deployed, it connects to aggregation switches for IP routing. Brocade provides the MLX and SX series of switches for the aggregation layer.

 

Routing Services and Protocols

In the data center, RIP, OSPF and IS-IS are common choices for routing services. In addition, IPv4 is beginning to transition to IPv6, but IPv6 is not backward compatible with IPv4. Brocade provides dual routing stacks for the MLX series and IPv4-to-IPv6 translation services with the ADX series of application delivery switches to help with IPv4 and IPv6 co-existence in the data center.

 

References

 

Virtual Router Redundancy Protocol (VRRP) and an extended version (VRRP-E) are commonly used to provide high availability and resiliency for IP gateway addresses. Brocade provides VRRP-E with Short Path Forwarding for active/active clustering of IP routers, improving utilization as both routers are active. VRRP only provides an active/passive cluster configuration.
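The difference between active/passive and active/active gateways can be sketched in a few lines. The model below is a simplification of priority-based master election (VRRP) versus both routers forwarding (VRRP-E with short-path forwarding); router names and priorities are illustrative.

```python
# Minimal sketch: VRRP-style active/passive gateway election versus an
# active/active pair. Router names and priorities are illustrative.

routers = {"agg-rtr-1": 120, "agg-rtr-2": 100}   # priority per router

def vrrp_forwarders(priorities):
    """Standard VRRP: only the highest-priority (master) router forwards."""
    master = max(priorities, key=priorities.get)
    return [master]

def vrrp_e_spf_forwarders(priorities):
    """VRRP-E with short-path forwarding: backup routers also forward locally."""
    return sorted(priorities)

print("VRRP  :", vrrp_forwarders(routers))        # ['agg-rtr-1']
print("VRRP-E:", vrrp_e_spf_forwarders(routers))  # ['agg-rtr-1', 'agg-rtr-2']
```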

 

SAN

Storage area networks (SANs) provide shared storage pools and have proven valuable in improving utilization, providing high availability and keeping up with the performance demands of x86 server virtualization. For this reason, shared storage is commonly provided via a data center SAN. Server virtualization and live VM migration require shared storage, so the SAN keeps growing to larger scales. Today, there are several protocol choices besides traditional Fibre Channel, including iSCSI and Fibre Channel over Ethernet (FCoE). Therefore, server connectivity to the SAN has many options and protocols to choose from.

 

Fibre Channel, FCoE and iSCSI

While SANs originated with Fibre Channel, today’s SAN includes iSCSI and FCoE protocols as well. Brocade provides Backbone, Director and switch class SAN products with the DCX, 5000 and 6000 series of Fibre Channel switches. And, for blade chassis servers, Brocade provides embedded Fibre Channel switches for all brands of blade server chassis.

 

References

 

With the arrival of iSCSI and Fibre Channel over Ethernet (FCoE), Ethernet and IP based storage area networks are being deployed in the data center. The use of the IP network for transporting storage is sometimes referred to as a converged network. While it is true that both IP and block storage traffic can be combined on the same network to reduce the initial cost of network switches, other considerations, including operational responsibility, resiliency, availability and timely fault isolation, have to be weighed before commingling IP and block storage traffic on the same physical network.

 

Use of iSCSI is common in the data center, but best practice has been to deploy it on a dedicated IP network. FCoE is also beginning to be used and initially was closely tied to converged networking. However, the extent of network convergence is frequently limited to the ToR switch, where native IP and Fibre Channel traffic are split off into their separate network domains. Brocade provides a ToR FCoE switch, the Brocade 8000 Converged Switch, and the innovative VCS Fabric technology for the VDX series of switches. For example, both the Brocade 8000 Converged Switch and the VDX 6730 switch can be used for converged networking at the ToR. In addition, a VCS Fabric with multiple 6720 and 6730 switches can be used to connect multiple racks of servers using VDX 6720 switches at the ToR and then terminate the converged network at the middle of row (MoR) or end of row (EoR) with VDX 6730 switches that forward traffic into an existing Fibre Channel SAN.

 

References

 

SAN Adaptors

As noted earlier, Brocade supplies storage adaptors and converged network adaptors (CNA) for servers. Brocade recently introduced the Brocade 1860 Fabric Adaptor providing 16 Gbps Fibre Channel, 10 GE and 10 GE CNA functionality that can be applied to any adaptor port via software selection using Brocade’s AnyIO™ technology. Another innovation, Brocade vFLink™ technology, virtualizes a single port into multiple logical ports with independent policies and quality of service. This moves network virtualization services to the application via the adaptor while off-loading hypervisors from processing iSCSI and FCoE protocols.

 

References

 

SAN Services

One other important consideration in the SAN domain is SAN Services. These include distance replication with SCSI device emulation over metropolitan and wide area networks and encryption services for data at rest within the SAN.

 

Storage replication over distance is an important requirement for disaster recovery and business continuance. Fibre Channel extension services are commonly used for this reason as they are optimized for long distance and high bandwidth. Brocade provides the FX8-24 Extension Blade for the DCX backbone and the Brocade 7800 Extension switch for this purpose.

 

Brocade provides Fibre Channel SAN encryption services with the Brocade Encryption SAN Switch and the FS8-18 Encryption Blade for the DCX backbone. This optimized hardware encryption solution is supported by leading suppliers of data security services.

 

References

 

Data Center Edge

The data center network connects to the outside world at the data center edge. This domain includes a number of architectural choices. In some enterprises, the campus LAN is large enough to have its own core routing services, or the campus can connect to the data center core to reach applications running in the data center and the Internet. Disaster recovery and business continuance often mean there is a dedicated interconnect between data centers in a region and/or globally between regions. In many enterprises, storage array replication services use the wide area network (WAN) or a metropolitan area network (MAN) to move replicated data between data centers. And with the growth of public cloud services providing Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), the data center network can interconnect with multiple cloud service providers.

 

Data Center to Campus Network

Depending on the scale of the enterprise, the data center core routers may serve as the core routers for the campus network, or they may connect with separate core routers dedicated to the campus network.

 

Data Center to Internet

IPv6 is becoming necessary as the earlier IPv4 address space reaches exhaustion. The rate of depletion of IPv4 addresses varies by region and industry. However, IPv6 is not backward compatible with IPv4, so migration requires thoughtful consideration of the options. The data center network transition to IPv6 usually occurs first at the core rather than the access layer, where legacy applications continue to operate using IPv4. Various technologies, such as dual stacking and address translation, are available to ease the migration and are worth considering when planning how to integrate IPv6.

 

References

 

Data Center to Remote Data Center

In many companies, regional data centers are connected for disaster recovery. Application recovery requires sufficient server, networking and storage infrastructure, as well as application software, to be deployed so that transition from the primary to the secondary data center meets business recovery time objectives (RTO) and recovery point objectives (RPO). As virtual server adoption increases, disaster recovery solutions relying on Layer 2 networking for real-time application fail-over between data centers separated by up to 100 km are an option. Brocade, in cooperation with its partners, has developed data center replication solutions for leading server virtualization platforms.

 

For storage, Fibre Channel over IP (FCIP) is commonly used with array-to-array storage replication for mission critical applications. The bandwidth required for replication has been steadily increasing as the volume of data center storage grows from terabytes (TB) to petabytes (PB), driving a move to 10 GE WAN links.
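To illustrate why replication traffic is pushing toward 10 GE WAN links, the short calculation below estimates the sustained bandwidth needed to move a given daily change volume within a replication window. The change rate and window are assumptions.

```python
# Minimal sketch: sustained WAN bandwidth needed for array-to-array replication.
# Daily change volume and replication window are illustrative assumptions.

def required_gbps(changed_tb_per_day: float, window_hours: float) -> float:
    bits = changed_tb_per_day * 1e12 * 8        # decimal terabytes to bits
    seconds = window_hours * 3600
    return bits / seconds / 1e9

# Replicating 20 TB of daily change within a 12-hour window:
print(f"{required_gbps(20, 12):.2f} Gbps sustained")   # ~3.70 Gbps, before protocol overhead
```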

 

References

 

Data Center to Public Cloud Computing Services

As public cloud services expand, enterprises are increasingly incorporating them into their IT strategy. Test and development environments are frequently the first to adopt public cloud Infrastructure as a Service (IaaS). Specific applications such as CRM and email are also early candidates for public cloud computing services. VPN services are commonly deployed in the data center core to integrate public clouds with the data center.

 

IP Network Services

IP network services include load balancing of client connections, high-performance SSL termination, IPv4-to-IPv6 translation services, firewalls and intrusion detection/prevention services. Network services may be located in more than one place within the architecture depending on scalability and performance requirements.

Application delivery controllers, such as Brocade's ADX series, optimize client communication with servers by offloading parts of the server handshake (for example TCP proxying and SSL offload) and by balancing client traffic across servers using optimization algorithms that consider parameters such as server health, workload and latency. Other services commonly found in the data center are VPN termination, caching and wide-area optimization services such as compression, encryption and TCP acceleration.

 

In the data center, the access layer consists of Layer 2 switching domains and is commonly constructed with two tiers of Layer 2 switches: top-of-rack (ToR) switches, and end-of-row (EoR) or middle-of-row (MoR) switches. Network services are applied to traffic from the Layer 2 switching tiers at the aggregation layer via Layer 2/3 switches or routers.

 

References

 

Network Management

Data center management includes application, server, virtualization, network, and storage management. Increasingly, these separate management domains are becoming more tightly integrated due to the growth of x86 server virtualization.

 

A variety of tools are used for network management, including orchestration services for virtualization platforms, network traffic monitoring using sFlow, element management and monitoring using SNMP, and intelligent workload monitoring for load balancers. Brocade provides an integrated network management platform, Brocade Network Advisor (BNA), for management and monitoring of both IP and storage networks, products, and services. Leading server virtualization orchestration and management platforms use the BNA API to integrate Brocade's Application Resource Broker (ARB) and VCS Fabric Automated Migration of Port Profiles (AMPP) into their management and monitoring platforms.
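As a brief illustration of how packet-sampled sFlow data becomes a traffic estimate, the sketch below scales the sampled byte count by the sampling rate over a polling interval; the numbers are illustrative.

```python
# Minimal sketch: estimate link throughput from packet-sampled sFlow data.
# With 1-in-N sampling, the bytes seen in samples are scaled up by N.
# Sampling rate, interval and sampled byte count are illustrative assumptions.

def estimated_mbps(sampled_bytes: int, sampling_rate: int, interval_seconds: int) -> float:
    total_bytes = sampled_bytes * sampling_rate
    return total_bytes * 8 / interval_seconds / 1e6

# 1-in-2048 sampling, 1.2 MB of sampled frames observed over a 60-second interval:
print(f"~{estimated_mbps(1_200_000, 2048, 60):.0f} Mbps")   # ~328 Mbps
```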

 

References

 

Building Blocks

 

This section defines a palette of building blocks. Blocks are grouped into the following classes:

              • Access
              • Aggregation
              • Core
              • IP Service
              • SAN
              • Management

The table below lists the building blocks by class. After the table, a description of each class of blocks and of each individual block is provided.

 

Type           Building Block
Access         Top-of-Rack L2/L3
Access         Top-of-Rack L2 with Multi-chassis Trunking (MCT)
Access         Top-of-Rack L2 Converged
Access         Top-of-Rack Stacking
Access         Top-of-Rack VCS Fabric
Access         Top-of-Rack VCS Fabric, Converged
Access         End-of-Row vToR
VCS Fabric     Spine Block, Leaf Switch
VCS Fabric     Spine Block, Collapsed
VCS Fabric     Leaf Block, 1 GbE Devices
VCS Fabric     Leaf Block, 10 GbE Devices
VCS Fabric     Leaf Block, Converged Network Devices
Core           Data Center Routing
Core           Internet Access
Core           Data Center Interconnect
IP Services    Inline Six-Pack
IP Services    Layer-3 Lollipop
Aggregation    Multi-chassis Trunking
Aggregation    Virtual Routing Redundancy Protocol-Extended
Aggregation    Combined MCT+VRRP-E
SAN            Edge-Switch
SAN            Edge-Access Gateway Switch
SAN            Core-Backbone Switch
SAN            Core-Fibre Channel Routing
SAN            Core-Integrated SAN Services
SAN            Core-Array-to-Array Distance Replication Service
SAN            iSCSI SAN Blocks
SAN            FCoE SAN Blocks
Management     Brocade Network Advisor
Management     sFlow Monitoring
Management     Virtualization Orchestration and Monitoring

 

Access Blocks

An access block connects devices (servers and storage) to the network edge. The network edge is a switch that can be inside the server, at the top of rack (ToR), the middle of row (MoR) or the end of row (EoR). A blade server chassis includes slots for multiple special form factor switch cards, while server virtualization software includes a "soft switch" inside the virtualization software stack. In both of these cases, the network edge is integrated with the server hardware or with the virtualization software running on it. When blade servers or server virtualization are not used, the edge of the network ends at the edge switch.

 

Availability and Resiliency

As more applications become "always on", network availability and resiliency (AR) are essential. When physical switches are used, availability and resiliency are provided by two independent switches connecting to each edge server. NIC teaming, bonding, and multi-chassis trunking (MCT) can be used to provide resiliency at the server connection. Stacking or MCT can be used with edge switches to provide AR for the edge switches.

 

Performance

Traffic flows from the edge switches on uplinks toward the core. The total uplink bandwidth is commonly less than the total server bandwidth which means the uplinks are oversubscribed. An acceptable amount of oversubscription depends on the application work load.
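As a worked example of the oversubscription calculation, assume a rack of forty 10 GE servers uplinked by four 40 GE ports; the port counts and speeds are illustrative.

```python
# Minimal sketch: uplink oversubscription ratio for a top-of-rack access block.
# Port counts and speeds are illustrative assumptions.

def oversubscription(server_ports: int, server_gbps: int,
                     uplink_ports: int, uplink_gbps: int) -> float:
    edge_bandwidth = server_ports * server_gbps
    uplink_bandwidth = uplink_ports * uplink_gbps
    return edge_bandwidth / uplink_bandwidth

# 40 servers at 10 GE, four 40 GE uplinks toward the aggregation layer:
ratio = oversubscription(40, 10, 4, 40)
print(f"oversubscription {ratio:.1f}:1")   # 2.5:1
```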

 

Access Blocks—Top of Rack

There are several configurations that can be designed based on where the boundary between Ethernet and IP routing (L2/L3) is placed.

 

Top-of-Rack L2/L3

This block uses L3 routing in the ToR switch to isolate the Ethernet domain within the server rack. Link aggregation (LAG) on the uplinks from the ToR switches provides more bandwidth and can be used to adjust the oversubscription. Point-to-point L3 routing provides availability and resilience for the uplinks.

 

DataCenter_BlockAccess_ToR-L2L3.JPG

  

Access Block, ToR Layer-2/Layer-3 (click to enlarge)

 

Top-of-Rack L2 with Multi-chassis Trunking

This block moves the L2/L3 boundary out of the access block. The ToR switches use Multi-chassis Trunking (MCT) to create a single logical Ethernet switch. The switches participating in the MCT cluster use inter-chassis links (ICL) to maintain state and provide non-disruptive fail-over should one switch go off-line. Servers use NIC teaming to provide availability and resiliency as well as increased bandwidth. Uplinks from the ToR switches are aggregated into a LAG to provide availability and resiliency, and additional links can be added to the LAG to adjust the oversubscription.

 

DataCenter_BlockAccess_ToR-L2MCT.JPG

    Access Block, ToR Layer-2 with MCT (click to enlarge)

 

Top-of-Rack L2 Converged

This block moves the L2/L3 boundary out of the access block and converges IP and FCoE traffic within the rack. The ToR switches support converged networks via DCB, FCoE and 10 GbE. Fibre Channel traffic is split off at the switch and connected to the SAN. Fibre Channel configuration options include Access Gateway mode, to avoid adding too many Fibre Channel switch domains to the existing fabric, and Fibre Channel N_Port Trunking, to provide resiliency and more bandwidth. N_Port ID Virtualization (NPIV) is used so multiple server N_Ports can be trunked onto a single physical Fibre Channel switch port. Each ToR converged switch can connect to separate fabrics “A” and “B” for an "air gap" high availability SAN design.

 

Servers are configured with the Brocade 1010/1020 converged network adapters (CNA) or the 1860 Fabric Adapter; both support 10 GbE, DCB and FCoE traffic. The switches participating in the MCT cluster use inter-chassis links (ICL) to maintain state and provide non-disruptive fail-over should one switch go off-line. Servers use NIC teaming to provide availability and resiliency as well as increased bandwidth. Uplinks from the ToR switches are aggregated into a LAG to provide availability and resiliency, and additional links can be added to the LAG to adjust the oversubscription.

 

DataCenter_BlockAccess_ToRConverged.JPG
   Access Block, ToR Layer-2 Converged Network (click to enlarge)

 

Top-of-Rack Stacking

This block moves the L2/L3 boundary into the access block. The ToR switches support switch stacking, which creates a single logical switch composed of multiple physical switches and provides switch AR: any single switch can fail and traffic routes around it. Configuration settings are applied once to the stack and all switches receive that configuration. Uplink ports are Layer-3 ports that rely on ECMP and routing for AR and load balancing; links from any switch in the stack can be added to an uplink LAG to adjust uplink oversubscription.

 

The stacking ports are usually high speed (for example 10 GbE or 16 Gbps stacking links), providing low oversubscription between switches in the stack.

 

Servers can use NIC Teaming to provide AR as the stack appears as a single logical switch.

 

DataCenter_BlockAccess_ToRStacking.JPG

   Access Block, ToR Stacking (click to enlarge)

 

Top-of-Rack VCS Fabric

This block moves the L2/L3 boundary out of the access block. The ToR switches support Brocade’s VCS Fabric technology, providing multi-pathing, automatic load balancing, automatic trunking, and link and switch AR with very low fabric convergence times. When switches are connected together, they use 10 GbE ports and automatically form Brocade ISL Trunks. A LAG to servers and uplink switches provides high availability, with the added benefit that Brocade’s Virtual LAG (vLAG) allows links within the LAG to connect to multiple VCS Fabric switches.

 

Servers can use NIC Teaming to provide HA with the links terminating on different VCS Fabric switches.

Automatic Migration of Port Profiles (AMPP) ensures network policies move to the proper switch port when virtual machines migrate within a server cluster.

 

DataCenter_BlockAccess_ToRVCSFabric.JPG

   Access Block, ToR VCS Fabric (click to enlarge)

 

Top-of-Rack VCS Fabric, Converged

A VCS Fabric supports converged network traffic. FCoE traffic can transit multiple fabric switches (sometimes called multi-hop FCoE) exiting at the fabric edge via a VCS Fabric switch with Fibre Channel ports as shown below. In this configuration, the Fibre Channel ports on the VCS Fabric switch connect to a Backbone Fabric using Fibre Channel routing. See Core Fibre Channel Routing for details about Fibre Channel routing.

 

DataCenter_BlockAccess_ToRVCSFabricConverged.JPG

   Access Block, ToR VCS Fabric, Converged (click to enlarge)

 

Access Blocks —End of Row

These configurations rely on end-of-row (EoR) or middle-of-row (MoR) switches. Commonly this allows a small number of chassis switches to be used in place of a larger number of ToR switches, simplifying management and configuration, increasing redundancy at the edge, and potentially lowering cost.

 

End-of-Row vToR

This block uses a special patch panel at the top of the rack with RJ-45 to MRJ-21 connectors. This is called virtual Top of Rack, or vToR, since there isn’t a Layer-2 switch at the top of the rack, just a cable concentrator. The MRJ-21 connectors are cabled to chassis switches at the EoR where 48-port MRJ-21 cards are used, so only a few cables run from each rack to the end of the row. All configuration and management is performed at the chassis switch, which provides the L2/L3 boundary. With high-density GbE cards, large numbers of servers can be networked economically.

 

Servers can use NIC Teaming with connections terminating on different port cards in the chassis providing excellent redundancy.

 

DataCenter_BlockAccess_EoRvToR.JPG

   Access Block, EoR Virtual Top-of-rack (vToR) (click to enlarge)

 


VCS Fabric Blocks

VCS Fabric blocks flatten the network using Brocade’s VCS Fabric technology. Within a single fabric, both layer 2 and layer 3 switching are available on any or all switches in the fabric. As described in the Access block section, a VCS Fabric of ToR switches can be configured to create a layer 2 fabric with layer 2 links to an aggregation block. In this set of building blocks the aggregation and access switching are combined into a single VCS Fabric of VDX switches. A single fabric is a single logical management domain simplifying configuration of the network.

 

VCS Fabric Topologies

Fabric topology is also flexible. For example, a leaf-spine topology is a good design choice for virtualized data centers where consistently low latency and uniform bandwidth are required between end devices. Fabric resiliency is automatic, so link or port failures on inter-switch links or Brocade ISL Trunks are detected and traffic is automatically rerouted on the remaining least-cost paths. Below is an example of a leaf-spine topology for a VCS Fabric.

 

DataCenter_VCSFabricLeafSpineTopology-L3Spine.JPG

 

   Leaf-Spine VCS Fabric Topology, L3 at Spine (click to enlarge)

 

 

Each leaf switch at the bottom is connected to all spine switches at the top. The connections are Brocade ISL Trunks, which can contain up to 16 links per trunk, for resiliency. Any two servers are at most two switch hops apart. As shown, all leaf switches operate at Layer 2 and the spine switches create the Layer 2/Layer 3 boundary. However, the Layer 2/Layer 3 boundary can instead be placed at the leaf switches, as shown below.

DataCenter_VCSFabricLeafSpineTopology-L3Leaf.JPG

   Leaf-Spine VCS Fabric Topology, L3 at Leaf (click to enlarge)

 

In this option, VLAN traffic is routed across the spine and each leaf switch includes Layer 3 routing services. Brocade ISL Trunks continue to provide consistent latency and large cross-sectional bandwidth with link resiliency. However, multipath forwarding is provided by ECMP at Layer 3 rather than by equal-cost multipathing at Layer 2.
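To give a feel for how a leaf-spine fabric scales, the following Python sketch estimates capacity from assumed per-switch port counts; the numbers are illustrative assumptions only, and actual limits depend on the switch models and NOS release deployed.

# Hypothetical sizing sketch for a leaf-spine VCS Fabric.
# Port counts are assumptions for illustration, not product specifications.

def leaf_spine_capacity(leaf_ports, uplinks_per_leaf, spine_ports):
    """Estimate capacity when every leaf connects to every spine with one uplink each."""
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf
    spines = uplinks_per_leaf           # one uplink from each leaf to each spine
    max_leaves = spine_ports            # each spine consumes one port per leaf
    return spines, max_leaves, server_ports_per_leaf * max_leaves

# Example: 60-port leaf switches reserving 4 uplinks, 48-port spine switches.
spines, leaves, servers = leaf_spine_capacity(leaf_ports=60, uplinks_per_leaf=4, spine_ports=48)
print(f"{spines} spines, up to {leaves} leaves, {servers} server-facing ports")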

An alternative is a collapsed spine typically using VDX 8770 switches as shown below.

 

DataCenter_VCSFabricCollapsedTopology.JPG

   Collapsed Spine VCS Fabric Topology (click to enlarge)

 

The VDX 8770 is a modular chassis switch with a high density of 10 GbE and/or 40 GbE ports. A collapsed spine topology can be an efficient building block for server virtualization with NAS storage pools. Multiple racks of virtualized servers and NAS servers are connected to a middle-of-row (MoR) or end-of-row (EoR) cluster of VDX 8770 switches. The collapsed spine topology lends itself to data center scale-out that relies on pods of compute, storage and networking connected to a common data center routing core. For cloud computing environments, pod-based scale-out architectures are attractive.

 


The following sections describe several VCS Fabric building blocks.

 

Spine Blocks

VCS Fabric Spine Block, Leaf-Spine Topology

A VCS Fabric leaf-spine topology can be used to create a scalable fabric with consistent latency, high-bandwidth multipath switch links and automatic link resiliency. This block forms the spine, with each spine switch connecting to all leaf switches. Fabric connections in red are Brocade ISL Trunks with up to 16 links per auto-forming trunk. Layer 2 traffic moves across the fabric, while Layer 3 traffic exits the fabric on ports configured for a routing protocol. As shown by the black arrows, uplinks to the core router are routed, for example using OSPF. Connections to an IP Services block also use Layer 3 ports on the spine switches.

 

The blue links show Layer 2 ports that can be used to attach NAS storage to the spine switches. This option creates a topology for NAS storage that is similar to best practices for SAN storage fabrics based on a core/edge topology. For most applications, per-server storage IOPS and bandwidth are less than a single NAS port can service. An economical use of NAS ports, particularly 10 GbE ports, is to fan out multiple servers to each NAS port; attaching NAS storage nodes to the spine switches facilitates this architecture.
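A minimal sketch of the fan-out arithmetic follows, assuming a per-server storage demand (which should be measured, not assumed, before sizing a real design):

# Hypothetical fan-out estimate for NAS ports attached at the spine.
# The per-server storage demand and headroom are assumptions for illustration.

def servers_per_nas_port(nas_port_gbps, per_server_storage_gbps, headroom=0.8):
    """How many servers one NAS port can service; headroom keeps part of the port spare."""
    usable_gbps = nas_port_gbps * headroom
    return int(usable_gbps // per_server_storage_gbps)

# Example: servers averaging 0.5 Gbps of NAS traffic against a 10 GbE NAS port.
print(servers_per_nas_port(10, 0.5))   # 16 servers per NAS port at these assumptions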

 

DataCenter_BlockVCSFabric_Spine-LeafSpine.JPG

 

   VCS Fabric, Spine Block, Leaf-Spine Topology (click to enlarge)

 

VCS Fabric Spine Block, Collapsed Spine

This block is a collapsed spine with a two-switch VCS Fabric. Typically, high port count modular switches such as the VDX 8770 series would be used. This block works efficiently for data centers that scale out by replicating a pod of compute, storage and networking. Each pod is connected via Layer 3 routing to the data center core routers; local traffic within the pod does not transit the core routers, but inter-pod traffic does. The collapsed spine uses VRRP/VRRP-E for IP gateway resiliency, with the VCS Fabric providing Layer 2 resiliency. As shown, the collapsed spine can be used effectively when connecting a large number of compute nodes to NAS storage, as is commonly found in cloud computing environments and data analytics configurations such as a Hadoop cluster. The blue arrows represent 10 GbE links that use vLAG for link resiliency within the VCS Fabric and NIC Teaming for NAS server and compute server resiliency. As shown, IP Services blocks can be attached to the spine switches, providing good scalability for load balancing and IDS/IPS services.

 

DataCenter_BlockVCSFabric_Spine-Collapsed.JPG

   VCS Fabric, Spine Block, Collapsed Spine Topology (click to enlarge)

 

Leaf Blocks

These blocks can be used with the VCS Fabric Spine Block for Leaf-Spine Topology. They can also be used to convert the VCS Fabric Spine Block for Collapsed Spine into a Leaf-Spine topology.

 

VCS Fabric Leaf Block, 1 GbE Devices

This block can be used to cost-effectively connect 1 GbE devices to a VCS Fabric. A large number of 1 GbE servers remain in service, and many of them are being retrofitted with server virtualization software to good effect.

This block uses the VDX 6710 switch with Brocade ISL Trunks to each spine switch in a leaf-spine topology. The Brocade ISL Trunk uses 10 GbE links. The server connections are 1 GbE and can use cost-effective copper connections. vLAG can be used with two VDX 6710 switches for fabric resiliency, while NIC Teaming can be used on the servers. Server virtualization configurations can take advantage of Automatic Migration of Port Profiles (AMPP) to keep the network policies in synchronization with the virtual machine vSwitch port group policies.

 

DataCenter_BlockVCSFabric_Leaf1GEDevices.JPG

   VCS Fabric, Leaf Block for 1 GbE Devices (click to enlarge)

 

VCS Fabric Leaf Block, 10 GbE Devices

This block can be used to connect 10 GbE devices to a VCS Fabric. With the advent of 10 GbE LAN on Motherboard (LOM), as servers are refreshed, more servers support built-in dual 10 GbE interfaces.

This block uses the VDX 6720 switch. Brocade ISL Trunks with 10 GbE links are connected to each spine switch in the VCS Fabric. vLAG is used from both VDX 6720 switches to servers, which can use NIC Teaming to provide resiliency and high availability. Server virtualization configurations can take advantage of Automatic Migration of Port Profiles (AMPP) to keep the physical network policies in synchronization with the virtual machine soft switch policies.

 

An alternative for this block is to enable Layer 3 routing in the VDX 6720 switches. IP traffic from the leaf to the spine switches still takes advantage of the Brocade ISL Trunk for low-latency, lossless Layer 2 frame forwarding. Frame striping across all links in the trunk delivers 95% utilization of every link and avoids the traffic-flow hot spots caused by classic LAG implementations, which rely on static hashing to place each flow on a single physical link in the aggregated link.
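The difference between flow-hashed LAG and per-frame striping can be illustrated with a small Python sketch; the flows and the hash used here are simplified stand-ins, not the actual switch hashing algorithm:

# Why static LAG hashing can create hot spots while trunk-level frame striping does not.
# The flow list and CRC32 hash are simplified assumptions for illustration only.
from collections import Counter
import zlib

LINKS = 4
flows = [("10.0.0.%d" % i, "10.0.1.%d" % (i % 3)) for i in range(1, 13)]   # 12 sample flows

# Classic LAG: each flow is pinned to a single member link by a hash of its addresses,
# so a few heavy flows can land on the same link and saturate it.
lag_load = Counter(zlib.crc32(f"{src}-{dst}".encode()) % LINKS for src, dst in flows)
print("static-hash flows per link:", dict(lag_load))

# Frame striping: frames from every flow are spread across all members of the trunk,
# so per-link load tracks the total offered load rather than flow placement.
frames_in_flight = 1200
print("striped frames per link:", {link: frames_in_flight // LINKS for link in range(LINKS)})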

 

DataCenter_BlockVCSFabric_Leaf10GEDevices.JPG

   VCS Fabric, Leaf Block for 10 GbE Devices (click to enlarge)

 

 

VCS Fabric Leaf Block, Converged Network Devices

This block can be used with converged network devices. A VCS Fabric supports converged network traffic. FCoE traffic can transit multiple fabric switches (sometimes called multi-hop FCoE) exiting at the fabric edge via a VCS Fabric switch with Fibre Channel ports such as the VDX 6730. In this block, the Fibre Channel ports on the VDX 6730 switch connect to a Backbone Fabric using Fibre Channel routing. See Core Fibre Channel Routing for details about Fibre Channel routing.

 

As with the VCS Fabric Leaf Block for 10 GbE Devices, the VDX 6730 has 10 GbE ports and connects to servers with 10 GbE CNAs for lossless Ethernet traffic. Brocade ISL Trunks connect this block to each spine switch in the Spine Block, while vLAG provides resiliency and high availability within the fabric to each server, which can use NIC Teaming.

 

Note: for FCoE, each CNA logs into the fabric independently and storage traffic follows the server’s multipath driver configuration.



DataCenter_BlockVCSFabric_LeafConvergedDevices.JPG

   VCS Fabric, Leaf Block for Converged Networking Devices (click to enlarge)

 

Core Blocks

Core blocks connect multiple access, aggregation or VCS Fabric building blocks together and provide access to networks outside the data center. They route between the interior routing protocols used inside the data center (e.g., RIP, OSPF, IS-IS) and the exterior routing protocols used for the Internet, data center interconnects and the campus LAN.

 

Data Center Routing

This block provides core routing in the data center to route traffic between Aggregation and VCS Fabric blocks. Data center traffic can use a variety of routing protocols (static, RIP, OSPF, IS-IS). Dual routers are configured for resiliency and high availability. Connection to the Internet provides access for customers, suppliers and employees. WAN links can be optical (OC-48, OC-192) or use T1 or T3 circuits. External routed connections use an appropriate protocol such as BGP and/or MPLS. Virtual Private Network (VPN) services can be used to secure connections with employees at remote offices, on the road or at home.

 

DataCenter_BlockCore_DataCenterRouting.JPG

   Core Block, Data Center Routing (click to enlarge)

 

Internet Access

Connection to the Internet provides access for customers, suppliers and employees. WAN links can be optical (OC-48, OC-192) or use T1 or T3 circuits. Virtual Private Network (VPN) services can be used to secure connections with employees at remote offices, on the road or at home.

 

In smaller environments, the core connects to access blocks and IP service blocks. In larger environments, the core block connects to one or more aggregation blocks.

 

DataCenter_BlockCore_Internet.JPG

   Core Block, Internet Access (click to enlarge)

 

Data Center Interconnect

Data center interconnects carry data center-to-data center traffic in support of disaster recovery and business continuance. Traffic can include server clusters, data replication using IP or Fibre Channel over IP (FCIP), and high availability or disaster recovery clusters for virtual server environments.

 

For virtualization DR configurations, live migration can be a cost-effective method to meet very low recovery time and recovery point objectives (RTO and RPO, respectively). Live migration of virtual machines requires that the same subnet be maintained after recovery to avoid client disruption. One way to provide this is with Layer-2 tunnels over MPLS, using customer premises equipment (CPE) routers between the core routers and the service provider’s MPLS routers.

 

Virtualization recovery also requires data replication between data centers. A common method to accomplish this with Fibre Channel storage is to use Fibre Channel extension with FCIP over leased circuits or the Internet. Note that virtual machines are files, so data replication protects both the machine state and the application’s data.

 

For other applications, a brief application outage is acceptable provided the time to restart the application and attach it to a copy of its data does not exceed the RTO, and the age of that copy does not exceed the RPO. Common methods rely on data replication between data centers, again using array replication with FCIP to transport storage traffic between sites.
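As a rough sizing aid, the following Python sketch estimates the sustained WAN bandwidth needed for asynchronous replication to keep within an RPO; the change rate, RPO and compression ratio are illustrative assumptions:

# Back-of-the-envelope WAN sizing for array-to-array replication over FCIP.
# Change rate, RPO and compression ratio are assumptions, not measured values.

def required_wan_mbps(changed_gb_per_hour, rpo_minutes, compression_ratio=2.0):
    """Minimum sustained WAN bandwidth (Mb/s) to replicate the change set within the RPO."""
    changed_megabits = changed_gb_per_hour * (rpo_minutes / 60.0) * 8 * 1000   # GB -> Mb
    return (changed_megabits / compression_ratio) / (rpo_minutes * 60)         # spread over the RPO window

# Example: 200 GB/hour of changed data, 15-minute RPO, assumed 2:1 compression.
print(f"{required_wan_mbps(200, 15):.0f} Mb/s sustained")   # ~222 Mb/s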

In smaller environments, the core block connects to access blocks and IP service blocks. In larger environments, the core block connects to one or more aggregation blocks.

 

DataCenter_BlockCore_DCInterconnect.JPG

   Core Block, Data Center Interconnect (click to enlarge)

 

IP Service Blocks

IP service blocks insert IP network services into specific traffic flows, for example between clients outside the data center and servers inside it, or, in n-tier web applications, between the web, application and database tiers.

 

IP network services include server load balancing (SLB), firewall, SSL termination, intrusion detection/prevention services (IDS/IPS) and IPv4/IPv6 translation. A service block can connect to a core or aggregation block depending on the overall scale of the data center network.

 

Inline Six-Pack

By far the most common reference model in data center network design is best described as the inline six-pack. Services are deployed as a six-pack, inline between the access blocks and the core block. The six-pack is formed by the inline deployment of redundant sets of services: firewalls, intrusion detection/prevention systems (IDS/IPS) and server load balancers (SLB). Ingress traffic from an enterprise client to the application server flows “north to south” through the core and aggregation network devices, entering the firewall, IDS/IPS system and SLB. The SLB then directs traffic to the appropriate server to avoid bottlenecks as server workloads vary. Egress traffic from the server to the client follows the same path back through the SLB, IDS/IPS and firewall, returning to the aggregation router and exiting the data center through the core routers.

 

The benefit of the inline six-pack design is the simplicity of configuration and the fact that the ingress and egress paths are symmetric. Symmetric paths are important for SLBs to perform their server proxy and Layer 7 optimization functions, such as cookie persistence. The one negative of the inline design is that all traffic is required to flow through the service devices, so they are often large, high-capacity devices.
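The symmetry requirement can be made concrete with a small Python model of the service chain; the device names are placeholders, and a real deployment enforces symmetry with routing and policy rather than application code:

# Minimal model of the inline six-pack: ingress traverses the service chain in order,
# and egress must traverse it in reverse for stateful devices (firewall, SLB) to work.
SERVICE_CHAIN = ["core-router", "firewall", "ids-ips", "slb"]

def ingress_path():
    return SERVICE_CHAIN + ["server"]

def egress_path():
    return ["server"] + list(reversed(SERVICE_CHAIN))

ingress, egress = ingress_path(), egress_path()
assert egress == list(reversed(ingress)), "an asymmetric path would break L4-L7 state"
print(" -> ".join(ingress))
print(" -> ".join(egress))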

 

DataCenter_BlockIPSvcs_Inline6Pack.JPG

   IP Services Block, In-line Six-Pack (click to enlarge)

 

Layer-3 Lollipop

An alternative configuration is the Layer-3 Lollipop. While the logical flow of traffic is similar to that in the six-pack design, the physical configuration represents services as if they were lollipops sticking out of the Layer-3 aggregation switch, as shown below. Ingress traffic from the core block is forced by the aggregation block to the firewall, which redirects it to the SLB back through the aggregation router; a copy of the traffic is also sent to the IDS/IPS for processing. The SLB performs its application optimization function and redirects traffic to the appropriate server back through the aggregation router to the appropriate switch in the access block. The egress traffic is forced by the aggregation router to follow a symmetric path through the SLB and firewall, and exits the data center through the core layer.

 

The benefit of the lollipop is that unrelated traffic can pass directly from the aggregation block to the access block without going through service devices. This enables deployment of smaller-capacity service devices, as less traffic is expected to flow through them. The downside is that traffic flow between the service devices and the aggregation block is complex to configure. Particular care must be taken to ensure that the ingress and egress paths are symmetric so as not to break Layer 7 optimization rules used by the SLB or L4-L7 stateful firewall rules.

 

DataCenter_BlockIPSvcs_Layer3Lollipop.JPG

   IP Services Block, Layer-3 Lollipop (click to enlarge)

 

An alternate configuration of the Layer-3 Lollipop for a VCS Fabric leaf-spine topology is shown below. In this configuration, the IP Services components connect to each spine switch in the VCS Fabric.

 

DataCenter_BlockIPSvcs_Layer3Lollipop-VCSFabricLeafSpine.JPG

   IP Services Block, VCS Fabric Leaf-Spine Layer-3 Lollipop (click to enlarge)

 

Aggregation Blocks

This optional block is used when the scale of the data center requires more connectivity between the server/storage edge and the data center edge. It is inserted between the core and access blocks. Service blocks are commonly connected to the aggregation block as well, so network services also scale better.

 

Multi-chassis Trunking

This block improves availability for access blocks that move the Layer-2/Layer-3 boundary outside the access block. Since a classic LAG must terminate all of its links on the same switch, Multi-chassis Trunking (MCT) creates a single logical switch from a two-switch cluster, and an access block LAG can terminate on each of the switches in the MCT cluster.

This aggregation block can connect to an IP Service and/or IP Storage block.

 

DataCenter_BlockAggregation_MCT.JPG

   Aggregation Block, Multi-chassis Trunking (click to enlarge)

 

Virtual Router Redundancy Protocol-Extended (VRRP-E)

This block improves AR for IP gateways. A switch cluster using VRRP-E provides an active/active IP gateway address for servers. Both switches share a logical IP gateway address that is advertised to the servers. If one switch goes off-line, the other continues forwarding traffic via the gateway IP address. Unlike VRRP, VRRP-E is active/active, allowing both switches to forward traffic, and provides sub-second fail-over should a switch go off-line.

This aggregation block can also connect to an IP Service and/or IP Storage Block.

DataCenter_BlockAggregation_VRRPE.JPG

   Aggregation Block, VRRP-E (click to enlarge)

Combined MCT+VRRP-E

This block combines MCT with VRRP-E in the same aggregation block. It can connect with an IP Service and/or IP Storage block.

DataCenter_BlockAggregation_MCT+VRRPE.JPG

   Aggregation Block, MCT+VRRP-E (click to enlarge)

SAN Blocks

Storage area networks (SAN) are commonly used in the enterprise. With server virtualization requiring shared storage for live virtual machine migration, increased storage IO bandwidth when consolidating multiple applications on a single server, and data replication of VMs for disaster recovery, the SAN continues to be a fundamental part of data center network architecture.

Although SANs have been tightly aligned with Fibre Channel, over the past decade SANs carried over Ethernet and IP networks have become standardized. Two types, iSCSI and Fibre Channel over Ethernet (FCoE), are common.

This section is focused on block storage, not file storage. File servers (NAS, NFS, CIFS, Lustre, etc.) rely on servers and a client/server architecture to share storage. Consequently, Access and/or VCS Fabric blocks should be used when connecting file servers and scale-out NAS clusters to the data center network.

Large-scale Fibre Channel fabrics use a core/edge topology, which has proven benefits for scalability, performance and low latency. Core switches are commonly chassis switches of the director or backbone class, while edge switches are commonly 1U or 2U rack-mount switches. In some data centers the edge switches are deployed in the middle of row or end of row, while others prefer to deploy the edge switches at the top of rack next to the IP switches.
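As a rough illustration of core/edge sizing, the sketch below computes a host-to-storage fan-in ratio; the port counts and speeds are assumptions, and acceptable fan-in depends entirely on the workload:

# Hypothetical core/edge SAN sizing: aggregate host bandwidth versus storage bandwidth.
def fan_in_ratio(host_ports, host_speed_gbps, storage_ports, storage_speed_gbps):
    """Ratio of aggregate host bandwidth to aggregate storage port bandwidth."""
    return (host_ports * host_speed_gbps) / (storage_ports * storage_speed_gbps)

# Example: 192 host ports at 8 Gbps against 16 storage ports at 16 Gbps -> 6:1 fan-in.
print(f"{fan_in_ratio(192, 8, 16, 16):.0f}:1")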

Fibre Channel Edge Blocks

Edge, Switch

The edge connects to servers via IO adapters, using either Brocade’s host bus adapters (HBA) or Brocade’s 1860 Fabric Adapter. Fibre Channel carries storage traffic, which has stringent requirements for latency, in-order frame delivery and rapid convergence when switches or links change in the fabric. Considering that many Fibre Channel fabrics carry mainframe, OLTP database and other tier 1 applications with stringent uptime requirements, it is common to deploy dual physical fabrics between servers and storage, commonly referred to as Fabric “A” and Fabric “B”. This implies two HBAs per server to extend high availability all the way to the server backplane.

DataCenter_BlockSAN_EdgeSwitch.JPG

   SAN Block, Edge Switch (click to enlarge)

 

Edge, Access Gateway

It is common practice to deploy servers in racks with IP switching in the rack, and Fibre Channel switches are commonly deployed the same way. However, a Fibre Channel fabric has a limit on the total number of switches, so deploying a Fibre Channel switch per rack can limit scalability. To overcome this, Brocade provides Access Gateway (AG) mode. When a switch uses AG mode, it acts as a port extender rather than a Fibre Channel switch; Fibre Channel switch services are supplied by an edge or core switch.

 

With blade chassis, use of AG mode is very attractive for embedded Fibre Channel switch cards, greatly increasing the number of Fibre Channel edge ports in a single fabric.
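The scaling effect of AG mode can be sketched as simple domain arithmetic; the per-fabric domain budget used here is an assumed design ceiling, not a product specification:

# Why AG mode helps at scale: an AG-mode device presents NPIV logins upstream and
# consumes no switch domain, while a switch-mode ToR device consumes one domain per fabric.
ASSUMED_DOMAIN_BUDGET = 56   # hypothetical design ceiling on domains per fabric

def domains_used(racks, core_switches, tor_in_switch_mode):
    tor_domains = racks if tor_in_switch_mode else 0
    return core_switches + tor_domains

for racks in (20, 60):
    switch_mode = domains_used(racks, core_switches=2, tor_in_switch_mode=True)
    ag_mode = domains_used(racks, core_switches=2, tor_in_switch_mode=False)
    print(f"{racks} racks: {switch_mode} domains in switch mode "
          f"(budget {ASSUMED_DOMAIN_BUDGET}), {ag_mode} domains with AG mode")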

 

DataCenter_BlockSAN_AccessGateway.JPG

   SAN Block, Access Gateway Switch (click to enlarge)

 

Fibre Channel Core Blocks

 

Core, Backbone Switches

The Brocade DCX is a backbone-class chassis switch. Chassis with different slot capacities are available for inserting port cards and SAN services cards. A SAN services card can provide data-at-rest encryption or Fibre Channel distance extension, and both can be combined in the same chassis.

It is common practice to deploy dual physical fabrics, called Fabric “A” and Fabric “B”, for AR. Multiple core switches can be connected using either ISL connections or, with the DCX series, inter-chassis links (ICL), which consume no ports on port cards because they are built into the backbone chassis.

DataCenter_BlockSAN_CoreBackbone.JPG

   SAN Block, Core Backbone (click to enlarge)

Core, Backbone with Extended Distance ISLs

This block supports array-to-array replication over extended distances of up to 120 km. If the added latency is acceptable for the application using replicated storage, and the array supports synchronous replication, this block can be used as part of an active/active data center design providing very low Recovery Point Objective (RPO) and Recovery Time Objective (RTO). This block also uses Virtual Fabrics, available on the DCX Backbone family (among other products). Virtual Fabrics allows a physical switch to be logically partitioned into multiple logical switches, and each logical switch can join a different SAN fabric of physical or other logical switches. Ports are allocated to a logical switch as needed, providing a cost-effective means to isolate array replication traffic on the long-distance optical network from application-driven IO between servers and storage.
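The latency penalty of distance can be estimated with the common rule of thumb of about 5 microseconds of propagation delay per kilometre of fibre, as in this sketch (the distances are examples, and real links add equipment latency on top of propagation delay):

# Round-trip propagation delay added by an extended-distance ISL.
# Synchronous replication adds at least this delay to every acknowledged write.
def round_trip_delay_ms(distance_km, us_per_km=5.0):
    return 2 * distance_km * us_per_km / 1000.0

for km in (10, 60, 120):
    print(f"{km:>3} km link: ~{round_trip_delay_ms(km):.1f} ms added per synchronous write")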

DataCenter_BlockSAN_CoreExtendedDistanceISLs.jpg

   SAN Block, Core Backbone with Extended Distance ISLs (click to enlarge)

Core, Backbone with Fibre Channel Routing

Similar to IP Routing, Fibre Channel Routing isolates layer-2 SAN traffic within a fabric but allows specified devices in separate fabrics to send traffic to each other. Brocade’s Advanced Zoning has logical SAN zones (LSAN zones) that are used with Fibre Channel routing. An LSAN zone is defined to enforce security policies for devices in separate fabrics so only authorized devices can route traffic to each other.

A separate “Backbone Fabric” is created via EX_Ports, which connect to E_Ports in Edge Fabrics. As shown below, Brocade’s Virtual Fabrics feature can be used to logically segment a core backbone switch into multiple logical switches, each in an independent fabric. Brocade Integrated Routing (IR) ports are configured in the Backbone Fabric logical switch, each associated with an Edge Fabric. Brocade 8 and 16 Gbps switches and Backbones support IR, so any switch port can be configured as a routing port, or EX_Port. ISL connections (E_Ports) between logical switches can extend a Backbone Fabric to include multiple logical switches, each in a separate Backbone chassis. Inter-fabric links (IFL) from the FC router ports to an Edge Fabric can be trunked (IFL Trunk) for AR.
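A minimal sketch of what an LSAN zone amounts to is shown below; the zone name and WWPNs are placeholders, and in practice zones are created with the fabric's own zoning tools rather than application code:

# An LSAN zone is an ordinary zone whose name begins with "LSAN_"; the same zone is
# defined in each participating edge fabric and lists the WWPNs allowed to communicate
# across the FC router. The WWPNs below are placeholders.
lsan_zone = {
    "name": "LSAN_backup_tape_share",
    "members": [
        "10:00:00:05:1e:aa:bb:01",   # backup server HBA in edge fabric A (placeholder)
        "50:06:0b:00:00:c2:62:00",   # tape library port in edge fabric B (placeholder)
    ],
}

def is_lsan(zone):
    """Only zones whose names start with the LSAN prefix are exported by the FC router."""
    return zone["name"].lower().startswith("lsan_")

print(is_lsan(lsan_zone))   # True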

As shown below, a common use of Fibre Channel routing is to share tape libraries and drives with hosts, backup servers and storage arrays in multiple fabrics. A second use is connecting VCS Fabrics with converged traffic (IP and FCoE) to a SAN Fabric. See Top-of-Rack VCS Fabric, Converged for details about using VCS Fabrics for converged traffic.

DataCenter_BlockSAN_CoreFCRouting.JPG

   SAN Block, Fibre Channel Routing (click to enlarge)

Core, Backbone with Integrated SAN Services

Service blocks connect SAN services to specific traffic flows between servers and storage in a shared storage environment. Important SAN services include data-at-rest encryption and SAN distance extension over IP with device emulation and compression, used for array-to-array storage replication and for tape emulation for backup to remote tape drives. Device emulation reduces latency for FICON and SCSI IO over distance, and compression reduces the WAN bandwidth required, extending the practical distance for array-based storage replication.

Integrated SAN services are deployed either as blades inserted into chassis slots in backbone switches (shown in Fabric-A below) or as separate switches connected to the core (shown in Fabric-B below).

DataCenter_BlockSAN_CoreIntegratedSvcs.JPG

   SAN Block, Core Integrated SAN Services (click to enlarge)

Core, Distance Extension Switch Over IP Service

In some array replication configurations, Fibre Channel ports on separate distance extension switches are connected directly to array ports carrying array-to-array replication traffic over IP. The Ethernet ports on the distance extension switch connect directly to the IP core router. Traffic on the Ethernet links uses the Fibre Channel over IP (FCIP) protocol. Support for QoS ensures that bandwidth is reserved for critical storage replication traffic should IP router congestion occur.

DataCenter_BlockSAN_ArrayReplicationOverIPSvcs.JPG

   SAN Block, Core Distance Extension Switch Over IP Service (click to enlarge)

iSCSI SAN Blocks

An iSCSI SAN uses IP network connections (Layer-3) for block storage traffic. It is common practice to provide a physically separate network for iSCSI storage traffic; the physical separation includes server NIC cards and L2/L3 switches. The following design blocks can be used to design an iSCSI SAN by connecting both storage and servers to the switches.


FCoE SAN Blocks

The following design blocks can be used to design an FCoE SAN.


Management Blocks

Management blocks can be network-centric or integrated platforms that include server and storage management. With the growth of x86 virtual servers, virtualization orchestration and management platforms are beginning to provide integrated network, server and storage configuration, orchestration and management. Traffic monitoring tools such as sFlow, an open standard, are also used to provide fine-grained monitoring at the application, virtual machine or web component level.

Brocade Network Advisor

Brocade Network Advisor (BNA) is a software-based management platform designed to integrate element management, monitoring and configuration of all Brocade data center products, both IP and SAN. As shown below, BNA provides role-based access control so IP and SAN administration functions can be securely controlled.

DataCenter_BlockMngmnt_BNA.JPG

   Brocade Network Advisor Block (click to enlarge)

sFlow Monitoring

Brocade integrates sFlow monitoring into all of its IP products. sFlow is an open standard for IP traffic monitoring. A number of partners provide sFlow platforms that extend traffic monitoring all the way to the individual virtual machine.
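To show where sFlow records actually arrive, here is a minimal listener sketch; a production deployment would use a full collector to decode and analyze the samples, and the standard sFlow collector port (UDP 6343) is the only protocol detail assumed here:

# Minimal sFlow datagram listener: counts datagrams exported by switch agents.
# This does not decode sFlow records; it only demonstrates the transport.
import socket

def listen_for_sflow(bind_addr="0.0.0.0", port=6343, count=5):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_addr, port))
    for _ in range(count):
        datagram, (agent, _) = sock.recvfrom(65535)
        # Each datagram carries one or more flow/counter samples from a switch agent.
        print(f"received {len(datagram)} bytes of sFlow data from agent {agent}")
    sock.close()

if __name__ == "__main__":
    listen_for_sflow()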

Virtualization Orchestration and Monitoring

Via published APIs from x86 server virtualization platform vendors, Brocade provides direct integration with virtualization platform orchestration and monitoring services. Examples include:

·     Automatic Migration of Port Profiles (AMPP) integration with vSphere port groups via a vSphere plug-in. Port groups are automatically discovered and the corresponding AMPP port profiles and policies are automatically created in a VCS Fabric.

·     Brocade’s ADX Application Resource Broker (ARB), which provides dynamic virtual machine configuration based on application and client workload via a vSphere plug-in or the Microsoft System Center Virtual Machine Manager (SCVMM 2012) API.

Templates

As mentioned previously, building blocks are combined into templates. A data center template can include one or more access blocks, an optional aggregation block and one or more IP services blocks. A core template can include a core block and optionally IP service blocks.

SAN blocks can be used to build a storage template. A storage template connects to one or more data center templates and can also connect to a core template for data replication services.

The following provides an example of how to create templates from building blocks. The final section shows how to use multiple templates to construct a canonical architecture for a data center network.

Data Center Template, Converged Networking

This template provides converged networking using ToR converged switches and a VCS Fabric for converged traffic. As shown below, it includes the following building blocks.

DataCenter_TemplateDC-LargeConverged.JPG

   Data Center Template, Converged Network (click to enlarge)


This template can scale to large environments as it includes an aggregation module. Multiple access blocks can be added to scale out the server edge. The ToR converged access block connects to a SAN template using direct fabric attach and/or Access Gateway mode. The VCS Fabric block connects to the SAN template using Fibre Channel routing. Below is an exploded view showing the details of the building blocks included in this template.

DataCenter_TemplateDC-LargeConverged-Exploded.JPG

   Data Center Template, Converged Network Exploded View (click to enlarge)

Data Center Template, Server Virtualization

This template supports server virtualization. It includes 1 GbE servers using the End-of-Row vToR block and IP services using the Layer-3 Lollipop IP service block. It does not require an aggregation block because the EoR switches collapse access and aggregation into the same block, and that block connects to the IP service block. Additional End-of-Row vToR blocks can be added to scale out this template.

DataCenter_TemplateDC-EORVirtualization.JPG

   Data Center Template, EoR Server Virtualization (click to enlarge)


DataCenter_TemplateDC-EORVirtualization-Exploded.JPG

   Data Center Template, EoR Server Virtualization Exploded View (click to enlarge)

Data Center Template, VCS Fabric Leaf-Spine

This template provides a flat data center network using Brocade VCS Fabric technology. Diverse workloads of physical and virtual servers benefit from using a VCS Fabric.

DataCenter_TemplateDC-VCSFabricLeafSpine.JPG

   Data Center Template, VCS Fabric Leaf-Spine (click to enlarge)


  DataCenter_TemplateDC-VCSFabricLeafSpine-Exploded.JPG

   Data Center Template, VCS Fabric Leaf-Spine Exploded View (click to enlarge)

SAN Template, Core/Edge with Routing

This SAN template includes the edge switch and Access Gateway blocks to provide block storage for data center templates. It also includes a Fibre Channel routing block to connect to the VCS Fabric block and to provide shared tape library access for backup.

DataCenter_TemplateSAN-CoreEdgeRoute.JPG

   SAN Template, Core/Edge with Routing (click to enlarge)


DataCenter_TemplateSAN-CoreEdgeRoute-Exploded.JPG

   SAN Template, Core/Edge with Routing Exploded View (click to enlarge)

Core Template, Data Center Interconnect

This template connects to one or more data center templates and to a SAN template with a SAN replication services block. It provides access to a remote data center for DR and access to the Internet.

DataCenter_TemplateCore-DCInterconnect-Exploded.JPG

   Core Template, Data Center Interconnect (click to enlarge)

Management Template

This template uses Brocade Network Advisor, an sFlow monitoring block and a plug-in block for VMware vCenter orchestration and management with VCS Fabric AMPP.

DataCenter_TemplateMngmnt-BNAsFlowvCenter-NoTitle.JPG

   Management Template, BNA with sFlow and VMware vCenter Plug-ins (click to enlarge)

Canonical Architecture

Below is a canonical architecture for a data center network constructed from a core template, two data center templates, one SAN template and one management template. Connections are shown between the appropriate building blocks.

Templates can scale-up and templates can be replicated to scale-out the architecture.

DataCenter_CanonicalArchitecture-Templates.JPG

   Example Canonical Architecture Built from Templates (click to enlarge)

Contributors