
Data Center Solution-Design Guide: EMC Isilon Scale-out NAS and VMware vSphere 5

 
 
 

Synopsis: Design best practices for Brocade VCS Fabric with Brocade VDX Switches, VMware vSphere 5 and Isilon OneFS scale-out NAS. The design includes Brocade MLX Routers and Brocade ADX application delivery switches.

 

 

Contents

 

Preface

Overview

The growth of unstructured data is fueled by Web 2.0 applications, social media, video, and large-scale data analytics, commonly called “Big Data”. Client/server computing enabled network attached storage (NAS) in the 1980s. NAS is file-based storage that can be shared by multiple client computers. However, scaling NAS became problematic because the file system name space, as well as the storage pool, was limited by the scale-up capacity of a single high-performance server. Today, scale-out NAS clusters built from commodity compute components can deliver very large storage pools, excellent I/O performance, storage capacity in the tens of petabytes within a single file system name space, and cluster redundancy levels that tolerate multiple node failures.

 

EMC Isilon® storage is a leader in scale-out NAS, achieving 15 petabytes (PB) per highly available storage pool with integrated volume management in a single file system name space. Isilon OneFS® further reduces complexity and operating cost by combining RAID, volume management and file system management into one unified software layer. OneFS provides a single point of management for large storage pools, faster access to large files, built-in high availability, and the ability to easily scale the file system from a few terabytes (TB) to more than 15 PB of storage.

 

Virtual servers for x86 platforms also rely on scale-out clusters built from commodity compute components. The leader in server virtualization is VMware with vSphere® 5. VMware vSphere is now a key building block that IT uses to dramatically reduce infrastructure costs while adjusting resources dynamically to fit application workloads. Combining vSphere with Isilon scale-out NAS avoids the limitations of static compute and storage environments, allowing full virtualization of compute and storage resources, live migration of workloads, and automated creation of virtual machines on demand, with options for high availability and disaster recovery services.

 

Until recently, the network was not as scalable, nimble, or virtualized. To address this, Brocade introduced Brocade VCS® Fabric technology. Brocade VCS Fabric removes several limitations of classical layer 2 networks, including Spanning Tree Protocol (STP), a single active path per broadcast domain, congestion “hot spots” in link aggregation groups (LAGs), and the requirement to halt all traffic in the Ethernet network whenever changes are made to the topology (adding or removing switches and links). The Brocade VCS Fabric scale-out cluster architecture is similar to how storage and server clusters scale out to create resource pools. It transforms static, hierarchical layer 2 networks into a dynamic resource pool delivering multi-path forwarding on all least-cost paths, uniform low-latency forwarding between all devices, and non-disruptive automatic scale-out when new links and switches are added to the fabric. The VCS fabric provides a simple, scalable and highly available transport layer for low-latency, high-bandwidth traffic between VMware ESXi servers and Isilon storage nodes. When combined with the Brocade MLX router and ADX Series of application delivery switches, network designers can create an end-to-end data center network that scales as dynamically as server virtualization and scale-out NAS do, while simplifying network operations and management and lowering total cost of ownership.

 

The following Brocade platforms are used in this solution design.

  • Brocade Network Operating System (NOS) for VDX™ series switches
  • Brocade NetIron® Operating System for MLX™ series routers
  • Brocade ServerIron ADX Application Delivery switch

 

Purpose of This Document

Brocade VDX switches with VCS Fabric technology, Brocade MLX core routers, Brocade ADX application delivery switches and Brocade Network Advisor are used in this design. It includes EMC Isilon scale-out NAS and VMware vSphere 5.

 

This design guide supports information found in EMC publications about EMC Isilon with VMware vSphere 5 (Reference Architecture, Deployment Guide, Best Practices Guide and Sizing Guide), which should be reviewed.

This guide uses templates constructed from building blocks described in Brocade’s Data Center Network Infrastructure Base Reference Architecture (see Related Documents). Design options include a smaller-scale and a full-scale configuration. The smaller-scale configuration can easily be extended to the full-scale configuration as storage volume and compute workloads demand.

 

Audience

This document is intended for data center architects and network designers responsible for deployment of virtual data centers and private cloud computing architectures.

 

Objectives

This Design Guide provides guidance and recommendations for best practices when designing a data center network with VMware vSphere 5 and Isilon OneFS scale-out NAS and Brocade VDX switches with VCS Fabric technology, MLX routers and ADX application delivery switches.

 

Restrictions and Limitations

Release of NOS 3.0 is required for deployment of the VDX 8770 and layer 3 routing services (OSPF, VRRP/VRRP-E). Check the NOS 3.0 release notes for other restrictions or limitations.

 

Related Documents

The following EMC documents are valuable resources about EMC Isilon and VMware vSphere 5 solutions.

 

References

 

The following Brocade publications provide information about the Brocade Data Center Infrastructure Base Reference Architecture and features and capabilities of the NOS, NetIron MLX and ServerIron ADX platforms. Any Brocade release notes that have been published for NOS, MLX NetIron and ADX should be reviewed as well.

 

References

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Lead Architect: Marcus Thordal, Strategic Solutions Lab

 

Document History

Date                  Version        Description

2012-09-12         1.0                Initial Version

 

Reference Architecture

This design guide is based on Brocade’s Data Center Network Infrastructure, Base Reference Architecture (DCNI-RA). DCNI-RA provides a flexible, modular way to architect and design data center networks. As shown below, VMware vSphere 5 is overlaid on top of the DCNI-RA to create this solution design.

 

The DCNI-RA is constructed from building blocks which are combined into templates. Building blocks are repeatable component configurations commonly found in data center networks. Templates provide repeatability, incorporate scale-out and simplify architecture, design and operation of the data center network.

 

Leaf/Spine Fabric Design

This guide contains two designs based on the Brocade VCS Fabric technology with the VDX family of switches. One design is based on a Leaf/Spine VCS Fabric as shown below.

 

 

[Figure: Leaf/Spine VCS Fabric Design Reference Architecture]

 

The design uses VDX 8770 switches at the spine and VDX 6710 and VDX 6720 switches at the leaf. Leaf switches connect to ESXi servers; the VDX 6710 is optimized for 1 GbE devices and the VDX 6720 for 10 GbE devices. Both switches join a VCS Fabric, eliminating Spanning Tree Protocol (STP). The leaf/spine VCS Fabric topology provides uniform latency with minimal switch hops, low oversubscription ratios, and scalability to very large numbers of servers. The reference architecture for the leaf/spine design includes a Core Template, a Data Center Template with a Leaf/Spine VCS Fabric, and a Management Template, combined with VMware vSphere 5 and EMC Isilon OneFS Scale-out NAS.

 

Collapsed Fabric Design Option

 

The other design, shown below, is a collapsed VCS Fabric that is simpler than the leaf/spine VCS Fabric design but not as scalable.

 

[Figure: Collapsed VCS Fabric Design Reference Architecture]

 

The collapsed design uses a pair of VDX 8770 switches to form a two-switch VCS Fabric with layer 3 routing to the core. The collapsed design can easily be expanded into the leaf/spine design by adding 1 GbE or 10 GbE leaf switches should growth require the better scalability of the leaf/spine design. Similar to the leaf/spine design, the reference architecture includes a Core Template, Data Center Template and Management Template along with VMware vSphere 5 and EMC Isilon OneFS Scale-out NAS.

 

References

 

Business Requirements

A challenge for data center architects and designers is how to meet the mixed performance demands of a diverse set of business applications. Disparate workloads, from business analytics with large data sets (“Big Data”) to back-office applications with high transaction rates (e.g., CRM, ERP, and financial OLTP systems), create a wide range of network performance requirements. Server virtualization clusters running tens of workloads per node and hundreds of workloads per cluster can move any workload across server, storage and network resource pools as resource demands dictate. At scale, this requires a network providing uniform latency, use of all least-cost paths, high resiliency and configuration simplicity. Storage and network management have become much more complex as a result, and data centers commonly include disparate types of storage, storage protocols and network topologies. On top of this, cluster technology, the key to scale-out solutions for compute and storage resources, places much more stringent demands on network bandwidth, latency, scalability and availability, often exceeding the capabilities of static hierarchical networks based on classic Ethernet. Network management with repeated entry of commands on multiple switches and ports is required to implement a policy change, adjust bandwidth in a link aggregation group (LAG) or trunk, or tune around a hot spot on a physical link in a LAG. This manual management and operation model fails to keep pace with a virtual data center, where workload migration changes network traffic flows, policies have to move when workloads migrate, and dynamic storage pools rapidly change the latency and bandwidth of storage traffic.

 

The solution requires a data center architecture that leverages the same design principles for servers, storage and networking: scale-out resource pools, automatic dynamic load balancing, and policy defined configuration. Isilon OneFS, VMware vSphere 5 and Brocade VCS Fabric technology are designed with the same principles. Deploying them for data center virtualization and private cloud compute projects eliminates the restrictions inherent in the previous generation of data center architecture.

 

Special Considerations

This Design Guide covers a single data center environment. However, larger enterprises often deploy dual data centers for disaster recovery and business continuance leveraging the ability to synchronize virtual machine state and storage state across distance. Brocade publishes a Design Guide for VMware vSphere 5 Site Recovery Manager (SRM). It can be combined with this Design Guide to construct a dual active/active data center with vSphere 5 and Isilon OneFS Scale-out NAS.

 

References

 

Design

VCS Fabric Technology

 

Storage traffic places stringent demands on the network, particularly with Isilon scale-out NAS, where storage volumes can be as much as 15 petabytes per storage pool. A VCS Fabric eliminates Spanning Tree Protocol (STP) within the fabric, thereby removing many issues with Ethernet as the transport for virtual data centers. A VCS Fabric provides least-cost routing across all active links using TRILL-based link-layer routing at layer 2. With Brocade Network Operating System (NOS) 3.0, layer 3 routing is integrated within the fabric, reducing network tiers for a flatter, lower latency and lower cost network. All switches in a VCS Fabric are aware of each other and of the fabric topology. Changes to the topology (adding or removing links and/or switches) do not halt traffic on unaffected links, and fabric convergence is much quicker than with STP. Multiple links between switches automatically create a Brocade ISL Trunk with a maximum of eight links per trunk. A Brocade ISL Trunk with hardware-assisted frame striping across all links eliminates hot spots common to LAG-based trunks that rely on one-time hashing of flows and static flow allocation to a specific link. A VCS Fabric includes data center bridging (DCB) for lossless layer 2 transport and jumbo frames for improved performance with NAS, where blocks up to 8 KB can be forwarded in a single Ethernet frame.
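
To see why jumbo frames matter for NAS traffic, the arithmetic below compares how many Ethernet frames are needed to carry a single 8 KB NFS block at the standard 1500-byte MTU versus a 9000-byte jumbo MTU. This is a rough, hedged illustration: the RPC/NFS overhead figure is an assumption, not a measured value.

```python
import math

STANDARD_MTU = 1500          # bytes of Ethernet payload per standard frame
JUMBO_MTU = 9000             # bytes of Ethernet payload per jumbo frame
IP_TCP_OVERHEAD = 40         # IPv4 (20) + TCP (20) headers, no options
RPC_NFS_OVERHEAD = 150       # rough allowance for RPC/NFS framing (assumption)
NFS_BLOCK = 8192             # the 8 KB block size mentioned in the text

def frames_needed(mtu: int, payload: int) -> int:
    """Frames required to carry one NFS block over TCP at a given MTU."""
    usable = mtu - IP_TCP_OVERHEAD
    return math.ceil((payload + RPC_NFS_OVERHEAD) / usable)

print("standard frames per 8 KB block:", frames_needed(STANDARD_MTU, NFS_BLOCK))  # 6
print("jumbo frames per 8 KB block:   ", frames_needed(JUMBO_MTU, NFS_BLOCK))     # 1
```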

 

For server virtualization environments and live VM migration between servers, Brocade VCS Fabric provides Automated Migration of Port Profiles (AMPP) to ensure all network policies are automatically applied to the ingress port of the fabric regardless of which port traffic from a VM enters. AMPP is enhanced with a VMware plug-in so AMPP is VM aware. With the plug-in, VCS Fabric port profile creation is automatic and synchronized with VM creation. vSphere sends a message to the VCS Fabric with information about the VM and its port group so VCS Fabric can create a matching AMPP port profile. When a VM migrates, an alert is sent to the VCS Fabric so the new ingress fabric port for VM traffic is explicitly identified in advance.
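
The port profile the fabric builds is keyed off information vCenter already holds: the VM’s name, its virtual NIC MAC addresses, and the port group each NIC attaches to. The sketch below shows one way to pull that inventory with the open-source pyVmomi library; the vCenter hostname and credentials are placeholders, and this is only an illustration of the data involved, not the actual plug-in implementation.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter address and credentials; replace with real values.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:          # skip VMs with no readable config
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                # MAC address plus port group name is the kind of data an
                # AMPP port profile is matched against on the fabric side.
                portgroup = getattr(dev.backing, "deviceName", "<dvPortGroup>")
                print(vm.name, dev.macAddress, portgroup)
    view.Destroy()
finally:
    Disconnect(si)
```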

 

Leaf/Spine Fabric Topology

The diagram below shows the leaf/spine fabric topology. It uses VDX 8770 switches at the spine, with the option to use the four-slot or eight-slot model. The spine switches form a VCS Fabric with the VDX 67xx switches used at the leaf. Each leaf is connected to every spine using one or more Brocade ISL Trunks. Each Brocade ISL Trunk can include up to eight 10 GbE links.

 

The leaf/spine topology provides high availability and resiliency, excellent utilization of Brocade ISL Trunk bandwidth (95%), low oversubscription ratios, and easy scale-out of NAS nodes at the spine and ESXi servers at the leaf.
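
A quick way to sanity-check a leaf design is to compute its oversubscription ratio: total server-facing bandwidth divided by total uplink bandwidth to the spine. The sketch below does this calculation; the port counts are illustrative assumptions, not a validated bill of materials.

```python
def oversubscription(server_ports: int, server_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Leaf downlink bandwidth divided by leaf-to-spine uplink bandwidth."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# Illustrative numbers only, not a validated configuration:
# a 1 GbE leaf with 48 server ports and 6 x 10 GbE uplinks
print(f"{oversubscription(48, 1, 6, 10):.1f}:1")    # 0.8:1, effectively non-blocking
# a 10 GbE leaf with 48 server ports and 12 x 10 GbE uplinks
print(f"{oversubscription(48, 10, 12, 10):.1f}:1")  # 4.0:1
```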

 

[Figure: Leaf/Spine VCS Fabric Topology]

 

Collapsed Fabric Topology

The topology below includes a pair of VDX 8770 switches, either the four-slot or eight-slot model. The switches form a two-switch VCS Fabric. Uplinks go to the core template and to the IP services template; these use VCS Fabric vLAG for high availability and resiliency. Servers and NAS nodes use NIC Teaming to connect to each of the VDX 8770 switches, again using vLAG for high availability and resiliency.

 

[Figure: Collapsed VCS Fabric Topology]

 

This topology provides a network that is highly available and resilient, with excellent scalability, while avoiding the use of Spanning Tree Protocol.

 

Base Design

This section describes the base design for the solution. Any design options or optimizations of this base design are documented in later sections.

 

The network design uses three templates: data center network, data center core, and network management. Each template is constructed from one or more building blocks documented in the Data Center Infrastructure Reference Architecture. In this design, there are two options for the Data Center template, a Leaf/Spine VCS Fabric and a Collapsed VCS Fabric. Either can be used with the Data Center Core and Data Center Management templates.

 

The network is designed in an open way to accommodate a variety of server products. However, the choices are restricted by the VMware hardware compatibility requirements for vSphere 5.

 

Isilon OneFS Scale-out NAS Building Block

 

Synopsis

EMC Isilon storage is based on an architecture model that differs from traditional storage platforms to enable efficient storage at large scale. An Isilon scale-out NAS cluster can scale to over 15 petabytes with all storage managed in a single file system name space. Isilon OneFS creates a distributed, scale-out storage cluster integrated with a distributed file system. OneFS simplifies storage by integrating three layers of storage management, RAID, volume management and a file system, under a single point of management. OneFS uniquely stripes files and metadata across multiple storage nodes within the cluster, an improvement over striping only across individual disks in a single storage device. This fully distributed scale-out architecture provides superior performance and plug-and-play scaling with integrated high availability and fault tolerance.

OneFS provides each node with knowledge of the entire file system layout and the location of each file and its parts. All nodes in the cluster use a dedicated back-end InfiniBand network for messaging, control and node synchronization. Accessing any storage node in the cluster (via 1 GbE or 10 GbE interfaces) provides access to any and all files in the file system regardless of which nodes store its data blocks. Thus, there are no volumes or shares, no inflexible volume size limits, no downtime for reconfiguration or expansion of storage, and no sprawling list of network drives to manage.
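
As a conceptual illustration only (not OneFS’s actual layout or protection algorithm), the toy sketch below spreads a file’s blocks round-robin across cluster nodes, which is the basic idea behind striping at the node level rather than at the disk level.

```python
def stripe_blocks(file_size: int, block_size: int, nodes: list[str]) -> dict[str, list[int]]:
    """Toy round-robin placement of a file's blocks across cluster nodes.
    Illustrative only; OneFS's real layout also writes protection blocks
    and rebalances data as nodes are added."""
    placement: dict[str, list[int]] = {n: [] for n in nodes}
    n_blocks = -(-file_size // block_size)   # ceiling division
    for b in range(n_blocks):
        placement[nodes[b % len(nodes)]].append(b)
    return placement

layout = stripe_blocks(file_size=1_000_000, block_size=8192,
                       nodes=["node1", "node2", "node3"])
for node, blocks in layout.items():
    print(node, "holds", len(blocks), "blocks")
```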

 

Each cluster consists of three to 144 Isilon IQ nodes. A node can be an S-Series, X-Series or NL-Series with integrated CPU, disk, optional SSD and network interfaces (1 GbE or 10 GbE). By mixing nodes from different series, optimized storage pools are available to match differing application workloads. Data can automatically migrate from one series of node to another based on Information Lifecycle Management (ILM) policies, further reducing total cost of ownership.
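
As a rough illustration of the kind of ILM policy that drives tier placement (the attributes and thresholds below are invented for illustration, not the actual SmartPools policy engine), the sketch decides a target node series for each file from simple attributes such as age and size.

```python
import time
from dataclasses import dataclass

DAY = 86_400  # seconds per day

@dataclass
class FileAttrs:
    path: str
    size_bytes: int
    last_access: float   # POSIX timestamp

def target_tier(f: FileAttrs, now: float) -> str:
    """Toy ILM decision; thresholds are invented for illustration."""
    idle_days = (now - f.last_access) / DAY
    if idle_days > 180:
        return "NL-Series (archive)"
    if f.size_bytes < 1_000_000 and idle_days < 7:
        return "S-Series (performance)"
    return "X-Series (general purpose)"

now = time.time()
files = [
    FileAttrs("/ifs/vmware/vm1.vmdk", 40_000_000_000, now - 2 * DAY),
    FileAttrs("/ifs/home/report.doc", 250_000, now - 1 * DAY),
    FileAttrs("/ifs/archive/old.log", 5_000_000, now - 400 * DAY),
]
for f in files:
    print(f.path, "->", target_tier(f, now))
```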

 

Licensed software modules are provided including:

 

  • SmartConnect Advanced® - Provides policy-based network access and load balancing with failover for high availability (see the client-side sketch after this list).
  • SmartPools® - Data management using different disk tiers, applying Information Lifecycle Management (ILM) policies based on standard file attributes.
  • InsightIQ® - Powerful yet simple analytics platform to identify storage cluster performance trends, hot spots, and key statistics and information.
  • Isilon Plug-in for vSphere® - Integrates backup and restore tasks through the vCenter client.
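
SmartConnect presents the whole cluster behind one DNS name and answers each lookup with a node IP chosen by the configured balancing policy. The client-side sketch below simply repeats the lookup and tallies which addresses come back; the zone name is a placeholder, and a caching resolver between the client and the cluster can hide the rotation.

```python
import socket
from collections import Counter

# Placeholder SmartConnect zone name; clients mount NFS against this name.
ZONE = "isilon.example.com"

seen = Counter()
for _ in range(20):
    # Each fresh lookup can return a different node IP, depending on the
    # SmartConnect balancing policy (round robin, connection count, etc.).
    addr = socket.getaddrinfo(ZONE, 2049, proto=socket.IPPROTO_TCP)[0][4][0]
    seen[addr] += 1

for ip, count in seen.items():
    print(ip, count)
```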

 

Block Diagram

[Figure: Isilon OneFS Scale-out NAS Building Block]

 

 

Key Features

OneFS Distributed File System

Scale-out NAS File system with more than 15 PB of storage per cluster

SmartPools

Policy-based storage pool configuration and management

SmartConnect

Policy-based network access and load balancing

SmartCache

Globally-coherent caching across all cluster storage nodes

InSightIQ

Virtual appliance for detailed monitoring of long-term storage usage statistics

 

 

References

 

VMware vSphere 5 Building Block

 

Synopsis

VMware vSphere 5 is the leading x86 virtualization platform, used in thousands of IT data centers worldwide. vSphere virtualizes computing hardware resources into pools of CPU, memory, storage and network controllers. A virtual machine (VM) container is defined with a specified amount of resources to host an application workload. The VM runs the operating system and application code as if the VM were a physical server. vSphere, in conjunction with vCenter management tools, provides a virtual data center in which applications can be deployed from pre-configured templates in minutes instead of months. vCenter plug-ins enable partners to integrate vCenter management with partner products, including Isilon NAS and Brocade VDX switches with VCS Fabric technology.

A high-value feature of vSphere 5 is live vMotion, which allows migration of a virtual machine and its running applications from one server in an ESXi scale-out cluster to another without interruption to application processing or client connections. Live vMotion enables dynamic resource allocation and optimization of resources as application workloads vary.

The high availability requirements of the virtual data center have increased as more applications are expected to operate 24x365 without downtime. Live vMotion is used to move applications off compute hardware so it can be serviced or refreshed without application downtime. But disaster recovery and HA requirements need more than this, and VMware provides Distributed Resource Scheduler (DRS), Storage vMotion and Storage DRS, as well as Site Recovery Manager (SRM) for active/active data centers. These optional modules enable seamless migration and management of virtual machines between hosts and data stores.

 

 

Block Diagram

[Figure: VMware vSphere 5 Building Block]

 

 

Key Features

vSphere 5

x86 server virtualization platform

vCenter 5

Integrated virtual data center management platform

vMotion

Live VM migration for high resource utilization and non-stop hardware upgrades

 

 

 

References

 

 

Data Center Template, Leaf/Spine VCS Fabric Design

The following diagram shows the Data Center template for the leaf/spine VCS Fabric design with the building blocks used.

 

[Figure: Data Center Template, Leaf/Spine VCS Fabric Design]

 

 

The Spine building block connects to the core template and to the Layer 3 Lollipop via IP routing. It forms a VCS fabric with the leaf switches at layer 2, automatically forming Brocade ISL Trunks between spine and leaf switches with up to 80 Gbps of bandwidth. Brocade ISL Trunks are highly efficient compared to classical LAG trunks because they do not use hashing to place flows on physical links. Instead, all flows are frame-striped across all physical links, eliminating flow-induced hot spots and providing near-perfect load balancing with up to 95% trunk utilization.
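
The difference between one-time flow hashing and per-frame striping can be illustrated with a toy simulation: a few large “elephant” flows pinned to single LAG members create a hot spot, while striping every frame across all members keeps the load even. The flow mix and placement below are invented for illustration.

```python
import random

LINKS = 4                      # member links in the trunk (illustrative)
random.seed(7)                 # fixed seed so the example is repeatable

# An invented traffic mix: many small "mice" flows plus a few elephants (Gbps).
flows = [0.1] * 60 + [3.0, 3.0, 2.5]

# Classic LAG behaviour: each flow is placed once and pinned to one link.
lag_load = [0.0] * LINKS
for f in flows:
    lag_load[random.randrange(LINKS)] += f   # stands in for the hash decision

# Per-frame striping (as the text describes for Brocade ISL Trunks):
# every frame is spread across all members, so the load is nearly even.
striped_load = [sum(flows) / LINKS] * LINKS

print("hash-based LAG per-link load (Gbps):", [round(x, 2) for x in lag_load])
print("frame-striped per-link load (Gbps): ", [round(x, 2) for x in striped_load])
```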

 

The following describes the building blocks used to create this template.

 

Leaf/Spine VCS Fabric, Spine Block

Synopsis

The spine block creates the layer 3 / layer 2 boundary in the fabric. Uplinks to the core routers use OSPF or static routing services from ports in the spine switches. The links from the spine switches to the core can be configured with VCS Fabric vLAG for bandwidth aggregation with high availability.

As shown by the red links, Brocade ISL trunks automatically form between a spine switch and a leaf switch. Each spine switch connects to all leaf switches. Both spine switches are connected together with their own Brocade ISL Trunks for messaging and to form a VRRP/VRRP-E resilient gateway to the core.

Isilon NAS nodes connect to 10 GbE ports in both spine switches. VCS Fabric vLAG provides link resiliency within the fabric, while NIC Teaming is used on the Isilon node to provide high availability and resiliency.

The spine switches also connect to dual IP Services Blocks providing load balancing and IDS/IPS services for securing client access to applications running in VMs hosted on the VMware ESXi cluster.

All member switches in the VCS Fabric can participate in Automatic Migration of Port Profiles (AMPP) which assures network policies are synchronized with virtual machines when they migrate between ESXi servers.

Either VDX 8770 switches or VDX 6720 switches can be used for the spine. The VDX 8770 provides up to 384 10 GbE ports if all eight slots of the VDX 8770-8 have 48-port 10 GbE cards installed. The VDX 8770 also supports 40 GbE port cards.  The VDX 6720 provides up to 60 ports of 10 GbE connectivity and can be used for smaller configurations.

 

 

Block Diagram

[Figure: Leaf/Spine VCS Fabric, Spine Block]

 

 

Key Features

Automatic network formation

The VCS fabric automatically forms when connecting switches, enabling ease of deployment and non-disruptive scaling on demand.

All links are forwarding

VCS fabric automatically provides multipath traffic flow at layer 2 and eliminates the need for Spanning Tree Protocol (STP).

Adjacent links automatically form Brocade ISL Trunk

All VLANs are automatically carried on fabric Inter Switch Links (ISLs) and in addition traffic is load balanced at the frame-level providing completely even traffic distribution

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

Highly available layer 3 routing and gateway

Configuring VRRP / VRRP-E on spine switches ensures highly available gateway to the core.

 

 

References

 

 

VCS Fabric, GE Server Leaf Block

Synopsis

Leaf block switches connect to the spine using Brocade ISL Trunks. Each leaf switch has one or more Brocade ISL Trunks to each spine switch.

 

Many data centers still have large numbers of servers with 1 GbE interfaces. The VDX 6710 switch provides lower cost 1 GbE copper connections for these servers. Brocade ISL Trunks use higher performance 10 GbE ports, providing very low oversubscription rates so workload migration will not create congestion and network bottlenecks.

 

Storage traffic from Isilon NAS nodes attached to the spine flows on a VLAN to logically isolate this traffic from vMotion and client traffic. A separate management network switch (not shown) can be added to the server rack for physical isolation of management traffic.

 

All member switches in the VCS fabric can participate in Automatic Migration of Port Profiles (AMPP) which assures network policies are synchronized with virtual machines when they migrate between ESXi servers.

 

Block Diagram

[Figure: VCS Fabric, GE Server Leaf Block]

 

 

Key Features

Automatic network formation

The VCS Ethernet fabric forms automatically when switches are connected, enabling ease of deployment and non-disruptive scaling on demand

All links are forwarding

The VCS Ethernet fabric automatically provides multipath traffic flow and eliminates the need for Spanning Tree Protocol

Adjacent links automatically trunk

All VLANs are automatically carried on fabric Inter Switch Links (ISLs) and in addition traffic is load balanced at the frame-level providing completely even traffic distribution

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

 

 

References

 

 

VCS Fabric, 10GE Server Leaf Block

Synopsis

Leaf block switches connect to the spine using Brocade ISL Trunks. Each leaf switch has one or more Brocade ISL Trunks to each spine switch.

 

As server refresh cycles continue and 10 GbE LAN on Motherboard (LOM) configurations become more common, ESXi clusters can be built with 10 GbE connections using NICs or Converged Network Adapters (CNAs) supporting Data Center Ethernet (DCE) enhancements, including lossless transport at layer 2.

 

The VDX 6720 switch provides as many as 60 ports of low latency 10 GbE connectivity using either active Twinax or SFP+ optical connections. Brocade ISL Trunks use 10 GbE ports, providing lossless links with very low oversubscription rates to the spine so workload migration will not create congestion and network bottlenecks.

Storage traffic from Isilon NAS nodes attached to the spine flows on a VLAN to logically isolate this traffic from vMotion and client traffic. A separate management network switch (not shown) can be added to the server rack for physical isolation of management traffic.

 

All member switches in the VCS fabric can participate in Automatic Migration of Port Profiles (AMPP) which assures network policies are synchronized with virtual machines when they migrate between ESXi servers.

 

Block Diagram

[Figure: VCS Fabric, 10 GE Server Leaf Block]

 

 

Key Features

Automatic network formation

The VCS Ethernet fabric forms automatically when switches are connected, enabling ease of deployment and non-disruptive scaling on demand

All links are forwarding

The VCS Ethernet fabric automatically provides multipath traffic flow and eliminates the need for Spanning Tree Protocol

Adjacent links automatically trunk

All VLANs are automatically carried on fabric Inter Switch Links (ISLs) and in addition traffic is load balanced at the frame-level providing completely even traffic distribution

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

 

 

 

References

 

Layer-3 Lollipop IP Services Block

Synopsis

The data center template includes an IP Services block with global load balancing.

 

An active/active data center design should provide minimal disruption to client connections during a failover to the secondary data center. In each data center, Brocade ADX Application Delivery switches are used to provide server load balancing of client connections. A feature of the ADX, the Application Resource Broker, working with the ADX hardware, provides optimal distribution of client access and integrates with vCenter to automatically provision and de-provision VMs based on the client connection load. Between data centers, the ADX provides global server load balancing of client connections, with the ability to direct clients to the closest data center, reducing connection latency.
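
Conceptually, the Application Resource Broker applies a policy of this shape: watch connection load per VM and ask vCenter to add or retire VMs when thresholds are crossed. The sketch below is a toy decision loop with invented thresholds, not the ADX or ARB implementation.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    vms: int
    connections: int

# Illustrative thresholds only; real ARB policies are configured on the ADX.
SCALE_UP_CONN_PER_VM = 500
SCALE_DOWN_CONN_PER_VM = 100
MIN_VMS, MAX_VMS = 2, 16

def rebalance(pool: Pool) -> str:
    """Decide whether the load-balanced pool should grow, shrink, or stay."""
    conn_per_vm = pool.connections / pool.vms
    if conn_per_vm > SCALE_UP_CONN_PER_VM and pool.vms < MAX_VMS:
        pool.vms += 1
        return "provision one more VM via vCenter"
    if conn_per_vm < SCALE_DOWN_CONN_PER_VM and pool.vms > MIN_VMS:
        pool.vms -= 1
        return "retire one VM"
    return "no change"

pool = Pool(vms=4, connections=2600)
print(rebalance(pool), "->", pool.vms, "VMs")  # provisions a fifth VM
```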

 

Although not part of the validation of this design, third party security and IDS/IPS products can be used with the ADX Application Delivery switches.

 

 

Block Diagram

[Figure: IP Services Block, Layer-3 Lollipop for VCS Fabric Spine]

 

 

Key Features

Global Load Balancing

Directs client connections to the closest data center and distributes client loads between data centers

Application Resource Broker (ARB)

Provides dynamic server resource allocation / deallocation based on application workload via a plug-in to vCenter.

Brocade OpenScript Engine

A scalable application scripting engine that can help application developers and delivery professionals create and deploy network services faster, and is based on the widely used Perl programming language. Organizations can use OpenScript to augment Brocade ADX Series services with user-provided custom logic.

Brocade Application Resource Broker vSphere client plug-in

Monitoring of application workload and automatic allocation/deallocation of VMs as required to maintain client SLAs.

 

 

References

 

 

Data Center Template, Collapsed VCS Fabric Design

 

The following diagram shows the Data Center template for the collapsed VCS Fabric design option and the building blocks used.

 

[Figure: Data Center Template, Collapsed VCS Fabric Design]

 

This template includes the same IP Services block and Isilon NAS node blocks as the leaf/spine VCS Fabric design. It does not include any VCS Fabric leaf blocks, and the VCS Fabric spine block is altered slightly, as described below.

 

Collapsed VCS Fabric, Spine Block

Synopsis

This building block provides an alternate design for the VCS Fabric spine. In this design, both Isilon NAS nodes and VMware ESXi server cluster nodes attach to a pair of spine switches and there are no Brocade ISL Trunks connecting leaf switches to the spine switches. Due to the need for high density 10 GbE ports, the VDX 8770 switch is preferred for this building block.

 

The pair of VDX 8770 switches form a two-node VCS fabric. Similar to the Leaf/Spine VCS Fabric, Spine Block, layer 3 ports are used for the gateway to the core routers and for connecting to the IP Services building block. The ports connected to the core use OSPF routing and can be configured using VCS Fabric vLAG for more bandwidth and high availability uplinks.

 

Layer 2 ports with vLAG connect to both ports of the Isilon NAS node and to two 10 GbE ports of the ESXi server. The Isilon NAS node and ESXi servers use NIC Teaming to provide high availability connections to the VCS fabric spine switches.

 

Block Diagram

[Figure: Collapsed VCS Fabric, Spine Block]

 

 

Key Features

Automatic network formation

The VCS Ethernet fabric forms automatically when switches are connected, enabling ease of deployment and non-disruptive scaling on demand

All links are forwarding

The VCS Ethernet fabric automatically provides multipath traffic flow and eliminates the need for Spanning Tree Protocol

Adjacent links automatically trunk

All VLANs are automatically carried on fabric Inter Switch Links (ISLs) and in addition traffic is load balanced at the frame-level providing completely even traffic distribution

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

 

 

References

 

Core Template

The following diagram shows the core template and the building blocks used.

[Figure: Core Template]

 

 

The core template provides routing services at the data center core. The core can connect multiple Data Center templates to scale-out the number of VCS Fabrics deployed. The routers used provide internet routing services such as BGP.

 

The following block is used to construct the core template.

 

Core Routing Block

Synopsis

The Brocade MLX router is used for the data center core building block. A pair of routers is used for high availability, configured via Multi-Chassis Trunking (MCT). OSPF is configured for ports connecting to the spine switches in a VCS fabric. Multiple VCS fabrics can connect to the core to scale out the data center network. Border Gateway Protocol (BGP) is used on ports connecting to the Internet to advertise internal routes to the Internet service provider routers.

 

 

Block Diagram

[Figure: Core Routing Block]

 

 

Key Features

OSPF

Intra-data center routing between multiple VCS Fabrics

BGP

Border Gateway Protocol routing for Internet connection

Multi Chassis Trunking (MCT)

Multi-Chassis Trunking allows two switches to appear as one, enabling design of a resilient and redundant router implementation.



References

 

Management Template

Synopsis

Monitoring and management of the underlying network infrastructure in a unified way minimizes risk, reduces configuration error and provides early detection of traffic flows that are experiencing high latency and bottlenecks. Further, integration of monitoring and reporting of the network with VMware vCenter provides virtualization administrators with needed status and insights about the operational health of the NAS storage traffic, client connections and application resource requirements. Brocade Network Advisor provides this network management platform.

 

Other vCenter plug-ins for management include Application Resource Broker support for the ADX series of Application Delivery switches and the VCS Fabric Automated Migration of Port Profiles plug-in to automatically create and synchronize VCS Fabric port profiles with virtual machine port groups.

 

Traffic monitoring is a valuable service for active/active dual data centers, and Brocade includes open-standard sFlow monitoring in its NetIron, ServerIron and VDX families of products. Via third-party sFlow monitoring tools, network and virtualization administrators can see traffic performance down to individual VMs and workloads in both data centers.
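
As a minimal starting point for sFlow visibility, the sketch below listens on the standard sFlow collector port and counts datagrams per exporting agent; decoding the flow samples themselves (to get per-VM detail) requires a full sFlow v5 parser or an off-the-shelf collector.

```python
import socket
from collections import Counter

SFLOW_PORT = 6343   # standard sFlow collector port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", SFLOW_PORT))

datagrams_per_agent = Counter()
try:
    for _ in range(100):                      # sample a small window, then stop
        data, (agent_ip, _) = sock.recvfrom(65535)
        # Only counting datagrams per exporting switch here; decoding the
        # embedded flow samples needs a full sFlow v5 parser.
        datagrams_per_agent[agent_ip] += 1
finally:
    sock.close()

for agent, count in datagrams_per_agent.items():
    print(agent, count)
```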

 

Block Diagram

[Figure: Management Template]

 

 

Key Features

sFlow

Traffic monitoring down to the individual virtual machine.

vCenter management plug-in for VM-aware network automation

Coordination of VM port group creation and changes with the VCS Fabric AMPP service to automatically create VCS Fabric port profiles and policies.

Application Resource Broker plug-in for vCenter

ADX application delivery switch monitors device utilization and network bandwidth for load balanced connections between clients and applications running in VMs. As needed, additional VMs can be brought up or shut down as client workloads vary.

 

 

 

References

 

 

Components

The following lists typical components that can be used in the design templates for this solution.

 

VMware vSphere 5 Components

 

VMware vSphere 5

VMware License

VMware vCenter 5

VMware License

 

Isilon OneFS Scale-out NAS Components

Isilon OneFS

Scale-out NAS Cluster software with NFS, CIFS and SMB file system support.

Isilon S-Series Node

Node optimized for high-performance enterprise workloads for transaction- or file-based application workloads. Storage includes serial attached SCSI (SAS) drives and optional solid state disks (SSD) with high throughput and low latency, and up to 96 GB of cache.

Isilon X-Series Node

Node optimized for a cost-effective balance of performance and storage capacity. This node is ideal for workloads with highly concurrent and sequential I/O, which is found in most virtual server environments. Storage includes serial ATA (SATA) drives and optional SSD, and up to 48 GB of cache.

Isilon NL-Series Node

Node optimized for reliability and economy, commonly used for archival storage, a disk-to-disk backup storage pool, and/or a disaster recovery storage pool. Storage includes SATA drives and up to 16 GB of cache.

 

 

Data Center Template Components

Brocade VDX® 8770 switch

Optimized for high density 10 GbE and 40 GbE ports in a modular chassis with two form factors: four-slot in 8U and eight-slot in 15U.

Brocade VDX 6720 switch

Optimized for 10 GbE server connectivity in a fixed configuration with two form factors: 1U with 24 ports and 2U with 60 ports.

Brocade VDX 6710 switch

Optimized for 1 GbE server connectivity with six 10 GbE ports for VCS Fabric attachment.

Brocade NOS 3.0

Includes layer 3 routing services: OSPF, static routes, and Virtual Router Redundancy Protocol (VRRP/VRRP-E) for a highly available IP gateway.

 

 

Core Template Components

Brocade MLX Router

Select based on the number of slots to meet scalability requirements and AC/DC to meet power requirements.

Brocade ServerIron ADX Application Delivery Switch

Select based on number of CPU cores and number of ports to meet scalability requirements.

ServerIron ADX -PREM License

Premium features: Layer 3 routing, IPv6, and Global Server Load Balancing (GSLB)

Brocade OpenScript Engine

The Brocade OpenScript engine is a scalable application scripting engine that can help create and deploy network services faster. It is based on the widely used Perl programming language. OpenScript is used to augment Brocade ADX Series services with user-provided custom logic. The OpenScript engine includes the Brocade OpenScript performance estimator tool, which mitigates deployment risk by predicting the performance of scripts before they go live.

Brocade Application Resource Broker

Brocade Application Resource Broker (ARB) is a software component that simplifies the management of application resources within IT data centers by automating on-demand resource provisioning. ARB helps ensure optimal application performance by dynamically adding and removing application resources such as Virtual Machines (VMs). ARB, working in tandem with the Brocade ADX Series, provides these capabilities through real-time monitoring of application resource responsiveness, traffic load information, and capacity information from the application infrastructure.

 

 

Management Template Components

Brocade Network Advisor (BNA) 12.1

Single pane of glass management platform for SAN and IP networks

VMware vCenter Management plug-in for Brocade AMPP

vCenter integration plug-in that automates creation of Brocade VCS Fabric port profiles when a VM is assigned to a vSwitch port group or changes are made to one.

VMware vSphere client plug-in for Brocade Application Resource Broker

vSphere integration plug-in that monitors application workload and automates VM provisioning and deprovisioning through the ADX Application Resource Broker.
