Design & Build

Data Center Infrastructure, Storage-Design Guide: Scale-out NAS Storage Templates


Synopsis: Designs and best practices for data center networks with scale-out NAS storage, using Brocade VDX switches with VCS Fabric technology, MLX routers, and ADX application delivery switches.

 

Contents

Preface

Overview

The growth of unstructured data is fueled by Web 2.0 applications, social media, video and large-scale data analytics, commonly called “Big Data”. Client/server computing enabled network attached storage (NAS) in the 1980s. NAS is file-based storage that can be shared among multiple client computers. However, scaling NAS became problematic because the file system name space, as well as the storage pool, was limited by how far a single high-performance server could scale up. Today, scale-out NAS clusters built from commodity compute components can create very large storage pools with excellent I/O performance, storage capacity in the tens of petabytes in a single file system name space, and cluster redundancy levels that tolerate multiple node failures. Scale-out NAS is commonly used with physical server clusters in High Performance Computing (HPC) environments such as Big Data analytics using Hadoop, oil and gas exploration, genomics, and climate analysis. But it is also attractive for building virtual storage pools connected to server virtualization clusters.

 

Virtual servers for x86 platforms also rely on scale-out clusters built from commodity compute components. Server virtualization is now a key building block that IT uses to dramatically reduce infrastructure costs while adjusting resources dynamically to fit application workloads. Combining server virtualization with scale-out NAS avoids the limitations of static compute and storage environments, allowing full virtualization of compute and storage resources, live migration of workloads, and automated creation of virtual machines on demand, with options for high availability and disaster recovery services.

 

Until recently, the network was not as scalable, nimble, or easy to configure as virtualized compute and storage. To address this, Brocade introduced Brocade VCS® Fabric technology. Brocade VCS Fabric removes several limitations of classical layer 2 networks, including Spanning Tree Protocol (STP), a single active path per broadcast domain, congestion “hot spots” in link aggregation groups (LAGs), and the requirement to halt all traffic in the Ethernet network whenever changes are made to the topology (adding or removing switches and links). The Brocade VCS Fabric scale-out cluster architecture is similar to how storage and server clusters scale out to create resource pools. It transforms static, hierarchical layer 2 networks into a dynamic resource pool delivering multi-path forwarding on all least-cost paths, uniform low-latency forwarding between all devices, and non-disruptive automatic scale-out when new links and switches are added to the fabric. The VCS Fabric provides a simple, scalable and highly available transport layer for low-latency, high-bandwidth traffic between virtual server clusters and scale-out NAS storage clusters. When combined with the Brocade MLX router and ADX Series of application delivery switches, network designers can create an end-to-end data center network that scales as dynamically as server virtualization and scale-out NAS do, while simplifying network operations and management and lowering total cost of ownership.

 

The following Brocade platforms are used in this solution design.

  • Brocade Network Operating System (NOS) for VDX™ series switches
  • Brocade NetIron® Operating System for MLX™ series routers
  • Brocade ServerIron ADX Application Delivery switch

Purpose of This Document

Brocade VDX switches with VCS Fabric technology, Brocade MLX core routers, Brocade ADX application delivery switches and Brocade Network Advisor are used in this design. Scale-out NAS clusters have network requirements similar to those found in storage area networks (SANs) commonly used for block storage. SANs have proven that fabrics are an excellent networking transport layer for block storage. Likewise, VCS Fabrics provide the same proven fabric transport layer for scale-out NAS using TCP/IP over Ethernet.

 

Audience

This document is intended for data center architects and network designers responsible for deployment of virtual data centers and private cloud computing architectures.

 

Objectives

This Design Guide provides guidance and recommendations for best practices when designing a data center network for scale-out NAS using Brocade VDX switches with VCS Fabric technology, MLX routers and ADX application delivery switches.

 

Restrictions and Limitations

Release of NOS 3.0 is required for deployment of the VDX 8770 and layer 3 routing services (OSPF, VRRP/VRRP-E). Check the NOS 3.0 release notes for other restrictions or limitations.

 

Related Documents

The following Brocade publications provide information about the Brocade Data Center Infrastructure Base Reference Architecture and the features and capabilities of the NOS, NetIron MLX and ServerIron ADX platforms. Any Brocade release notes that have been published for NOS, NetIron MLX and ADX should be reviewed as well.

 

References

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Lead Architect: Marcus Thordal, Strategic Solutions Lab

 

Document History

Date                  Version        Description

2012-10-18         1.0                Original

 

Reference Architecture

This design guide is based on Brocade’s Data Center Network Infrastructure Base Reference Architecture (DCNI-RA). The DCNI-RA provides a flexible, modular way to architect and design data center networks: building blocks, which are repeatable component configurations commonly found in data center networks, are combined into reusable templates. This design guide uses that approach, as shown below.

 

Leaf/Spine Fabric Design

This guide contains two designs based on the Brocade VCS Fabric technology with the VDX family of switches. One design is based on a Leaf/Spine VCS Fabric as shown below. 

 

 

Leaf/Spine VCS Fabric Design Reference Architecture

 

The design uses VDX 8770 switches at the spine and VDX 6710 and VDX 6720 switches at the leaf. Leaf switches connect to ESXi servers; the VDX 6710 is optimized for 1 GbE devices and the VDX 6720 for 10 GbE devices. Both leaf switch models join the VCS Fabric, eliminating Spanning Tree Protocol (STP). The leaf/spine VCS Fabric topology provides uniform latency with minimal switch hops while providing low oversubscription ratios and scalability to very large numbers of servers. The diagram above shows the reference architecture for the leaf/spine design, which includes a Core Template, a Data Center Template with Leaf/Spine VCS Fabric, and a Management Template combined with VMware vSphere 5 and EMC Isilon OneFS scale-out NAS.

 

Collapsed Spine Design Option

The other design, shown below, is a collapsed VCS Fabric that is simpler than the leaf/spine VCS Fabric design but not as scalable.

 

Collapsed Spine VCS Fabric Design Reference Architecture

 

The collapsed design uses a pair of VDX 8770 switches to form a two-switch VCS Fabric with layer 3 routing to the core. The collapsed design can easily be expanded into the leaf/spine design by adding GE or 10 GE leaf switches should growth require the greater scalability of the leaf/spine design. Similar to the leaf/spine design, the reference architecture shown above includes a Core Template, Data Center Template and Management Template along with VMware vSphere 5 and EMC Isilon OneFS scale-out NAS.

 

References

 

Business Requirements

A challenge for data center architects and designers is how to meet the mixed performance demands of a diverse set of business applications. Disparate workloads, from business analytics with large data sets (“Big Data”) to back-office applications with high transaction rates (e.g., CRM, ERP and financial OLTP systems), create a wide range of network performance requirements. Server virtualization clusters running tens of workloads per node and many hundreds of workloads per cluster can move any workload across server, storage and network resource pools as resource demands dictate. At scale, this requires a network providing uniform latency, use of all least-cost paths, high resiliency and configuration simplicity.

Storage and network management have become much more complex as a result, and data centers commonly include disparate types of storage, storage protocols and network topologies. On top of this, cluster technology, the key to scale-out solutions for compute and storage resources, places much more stringent demands on network bandwidth, latency, scalability and availability, often exceeding the capabilities of static hierarchical networks based on classic Ethernet. Repeated entry of commands on multiple switches and ports is required to implement a policy change, adjust bandwidth in a link aggregation group (LAG) or trunk, or tune around a hot spot on a physical link in a LAG. This manual management and operation model fails to keep pace with a virtual data center, where workload migration changes network traffic flows, policies have to move when workloads migrate, and dynamic storage pools rapidly change the latency and bandwidth of storage traffic.

 

The solution requires a data center architecture that leverages the same design principles for servers, storage and networking: scale-out resource pools, automatic dynamic load balancing, and policy defined configuration.

 

Special Considerations

The use of NAS storage has also been growing in traditional IT data centers along with the growth in web 2.0 application stacks and the growing interest in Big Data analytics. In these environments, server virtualization can also provide excellent scalability and resiliency for the computing processes. Adding physical server clusters with virtualization software stacks that access NAS storage is an option that can be used with this design.

 

Design

VCS Fabric Technology

Storage traffic places stringent demands on the network, particularly Isilon scale-out NAS, where storage volumes can be as large as 15 petabytes per storage pool. A VCS Fabric eliminates Spanning Tree Protocol (STP) within the fabric, thereby removing many of the issues with Ethernet as the transport for virtual data centers. A VCS Fabric provides least-cost forwarding across all active links using TRILL-based link layer routing at layer 2. With Brocade Network Operating System (NOS) 3.0, layer 3 routing is integrated within the fabric, reducing network tiers for a flatter, lower latency and lower cost network. All switches in a VCS Fabric are aware of each other and of the fabric topology. Changes to the topology (adding or removing links and/or switches) do not halt traffic on unaffected links, and fabric convergence is much quicker than with STP. Multiple links between switches automatically create a Brocade ISL Trunk, with a maximum of eight links per trunk. A Brocade ISL Trunk with hardware-assisted frame striping across all links eliminates the hot spots common to LAG-based trunks, which rely on one-time hashing of flows and static flow allocation to a specific link. A VCS Fabric includes data center bridging (DCB) for lossless layer 2 transport and jumbo frames for improved performance with NAS, where blocks up to 8 KB can be forwarded in a single Ethernet frame.
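To make the difference between hash-placed LAG flows and per-frame striping concrete, the short Python sketch below simulates both behaviors with hypothetical flow sizes and a hypothetical four-link trunk. It is purely illustrative (striping is modeled as an even split of each flow), but it shows why a one-time hash can leave some LAG members far busier than others while frame striping keeps every link evenly loaded.

import random
from collections import Counter

LINKS = 4  # hypothetical 4-link trunk / LAG

# Hypothetical flows keyed by a source->destination pair, each carrying
# a random amount of traffic (in gigabits).
flows = {f"10.0.0.{i}->10.0.1.{i % 3}": random.choice([0.5, 1, 5, 9]) for i in range(20)}

# Classic LAG: each flow is pinned to a single link by a one-time hash.
lag_load = Counter()
for key, gbits in flows.items():
    lag_load[hash(key) % LINKS] += gbits

# Brocade ISL Trunk behavior (approximated): frames are striped across
# all links, so every link carries a near-equal share of every flow.
stripe_load = Counter()
for gbits in flows.values():
    for link in range(LINKS):
        stripe_load[link] += gbits / LINKS

print("Hash-pinned LAG load per link:", {k: round(v, 1) for k, v in sorted(lag_load.items())})
print("Frame-striped load per link:  ", {k: round(v, 1) for k, v in sorted(stripe_load.items())})

Running the sketch a few times shows the hash-pinned distribution varying widely from link to link whenever large flows happen to collide, while the striped distribution stays flat regardless of flow sizes.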

 

For server virtualization environments using live VM migration between physical servers, Brocade VCS Fabric provides Automated Migration of Port Profiles (AMPP) to ensure all network policies are automatically applied to the ingress port of the fabric regardless of which port traffic from a VM enters. AMPP is enhanced with a VMware plug-in so AMPP is VM aware. With the plug-in, VCS Fabric port profile creation is automatic and synchronized with VM creation. vSphere sends a message to the VCS Fabric with information about the VM and its port group so VCS Fabric can create a matching AMPP port profile. When a VM migrates, an alert is sent to the VCS Fabric so the new ingress fabric port for VM traffic is explicitly identified in advance.
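The sketch below is a simplified conceptual model of what AMPP accomplishes, not the NOS implementation or the vCenter plug-in API: a port profile (a VLAN and a QoS class in this hypothetical example) is keyed to the VM's MAC address and shared fabric-wide, so the same policy is applied whichever switch and port the VM's traffic enters after a migration.

from dataclasses import dataclass, field

@dataclass
class PortProfile:
    """Network policy that follows a VM (fields are hypothetical)."""
    vlan: int
    qos_class: str

@dataclass
class Fabric:
    # MAC address -> profile, known to every switch in the fabric
    profiles: dict = field(default_factory=dict)

    def create_profile(self, vm_mac: str, profile: PortProfile) -> None:
        """Created when vCenter reports a new VM / port group (AMPP concept)."""
        self.profiles[vm_mac] = profile

    def ingress(self, vm_mac: str, switch: str, port: int) -> PortProfile:
        """Whichever switch and port the VM's frames arrive on, apply its profile."""
        profile = self.profiles[vm_mac]
        print(f"{vm_mac} seen on {switch} port {port}: VLAN {profile.vlan}, QoS {profile.qos_class}")
        return profile

fabric = Fabric()
fabric.create_profile("00:50:56:aa:bb:01", PortProfile(vlan=100, qos_class="gold"))
fabric.ingress("00:50:56:aa:bb:01", switch="leaf-1", port=12)   # before migration
fabric.ingress("00:50:56:aa:bb:01", switch="leaf-4", port=7)    # after live migration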

 

Leaf/Spine Fabric Topology

The diagram below shows the leaf/spine fabric topology. It uses VDX 8770 switches at the spine, with the option to use the four-slot or eight-slot model. The spine switches form a VCS Fabric with the VDX 67xx switches used at the leaf. Each leaf is connected to every spine using one or more Brocade ISL Trunks. Each Brocade ISL Trunk can include up to eight 10 GbE links.

 

The leaf/spine topology provides high availability and resiliency, excellent utilization of Brocade ISL Trunk bandwidth (approximately 95%), low oversubscription ratios, and easy scale-out of NAS nodes at the spine and physical servers at the leaf. For the virtualization option, the physical servers are clustered to create a server pool hosting virtual machines.
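Trunk bandwidth and oversubscription are simple arithmetic that can be checked during planning. The Python sketch below uses the figures cited in this guide (10 GbE links, up to eight links per Brocade ISL Trunk, roughly 95% trunk utilization) together with a hypothetical 48-port leaf that has one trunk to each of two spine switches; the port counts are assumptions for illustration only.

LINK_GBPS = 10           # 10 GbE fabric links
LINKS_PER_TRUNK = 8      # maximum links in a Brocade ISL Trunk
TRUNK_EFFICIENCY = 0.95  # approximate usable share of trunk bandwidth

def trunk_gbps(links=LINKS_PER_TRUNK):
    """Effective bandwidth of one Brocade ISL Trunk."""
    return links * LINK_GBPS * TRUNK_EFFICIENCY

def oversubscription(server_ports, server_gbps, trunks, links_per_trunk=LINKS_PER_TRUNK):
    """Downstream server bandwidth divided by effective uplink bandwidth."""
    downstream = server_ports * server_gbps
    upstream = trunks * trunk_gbps(links_per_trunk)
    return downstream / upstream

# Hypothetical leaf: 48 x 10 GbE server ports, one 8-link trunk to each of two spines.
print(f"Effective trunk bandwidth: {trunk_gbps():.0f} Gbps")
print(f"Oversubscription: {oversubscription(48, 10, trunks=2):.2f}:1")

With these assumptions the effective trunk bandwidth is 76 Gbps and the oversubscription works out to roughly 3.2:1; adding a second trunk to each spine (four trunks total) would halve it.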

 

Leaf/Spine VCS Fabric Topology

 

Collapsed Fabric Topology

The topology below includes a pair of VDX 8770 switches, either the four-slot or eight-slot model. The switches form a two-switch VCS Fabric. Uplinks go to the Core template and to the IP Services template; these use VCS Fabric vLAG for high availability and resiliency. Servers and NAS nodes use NIC Teaming to connect to each of the VDX 8770 switches, again using vLAG for high availability and resiliency.

 

Collapsed VCS Fabric Topology

 

This topology provides a network that is highly available and resilient with excellent scalability that avoids the use of Spanning Tree protocol.

 

Base Design

This section describes the base design for the solution. Any design options or optimizations of this base design are documented in later sections. The network design uses three templates: data center network, data center core, and network management. Each template is constructed from one or more building blocks documented in the Data Center Infrastructure Reference Architecture. In this design there are two options for the Data Center template, a Leaf/Spine VCS Fabric and a Collapsed VCS Fabric; either can be used with the Data Center Core and Data Center Management templates. The network is designed in an open way to accommodate a variety of server products, although the choices are restricted by the VMware hardware compatibility requirements for vSphere 5.


Data Center Template, Leaf/Spine VCS Fabric Design

The following diagram shows the Data Center template for the leaf/spine VCS Fabric design with the building blocks used.

 

Data Center Template, Leaf/Spine VCS Fabric Design

 

The Spine building block connects to the Core template and the IP Services template via IP routing. It is in a VCS fabric with the leaf switches at layer 2, automatically forming Brocade ISL Trunks between spine and leaf switches with up to 80 Gbps of bandwidth. Brocade ISL Trunks are highly efficient compared to classical LAG trunks because they do not use hashing to place flows on physical links. Instead, all flows are frame-striped across all physical links, eliminating workload-induced hot spots and providing near-perfect load balancing with up to 95% trunk utilization.

 

The following describes the building blocks used to create this template.

 

Leaf/Spine VCS Fabric, Spine Block

 

Synopsis

The spine block creates the layer 3/layer 2 boundary in the fabric. Uplinks to the core routers use OSPF or static routing services from ports in the spine switches. The links from the spine switches to the core can be configured with VCS Fabric vLAG for bandwidth aggregation with high availability.

 

As shown by the red links, Brocade ISL trunks automatically form between a spine switch and a leaf switch. Each spine switch connects to all leaf switches. Both spine switches are connected together with their own Brocade ISL Trunks for messaging and to form a VRRP/VRRP-E resilient gateway to the core.

 

NAS server nodes connect to 10 GbE ports in both spine switches. VCS Fabric vLAG provides link resiliency within the fabric, while NIC Teaming is used on the Isilon node to provide high availability and resiliency.

The spine switches also connect to two IP Services blocks providing load balancing via the ADX Application Delivery Controller (ADC) along with intrusion detection system (IDS) and intrusion prevention system (IPS) services. The IP Services block provides the scalable load balancing and security commonly found in web tier applications (client, application, database).

 

All member switches in the VCS Fabric can participate in Automatic Migration of Port Profiles (AMPP) which assures network policies are synchronized with virtual machines when they migrate between physical servers in a server virtualization cluster.

 

Either VDX 8770 switches or VDX 6720 switches can be used for the spine. The VDX 8770 provides up to 384 10 GbE ports when all eight slots of the VDX 8770-8 have 48-port 10 GbE cards installed. The VDX 8770 also supports 40 GbE port cards.  The VDX 6720 provides up to 60 ports of 10 GbE connectivity and can be used for smaller configurations.
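When sizing the spine, the 10 GbE port budget of each spine switch has to cover leaf trunks, NAS node connections and core/IP Services uplinks. The Python sketch below is a simple planning aid, not a Brocade sizing tool; the reservations for NAS and uplink ports are hypothetical.

def leaves_supported(spine_ports, links_per_leaf, nas_ports, uplink_ports):
    """How many leaf switches one spine can serve after reserving ports
    for NAS nodes and core/IP Services uplinks (all values hypothetical)."""
    available = spine_ports - nas_ports - uplink_ports
    return available // links_per_leaf

# Fully populated VDX 8770-8: 8 slots x 48 x 10 GbE = 384 ports.
spine_ports = 8 * 48
# Hypothetical reservations: 16 ports for NAS nodes, 8 for core/IP Services,
# and an 8-link Brocade ISL Trunk to each leaf switch.
print(leaves_supported(spine_ports, links_per_leaf=8, nas_ports=16, uplink_ports=8))

With these numbers a single spine could serve 45 leaf switches; in practice the leaf count is also bounded by the oversubscription target and the number of spine switches.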

 

Block Diagram

Leaf/Spine VCS Fabric, Spine Block

 

Key Features

Automatic network formation

The VCS fabric automatically forms when connecting switches, enabling ease of deployment and non-disruptive scaling on demand.

All links are forwarding

VCS fabric automatically provides multipath traffic flow at layer 2 and eliminates the need for Spanning Tree Protocol (STP).

Adjacent links automatically form a Brocade ISL Trunk

All VLANs are automatically carried on fabric Inter-Switch Links (ISLs); in addition, traffic is load balanced at the frame level, providing even traffic distribution.

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows.

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

Highly available layer 3 routing and gateway

Configuring VRRP/VRRP-E on the spine switches ensures a highly available gateway to the core.

 

 

References

 

VCS Fabric, GE Server Leaf Block

 

Synopsis

Each switch in a leaf block connects to all switches in the spine block using Brocade ISL Trunks so each leaf switch has one or more Brocade ISL Trunks to each spine switch.

 

Many data centers still have large numbers of servers with 1 GbE interfaces. The VDX 6710 switch provides lower cost 1 GbE copper connections to these servers. Brocade ISL Trunks use higher performance 10 GbE ports, providing very low oversubscription ratios so that workload migration will not create congestion and network bottlenecks.

 

Storage traffic from NAS nodes attached to the spine is assigned to a VLAN to logically isolate this traffic from VM migration and client traffic. A separate management network switch (not shown) can be added to the server rack for physical isolation of management traffic.

 

All member switches in the VCS fabric can participate in Automatic Migration of Port Profiles (AMPP) which assures network policies are synchronized with virtual machines when they migrate between physical servers in the virtual server cluster.

 

Block Diagram

VCS Fabric, GE Server Leaf Block

 

Key Features

Automatic network formation

VCS Ethernet fabrics form automatically when switches are connected, enabling ease of deployment and non-disruptive scaling on demand.

All links are forwarding

VCS Ethernet fabrics automatically provide multipath traffic flow and eliminate the need for Spanning Tree Protocol.

Adjacent links automatically trunk

All VLANs are automatically carried on fabric Inter-Switch Links (ISLs); in addition, traffic is load balanced at the frame level, providing even traffic distribution.

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows.

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

 

 

References

 

VCS Fabric, 10GE Server Leaf Block

 

Synopsis

Leaf block switches connect to the spine using Brocade ISL Trunks. Each leaf switch has one or more Brocade ISL Trunks to each spine switch.

 

As server refresh cycles continue and 10 GbE LAN on Motherboard (LOM) configurations become more common, ESXi clusters can be built with 10 GbE connections using NICs or Converged Network Adapters (CNAs) that support Data Center Ethernet (DCE) enhancements, including lossless transport at layer 2.

 

The VDX 6720 switch provides as many as 60 ports of low latency 10 GbE connectivity using either active Twinax or SFP+ optical connections. Brocade ISL Trunks use 10 GbE ports providing lossless links with very low oversubscription ratios to the spine, so workload migration will not create congestion and network bottlenecks.

Storage traffic from NAS nodes attached to the spine is assigned to a VLAN to logically isolate this traffic from VM migration and client traffic. A separate management network switch (not shown) can be added to the server rack for physical isolation of management traffic.

 

All member switches in the VCS fabric can participate in Automatic Migration of Port Profiles (AMPP) which assures network policies are synchronized with virtual machines when they migrate between physical servers in the virtual server cluster.

 

Block Diagram

VCS Fabric, 10 GE Server Leaf Block

 

Key Features

Automatic network formation

VCS Ethernet fabrics form automatically when switches are connected, enabling ease of deployment and non-disruptive scaling on demand.

All links are forwarding

VCS Ethernet fabrics automatically provide multipath traffic flow and eliminate the need for Spanning Tree Protocol.

Adjacent links automatically trunk

All VLANs are automatically carried on fabric Inter-Switch Links (ISLs); in addition, traffic is load balanced at the frame level, providing even traffic distribution.

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows.

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

 

 

References

 

IP Services Block

 

Synopsis

The data center template includes an IP Services block with an ADC for load balancing using the Brocade ADX application delivery switch, and partner-supplied firewall and IDS/IPS components.

 

The ADX Application Resource Broker feature, together with the ADX hardware, provides optimal distribution of client access and integrates with VMware vCenter to automatically provision and de-provision VMs based on the client connection load. Between data centers, the ADX provides global server load balancing of client connections, with the ability to direct clients to the closest data center, reducing connection latency.
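As a rough illustration of the provisioning behavior described above (and not the ADX or Application Resource Broker API), the Python sketch below shows a threshold-based scaling decision: when measured load per active VM crosses a high-water mark a VM is added, and when it drops below a low-water mark a VM is retired. The capacity and threshold values are assumptions.

def scaling_decision(connections_per_sec, active_vms, capacity_per_vm=1000,
                     scale_out=0.8, scale_in=0.3):
    """Return +1 to provision a VM, -1 to deprovision, 0 to hold.
    All thresholds and per-VM capacities are hypothetical."""
    utilization = connections_per_sec / (active_vms * capacity_per_vm)
    if utilization > scale_out:
        return +1
    if utilization < scale_in and active_vms > 1:
        return -1
    return 0

print(scaling_decision(3500, active_vms=4))  # 0.875 utilization -> +1 (add a VM)
print(scaling_decision(900,  active_vms=4))  # 0.225 utilization -> -1 (remove a VM)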

 

Block Diagram

IP Services Block for VCS Fabric Spine

 

Key Features

Global Load Balancing

Directs client connections to the closest data center and distributes client loads between data centers

Application Resource Broker (ARB)

Provides dynamic server resource allocation / deallocation based on application workload via a plug-in to VMware vCenter.

Brocade OpenScript Engine

A scalable application scripting engine that can help application developers and delivery professionals create and deploy network services faster, and is based on the widely used Perl programming language. Organizations can use OpenScript to augment Brocade ADX Series services with user-provided custom logic.

Brocade Application Resource Broker vSphere client plug-in

Monitors application workload and automatically allocates or deallocates VMs as required to maintain client SLAs.

 

 

References

 

VCS Fabric Leaf/Spine Template, Exploded View

 

The following shows an exploded view of the complete VCS Fabric Leaf/Spine template.

 

Data Center Leaf/Spine Template, Exploded View

 

Core Template

The following diagram shows the core template and the building blocks used. This template can be used with both the base and alternate designs.

 

Core Template

 

The core template provides routing services at the data center core. The core can connect multiple Data Center templates to scale-out the number of VCS Fabrics deployed. The routers used provide internet routing services such as BGP.

 

The following block is used to construct the core template.

 

Core Routing Block

 

Synopsis

The Brocade MLX router is used for the data center core building block. A pair of routers configured with Multi-Chassis Trunking (MCT) provides high availability. OSPF is configured on ports connecting to the spine switches in a VCS fabric. Multiple VCS fabrics can connect to the core to scale out the data center network. Border Gateway Protocol (BGP) is used on ports connecting to the Internet to advertise internal routes to the Internet service provider routers.

 

Block Diagram

Core Routing Block

 

Key Features

OSPF

Intra-data center routing between multiple VCS Fabrics

BGP

Border Gateway Protocol routing for Internet connection

Multi Chassis Trunking (MCT)

Multi-Chassis Trunking allows two routers to appear as one, enabling a resilient and redundant router implementation.

 

 

References

 

Management Template

Synopsis

Monitoring and management of the underlying network infrastructure in a unified way minimizes risk, reduces configuration error and provides early detection of traffic flows that are experiencing high latency and bottlenecks. Further, integration of monitoring and reporting of the network with VMware vCenter provides virtualization administrators with needed status and insights about the operational health of the NAS storage traffic, client connections and application resource requirements. Brocade Network Advisor provides this network management platform.

 

Other vCenter plug-ins for management include Application Resource Broker support for the ADX series of application delivery switches and the VCS Fabric Automated Migration of Port Profiles plug-in, which automatically creates and synchronizes VCS Fabric port profiles with virtual machine port groups. Traffic monitoring is a valuable service for active-active dual data centers, and Brocade includes open standard sFlow monitoring in its NetIron, ServerIron and VDX families of products. Via third-party sFlow monitoring tools, network and virtualization administrators can see traffic performance for individual VMs and workloads in both data centers.
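sFlow agents export samples as UDP datagrams, by convention to collector port 6343. A real collector decodes the sFlow v5 structures; the minimal Python sketch below only binds to that port and counts datagrams per exporting agent, which is enough to confirm that switches are actually sending samples. The bind address and datagram limit are illustrative assumptions.

import socket
from collections import Counter

SFLOW_PORT = 6343  # standard sFlow collector port

def count_sflow_datagrams(bind_addr="0.0.0.0", max_datagrams=100):
    """Count raw sFlow datagrams per exporting agent (no decoding)."""
    counts = Counter()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((bind_addr, SFLOW_PORT))
        for _ in range(max_datagrams):
            _data, (agent_ip, _port) = sock.recvfrom(65535)
            counts[agent_ip] += 1
    return counts

if __name__ == "__main__":
    for agent, n in count_sflow_datagrams().items():
        print(f"{agent}: {n} datagrams")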

 

Block Diagram

Management Template

 

Key Features

sFlow

Traffic monitoring down to the individual virtual machine.

vCenter management plug-in for VM-aware network automation

Coordination of VM port group creation and changes with VCS Fabric AMPP service to automatically create VCS Fabric Port Groups and policies.

Application Resource Broker plug-in for vCenter

The ADX application delivery switch monitors device utilization and network bandwidth for load-balanced connections between clients and applications running in VMs. Additional VMs can be brought up or shut down as client workloads vary.

 

 

References

 

Design Alternate: Collapsed VCS Fabric

This design alternate is suitable for smaller scale deployments.  It can be easily extended to the Base Design by adding leaf switches and moving server connections to the leaf switches.

 

Data Center Template, Collapsed VCS Fabric Design

The following diagram shows the Data Center template for the collapsed VCS Fabric design option and the building blocks used.

 

Data Center Template, Collapsed VCS Fabric Design

 

This template includes the same IP Services block and NAS node blocks as the leaf/spine VCS Fabric design and works with the same Core template. It does not include any VCS Fabric leaf blocks, and the VCS Fabric spine block configuration is altered slightly, as described below.

 

Collapsed VCS Fabric, Spine Block

 

Synopsis

This building block provides an alternate design for the VCS Fabric spine. In this design, both NAS cluster nodes and server virtualization cluster nodes attach to a pair of spine switches. One or more Brocade ISL Trunks connect the spine switches to create a two-node VCS fabric. Due to the need for high density 10 GbE ports, the VDX 8770 switch is preferred for this building block.

 

The pair of VDX 8770 switches forms a two-node VCS fabric. As in the Leaf/Spine VCS Fabric Spine Block, layer 3 ports are used for the gateway to the core routers and for connecting to the IP Services building block. The ports connected to the core use OSPF routing and can be configured with VCS Fabric vLAG for higher bandwidth and highly available uplinks.

 

Layer 2 ports with vLAG connect to both ports of each NAS node and to two 10 GbE NICs of each server in the virtualization server cluster. The NAS nodes and virtualization servers use NIC teaming to provide highly available connections to the VCS Fabric spine switches.

 

Block Diagram

Collapsed VCS Fabric, Spine Block

 

Key Features

Automatic network formation

VCS Ethernet fabrics form automatically when switches are connected, enabling ease of deployment and non-disruptive scaling on demand.

All links are forwarding

VCS Ethernet fabrics automatically provide multipath traffic flow and eliminate the need for Spanning Tree Protocol.

Adjacent links automatically trunk

All VLANs are automatically carried on fabric Inter-Switch Links (ISLs); in addition, traffic is load balanced at the frame level, providing even traffic distribution.

Topology agnostic

The VCS Ethernet fabric is topology agnostic, enabling the topology design to support traffic flows.

AMPP with VMware vCenter plug-in

Brocade VM-aware network automation provides secure connectivity and full visibility to virtualized server resources with dynamic learning and activation of port profiles. By communicating directly with VMware vCenter, it eliminates manual configuration of port profiles and supports VM mobility across VCS fabrics within a data center. In addition to providing protection against VM MAC spoofing, AMPP and VM-aware network automation enable organizations to fully align virtual server and network infrastructure resources, and realize the full benefits of server virtualization.

 

 

References

 

IP Services Block

Same as the IP Services block in the Base Design

VCS Fabric Collapsed Spine Template, Exploded View

The following shows an exploded view of the complete VCS Fabric Collapsed Spine template.

 

Data Center Collapsed Spine Template, Exploded View

 

Core Template

Same as the Core Template in the Base Design.

 

Management Template

Same as the Management Template in the Base Design.

 

Components

The following lists typical components that can be used in the design templates for this solution.

 

Data Center Template Components

 

Brocade VDX® 8770 switch

Optimized for high density 10 GE and 40 GE ports in a modular chassis with two form factors: four-slot in 8U and eight-slot in 15U.

Brocade VDX 6720 switch

Optimized for 10 GE server connectivity in a fixed configuration with two form factors: 1U with 24 ports and 2U with 60 ports.

Brocade VDX 6710 switch

Optimized for 1 GE server connectivity with six 10 GE VCS Fabric attachment ports.

Brocade NOS 3.0

Includes layer 3 routing services: OSPF, static routes, and Virtual Router Redundancy Protocol (VRRP/VRRP-E) for a highly available IP gateway.

 

Core Template Components

Brocade MLX Router

Select based on the number of slots to meet scalability requirements and AC or DC power supplies to meet power requirements.

Brocade ServerIron ADX Application Delivery Switch

Select based on number of CPU cores and number of ports to meet scalability requirements.

-PREM

Premium features—Layer 3 routing, IPv6, and Global Server Load Balancing (GSLB)

Brocade OpenScript Engine

The Brocade OpenScript engine is a scalable application scripting engine that can help create and deploy network services faster. It is based on the widely used Perl programming language. OpenScript is used to augment Brocade ADX Series services with user-provided custom logic. The OpenScript engine includes the Brocade OpenScript performance estimator tool, which mitigates deployment risk by predicting the performance of scripts before they go live.

Brocade Application Resource Broker

Brocade Application Resource Broker (ARB) is a software component that simplifies the management of application resources within IT data centers by automating on-demand resource provisioning. ARB helps ensure optimal application performance by dynamically adding and removing application resources such as Virtual Machines (VMs). ARB–working in tandem with the Brocade ADX Series–provides these capabilities through real-time monitoring of application resource responsiveness, traffic load information, and capacity information from the application infrastructure.

 

Management Template Components

Brocade Network Advisor (BNA) 12.1

Single pane of glass management platform for SAN and IP networks.

VMware vCenter Management plug-in for Brocade AMPP

vCenter integration plug-in that automates creation of Brocade VCS Fabric port profiles when a VM is assigned to a vSwitch port group or changes are made to one.

VMware vSphere client plug-in for Brocade Application Resource Broker

vSphere integration plug-in that provides Application Resource Broker monitoring and VM provisioning from within the vSphere client.
