Campus Networks

Campus Network Infrastructure Best Practices: HyperEdge Architecture Design with Multi-Chassis Trunking


 


Preface

 

Overview

Today’s campus network is critical for business connectivity to customers, vendors, partners, and employees. To ensure business agility and competitiveness, the campus network must easily support new applications, cloud-based services, and mobile users. The campus network for today and tomorrow should be flexible, easy to manage, and cost-effective. The HyperEdge™ Architecture from Brocade seamlessly integrates new innovations with legacy technologies to dramatically improve network flexibility and reduce management complexity, allowing organizations to deploy applications quickly and cost-effectively.

[Figure: Traditional and HyperEdge Architecture with Mixed Stacking]

Further, Bring Your Own Device (BYOD) has emerged as a top IT initiative industry-wide. To meet escalating demands from growing network traffic, which includes high-definition video, interactive multimedia, and mobile access, network administrators need to build a high-performance, scalable enterprise network from the core to the edge.

Two key technologies enable the Brocade HyperEdge Architecture:

  • Brocade’s Multi-Chassis Trunking (MCT) provides scale in the HyperEdge Architecture, using active-active links to interconnect multiple HyperEdge domains into a large network core and expand the forwarding domain.
  • Mixed Stacking capability across Brocade’s entire portfolio of ICX access switches (ICX 6610, ICX 6450)

MCT offers several key features and enhancements for the traditional enterprise campus network: active-active links that eliminate the need for Spanning Tree Protocol (STP), and high availability and resilience for switching.

 

Purpose of This Document

This document discusses how to use Brocade MCT to increase network performance and flexibility, while lowering administrative cost. Readers will gain an understanding of:

  • How to configure MCT
  • The role and benefits of MCT in a HyperEdge Architecture
  • How to use MCT to improve scalability and simplify management

Audience

This document is intended for enterprise architects, network architects, and network designers who need a simple solution to build a scalable campus network while reducing TCO.

 

Objectives

The objective is to show how to design and architect a scalable Enterprise campus network using Brocade MCT in the Brocade HyperEdge Architecture.

 


 

Key Contributors

The content in this guide was developed by the following key contributors.

  • Lead Architect: Jeevan Sharma, Technical Marketing Engineer

 

Document History

Date                Version        Description

2013-04-09      1.0                Initial Release

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Reference Architecture

The HyperEdge Architecture is an evolutionary campus network architecture based on integrated and distributed technologies that radically simplify networks by eliminating legacy protocols such as spanning tree and controller-centric tunneled wireless traffic. The HyperEdge Architecture meets the key challenges created by minimal innovation over the last few years and is ready to take on the challenges of streaming video, unified communications, VDI (Virtual Desktop Infrastructure), cloud-based applications, and mobility. The HyperEdge Architecture provides a foundation for a flexible, application-centric, automated, and cost-effective network solution that Brocade calls The Effortless Network™.

 

[Figure: Campus Network Reference Architecture]

 

Multi-Chassis Trunking (MCT) allows two switches, or routers, to cluster together and appear as a single logical switch. The two switches are connected with an inter-chassis link (ICL) that carries messages and control plane traffic needed to synchronize the state of the switches. MCT is used to scale out the network by interconnecting multiple access domains and then aggregating their traffic before forwarding it to the core.

In the HyperEdge Architecture, a domain is a single group of network switches and wireless access points that enables distributed network services, consolidated management, and a shared common configuration. Traditional stacking and Brocade’s mixed stacking are two examples of a domain. Each domain connects to both physical switches in the MCT cluster through one or more uplinks.

Campus networks that are easy to deploy and manage are key to delivering services to users in a cost-effective manner, and the HyperEdge Architecture is designed to achieve these objectives. However, making campus networks easier to deploy is not effective if they cannot scale to meet the performance requirements of today’s bandwidth-intensive applications. MCT provides active-active links that connect to multiple traditional and/or mixed stack domains. Active-active links increase bandwidth scalability and enhance system availability, and MCT provides switch redundancy and loop prevention without the use of STP. MCT in the HyperEdge Architecture delivers flexibility and scalability simultaneously, improving business agility.

The Campus Network Base Reference Architecture shows how customers can design and build high performance, large scale enterprise networks using the HyperEdge Architecture.

The diagram below shows how MCT is used to interconnect several domains to a pair of FastIron SX Modular Switches to create a scalable Enterprise Network.

 

 

[Figure: MCT Deployment with Stacking at the Access Layer]

 


 

Business Requirements

The campus network is critical for business connectivity to customers, vendors, partners and employees for an organization to remain competitive and agile. New bandwidth-intensive and quality-of-service sensitive applications, new sophisticated devices, and evolving usage patterns and traffic flows are putting immense stresses on IT.

 

Modern Applications

Video, Unified Communications (UC), and Virtual Desktop Infrastructure (VDI) improve user productivity but they also increase traffic and availability requirements in the Campus network. As enterprises evolve to meet new economic conditions and global business requirements, where and how business is transacted is changing faster than ever. User experience driven by Web 2.0 technologies and new applications is causing a fundamental shift in the way people do their jobs and how they use these applications and services. Content and applications must be available instantly, whether they are delivered from a workstation, a virtual data center, or the Internet.

 

New Devices/Technology

With the broad adoption of mobile devices such as smart phones, tablets, and laptops, users have become more demanding. Anytime, anywhere mobile access has become ubiquitous with an explosion of mobile devices connected to the network. Bring Your Own Device (BYOD) or Bring Your Own Technology (BYOT) has emerged as a top IT initiative.

 

[Figure: Business Trends Driving Enterprise Networks]

 

Existing enterprise campus networks have become complex, rigid, and unable to scale to support today’s modern applications. The popularity of multimedia video and real-time video conferencing applications requires a network design that delivers higher bandwidth with low latency and jitter. Organizations today need to deploy applications faster, but the complexity and rigidity of existing enterprise networks can delay deployment of newer applications and require costly upgrades. Here are some of the reasons for these limitations.

  • Limited Flexibility: Deployed as three tiers, optimized for legacy infrastructure capabilities
  • Limited Efficiency: Spanning Tree disables links to prevent loops, limiting network utilization
  • Complex Administration: Each switch has to be managed individually
  • Expensive:  Higher Capital Expenses and Operating Expenses

Organizations must design and build network infrastructure that is business-optimized and application-friendly for today’s requirements, yet flexible enough to ensure that both current and future requirements are met. At the same time, they face continued pressure to reduce costs as IT is asked to do more with less. To support new applications and new devices, the campus network needs to meet the design requirements discussed in the following sections.

 

Design Overview

The HyperEdge Architecture is for organizations that need a campus network that is easy to deploy and manage. MCT with active-active links overcomes the limitations of STP, allowing the enterprise network to scale out by interconnecting multiple access switches and aggregating their traffic into a high-performance, scalable network core.

Eliminating STP unlocks the potential of the network to deliver higher bandwidth and active-active data paths in the distribution and core of the network. As more applications with high quality user experience requirements push traffic onto the campus network, outages and disruptions will need to be avoided.

A Stack in the HyperEdge Architecture is a single group of network switches and wireless access points that enable distributed network services, consolidate management, and share network configuration information. In this design approach we will consider two stack types: Mixed Stack and Traditional Stack.

 

Mixed Stack

This is a collection of devices in a mixed stacking configuration, which allows a combination of Premium Switches (stackable) and Non-Premium Switches (stackable) to form a multi-dimensional stack. There are two ways in which customers can build a mixed stack domain:

  1. Add Non-Premium Switches to a stack of Premium Switches:  ICX 6450s can be added to the stack to increase port density if there is an existing network with ICX 6610s running premium or advanced features.
  2. Add Premium Switches to a stack of Non-Premium Switches:  ICX 6610s can be added to the top of an ICX 6450 stack to enable more advanced Layer 3 features to the stack if there is an existing network with ICX 6450s running Layer 2 and basic Layer 3.

Traditional Stack

This type of stack includes switches with the same capabilities and features. A traditional stack can be configured with either Layer 2 or Layer 3 features on all switches in the stack.

[Figure: Traditional and Mixed Stacking Domains]

 

MCT is a way to ensure high availability for Layer 2 traffic, offering a choice of active/passive or active/active configurations. MCT creates a single logical switching device from two physical chassis connected through an Inter-Chassis Link (ICL). Combining physical links into logical link aggregation group (LAG) connections is a common way to increase bandwidth and resiliency between the access and distribution switches. By adding MCT to the distribution switches, the physical links of a LAG can terminate on both distribution switches.

MCT is a cost-effective way to extend high availability: if a distribution switch goes offline, traffic continues to flow. When MCT is installed on a pair of switches, both switches are connected with an ICL that carries data flows and control messages between them, creating a single logical chassis or switch. All links in a LAG from the access switches are active and are load-shared using a hash algorithm. If one switch in the MCT cluster fails, traffic is rerouted to the other switch within a few milliseconds. This dramatically increases network availability, resilience, and performance.

The HyperEdge Architecture has two ways to integrate mixed stacks with traditional stacks in a scalable campus network.

  1. Use Layer 3 routing to aggregate traffic from the stacks. This confines Layer 2 VLANs to a single stack. Note that a mixed stack places the Layer 2/Layer 3 boundary within the stack, confining Layer 2 VLANs to that mixed stack. See the Hybrid Layer 2 and Layer 3 Designs section for more details.
  2. Use Layer 2 bridging from the access layer stacks to the distribution layer, allowing VLANs to span multiple stacks.

The choice of having access layer traffic switched or routed to the distribution layer is an important one. Each approach has its pros and cons, and the final choice depends on the solution requirements.

Placing the Layer 3 boundary at the access layer, as occurs with mixed stacking, eliminates STP so that all uplinks to the distribution layer can be used via Layer 3 equal-cost multipath (ECMP) routing. However, this limits Layer 2 mobility, since VLANs are confined to the mixed stack. Although Layer 2 mobility of applications and users is restricted to a single stack, it can be extended by using MPLS VRF technologies, but this increases complexity.
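For illustration, here is a minimal sketch of the routed-access approach under stated assumptions: a FastIron-style CLI, hypothetical stack uplink ports (1/2/1 and 2/2/1), and illustrative addressing. Both routed uplinks are placed in OSPF area 0 so that equal-cost routes to the distribution layer are load-shared; verify the exact syntax against the software release in use.

----------

Stack-1(config)#router ospf

Stack-1(config-ospf-router)#area 0

Stack-1(config-ospf-router)#exit

Stack-1(config)#interface ethernet 1/2/1

Stack-1(config-if-e10000-1/2/1)#ip address 10.1.1.1 255.255.255.252

Stack-1(config-if-e10000-1/2/1)#ip ospf area 0

Stack-1(config-if-e10000-1/2/1)#interface ethernet 2/2/1

Stack-1(config-if-e10000-2/2/1)#ip address 10.1.2.1 255.255.255.252

Stack-1(config-if-e10000-2/2/1)#ip ospf area 0

----------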

Layer 2 switching in the access layer simplifies the network design by removing Layer 3 routing and allows logical grouping of devices, users, and applications without requiring any complex overlay technology. However, traditional (classical) Layer 2 designs require STP to ensure loop-free traffic flows, and STP allows only one active path between switches, which introduces bandwidth inefficiencies and long network convergence delays when a link fails or a switch goes offline.

Each method has strengths and weaknesses, and a hybrid design can provide the best of both.

Hybrid Layer 2 and Layer 3 Designs

Shown below are the Layer 2 and Layer 3 design options for connecting access layer traffic to the distribution layer.

[Figure: Access to Distribution Layer Connection Options]

 

Many organizations prefer a hybrid design combining Layer 2 and Layer 3 for flexibility. Legacy applications need to interconnect multiple stacks with a simple Layer 2 switched access design as they require a direct Layer 2 path to a server or management station. These applications need a common subnet for a logical group of end devices that are spread across multiple stacks. VLANs carrying routable traffic from an end-station are terminated directly on the access switches and traffic is routed to the distribution switch. Traffic that must be bridged at Layer 2 to an end-host is forwarded on 802.1Q trunks to the distribution switch for forwarding to the appropriate access layer stack. This supports legacy applications requiring Layer 2 bridging and modern applications using Layer 3 routing between the access and distribution layers.

The MCT Deployment reference architecture diagram shows a combination of traditional and mixed stacks of switches connected to a pair of distribution switches. The distribution switches are clustered together using MCT and act like a single logical switch to the stacks connecting to both switches. The hybrid design approach proposed here supports various kinds of network deployments:

  • Layer 2 VLANs can be extended between multiple stacks over MCT LAG links, allowing applications and devices to move between stacks (see the sketch below).
  • Layer 3 VLANs can be terminated at the stack level and traffic routed over common Layer 3 VLANs shared between various stacks.
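As a hedged sketch of the first option, the following creates a hypothetical user VLAN (VLAN 100) on one MCT peer and tags it on the ICL ports and the client-facing CCEP ports used in the Configuration section later in this document (1/7 to 1/8 and 1/15 to 1/16, respectively). The same VLAN must be configured identically on the peer switch so that the VLAN spans the cluster.

----------

Brocade-1(config)#vlan 100 name Users-L2

Brocade-1(config-vlan-100)#tagged ethernet 1/7 to 1/8

Brocade-1(config-vlan-100)#tagged ethernet 1/15 to 1/16

----------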

MCT Operation

Member links of the LAG are connected to both distribution switches configured with MCT. The distribution switches are connected to each other with an Inter-Chassis Link (ICL) for data traffic and for control plane messages to synchronize their state. All physical links are active, with load sharing based on a hashing algorithm.

MCT includes two primary functions.

  1. LAG operation between MCT client and server:  MCT clients perform only the LAG operations defined in IEEE 802.1AX. The LAG can be a static or dynamic LACP trunk.
  2. Cluster Communication Protocol (CCP) between the MCT peer switches:  CCP is a reliable protocol that runs between the MCT peers over the ICL. It maintains control plane state and synchronizes MAC table entries on the two peer switches.

The figure below shows MCT and LAG functions.

[Figure: Typical MCT Configuration]

 

These are common terms used to describe the operation of MCT.

  • MCT cluster: A pair of devices (switches) that is clustered together using MCT to appear as a single logical device. The devices are connected as peers through an Inter-Chassis Link (ICL).
  • MCT peer device: From the perspective of an MCT cluster device, the other device in the MCT cluster.
  • MCT cluster client or MCT client: A device that connects with MCT cluster devices through static or dynamic trunks. It can be a switch or an endpoint server host in the single-level MCT topology or another pair of MCT devices in a multi-tier MCT topology.
  • Inter-Chassis Link (ICL): A single-port or multi-port 1 GbE or 10 GbE interface between the two MCT cluster devices. It provides the control path for CCP for the cluster and also serves as the data path between the two devices.
  • MCT keep-alive VLAN: The VLAN that provides a backup control path in the event that ICL goes down.
  • Cluster Communication Protocol (CCP): A Brocade proprietary protocol that provides reliable, point-to-point transport to synchronize information between MCT cluster devices. It is the default MCT control path between the two peer devices. CCP comprises two main components:
    • CCP peer management: Establishes and maintains a TCP transport session between the peers.
    • CCP client management: Provides event-based, reliable packet transport to CCP peers.
  • Cluster Client Edge Port (CCEP): A physical port or trunk group interface on an MCT cluster device that is connected to client devices.
  • Cluster Edge Port (CEP): A port on an MCT cluster device that belongs to the MCT VLAN and connects to an upstream core switch/router, but is neither a CCEP nor an ICL.
  • RBridgeID: A value assigned to MCT cluster devices and clients to uniquely identify them; it helps associate source MAC addresses with an MCT device.

Adding a switch or server as a client to the MCT cluster is a simple process. The client is connected to both MCT switches and the CCP protocol manages the rest. Traffic from the client is load balanced over the LAG ports using a hashing algorithm. The MCT switches forward the traffic to the destination directly. The CCP ensures that the MAC table in the two nodes is synchronized and in a consistent state. The ICL traffic is kept to a minimum to limit overhead. Downstream traffic on MCT switches is directly sent to the client switch or server.

 

Design Best Practices

It is recommended to use multiple physical links configured as a LAG trunk for the ICL between the MCT peer switches for resiliency. An optional keep-alive VLAN can be configured to allow keep-alive and health messages to flow between the MCT switches when the ICL link fails. Only one VLAN can be configured as the keep-alive VLAN.

While still operating with two separate control planes, MCT ensures that neighboring client devices perceive the MCT peers as a single link aggregation interface. In lieu of a static LAG configuration, it is recommended that Link Aggregation Control Protocol (LACP) is used so that the negotiation process is in place before the LAG interface comes up, minimizing the likelihood of misconfiguration.
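The following client-side sketch assumes the FastIron dynamic LAG syntax (the LAG name, ID, and port numbers are hypothetical, and the syntax varies by release): the client bundles its two uplinks, one to each MCT peer, into a single LACP LAG.

----------

Client-1(config)#lag "to-mct" dynamic id 1

Client-1(config-lag-to-mct)#ports ethernet 1/1 to 1/2

Client-1(config-lag-to-mct)#primary-port 1/1

Client-1(config-lag-to-mct)#deploy

----------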

 

Design Benefits

A campus network design with MCT provides the following benefits.

  • Better Network Performance and Scalability: MCT with the HyperEdge Architecture eliminates Spanning Tree Protocol and increases network bandwidth by making all paths in the network active and load-shared, improving utilization.
  • High Availability: In addition to port-level redundancy, Multi-Chassis Trunks provide switch-level redundancy by extending the trunk across two switches.
  • Distributed Services: Mixed stacking allows mixing of different classes of switches within a single stack, extending and sharing the advanced features and services of premium switches to all of the switches in the stack.
  • Reliability: With MCT, traffic disruption upon failure is sub-second, compared to Spanning Tree Protocol, which can take up to 30 seconds to converge.
  • Simplified Network Management: Fewer management touch points are needed to manage the network.

With MCT and the HyperEdge Architecture, organizations can build a scalable and resilient campus network that supports business needs today and scales to meet future needs.

 

Limitations

  • MCT is better than standard LAG technology for scaling the HyperEdge Architecture when the uplinks from the access to the distribution layer carry Layer 2 traffic. However, if the distribution layer involves Layer 3 switching and routing, then the classical hierarchical network design is a better solution.
  • MCT is currently limited to two switches.
  • MCT imposes certain limitations on the network design that should be considered, including:
    • MCT on the FastIron SX or ICX 6650 does not support Layer 3 multicast traffic. For designs involving Layer 3 multicast traffic, the Brocade MLXe Router with the NetIron 5.4 or later software release, which supports Layer 3 multicast traffic, can be used.
    • Running Layer 3 dynamic routing protocols is not supported on the ICL and CCEP links. As shown in the deployment example below, VRRP/VRRP-E is recommended for Layer 3 gateway redundancy when using MCT.

Deployment with Stacking at the Access Layer

In the diagram below, Brocade FastIron SX Series switches are used in the distribution layer to scale-out a network using MCT for active-active Layer 2 and VRRP-E for active-active Layer 3 data paths. The FastIron SX switches configured with MCT connect to traditional and mixed stacks at the Access/Edge layers using ICX 6610 and ICX 6450 switches.

[Figure: MCT Deployment with Stacking at the Access Layer]

 

LAGs from the access stacks originate on different switches in the stack and terminate on both FastIron SX 1600 switches. This provides the following benefits.

  1. Eliminates STP providing active-active links to the core of the network so all bandwidth can be effectively utilized.
  2. Failure of a path does not stop traffic flow in the network.
  3. Failure of a MCT peer switch does not stop traffic flow in the network.
  4. Failure of a stack switch does not stop traffic flow in the network.
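For the active-active Layer 3 data path mentioned above, VRRP-E runs on a virtual interface present on both MCT peers. Here is a hedged sketch for one peer, assuming a hypothetical user VLAN 100 with virtual interface ve 100 and illustrative addresses; short-path-forwarding allows the backup peer to forward traffic sent to the virtual gateway, giving active-active behavior. The second peer gets the equivalent configuration with its own interface address and a different priority; verify the exact commands for the release in use.

----------

Brocade-1(config)#router vrrp-extended

Brocade-1(config)#interface ve 100

Brocade-1(config-vif-100)#ip address 10.10.100.2 255.255.255.0

Brocade-1(config-vif-100)#ip vrrp-extended vrid 100

Brocade-1(config-vif-100-vrid-100)#backup priority 110

Brocade-1(config-vif-100-vrid-100)#ip-address 10.10.100.1

Brocade-1(config-vif-100-vrid-100)#short-path-forwarding

Brocade-1(config-vif-100-vrid-100)#activate

----------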

Configuration

This section provides basic configuration steps, which should be completed in the specified order.

 

Step 1: Configure Trunk Group (if needed)

An ICL is typically a trunk group that provides port level redundancy and higher bandwidth for cluster communication. The ICL can be a single interface or a static trunk. LACP on an ICL is not supported.

If needed, configure the ICL trunk as shown below on each MCT peer switch.

----------

Brocade-1(config)#trunk ethernet 1/7 to 1/8

Brocade-1(config)#trunk deploy

----------

On the client side, trunk configuration is required only for a static trunk, and it must be done before assigning interfaces as CCEPs. It is not necessary to configure trunks for a single client interface or an LACP client interface. If needed, configure the client-side trunk as shown below.

----------

Client-1(config)#trunk ethernet 1/1 to 1/3

Client-1(config)#trunk deploy

----------

Step 2: Configure the Session VLAN and Recommended Keep-alive VLAN

Enter the following commands to create the session VLAN and recommended keep-alive VLAN.

----------

Brocade-1(config)#vlan 3001 name MCT-keep-alive

Brocade-1(config-vlan-3001)#tagged ethernet 1/9

Brocade-1(config-vlan-3001)#exit

Brocade-1(config)#vlan 3000 name Session-VLAN

Brocade-1(config-vlan-3000)#tagged ether 1/7 to 1/8

Brocade-1(config-vlan-3000)#no spanning-tree

----------

Step 3: Configure the MCT Cluster

Configuration of the peer device involves the peer's IP address, RBridgeID, and ICL specification. The <cluster-name> variable is optional; the device auto-generates the cluster name as CLUSTER-X when only the cluster ID is specified. The <cluster-id> variable must be the same on both cluster devices.

The RBridgeID must be unique across the MCT cluster devices and all clients in the cluster. The MCT member VLAN is defined as any VLAN of which the ICL is a member.

----------

Brocade-1(config)#cluster SX 4000

Brocade-1(config-cluster-SX)#rbridge-id 3

Brocade-1(config-cluster-SX)#session-vlan 3000

Brocade-1(config-cluster-SX)#keep-alive-vlan 3001

Brocade-1(config-cluster-SX)#icl SX-MCT ethernet 1/7

Brocade-1(config-cluster-SX)#peer 1.1.1.2 rbridge-id 2 icl SX-MCT

Brocade-1(config-cluster-SX)#deploy

----------
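The peer switch needs the mirror-image cluster configuration. The steps above show only Brocade-1, so the sketch below makes two assumptions: the CCP peers reach each other over IP addresses configured on virtual interfaces in the session VLAN (Brocade-2 as 1.1.1.2/30 here, with Brocade-1 holding the matching 1.1.1.1/30 on its own ve 3000), and the keep-alive VLAN from Step 2 is likewise repeated on Brocade-2.

----------

Brocade-2(config)#vlan 3000 name Session-VLAN

Brocade-2(config-vlan-3000)#tagged ethernet 1/7 to 1/8

Brocade-2(config-vlan-3000)#no spanning-tree

Brocade-2(config-vlan-3000)#router-interface ve 3000

Brocade-2(config-vlan-3000)#exit

Brocade-2(config)#interface ve 3000

Brocade-2(config-vif-3000)#ip address 1.1.1.2 255.255.255.252

Brocade-2(config-vif-3000)#exit

Brocade-2(config)#cluster SX 4000

Brocade-2(config-cluster-SX)#rbridge-id 2

Brocade-2(config-cluster-SX)#session-vlan 3000

Brocade-2(config-cluster-SX)#keep-alive-vlan 3001

Brocade-2(config-cluster-SX)#icl SX-MCT ethernet 1/7

Brocade-2(config-cluster-SX)#peer 1.1.1.1 rbridge-id 3 icl SX-MCT

Brocade-2(config-cluster-SX)#deploy

----------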

Step 4: Configure MCT Clients

Client configuration requires the client name, RBridgeID, and CCEP. To configure a dynamic LAG for Client-1, enter the following commands.

----------

Brocade-1(config-cluster-SX)# client client-1

Brocade-1(config-cluster-SX-client-1)#rbridge-id 100

Brocade-1(config-cluster-SX-client-1)#client-interface link-aggregation ether 1/15 to 1/16

Brocade-1(config-cluster-SX-client-1)#deploy

----------

To configure a static trunk for Client-2, enter the following commands (each client needs its own CCEP ports, shown here as the hypothetical 1/17 to 1/18).

----------

Brocade-1(config-cluster-SX)#client client-2

Brocade-1(config-cluster-SX-client-2)#rbridge-id 120

Brocade-1(config-cluster-SX-client-2)#client-interface ether 1/17 to 1/18

Brocade-1(config-cluster-SX-client-2)#deploy

----------
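After both peers and their clients are deployed, cluster state can be checked from either switch. The commands below are a hedged example; exact output and options vary by platform and release, but show cluster generally reports the peer CCP state and each client's deployment status, and show trunk shows the trunk groups that back the ICL and CCEPs.

----------

Brocade-1#show cluster

Brocade-1#show trunk

----------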

The next section provides a brief introduction to the switches used in the configuration example here.

 

Brocade Enterprise Switching Family

 

Brocade ICX 6650 Series Switches

The Brocade ICX 6650 Switch is a 1RU fixed Ethernet switch that delivers industry-leading 10/40 GbE density, unmatched price/performance, and seamless scalability for the ultimate investment protection. The switch is designed for campus LAN aggregation deployments requiring cost-effective connectivity. It is MCT capable and offers flexible Ports on Demand (PoD) licensing for non-disruptive, pay-as-you-grow scalability.

 

Brocade ICX 6610 Series Switches

The Brocade ICX 6610 delivers wire-speed, non-blocking performance across all ports to support latency-sensitive applications. The switch can be stacked using 4 × 40 Gbps stacking ports that provide 320 Gbps of full-duplex stacking bandwidth. Additionally, each switch provides up to 8 × 10 Gigabit Ethernet (GbE) uplink ports, making it an ideal platform for small aggregation deployments.

 

Brocade ICX 6430 and 6450 Series Switches

Brocade ICX 6430 and 6450 Switches provide feature-rich, enterprise-class stackable LAN switching solutions to meet the scalability and reliability demands of evolving campus networks at an affordable price. The Brocade ICX 6430 and 6450 are available in 24- and 48-port 10/100/1000 Mbps models with 1 Gigabit Ethernet (GbE) or 10 GbE dual-purpose uplink/stacking ports, with or without IEEE 802.3af and 802.3at Power over Ethernet/Power over Ethernet Plus (PoE/PoE+), to support enterprise edge networking, wireless mobility, and IP communications.

 

Brocade FastIron SX Series Switches

The Brocade FastIron SX Series of switches provides an industry-leading price/performance campus aggregation and core solution that offers a scalable, secure, low-latency, and fault-tolerant IP services infrastructure for 1 and 10 Gigabit Ethernet (GbE) enterprise deployments. Brocade FastIron SX Series switches are available in an 8-slot model (FastIron SX 800) and a 16-slot model (FastIron SX 1600), with or without IEEE 802.3at Power over Ethernet/Power over Ethernet Plus (PoE/PoE+) ports and N+1 PoE power redundancy. Organizations can leverage a high-performance, non-blocking architecture and an end-to-end high-availability design with redundant management modules, fans, load-sharing switch fabrics, and power supplies.

 


 

Summary

Multi-Chassis Trunking provides active-active links to interconnect multiple traditional or mixed stack configurations at the distribution layer to achieve higher levels of scalability. This allows organizations to build an easy-to-manage, scalable, and resilient campus network using the HyperEdge Architecture to support their business needs today and in the future. With MCT, customers have the flexibility to extend Layer 2 VLANs across multiple stacks to support logical extension of applications and devices within the HyperEdge Architecture.

Brocade Enterprise Campus solutions deliver value, performance, and reliability; customers can deploy networking solutions that fit their business and budget. Brocade offers premium features and innovations, without the premium price, that make campus networking “effortless” to acquire and operate. With the Brocade HyperEdge Architecture, owning and maintaining your entire campus network is one step closer to being effortless.
