Today’s campus network is critical for business connectivity to customers, vendors, partners, and employees. To ensure business agility and competitiveness, the campus network must easily support new applications, cloud-based services, and mobile users. The campus network for today and tomorrow should be flexible, easy to manage, and cost-effective. The HyperEdge™ Architecture from Brocade seamlessly integrates new innovations with legacy technologies to dramatically improve network flexibility and reduce management complexity, allowing organizations to deploy applications quickly and cost-effectively.
Traditional and HyperEdge Architecture with Mixed Stacking
Further, Bring Your Own Device (BYOD) has emerged as a top IT initiative industry-wide. To meet escalating demands from growing network traffic, which includes high-definition video, interactive multimedia, and mobile access, network administrators need to build a high-performance, scalable enterprise network from the core to the edge.
Two key technologies enable the Brocade HyperEdge Architecture: mixed stacking and Multi-Chassis Trunking (MCT).
MCT offers several key features and enhancements for the traditional enterprise campus network: active-active links that eliminate the need for Spanning Tree Protocol (STP), along with high availability and resilience for switching.
This document discusses how to use Brocade MCT to increase network performance and flexibility while lowering administrative cost.
This guide is intended for enterprise architects, network architects, and network designers who need a simple solution for building a scalable campus network while reducing TCO.
The objective is to show how to design and architect a scalable Enterprise campus network using Brocade MCT in the Brocade HyperEdge Architecture.
2013-04-09 — Version 1.0 — Initial Release
Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)
The HyperEdge Architecture is an evolutionary campus network architecture based on integrated and distributed technologies that radically simplify networks by eliminating legacy protocols such as spanning tree and controller-centric tunneled wireless traffic. The HyperEdge Architecture meets the key challenges created by minimal innovation over the last few years and is ready to take on the challenges of streaming video, unified communications, VDI (Virtual Desktop Infrastructure), cloud-based applications, and mobility. The HyperEdge Architecture provides a foundation for a flexible, application-centric, automated, and cost-effective network solution that Brocade calls The Effortless Network™.
Campus Network Reference Architecture
Multi-Chassis Trunking (MCT) allows two switches or routers to cluster together and appear as a single logical switch. The two switches are connected with an Inter-Chassis Link (ICL) that carries the messages and control plane traffic needed to synchronize the state of the switches. MCT is used to scale out the network by interconnecting multiple access domains and aggregating their traffic before forwarding it to the core.
In the HyperEdge Architecture, a domain is a single group of network switches and wireless access points that enables distributed network services, consolidates management, and shares a common configuration. Traditional stacking and Brocade’s mixed stacking are two examples of a domain. Each domain connects to both physical switches in the MCT cluster through one or more uplinks.
Campus networks that are easy to deploy and manage are key to delivering services to users cost-effectively, and the HyperEdge Architecture is designed to achieve these objectives. However, making campus networks easier to deploy is of little value if they cannot scale to meet the performance requirements of today’s bandwidth-intensive applications. MCT provides active-active links that connect to multiple traditional and/or mixed stack domains. Active-active links increase bandwidth scalability and enhance system availability, and MCT provides switch redundancy and loop prevention without the use of STP. MCT in the HyperEdge Architecture delivers flexibility and scalability simultaneously, improving business agility.
The Campus Network Base Reference Architecture shows how customers can design and build high performance, large scale enterprise networks using the HyperEdge Architecture.
The diagram below shows how MCT is used to interconnect several domains to a pair of FastIron SX Modular Switches to create a scalable Enterprise Network.
MCT Deployment with Stacking at the Access Layer
The campus network is critical for business connectivity to customers, vendors, partners and employees for an organization to remain competitive and agile. New bandwidth-intensive and quality-of-service sensitive applications, new sophisticated devices, and evolving usage patterns and traffic flows are putting immense stresses on IT.
Video, Unified Communications (UC), and Virtual Desktop Infrastructure (VDI) improve user productivity but they also increase traffic and availability requirements in the Campus network. As enterprises evolve to meet new economic conditions and global business requirements, where and how business is transacted is changing faster than ever. User experience driven by Web 2.0 technologies and new applications is causing a fundamental shift in the way people do their jobs and how they use these applications and services. Content and applications must be available instantly, whether they are delivered from a workstation, a virtual data center, or the Internet.
With the broad adoption of mobile devices such as smart phones, tablets, and laptops, users have become more demanding. Anytime, anywhere mobile access has become ubiquitous with an explosion of mobile devices connected to the network. Bring Your Own Device (BYOD) or Bring Your Own Technology (BYOT) has emerged as a top IT initiative.
Business Trends Driving Enterprise Networks
Existing enterprise campus networks have become complex, rigid, and unable to scale to support today’s modern applications. The popularity of multimedia video and real-time video conferencing applications requires a network design that delivers higher bandwidth with low latency and jitter. Organizations today need to deploy applications faster, but the complexity and rigidity of existing enterprise networks can delay the deployment of newer applications and require costly upgrades. Here are some of the reasons for these limitations.
Organizations must design and build network infrastructure that is business-optimized and application-friendly for today’s requirements, yet is flexible enough to ensure that both current and future requirements are met. At the same time, they face continued pressure to reduce costs as IT is asked to do more with less. To support new applications and new devices the campus network needs to support the following design requirements.
The HyperEdge Architecture is for organizations that need a campus network that is easy to deploy and manage. MCT with active-active links overcomes the limitations of STP, allowing the enterprise network to scale out by interconnecting multiple access switches and aggregating their traffic into a high-performance, scalable network core.
Eliminating STP unlocks the potential of the network to deliver higher bandwidth and active-active data paths in the distribution and core of the network. As more applications with high quality user experience requirements push traffic onto the campus network, outages and disruptions will need to be avoided.
A Stack in the HyperEdge Architecture is a single group of network switches and wireless access points that enables distributed network services, consolidates management, and shares network configuration information. This design approach considers two stack types: Mixed Stack and Traditional Stack.
A mixed stack is a collection of devices in a mixed stacking configuration, which allows Premium (stackable) and Non-Premium (stackable) switches to be combined into a multi-dimensional stack. There are two ways in which customers can build a mixed stack domain.
This type of stack includes switches with the same capabilities and features. A traditional stack can be configured with either Layer 2 or Layer 3 features on all switches in the stack.
Traditional and Mixed Stacking Domains
MCT is a way to ensure high availability for Layer 2 traffic, offering a choice of active/passive or active/active configurations. MCT creates a single logical switching device from two physical chassis connected through an inter-switch link. Combining physical links into logical link aggregation group (LAG) connections is a common way to increase bandwidth and resiliency between the access and distribution switches. By adding MCT to the distribution switches, the physical links of a LAG can terminate on both distribution switches. MCT is a cost-effective way to extend high availability: if a distribution switch goes off-line, traffic continues to flow.

When MCT is installed on a pair of switches, both switches are connected with an Inter-Chassis Link (ICL) to enable data flow and control messages between them, creating a single logical chassis or switch. All links in a LAG from the access switches are active and are load shared using a hash algorithm. If one switch in the MCT cluster fails, the data path can still use the other switch, with traffic rerouted within a few milliseconds. This dramatically increases network availability, resilience, and performance.
The HyperEdge Architecture has two ways to integrate mixed stacks with traditional stacks in a scalable campus network.
The choice of having access layer traffic switched or routed to the distribution layer is an important one. Each approach has its pros and cons, and the final choice depends on the solution requirements.
The placement of the Layer 3 boundary at the access layer, as occurs with mixed stacking, eliminates STP so all uplinks can be used to the distribution layer via Layer 3 equal cost multipath (ECMP) routing. However, this limits Layer 2 mobility since VLANs are confined to the mixed stack. Although Layer 2 mobility of applications and users is restricted to a single stack, it can be enabled by using MPLS VRF technologies but this increases complexity.
Layer 2 switching in the access layer simplifies network design by removing Layer 3 routing and allows logical grouping of devices, users, and applications without requiring any complex overlay technology. However, traditional (classical) Layer 2 designs require the use of STP to ensure loop-free traffic flows. STP allows only one path between switches, which introduces bandwidth inefficiencies and adds long network convergence delays when a link fails or a switch goes off-line.
Each method has strengths and weaknesses, and a hybrid design can provide the best of both.
Shown below are the Layer 2 and Layer 3 design options for connecting access layer traffic to the distribution layer.
Access to Distribution Layer Connection Options
Many organizations prefer a hybrid design combining Layer 2 and Layer 3 for flexibility. Legacy applications need to interconnect multiple stacks with a simple Layer 2 switched access design as they require a direct Layer 2 path to a server or management station. These applications need a common subnet for a logical group of end devices that are spread across multiple stacks. VLANs carrying routable traffic from an end-station are terminated directly on the access switches and traffic is routed to the distribution switch. Traffic that must be bridged at Layer 2 to an end-host is forwarded on 802.1Q trunks to the distribution switch for forwarding to the appropriate access layer stack. This supports legacy applications requiring Layer 2 bridging and modern applications using Layer 3 routing between the access and distribution layers.
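As an illustration of this hybrid approach, a routable user VLAN can be terminated on a virtual routing interface at the access stack, while a legacy VLAN is simply tagged on the 802.1Q uplink toward the distribution layer. The VLAN IDs, port numbers, and address below are hypothetical, and the exact syntax may vary by FastIron software release.

Client-1(config)#vlan 100 name Routed-Users
Client-1(config-vlan-100)#tagged ethernet 1/1 to 1/3
Client-1(config-vlan-100)#router-interface ve 100
Client-1(config)#interface ve 100
Client-1(config-vif-100)#ip address 10.1.100.1/24
Client-1(config)#vlan 200 name Legacy-L2
Client-1(config-vlan-200)#tagged ethernet 1/1 to 1/3

Here VLAN 100 is routed at the access stack, while VLAN 200 remains a pure Layer 2 VLAN carried on the uplink trunk to the distribution switches.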
The MCT Deployment reference architecture diagram shows a combination of traditional and mixed stacks of switches connected to a pair of distribution switches. The distribution switches are clustered together using MCT and act like a single logical switch to the stacks connecting to both switches. The hybrid design approach proposed here supports various kinds of network deployments.
Member links of the LAG are connected to both distribution switches configured with MCT. The distribution switches are connected to each other with an Inter-Chassis Link (ICL) for data traffic and for control plane messages to synchronize their state. All physical links are active, with load sharing based on a hashing algorithm.
MCT includes two primary functions.
The figure below shows MCT and LAG functions.
Typical MCT Configuration
These are common terms used to describe the operation of MCT: the Inter-Chassis Link (ICL), which connects the two MCT peer switches; the Cluster Communication Protocol (CCP), which synchronizes state between the peers; the Cluster Client Edge Port (CCEP), a port on an MCT peer that connects to a client; the Client Edge Port (CEP), a port that connects to a device outside the cluster; and the RBridge ID, a unique identifier assigned to each cluster node and client.
Adding a switch or server as a client to the MCT cluster is a simple process. The client is connected to both MCT switches and the CCP protocol manages the rest. Traffic from the client is load balanced over the LAG ports using a hashing algorithm. The MCT switches forward the traffic to the destination directly. The CCP ensures that the MAC table in the two nodes is synchronized and in a consistent state. The ICL traffic is kept to a minimum to limit overhead. Downstream traffic on MCT switches is directly sent to the client switch or server.
It is recommended to use multiple physical links configured as a LAG trunk for the ICL between the MCT peer switches for resiliency. An optional keep-alive VLAN can be configured to allow keep-alive and health messages to flow between the MCT switches when the ICL link fails. Only one VLAN can be configured as the keep-alive VLAN.
While the MCT peers still operate with two separate control planes, MCT ensures that neighboring client devices perceive them as a single link aggregation interface. In lieu of a static LAG configuration, it is recommended that Link Aggregation Control Protocol (LACP) be used so that the negotiation process takes place before the LAG interface comes up, minimizing the likelihood of misconfiguration.
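On the client side, LACP can typically be enabled per port instead of defining a static trunk. The sketch below assumes the link-aggregate commands found in FastIron releases of this vintage; the port range and key are hypothetical, and the exact syntax varies by software release.

Client-1(config)#interface ethernet 1/1 to 1/3
Client-1(config-mif-1/1-1/3)#link-aggregate configure key 10000
Client-1(config-mif-1/1-1/3)#link-aggregate active

With LACP active, the client ports negotiate the LAG with both MCT peers before forwarding traffic, so a miscabled or misconfigured port stays out of the bundle.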
A campus network design with MCT provides the following benefits.
With MCT and the HyperEdge Architecture, organizations can build a scalable and resilient campus network that supports business needs today and scales to meet future needs.
In the diagram below, Brocade FastIron SX Series switches are used in the distribution layer to scale-out a network using MCT for active-active Layer 2 and VRRP-E for active-active Layer 3 data paths. The FastIron SX switches configured with MCT connect to traditional and mixed stacks at the Access/Edge layers using ICX 6610 and ICX 6450 switches.
MCT Deployment with Stacking at the Access Layer
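For the active-active Layer 3 data path, a hedged sketch of a VRRP-E instance on one of the FastIron SX peers is shown below. The virtual-interface number, addresses, and priority are illustrative, and short-path-forwarding is the option that allows both peers to forward routed traffic; exact syntax may vary by software release.

Brocade-1(config)#router vrrp-extended
Brocade-1(config)#interface ve 100
Brocade-1(config-vif-100)#ip address 10.1.100.2/24
Brocade-1(config-vif-100)#ip vrrp-extended vrid 100
Brocade-1(config-vif-100-vrid-100)#backup priority 110
Brocade-1(config-vif-100-vrid-100)#ip-address 10.1.100.1
Brocade-1(config-vif-100-vrid-100)#short-path-forwarding
Brocade-1(config-vif-100-vrid-100)#activate

A matching instance with a different priority would be configured on the second peer so that either switch can take over the virtual IP address.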
LAGs from the access stacks originate on different switches in the stack and terminate on both FastIron SX 1600 switches. This provides the following benefits.
This section provides basic configuration steps, which should be completed in the specified order.
An ICL is typically a trunk group that provides port level redundancy and higher bandwidth for cluster communication. The ICL can be a single interface or a static trunk. LACP on an ICL is not supported.
If needed, configure the ICL trunk as shown below on each MCT peer switch.
Brocade-1(config)#trunk ethernet 1/15 to 1/16
On the client side, trunk configuration is required only for a static trunk, and must be done before assigning interfaces as CCEPs. It is not necessary to configure trunks for a single client interface or for an LACP client interface. If needed, configure client-side trunks on each MCT peer switch.
Client-1(config)#trunk ethernet 1/1 to 1/3
Step 2: Configure the Session VLAN and Recommended Keep-alive VLAN
Enter the following commands to create the session VLAN and recommended keep-alive VLAN.
Brocade-1(config)#vlan 3001 name MCT-keep-alive
Brocade-1(config-vlan-3001)#tagged ethernet 1/9
Brocade-1(config)#vlan 3000 name Session-VLAN
Brocade-1(config-vlan-3000)#tagged ether 1/7 to 1/8
Configuration of the peer device involves the peer's IP address, RBridgeID, and ICL specification. The <cluster-name> variable is optional; the device auto-generates the cluster name as CLUSTER-X when only the cluster ID is specified. The <cluster-id> variable must be the same on both cluster devices.
The RBridgeID must be different from the cluster RBridge and any other client in the cluster. The MCT member VLAN is defined as any VLAN of which the ICL is a member.
Brocade-1(config)#cluster SX 4000
Brocade-1(config-cluster-SX)#icl SX-MCT ethernet 1/7
Brocade-1(config-cluster-SX)#peer 184.108.40.206 rbridge-id 2 icl SX-MCT
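The cluster definition is normally completed by assigning the local RBridgeID, binding the session and keep-alive VLANs created earlier, listing the member VLANs, and deploying the cluster. A sketch follows; the RBridgeID and member VLAN range are illustrative, and command names may vary by software release.

Brocade-1(config-cluster-SX)#rbridge-id 1
Brocade-1(config-cluster-SX)#session-vlan 3000
Brocade-1(config-cluster-SX)#keep-alive-vlan 3001
Brocade-1(config-cluster-SX)#member-vlan 100 to 999
Brocade-1(config-cluster-SX)#deploy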
Client configuration requires the client name, RBridgeID, and CCEP. To configure a dynamic LAG with Client-1, enter the following commands.
Brocade-1(config-cluster-SX)# client client-1
Brocade-1(config-cluster-SX-client-1)#client-interface link-aggregation ether 1/15 to 1/16
To configure a static trunk with Client-2, enter the following commands.
Brocade-1(config-cluster-SX)# client client-2
Brocade-1(config-cluster-SX-client-2)#client-interface ether 1/15 to 1/16
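As noted above, each client also needs its own RBridgeID, and the client definition is activated with deploy. The ID below is illustrative, and syntax may vary by software release.

Brocade-1(config-cluster-SX-client-2)#rbridge-id 300
Brocade-1(config-cluster-SX-client-2)#deploy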
The next section provides a brief introduction to the switches used in the configuration example here.
Brocade Enterprise Switching Family
The Brocade ICX 6650 Switch is a 1RU fixed Ethernet switch that delivers industry-leading 10/40 GbE density, unmatched price/performance, and seamless scalability for the ultimate investment protection. The switch is designed for campus LAN aggregation deployments requiring cost-effective connectivity. It is MCT capable and offers flexible Ports on Demand (PoD) licensing for non-disruptive pay-as-you-grow scalability.
The Brocade ICX 6610 delivers wire-speed, non-blocking performance across all ports to support latency-sensitive applications. The switch can be stacked using 4 × 40 Gbps stacking ports that provide 320 Gbps of full-duplex stacking bandwidth. Additionally, each switch provides up to 8 × 10 GbE (Gigabit Ethernet) uplink ports, making it an ideal platform for some small aggregation deployments.
Brocade ICX 6430 and 6450 Switches provide feature-rich enterprise-class stackable LAN switching solutions to meet the scalability and reliability demands of evolving campus networks–at an affordable price. The Brocade ICX 6430 and 6450 are available in 24- and 48-port 10/100/1000 Mbps models and 1 Gigabit Ethernet (GbE) or 10 GbE dual-purpose uplink/stacking ports–with or without IEEE 802.3af and 802.3at Power over Ethernet/Power over Ethernet Plus (PoE/PoE+)–to support enterprise edge networking, wireless mobility, and IP communications.
The Brocade FastIron SX Series of switches provides an industry-leading price/performance campus aggregation and core solution that offers a scalable, secure, low-latency, and fault-tolerant IP services infrastructure for 1 and 10 Gigabit Ethernet (GbE) enterprise deployments. Brocade FastIron SX Series switches are available in two models, the FastIron SX 800 (8 slots) and the FastIron SX 1600 (16 slots), with or without IEEE 802.3at Power over Ethernet/Power over Ethernet Plus (PoE/PoE+) ports and N+1 PoE power redundancy. Organizations can leverage a high-performance, non-blocking architecture and an end-to-end high-availability design with redundant management modules, fans, load-sharing switch fabrics, and power supplies.
Multi-Chassis Trunking provides active-active links to interconnect multiple traditional or mixed stack configurations at the distribution layer to achieve higher levels of scalability. This allows organizations to build an easy-to-manage, scalable, and resilient campus network using the HyperEdge Architecture to support their business needs today and in the future. With MCT, customers have the flexibility to extend Layer 2 VLANs across multiple stacks to support logical extension of applications and devices within the HyperEdge Architecture.
Brocade Enterprise Campus solutions deliver value, performance, and reliability; customers can deploy networking solutions that fit their business and budget. Brocade offers premium features and innovations without the premium price, making campus networking “effortless” to acquire and operate. With the Brocade HyperEdge Architecture, owning and maintaining your entire campus network is one step closer to being effortless.