Campus Networks

Campus Network Infrastructure, Base Reference Architecture

Published 10-31-2012 09:16 AM; edited 04-17-2014 03:45 AM by pmadduru

Synopsis: A comprehensive reference architecture featuring Brocade's HyperEdge Architecture for building a modern Campus Network.





As the corporate data center network undergoes transformation with technologies such as server virtualization and network consolidation, the focal point of change is currently centered in the user-facing campus side of the network. The physical scope of the campus network is typically one or more buildings that house key corporate departments and staff. Depending on the size of the company, a campus network can be quite extensive, servicing scores of buildings and thousands of users.


The data center provides the “heavy lifting” of data processing by housing file and application servers, the storage network infrastructure (SAN), and secure access to the external Internet and corporate intranet. The network design criteria for the data center are therefore significantly different from the design requirements of the campus LAN. By necessity, the data center network is highly centralized and tends toward uniformity of assets. The campus network, by contrast, is dispersed and must accommodate a wide variety of devices, including workstations, laptops, PDAs, smartphones, wireless access points, voice over IP (VoIP) handsets, IP-based remote sensors, radio frequency identification (RFID) readers, security cameras and video surveillance equipment, and card readers.


Although the data center and campus network are part of a single corporate network, the unique requirements of each domain must be understood, to ensure the harmonious integration of the entire network. The sudden introduction of a new technology (for example, wireless access) on the campus can have unintended consequences in the data center, just as data center restructuring can adversely affect campus access. Optimizing the campus infrastructure must therefore incorporate any downstream effects that require additional changes to the data center design or services.





This document provides the Brocade Strategic Solutions Lab reference architecture for the campus LAN. It incorporates wired and wireless access, connectivity from 100 Mbps through 40 Gbps, switch stacking, Power over Ethernet and Power over Ethernet+ (POE, POE+), options for high availability and resilience, and comprehensive management.



Enterprise architects, network architects and network designers will find important capabilities, features and reference designs in this document.



The reference architecture addresses common business requirements for the campus LAN. Brocade provides separate reference architectures for data center networks and service provider networks.


Related Documents

The Brocade Strategic Solutions Lab (SSL) continues to develop content. Use the references section below to see all available publications.



Brocade Community: Solutions Lab Publications


About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.


Key Contributors

The content in this guide was developed by the following key contributors.

  • Lead Architect: Venugopal Nalakonda, Strategic Solutions Lab
  • Lead Author: Brook Reams, Strategic Solutions Lab


Document History

  • Initial Release
  • Updated for HyperEdge Architecture features


Reference Architecture

Campus networks are designed to support application requirements and deliver a high-quality user experience. The original campus network was designed to meet the requirements of client/server applications and file/print sharing services using desktop and later laptop PCs. Today, the range of applications and supporting protocols has broadened, while end-user devices have become mobile, relying on wireless LAN (WLAN) to support user mobility and consistent connectivity.



   Campus Network Reference Architecture

Design Requirements

Campus design requirements are divided into external and internal. External requirements include users, applications, and devices (personal wireless devices, desktop and laptop computers, security cameras, etc.).

Internal requirements include LAN networks spanning a number of technologies including:

  • Layer-2 (Ethernet)
  • Layer-3
  • Wireless LAN (WLAN)
  • IP Network Services (access control policies, security, QoS, DHCP, etc.)
  • Network Management (configure, monitor, alert, etc.)

Design Modularity

Design modularity relies on templates that control complexity while supporting a wide range of scalability and performance options. Templates are connected together to construct a scalable campus network, as shown in the following diagram. Because templates rely on standard interfaces and protocols, changes to one template do not affect other templates.


   Reference Architecture, Design Templates

The functional model of the reference architecture is shown below. The campus network provides device connectivity with data center hosted applications and the internet and connects multiple campus environments together including remote offices. Campus network functions are classified as Core, Distribution, Access, wireless LAN (WLAN) and Management.


   Reference Architecture, Functional Model

In a campus network, some or all of these functions are required, depending on the scale of the network. For example, a small Edge Template may include wired and wireless LAN access but not need a Distribution block, while a larger network may require one. The flexibility of the architecture comes from building blocks that are connected together to create templates.

Multiple templates can be defined, tested and deployed using a common set of building blocks. This promotes re-usability of blocks (and technologies), reduces testing and simplifies support. The figure below shows the building blocks used to create a typical campus template. As shown, it connects to a core template.


   Template Showing Building Blocks

In this example, the access blocks connect devices to access switches commonly located in wiring closets. The closet switches define the extent of the broadcast domain and are the L3/L2 boundary. Different access blocks can be designed to efficiently handle different devices (wired or wireless) and device density. Scale up is accomplished by deploying additional access blocks up to the design limit of the particular Edge Template.

A distribution block can be included in the Edge Template to increase connection scalability. It distributes traffic to multiple access blocks and wiring closets, often on a single floor. It provides connectivity to the Core Template, which can be a separate Campus core, or could be the data center core for smaller deployments.

External Requirements


Applications hosted on data center servers create client traffic in the campus LAN. Device connections to the Internet that bypass data center hosted applications are also proliferating, as smartphones and tablet computers connect to social media content, video streaming, and public cloud applications, changing the traffic patterns in the campus network. These changes affect the design requirements of the campus network, principally bandwidth, latency, security, and availability. The following are applications whose traffic is transported over the campus network.


Client/Server

This is the traditional application architecture in which a user computer provides the interface to a backend application. It is commonly used for back-office applications.

Web n-tier

This is the next generation of application architecture, in which HTML and web protocols define the presentation layer. Today, this has grown to incorporate web 2.0 applications that deliver social networking, online search, blogging, and micro-blogging sites.

Virtual Desktop Infrastructure

The cost of maintaining and managing desktop environments has stubbornly remained high. Virtual desktop infrastructure (VDI) consolidates applications and data into the data center while maintaining the user experience of running applications directly on a desktop or laptop computer.


Unified Communications

Previously, telephony, video conferencing, instant messaging, and application sharing relied on separate facilities, devices, networks and operations teams. With unified communications (UC), a single device (desktop, laptop, smart phone) provides universal access to all forms of media. The result is dynamic, real-time collaboration that changes as quickly as the user decides to add in another media type. A high quality user experience places new demands on the campus network.


Edge Devices

The campus network traditionally connected desktop computers and printers to applications and file sharing servers in the data center. Today, the range of devices in the campus network is much broader. As more people grow accustomed to using personal hand-held devices to access information, deliver entertainment, and connect with people via voice, text messages and increasingly real time video, they expect the same level of access at work. This has implications on wireless/wired network management and security policy management in the campus network.

Desktop and Laptop PC

For many companies, desktop and laptop PCs remain the primary employee devices. However, their bandwidth requirements are increasing, particularly when UC and VDI applications are deployed.

Voice over IP Phones

These devices are on many office desks. With the advent of UC, the desktop or laptop computer becomes the unifying device for all forms of collaborative media: text, voice, video, and application sharing. VoIP phones require POE or POE+ at the desktop, and real-time video chat with audio is driving universal GbE connectivity to all user devices.

WLAN Controller and Access Points

The demand for mobility is growing in the campus environment. For a growing number of users, smartphones and lightweight tablet computers are replacing laptops. Many campus environments now support WLAN services using 802.11a/b/g/n. WLAN access points commonly require POE/POE+ so that power is delivered to the access point over the same Ethernet cable used for networking.

Thin Clients

Thin clients (limited or no disk storage, a simple operating system, and few if any applications running directly on the device) are one way to reduce cost and complexity when deploying VDI applications. A campus network with POE/POE+ simplifies connecting, powering, and managing thin client devices.

Security Cameras and Surveillance Video

Many campus environments integrate security cameras and surveillance video as well as the typical end user devices. These commonly use POE/POE+ to simplify connectivity and power. For video, network bandwidth has to be sufficient to avoid bottlenecks or loss of video quality.

Network Requirements

Performance and Scalability

The campus network is increasingly supporting applications such as unified communications and VDI with increased performance and scalability requirements. Desktop device link rates are commonly GE while inter-switch links between access and core switches will use 10 GE uplinks. Campus core to data center core links of 40 GE are now under consideration for VDI and UC applications.

Stackable switches will require more bandwidth on stacking links to avoid high oversubscription between switches. Where 16 Gbps was the norm in the past, stacking links that scale to ten times that bandwidth will become more common.

As thin clients are deployed in support of VDI applications, POE will give way to POE+ to simplify how power and connectivity are provided to more power-hungry devices.
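The difference between POE and POE+ budgets can be sketched with a small calculation. The per-port figures below are from IEEE 802.3af (POE) and 802.3at (POE+); the 748 W supply is a hypothetical example, not a specific Brocade model.

```python
# Sketch: compare POE vs. POE+ power budgets for a 48-port access switch.
# Per-port maxima follow IEEE 802.3af/802.3at; the supply size is illustrative.

POE_PORT_W = 15.4        # 802.3af: max power sourced per port
POE_PLUS_PORT_W = 30.0   # 802.3at (POE+): max power sourced per port

def ports_supported(supply_w: float, per_port_w: float) -> int:
    """How many ports can draw worst-case power from a given POE budget."""
    return int(supply_w // per_port_w)

supply_w = 748.0  # hypothetical POE power budget of one access switch
print(ports_supported(supply_w, POE_PORT_W))       # all 48 ports at POE
print(ports_supported(supply_w, POE_PLUS_PORT_W))  # only 24 ports at POE+
```

The point of the sketch: the same power supply that covers a full 48-port switch at POE covers only half the ports at worst-case POE+, which is why POE+ deployments often require larger or redundant supplies.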

With current improvements to wireless LAN technology, the 100+ Mbps enabled by IEEE 802.11n is suitable for most laptop and mobile applications. Specific applications such as VoIP require additional performance guarantees in the form of Quality of Service (QoS) support and traffic policing.

Redundant network links for high availability are not required for all applications, but certainly mission-critical client applications must have failover capability in the event of an individual link or interface outage. A properly designed campus LAN infrastructure must be sufficiently flexible to provide the required bandwidth and availability per work group or application as business requirements change. The variability of client connectivity requirements also impacts other layers of the network infrastructure as client traffic is funneled to the network core and the data center.



Availability and Resiliency

As more traffic flows in the campus, the distribution and core layers need more resilience and availability. Multi-chassis trunking (MCT) is one way to ensure high availability for Layer 2 traffic, offering a choice of active/passive or active/active configurations. MCT creates a single logical switching device from two physical chassis. LAG connections can terminate on either chassis, providing a cost-effective means of achieving high availability. As shown in the figure below, a link failure between an access and distribution switch (event 1) causes traffic to reroute to the redundant LAG link. As more applications with high-quality user experience requirements push traffic onto the campus network, outages and disruptions will need to be avoided.


   Three Tier Topology with Full Mesh Resiliency

In the access layer, hitless stacking avoids traffic outages when switches are added to or removed from the stack, or when a switch fails. Brocade offers an optimized stacking solution, HyperEdge technology. Switch stacking also provides high availability: all physical switches act as a single logical switch, so the physical links in a LAG can connect to different switches, protecting against link, port, and switch outages. HyperEdge also provides mixed stacking, so routing and other premium services need only be provided by two switches in the stack yet are shared by all switches in the stack.

For routing services in the distribution and core layers, graceful OSPF and BGP restarts and hitless software upgrades ensure traffic continues to flow within the campus and between the campus, the data center, and the Internet. VRRP or VRRP-E can be used with core routers to provide high availability and resiliency. As shown in the figure above, when a link between one of the core routers and a distribution switch fails (event 2), traffic automatically routes to the other distribution switch and takes the alternate LAG link to the access layer switch.

In the WLAN, plug-and-play access points allow quick changes without network disruption so coverage can be adjusted as needed.



Security

The sheer number of client devices at the campus layer poses an ongoing security challenge. A network is only as secure as its weakest link, so a large, dispersed campus LAN must be purposely provisioned with distributed security mechanisms to eliminate vulnerabilities to attack or inadvertent access. Access control lists (ACLs), authentication, virtual private networks (VPNs), in-flight data encryption, and other safeguards restrict network access to only authorized users and devices and forestall attempts to penetrate the network within the campus itself.

A range of protocols and services support end-to-end security within the campus. VLANs provide logical traffic isolation within work groups and applications. 802.1X port-based authentication and VLAN access control list (ACL) policy enforcement help secure wired connections in the access layer.

In the WLAN, IPSec provides encrypted tunnels securing wireless traffic. Controller configuration of access points means security policies can be uniformly and correctly applied to all WLAN access points eliminating vulnerabilities while simplifying management.



Management

Given the inherently dispersed nature of the campus and the diversity of client devices, centralized and uniform management is essential for maintaining performance and availability and for enforcing corporate security policies. Campus network management relies on a variety of tools.


   Campus management with Brocade Network Advisor

Brocade provides a comprehensive management platform, Brocade Network Advisor (BNA), as shown above. BNA is a graphical management platform that builds on SNMP, the long-time standard for monitoring component status.

As the scale and reach of the campus network increase, real-time monitoring of traffic flows using open standards such as sFlow is becoming important. With device mobility and the real-time traffic variation created by unified communications and VDI, monitoring that scales from the individual device and application up to LAG links and router ports provides proactive control of traffic to avoid bottlenecks and hot spots.
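sFlow scales because it exports only 1-in-N packet samples rather than every packet. A rough sketch of the sample stream a collector receives from one port, using illustrative figures (not taken from a specific Brocade configuration):

```python
# Sketch: estimate the sample stream an sFlow collector receives from one
# 10 GbE port. The packet rate and sampling rate below are illustrative.

def sflow_samples_per_sec(packets_per_sec: float, sampling_rate: int) -> float:
    """Expected samples per second for 1-in-N packet sampling."""
    return packets_per_sec / sampling_rate

# ~812,743 pps is a 10 GbE link at line rate with 1500-byte frames
# (1538 bytes on the wire); 1-in-2048 is a commonly suggested rate for 10 GbE.
print(round(sflow_samples_per_sec(812_743, 2048), 1))
```

Even at line rate, a single sampled port generates only a few hundred samples per second, which is why sFlow can cover every port of every switch in a large campus without overwhelming the collector.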


Campus Interconnects

Campus to Data Center

As new applications such as UC and VDI are deployed, bandwidth in the campus increases. This drives deployment of 10 GE and 40 GE links within the Campus distribution and core switches and between the campus core and data center core.

Campus to Internet

More web 2.0 traffic originating in the campus means more campus traffic flows directly to the Internet, bypassing the data center. The campus core has to scale to accommodate this increased traffic.

Campus to MAN

Metropolitan area networks (MAN) are commonly used by K-12 school districts and by organizations with a number of branch offices in a local region. The campus network can use Ethernet over a MAN leased from a service provider for 1 GbE or 10 GbE Ethernet LAN service.

Campus to Remote Office

Many companies have branch offices in the same region, across the country or overseas. As applications such as UC are deployed, peer-to-peer sessions dynamically created by users in real-time can scale from simple messaging and texting to multi-party video conferencing and back down again in minutes. Core router and campus LAN bandwidth and latency have to be sufficient to create a high-quality user experience. Network monitoring tools need to scale while providing identification of traffic flows by application and early warnings about traffic hot spots so sudden changes in traffic patterns can be understood and changes implemented to alleviate them.


The figure below shows a classic topology for a campus network with three tiers: access, distribution (optional), and core, each providing specific services and functionality.


   Classic three tier topology with full mesh resilient links

Access Layer

The access layer is the most complex tier of the campus LAN. Over time, portions of the access layer may be expanded piecemeal with an eclectic mix of network equipment that is difficult to manage, making it hard or impossible to accommodate changing user needs. A technology refresh (for example, from Fast Ethernet to Gigabit Ethernet) is often the only opportunity to streamline access layer design and simplify management. Developing a proactive and scalable access layer design requires, first and foremost, an understanding of the business applications that must be supported.

Because high-availability, high-performance, and security requirements can vary from one department to the next, the access layer should support multiple speeds, rapid failover capability, VPN and other security protocols as required. Unified communications with concurrent VoIP, streaming media, and conventional data transactions can require QoS and PoE. In addition, devices requiring wireless connectivity need wireless LAN (WLAN) access points and centralized management to ensure stable and secure connectivity.

The design goal is an intelligent edge that automates QoS configuration on a per-user or per-port basis for flexibility. With new applications like unified communications, the campus network also has to handle dynamic changes in traffic with no loss of performance or quality of service.

Access layer switches are typically housed in wiring closets distributed on multiple floors of each building on the campus. These, in turn, are connected to distribution layer switches that feed traffic to other segments or to the network core. To accommodate the fan-in of multiple access layer switches to the distribution layer, high-performance uplinks are required. Currently, these are typically 10 GbE links or multiple 10 GbE links combined into a single link aggregation group (LAG), as shown below.


   LAG for Resiliency between Access and Distribution Layers

The actual uplink bandwidth required depends on the traffic load the devices generate. The uplink bandwidth between access and distribution layers should support peak traffic volumes to ensure optimum operation. Best practice is to apply traffic policing and rate limiting on these links, so that lower-priority traffic is limited and the link does not become congested.
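Uplink sizing is usually expressed as an oversubscription ratio of worst-case edge demand to uplink capacity. A minimal sketch, using illustrative port counts rather than figures from a specific design:

```python
# Sketch: oversubscription ratio for an access-layer uplink.
# Port counts and speeds below are illustrative examples.

def oversubscription(edge_ports: int, edge_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of worst-case edge demand to uplink capacity."""
    return (edge_ports * edge_gbps) / (uplinks * uplink_gbps)

# 48 GbE access ports fed by a 2 x 10 GbE LAG to the distribution layer:
ratio = oversubscription(48, 1.0, 2, 10.0)
print(f"{ratio:.1f}:1")  # 2.4:1
```

Because edge ports rarely all run at line rate simultaneously, a modest ratio like this is generally acceptable for data traffic; latency-sensitive applications such as VoIP push the design toward lower ratios plus QoS on the uplinks.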

In the figure, connectivity between the access and distribution switches uses dual links. One of those links is blocked by Spanning Tree Protocol and activated only in the event of a primary link failure. This preferred high-availability design, however, is not mandatory for all access layer switches, depending on the business criticality of the applications supported. Likewise, although WLAN access points are often configured to provide overlapping wireless coverage to guard against single access point failures, less mission-critical clients can be adequately served by a less resilient design. Most campus clients, however, require at least continuous access to e-mail and other corporate applications and so are best supported by a redundant connectivity scheme capable of failover in the event of a switch or link failure.

At the access layer, logical segregation of traffic in different workgroups or departments is achieved by virtual LAN (VLAN) segmentation. Because IEEE 802.1Q VLAN tagging can span multiple switches, members of a specific VLAN do not have to be physically collocated. By providing a limited Layer 2 broadcast domain for specified groups of users, VLANs help enforce security and performance policies and simplify network management.

Distribution Layer

The campus distribution layer distributes traffic from multiple access layer switches to the network core. Because each distribution switch is responsible for the traffic flows of multiple access layer switches, serving hundreds of users, distribution switches should have high-availability architectures—including redundant power supplies, hot-swappable fans, high-performance backplanes, redundant management modules, and high-density port modules. Distribution switches are typically Layer 2/3 switches with support for robust routing protocols, to service both the access and core layers. They are configured with Multi-Chassis Trunking (MCT) for Layer 2 resiliency and high availability, and VRRP/VRRP-E for Layer 3 resiliency and availability for gateway traffic to the core. To provide adequate bandwidth to the core, distribution switches typically provide multiple 10 GbE or 40 GbE ports, with LAG for higher throughput and link resiliency. Optionally, distribution switches may provide advanced security or other services to support upper layer applications.

Today, the primary driver for robust distribution layer design is the dramatic increase in access layer clients, the diversity of applications run by those clients, and the increased use of traffic intensive protocols for multimedia delivery and rich content. Higher volumes of traffic at the access layer require much higher performance at the distribution layer, as well as mechanisms for maintaining traffic separation and security, as required.

Core Layer

The core layer represents the heart of the campus network infrastructure. Transactions from campus clients to data center servers or to external networks must pass through the core with no loss in data integrity, performance, or availability. Core switch architectures are therefore designed to support 99.999% (“five nines”) or greater availability and high-density modules of high-performance ports. Because the core layer is also the gateway to the extended corporate network and the Internet, core switches can also be provisioned with 10 GbE ports for access to Carrier Ethernet networks, or with OC12 (622 Mbps) or OC192 (9.95 Gbps) high-speed WAN interfaces.
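Availability percentages are easier to reason about when translated into allowed downtime. A minimal sketch of that conversion:

```python
# Sketch: translate an availability target into allowed downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of unplanned downtime per year allowed by a target."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(0.999), 1))    # "three nines"
print(round(downtime_minutes_per_year(0.99999), 2))  # "five nines"
```

Five nines allows only about five minutes of outage per year, which is why core designs rely on redundant chassis components and hitless software upgrades rather than scheduled maintenance windows.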

In addition to the robust routing protocols supported by the distribution layer, the core layer may also provide advanced multiprotocol label switching (MPLS), virtual private LAN service (VPLS), and multi-virtual routing and forwarding (Multi-VRF) protocols to enforce traffic separation and QoS policies. Network virtualization at the core not only simplifies management, it ensures that the unique requirements of diverse client populations are met in a common infrastructure. Although it is possible to collapse the distribution and core layers into a single layer using large chassis-based switches, the high availability of the core layer is so critical for most business operations that the majority of corporate networks maintain a dedicated core infrastructure.

Networking Protocols

Network protocols used in the campus vary at each layer of the network: at the access layer, Ethernet and 802.11 WLAN; at the distribution and core layers, IP and routing protocols (RIP, OSPF, BGP, MPLS). At each layer, performance, scalability, and resiliency are achieved using suitable protocols.

Layer 2

The access layer typically relies on layer 2 protocols for frame forwarding which are known as link layer protocols. The link layer has to prevent loops from forming between switches. Commonly used loop prevention protocols include 802.1w Rapid Spanning Tree (RSTP) and Per-VLAN Rapid Spanning Tree (PVST). Both provide rapid convergence should links or switches fail.

Spanning Tree Protocol

Spanning Tree Protocol (STP) builds a loop-free forwarding topology between switches in a broadcast domain. Enhancements to STP include Multiple Spanning Tree (MSTP), Rapid Spanning Tree (RSTP), and Per-VLAN Spanning Tree (PVST/PVST+).

Although Spanning Tree is widespread, it has limitations. For example, only one link can be active at a time, and all traffic within the broadcast domain halts whenever the network changes (a link or switch is added or removed) so that STP can rebuild the topology. Link aggregation groups (LAG) and multi-chassis trunking (MCT) are additional capabilities that can be used in Ethernet networks.

Link Aggregation Group

Links between the access and distribution layers use the IEEE 802.3ad (now 802.1AX) Link Aggregation Control Protocol (LACP), which provides automatic formation of link aggregation groups (LAG). Each LAG includes multiple physical connections, providing resiliency and greater bandwidth than a single physical link. Each LAG is treated as a single connection by RSTP and PVST. If multiple LAGs are configured between switches, one is active and the other passive under RSTP or PVST; should the entire active LAG fail, the passive LAG becomes active.

Link aggregation groups (LAG), sometimes called EtherChannels, are commonly used to increase bandwidth and improve resiliency. As server and application performance increased, a single link could no longer provide sufficient bandwidth. LAG allows multiple links at the same link rate to be combined into a single logical link. Failure of a link within the LAG is detected, and traffic continues to flow without requiring STP to rebuild the topology. LAG can be used between switches and also between servers and switches, provided the host operating system supports it. Brocade supports LAG on all data center switches.
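To keep the frames of a conversation in order, a switch pins each flow to one member link of the LAG, typically by hashing header fields. The toy sketch below hashes only the source and destination MAC addresses; real switch ASICs use more fields (IP addresses, ports) and vendor-specific hash functions.

```python
# Sketch: how a switch might pin each flow to one member link of a LAG.
# Real ASICs hash several header fields; this toy uses src/dst MAC only.

import zlib

def lag_member(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Choose a member link index for a flow; same flow -> same link."""
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % num_links

# A given conversation always maps to the same link, preserving frame order:
a = lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
b = lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
assert a == b
```

A consequence of this design is that one flow never exceeds the speed of a single member link; LAG increases aggregate bandwidth across many flows, not per-flow bandwidth.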


Multi-chassis Trunking

Where LAG provides greater bandwidth and improved resiliency within a link, multi-chassis trunking provides fault tolerance at the switch level. With MCT, two switches are connected using inter-chassis links (ICL) and present themselves to other switches and hosts as a single logical switch. Should one switch go off-line, traffic continues to flow to the other switch without requiring STP to rebuild the topology. Brocade supports MCT on the MLX and FastIron SX series of switches.


Switch Stacking

Stacking provides resiliency by combining multiple physical switches together into a single logical switch. Similar to MCT, stacking allows multiple links within a LAG to connect to separate physical switches in the stack. Should a link or switch fail, traffic is rerouted to the remaining links and the switches they connect to in the stack.

Mixed Stacking

As part of Brocade’s development of the HyperEdge Architecture, Brocade provides a powerful optimization of stacking called mixed stacking. Switches with different capabilities can be included in a single stack so high value functions such as Layer 3 routing, can be shared without the cost of having these capabilities installed in all switches in the stack. A mixed stack typically creates a Layer 2 / Layer 3 boundary for traffic within the stack. The distribution/access layers are collapsed into a single mixed stack simplifying the design and lowering configuration and administrative cost. A mixed stack can include lower cost Layer 2 switches with only one (or two for redundancy and resilience) Layer 3 capable switches.  All Layer 2 switches forward traffic to the Layer 3 enabled switches for routing by the core layer.



Virtual LAN (VLAN)

Traffic isolation relies on IEEE 802.1Q virtual LAN (VLAN) tagging. VLANs allow flexible configuration of switches so that traffic within groups, departments, or applications is contained in separate logical domains.

A VLAN is used to improve utilization of the physical network, provide logical isolation of traffic, and apply network policies to Ethernet traffic. A tag carrying a VLAN identifier is added to and removed from frames based on policies set in the server, storage, or switch. Class of Service (CoS) identifiers are included in the tag so switches can optimize frame forwarding when congestion occurs. VLANs are commonly used to segregate traffic for security reasons. For example, servers within a cluster may be assigned to one VLAN, management traffic to another, and client traffic to yet another. Each VLAN’s traffic is logically isolated from other VLAN traffic unless explicitly routed at a Layer 3 router. Brocade supports VLANs on all data center switches.
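The 802.1Q tag itself is only four bytes, inserted after the source MAC address: a 16-bit TPID of 0x8100 followed by a 16-bit TCI holding the 3-bit priority (CoS), a 1-bit drop-eligible indicator, and the 12-bit VLAN ID. A minimal sketch of constructing the tag (the VLAN and priority values are illustrative):

```python
# Sketch: building an 802.1Q tag. TPID 0x8100 is followed by a 16-bit TCI:
# 3-bit priority (CoS), 1-bit DEI, 12-bit VLAN ID. Values are illustrative.

import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Return the 4-byte 802.1Q tag inserted after the source MAC."""
    assert 0 <= vlan_id < 4096 and 0 <= priority < 8
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

tag = dot1q_tag(vlan_id=100, priority=5)  # e.g., a voice VLAN at CoS 5
print(tag.hex())  # 8100a064
```

The 12-bit VLAN ID field is what limits a broadcast domain to 4094 usable VLANs (IDs 0 and 4095 are reserved), and the 3-bit priority field is what switches read to apply CoS queuing.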

Layer 3

The distribution layer is the boundary between layer 2 and layer 3 protocols. Distribution switches have ports facing the access layer configured with layer 2 protocols and have ports facing the core switches configured with layer 3 protocols.

IPv4 and IPv6 may be required in the distribution layer if IPv6 support is required for devices. For example, smart phones increasingly use IPv6 due to the exhaustion of IPv4 addresses and the explosive growth of these devices around the world.

Routing protocols commonly used include RIPv1/v2 and RIPng, OSPF v2/v3, border gateway protocol (BGP)-4, distance vector multicast routing protocol (DVMRP), and protocol independent multicast sparse and dense modes (PIM-SM/DM). For remote campus connections, multi-protocol label switching (MPLS) is often used for virtual private network (VPN) and virtual private LAN service (VPLS).

At the core, when wide area connections are used to reach remote offices and potentially the data center, Carrier Ethernet networks or high-speed WAN interfaces such as OC-12 (622 Mbps) or OC-192 (approximately 9.95 Gbps) are commonly used.

The core layer might also provide advanced multiprotocol label switching (MPLS), virtual private LAN service (VPLS), and multi-virtual routing and forwarding (Multi-VRF) protocols to enforce traffic separation and QoS policies.


Wireless LAN (WLAN)

The growth of wireless devices has driven the need for WLAN networks that scale both in number of devices and in available bandwidth. The IEEE 802.11 series of protocols defines radio frequency, or wireless LAN, networking. The latest version, 802.11n, and the next version, 802.11ac, increase bandwidth to meet the requirements of video traffic to mobile devices (smartphones, tablets, laptops and video surveillance cameras). WLAN configuration can become complex and time consuming as the number of WLAN access points (WAPs) grows. Brocade provides the Brocade Mobility series of intelligent WAPs with advanced features, including a self-healing mesh topology and peer-to-peer data path forwarding that avoids routing data traffic to the WLAN controller at the distribution or core. With Brocade Mobility Controllers, the controller provides access control and security policies for all attached APs, simplifying management and configuration. This simplifies management of wireless networks, since WLAN controllers can manage thousands of WAPs and tens of thousands of devices.


Physical Deployment

Campus environments often rely on wiring closets and other small rooms available in buildings. For this reason, campus switches are designed to operate under the environmental conditions commonly found in an office building rather than the tightly controlled environment of a data center.

The layout of switches accommodates the physical layout of buildings. Low rise campus structures and high-rise office buildings affect the location and number of switches at each location in the campus network.

In the three-tier architecture of access, distribution and core, each tier resides in one or more wiring closets. It is common to use multiple wiring closets for the access layer, each serving part of a floor. Uplinks from the access switches run back to a single closet containing the distribution switch(es). A distribution closet per floor is commonly located near the risers that carry power, HVAC and other services between floors of the building.

Buildings commonly have a central space where utility power and telephone services enter, and this is where core switches are typically located.

Wireless access uses radio transceivers called access points (APs). APs are distributed to provide uniform radio coverage throughout the building. An access point converts radio signals to electrical signals and forwards the traffic to an access switch. Access points are managed by WLAN controllers, commonly connected to the distribution layer switch(es). The WLAN controllers provide centralized configuration and policy settings for the APs.

Stacking switches with high-speed dedicated interconnects is popular in the campus network. Stacks are a cost-effective, easy way of scaling connectivity within a wiring closet: adding a switch to a stack adds connectivity for more devices. Stacking is also an option for scaling the distribution layer.

Chassis switches provide high port density, high bandwidth links and fault-tolerant designs with high-availability and resiliency. However, per port cost is higher than fixed form factor switches. Consequently, chassis switches are more commonly found at the core and sometimes in the distribution layers.

Building Blocks

This section defines a palette of building blocks. Blocks are grouped into the following classes:

  • Access – Wired and Wireless
  • Distribution
  • Edge
  • Core
  • Management

The list below shows the building blocks by class. After the list, a description of each class of blocks and of each individual block is provided.

Building Blocks:

  • Switch Stacking
  • Switch Stack with PoE/PoE+
  • Wireless LAN (WLAN)
  • Multi-Chassis Trunking
  • 40 GbE Stack
  • Brocade Mixed Stacking
  • Stacking with 40 GbE
  • Internet Connectivity
  • MAN Connectivity
  • Brocade Network Advisor and sFlow Monitoring

Access Blocks

An access block connects devices (desktops, laptops, phones, security cameras, smartphones) to the campus edge. The network edge can be wired or wireless LAN (WLAN). In wired configurations, devices can receive their power over the Ethernet network using Power over Ethernet (PoE) and PoE+. This reduces cost for low-power devices such as Voice over IP (VoIP) handsets, security cameras and wireless access points.

WLAN edges use designated radio frequencies to connect to wireless devices via the IEEE 802.11 family of protocols. The edge includes wireless access points (APs), such as the Brocade Mobility family of APs, and associated WLAN controllers, such as the Brocade Mobility RFS family of controllers. PoE connections power the AP radios that provide wireless coverage inside buildings and in outdoor spaces. The RFS controllers provide security services and access control policies, simplifying configuration and management of the WLAN network.

Switch Stacking

Where the number of devices is small, a single access switch can provide connectivity. When a single switch cannot provide the needed connectivity, switch stacking is commonly used. In this configuration, multiple switches are connected through dedicated stacking links and coordinated by a stacking protocol. The stack has a single IP address mapping to all physical switches and acts as a single logical switch as far as Spanning Tree protocols are concerned. Devices can connect to any device port on any switch. Stacks start with two switches and can be extended over time as device connectivity needs increase.

The stack topology is usually a ring in which each switch has a stacking connection to its upstream and downstream neighbors. One switch is elected master, ensuring that a consistent control plane, including policies and access control settings, extends to all physical switches. If the master goes offline, a new master is elected so the stack continues to operate. An example of a ring topology for stacking ports is shown below.


   ICX 6450 Stacking Port Ring Topology

A less resilient option is to use a linear topology as shown in the example below.


   ICX 6450 Stacking Port Linear Topology
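The resilience difference between the two topologies can be illustrated with a toy connectivity check: remove one stacking link and test whether every switch can still reach every other. Switch numbering and link choice here are arbitrary; a real stack also re-elects its master and reroutes in hardware.

```python
# Four switches stacked in a ring vs. a line; links as undirected edges.
ring = {(0, 1), (1, 2), (2, 3), (3, 0)}
line = {(0, 1), (1, 2), (2, 3)}

def connected(n: int, edges: set) -> bool:
    """Return True if all n switches can reach each other over the links."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {0}, [0]  # depth-first traversal from switch 0
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == n

# Single stacking-link failure: remove link (1, 2) from each topology.
print(connected(4, ring - {(1, 2)}))  # True  - the ring heals around the break
print(connected(4, line - {(1, 2)}))  # False - the line splits into two stacks
```

This is why the ring is the recommended topology: any single stacking-link failure leaves an alternate path, while a linear stack partitions.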

Some devices may require higher availability; these can use dual Network Interface Cards (NICs) with NIC teaming. Each NIC connects to a different physical switch in the stack, protecting against NIC, link and switch failures. Devices that do not require this level of high availability (and expense) connect to only one switch in the stack.

Brocade provides stacking links at 1, 10 and 40 GbE to accommodate both older devices and today's higher-speed video conferencing and the ever-increasing number of high-bandwidth personal wireless devices.

Switch Stacking with PoE/PoE+

Where devices require modest amounts of power, Power over Ethernet (PoE) and PoE+ allow the same Ethernet connection that provides network connectivity to deliver electric power as well. VoIP handsets, security cameras, wireless LAN access points and some thin-client desktop computers are examples of devices that use PoE connectivity.

PoE+ provides more power per device than PoE. Both PoE- and PoE+-enabled switches can be configured into a switch stack, offering the same resiliency and extensibility as a wired switch stack. Stacks can include PoE and non-PoE switches, so powered and non-powered devices can connect to the same stack.
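A rough sizing sketch: the per-port maximums are 15.4 W for PoE (IEEE 802.3af) and 30 W for PoE+ (IEEE 802.3at), so a switch's total PoE power budget bounds how many fully loaded powered ports it can serve. The 740 W budget below is a hypothetical figure for illustration, not a quoted ICX specification.

```python
# Per-port maximums at the switch port, per the IEEE standards:
POE_W = 15.4       # 802.3af (PoE)
POE_PLUS_W = 30.0  # 802.3at (PoE+)

def ports_supported(psu_budget_w: float, per_port_w: float) -> int:
    """How many ports drawing the full per-port maximum a budget can serve."""
    return int(psu_budget_w // per_port_w)

budget = 740.0  # hypothetical PoE power budget for one stack member, in watts
print(ports_supported(budget, POE_W))       # 48 -> a full 48-port PoE loadout
print(ports_supported(budget, POE_PLUS_W))  # 24 -> only half the ports at PoE+
```

In practice most devices draw well under the per-port maximum, so switches track actual draw rather than reserving the worst case for every port.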

The figures below show stacks with 40, 10 and 1 GbE stacking links, each with the option for powered PoE/PoE+ ports. Note that the green line shows configuration and management of the WAPs by a central clustered Brocade Mobility WLAN controller attached to either a Distribution or Core block. See the Access Blocks—Wireless section for more information.

Stacking Configurations with 1, 10 and 40 GbE Stacking Links


   Access Block, 40 GbE Switch Stack with PoE/PoE+ (click to enlarge)


   Access Block, 10 GbE Switch Stack with PoE/PoE+ (click to enlarge)


   Access Block, 1 GbE Switch Stack with PoE/PoE+ (click to enlarge)


Access Blocks—Wireless

Once regarded as optional technology, wireless networks are creating efficiencies and reducing costs not only in corporate enterprises but also in a wide array of industries, such as healthcare, education and manufacturing. The IEEE has ratified the highly anticipated 802.11n WLAN standard, which promises a 5x increase in data speeds and unprecedented reliability, and is nearing adoption of the 802.11ac standard with even more wireless bandwidth.

As a result, WLANs now enable a fully cohesive working environment by combining the mobility of wireless with the performance of wired networks. Two components are commonly used in a WLAN: a WLAN access point (WAP) and a WLAN controller switch (WLAN controller).

Wireless Access Points (WAP)

WAP technology has evolved to include mesh topologies. In a wireless mesh, the network dynamically routes packets from access point to access point. A few nodes must be connected directly to an Ethernet port, but the rest share connections with one another over the air, negating the seemingly contradictory need to distribute wires for a wireless network. The Brocade Mobility 7131 Access Point includes mesh technology, while the Brocade 6511 Wall Plate access point provides convenient connectivity with a simple wall-mounted switch and access point.

Comprehensive network security features keep wireless transmissions secure and provide compliance for HIPAA and PCI regulations. The Brocade Mobility RFS7000 provides gap-free security for WLAN networks by using a tiered approach that protects data at every point in the network—wired or wireless.

This complete solution includes:

  • Stateful Layer 2-7 wired/wireless firewall
  • Integrated IPSec VPN gateway to secure all traffic between the APs and the controller
  • AAA Remote Authentication Dial-In User Service (RADIUS) server and secure guest access with a captive Web portal, reducing the need to purchase and manage additional infrastructure
  • Hyper-fast secure roaming
  • Network Access Control (NAC) support
  • MAC-based authentication
  • Comprehensive integrated Intrusion Detection System (IDS)/Intrusion Prevention System (IPS) engine for rogue detection and containment and anomaly analysis

WLAN Controllers

Brocade Mobility Controllers provide central management and configuration of thousands of WAPs and tens of thousands of wireless devices. The WLAN controllers are deployed in a cluster for resiliency and availability. They can be attached to a Core block or, in very large environments, to Distribution blocks. WAP data traffic is optimized when using a wireless mesh, since peer-to-peer data traffic does not have to traverse the wired network to the WLAN controller, greatly reducing latency and bandwidth consumption on uplinks to the Distribution or Core.

The diagram below shows WAP devices attached to powered PoE/PoE+ switch ports in access switches and the WLAN controller cluster attached to distribution switches. When a campus network uses only mixed stacks, eliminating the need for a Distribution layer and its switches, or where the total number of WAPs and devices can be managed by a single WLAN controller cluster, the WLAN controller can be attached to the core.





  Wireless Block, WAP Managed by WLAN Controller (click to enlarge)


   Alternate WLAN Controller Configuration at the Core (click to enlarge)

Distribution Blocks

This optional block is used when the scale of the campus network requires more connectivity between the access layer and the core; it is inserted between the core and access blocks. Distribution blocks also require high availability, since they connect many devices to the core and a failure of a distribution switch would cause a widespread outage. Stacking and multi-chassis trunking are common solutions.

Multi-chassis Trunking (MCT)

This block improves availability for access blocks that move the Layer 2/Layer 3 boundary outside the access block. Since a LAG must have all of its links terminating in the same switch, Multi-Chassis Trunking (MCT) creates a single logical switch using a two-switch cluster. Access block LAGs terminate into each of the switches in the MCT cluster. This block provides routing to the core block and terminates Layer 2 domains for access blocks forwarding Layer 2 traffic.


Where MCT provides resiliency and high availability for Layer 2 traffic, the Virtual Router Redundancy Protocol (VRRP) and VRRP Extended (VRRP-E) provide similar resiliency and high availability for Layer 3 gateway traffic. Where the access layer forwards Layer 2 traffic to the distribution layer, combining MCT with VRRP/VRRP-E on the distribution switches is a common method of providing rapid failover of traffic should a distribution switch go offline.
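At its core, VRRP failover reduces to a priority-based election among the routers sharing a virtual gateway address. The sketch below models only that election; the addresses and priorities are made-up examples, and real VRRP also involves advertisement timers, preemption settings and the virtual MAC, none of which are modeled here.

```python
import ipaddress

# Each VRRP router advertises a priority (1-254; 255 is the address owner).
# Highest priority becomes master; on a tie, the higher primary IP wins.
routers = [
    {"ip": "10.1.1.2", "priority": 100},
    {"ip": "10.1.1.3", "priority": 110},  # preferred distribution switch
]

def elect_master(rtrs: list) -> dict:
    """Pick the VRRP master by (priority, primary IP address)."""
    return max(rtrs, key=lambda r: (r["priority"],
                                    ipaddress.ip_address(r["ip"])))

print(elect_master(routers)["ip"])  # 10.1.1.3

# If the current master goes offline, the survivor takes over the virtual
# gateway address, so hosts keep the same default gateway throughout:
alive = [r for r in routers if r["ip"] != "10.1.1.3"]
print(elect_master(alive)["ip"])  # 10.1.1.2
```

The key operational point is that hosts never reconfigure: the virtual gateway IP simply moves to whichever router wins the election.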

This distribution block can connect to an IP Services block providing network access control (NAC). As shown by the green dotted line, WLAN Controller clusters pass configuration and management traffic to WAP devices attached to the access block PoE/PoE+ powered ports.




   Distribution Block with MCT and VRRP/VRRP-E (click to enlarge)

Stacking with 40 GbE

Brocade ICX 6610 switches have 40 GbE stacking ports, providing a cost-effective distribution building block. An ICX 6610 stack can have eight switches, scaling up to 384 1 GbE device ports, with up to eight 10 GbE uplink ports per switch. Stacks include a master and a standby controller, so failover is built in, providing resiliency and high availability for both Layer 2 and Layer 3 traffic.

This distribution block can connect to an IP Services block providing network access control (NAC). As shown by the green dotted line, WLAN Controller clusters pass configuration and management traffic to WAP devices attached to the access block PoE/PoE+ powered ports.





   Distribution Block with 40 GbE Stacking

Edge Blocks

Edge blocks terminate Layer 2 traffic, forwarding only Layer 3 traffic; an Edge block therefore collapses distribution and access into a single block. With the adoption of 10 GbE and 40 GbE for stacking and uplinks and high-density 1 GbE switching ports, the distribution layer can be eliminated, creating a more efficient and less costly Core/Edge topology.

Brocade Mixed Stacking

Brocade introduced a unique optimization on stacking for the HyperEdge Architecture: mixed stacking. In classical stacking, all switches in the stack must have the same configuration, features and value-added software licenses, which increases the cost of switch stacks. With Brocade mixed stacking, only a few switches, such as the ICX 6610, need value-added Layer 3 licenses, while less-featured switches, such as the ICX 6450, can still forward traffic for Layer 3 routing. Switches with value-added features share them with all switches in the stack, reducing cost and simplifying upgrades to switches in the stack.

This edge block includes wired and wireless device connectivity via Brocade Mobility Access Point connections to PoE/PoE+ powered ports on any switch. As shown by the green line "To WLAN Controller", all WAP can be monitored, managed and configured from a central WLAN Controller attached to the core block or to a distribution block.




   Edge Block, Mixed Stacking (click to enlarge)

Edge Block, 40 GbE Stack

Brocade introduced 40 GbE stacking with the ICX 6610 Switch, which supports advanced Layer 3 routing and services including Multi-VRF. This block provides near 1:1 oversubscription, as each switch has eight 10 GbE ports that can be used for uplinks in addition to four 40 GbE stacking ports and up to 48 1 GbE device ports. Device ports on a switch can be powered with PoE/PoE+. Terminating Layer 2 traffic in the stack collapses the distribution and access layers into a single Edge layer, offering the option to create cost-effective and efficient Core/Edge topologies for a campus network.
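The "near 1:1" claim can be checked with simple arithmetic: compare the aggregate device-port bandwidth against the aggregate uplink bandwidth. This is a sketch of the standard oversubscription calculation; it ignores stacking-link traffic and assumes every port runs at line rate.

```python
def oversubscription(device_ports: int, device_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of downstream device bandwidth to upstream uplink bandwidth."""
    return (device_ports * device_gbps) / (uplink_ports * uplink_gbps)

# One ICX 6610 stack member: 48 x 1 GbE device ports, 8 x 10 GbE uplinks.
ratio = oversubscription(48, 1.0, 8, 10.0)
print(ratio)  # 0.6 -> under 1:1; the uplinks are not the bottleneck
```

A ratio at or below 1.0 means the uplinks can absorb every device port transmitting at full rate simultaneously, which is the design point the block description refers to.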

This edge block includes wired and wireless device connectivity via Brocade Mobility Access Point connections to PoE/PoE+ powered ports on any switch. As shown by the green line "To WLAN Controller", all WAP can be monitored, managed and configured from a central WLAN Controller attached to the core block.




   Edge Block, 40 GbE Stacking

Core Blocks

Core blocks connect multiple access or distribution building blocks together and provide access to networks outside the campus. They provide routing services within the campus network (e.g., RIP, OSPF, IS-IS) as well as external routing protocols, such as BGP and MPLS, used by the Internet, data center interconnects and the campus LAN.

For larger campus environments, a campus core carries traffic flowing into and out of the campus network from the Internet. The campus core can also connect to the data center core, and to MAN and WAN networks as required.


In larger environments, multiple remote campus networks can be interconnected using VPN services with MPLS. Where there are remote buildings and facilities in a metropolitan area, as with K-12 school districts, higher education, banks and retail franchises, the campus core can connect to a MAN. The MAN is often a leased Ethernet service with 1 or 10 GbE connectivity. This is a common configuration for school districts, which can have hundreds of buildings in a county or large metropolitan region.

Campus Core with Internet

This block provides Internet access and data center core router access. It is commonly used for large facilities, multi-story buildings or campus facilities where wired and wireless access does not need to extend across metropolitan distances. The WLAN controller cluster can be attached to the core to provide central configuration and management of all WAP devices.



   Core Block, with Internet

Campus Core with MAN

The following block provides Metropolitan Area Network (MAN) access. This block is commonly used for K-12 school districts or when a number of branch offices or franchise outlets are within a metropolitan area. The WLAN controller cluster can be attached to the core to provide central configuration and management of all WAP devices, both local and remote, via the MAN.




   Core Block with MAN (click to enlarge)


As mentioned previously, building blocks are combined into templates. An Edge template can include one or more access blocks and an optional distribution block. A Core template can include a core block.

The following provides an example of how to create templates from building blocks. The final section shows how to use multiple templates to construct a canonical architecture for a campus network.

Campus Template, Small Core/Edge

This template uses an efficient core/edge topology to reduce cost and complexity. It supports both wired and wireless devices and has a central WLAN controller cluster.



   Campus Core/Edge Template, Small (click to enlarge)



The wired block can scale with switch stacking, while the wireless block adds access points up to the maximum supported by the WLAN controller cluster. Both access blocks could connect to a data center core template, where connectivity to applications and the Internet would be made. Below is an exploded view showing the details of the building blocks included in this template.




   Campus Small Core/Edge Template, Exploded (click to enlarge)

Campus Template, Distribution/Access-Medium/Large Campus

This template includes wired and wireless LAN blocks and a distribution block for medium/large size campus environments as shown below.

Distribution/Access Template #1


   Campus Edge Template, Medium/Large (click to enlarge)

This template can scale at both the Edge and Distribution layers with switch stacking. At the distribution layer, HyperEdge stacking reduces the cost of the stack by adding Layer 3 routing services to two switches in the stack, allowing the more expensive routing function to be shared across less expensive Layer 2 switches. Below is an exploded view showing the details of the building blocks included in this template.



   Campus Distribution/Access Template #1, Medium/Large Exploded View (click to enlarge)

Distribution/Access Template #2

An alternate design uses stacking at both the distribution and access layers, as shown below.


   Campus Distribution/Access Template #2, Medium/Large Exploded View (click to enlarge)


Campus Template, Core

This template would be used in medium- and large-scale campus environments.




   Campus Core Template (click to enlarge)

Distribution or Edge blocks connect to the core. A central WLAN controller cluster can be used for central configuration and management of all WAP devices connected to PoE/PoE+ powered switch ports in the Access or Edge blocks.




   Campus Core, Exploded (click to enlarge)


Campus Template, Remote Office

This template connects remote office locations to a Campus Core with MAN access. This block is commonly used for K-12 school districts and where there are remote offices in a metropolitan region.


   Campus Remote Office Template (click to enlarge)

The Distribution block provides WAN or VPN connectivity to the Campus Core. For example, MPLS and/or BGP can be used to connect remote offices to other campus networks and to the data center for application access.

The Access block uses HyperEdge stacking with PoE/PoE+ switches to cost-effectively provide Layer 2 or Layer 2/Layer 3 network connectivity. With PoE, VoIP handsets can be powered from the same stack.


   Campus Remote Office, Exploded (click to enlarge)

This template would be used with a Core template that includes a Core with MAN block, as shown below. This block includes a central WLAN controller to configure and manage all remote WAPs over the MAN. Again, this is a common design for K-12 school districts that need to reduce management complexity and the cost of visiting tens or hundreds of remote school buildings.


  Campus Core Block with MAN


Management Template

The campus management template includes Brocade Network Advisor (BNA) and sFlow monitoring.




   Management Template, BNA with sFlow (click to enlarge)



sFlow works with Brocade switches and routers to provide detailed monitoring of network traffic flows across the campus.
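The core idea of sFlow is statistical: the switch exports 1-in-N sampled packet headers, and the collector multiplies sampled byte counts by the sampling rate to estimate true per-flow volumes. A minimal sketch of that collector-side arithmetic follows; the flow keys and sampling rate are illustrative, not drawn from any particular Brocade configuration.

```python
def estimate_bytes(sampled_frames: list, sampling_rate: int) -> dict:
    """Scale sampled frame sizes up to estimated true per-flow byte counts.

    sampled_frames: list of (flow_key, frame_length_bytes) tuples, as a
    collector might extract from received sFlow datagrams.
    """
    totals = {}
    for key, length in sampled_frames:
        totals[key] = totals.get(key, 0) + length * sampling_rate
    return totals

# Three samples received at a 1-in-512 sampling rate:
samples = [("10.1.1.5->10.2.2.9", 1500),
           ("10.1.1.5->10.2.2.9", 1500),
           ("10.3.3.7->10.2.2.9", 64)]
print(estimate_bytes(samples, sampling_rate=512))
# {'10.1.1.5->10.2.2.9': 1536000, '10.3.3.7->10.2.2.9': 32768}
```

Because only sampled headers cross the network, this approach scales to monitoring every port of every switch with negligible overhead, which is what makes sFlow attractive for campus-wide visibility in Brocade Network Advisor.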


Canonical Architecture

Below is a canonical architecture for a campus network showing the variety of templates that can be constructed from design blocks. Both traditional core/distribution/edge and modern, efficient core/edge topologies can be designed based on this Campus Infrastructure Base Reference Architecture.

Each template can scale up, and templates can be replicated to quickly scale out the network as required.


   Canonical Architecture Using Templates (click to enlarge)
