Campus Networks

Campus Network Infrastructure Best Practices: HyperEdge Architecture Design with Multi-VRF

Preface

 

Overview

Multiple Virtual Routing and Forwarding (Multi-VRF) is an advanced Layer 3 feature used to create multiple virtual networks on top of a single physical network. Multi-VRF increases security by segregating physical resources, configuration, and management into independent domains. Traffic is logically isolated in each VRF instance, extending the value of the physical network while avoiding the cost of physically separate networks. Applications of Multi-VRF in the campus network include company mergers that may require maintaining duplicate IP address spaces, multi-tenancy in retail and cloud service environments, and securing guest access to a corporate network so that guests can reach public resources only.

 

Brocade’s HyperEdge™ Architecture for campus networks introduced Multi-VRF with FastIron release 8.0 to address the many limitations of legacy campus networks that make them rigid, complex, and costly to maintain. Organizations are learning the hard way that these networks were not built to support multi-tenant traffic flows.

 

Today, the campus network requires flexibility, simplified management, and above all, must be affordable. The Effortless Network captures Brocade’s vision of the campus network for today and tomorrow. The HyperEdge Architecture integrates several innovations for the campus network, such as Multi-VRF, with existing assets improving flexibility and reducing complexity so organizations can deploy applications quickly and cost-efficiently.

 

For example, mixed stacking allows premium switch services to be extended to all ports in the stack, even when some of those ports are on entry-level switches. When Multi-VRF with virtual Layer 3 routing and forwarding is added to only the premium switches, logical traffic isolation becomes available to all connected devices regardless of which switch they attach to, reducing the cost of administering security policies. This is the power of the HyperEdge Architecture: innovative capabilities working together to create The Effortless Network. Without Multi-VRF and mixed stacking, separate physical networks would have to be purchased, configured, and managed to achieve secure traffic isolation, at much higher cost and with increased complexity.

 

Purpose of This Document

This document describes the benefits of Multi-VRF, provides examples of how to design solutions that use Multi-VRF, and lists some of the important CLI commands used with Multi-VRF. Integration with other features of the HyperEdge Architecture, such as mixed stacking, is also shown.

 

Audience

This document is intended for network administrators, systems engineers, and those who need to provide a virtual network on top of a physical campus network.

 

 

Objectives

  • To provide an introduction to FastIron’s implementation of Multi-VRF
  • To discuss the benefits and limitations of a virtualized network infrastructure
  • To describe the design of a network with increased security achieved by segmenting the enterprise network and its services
  • To list the steps required to deploy and monitor a virtualized network infrastructure

 

Related Documents

The following documents are valuable resources for the designer. In addition, any Brocade release notes that have been published for the FastIron, NetIron and Mobility operating systems should be reviewed.

 

References

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was developed by the following key contributors.

Lead Architect: Jose Martell, Technical Marketing Engineer

 

Document History

Date                  Version        Description

2013-04-08       1.0                  Initial Release

HyperEdge Reference Architecture

The campus reference architecture supports wired and wireless devices using both the traditional core/distribution/access topology and a simpler core/edge topology with innovative mixed stacking as shown below.

MultiVRF_CampusRefArch.jpg

   Campus Network Reference Architecture

 

Mixed stacking, a powerful innovation of the HyperEdge Architecture, can be configured with Multi-VRF to segment a physical network into multiple secure logical networks with their own network resources and IP address spaces. Fewer layers and better utilization of the physical hardware simplify management and lower administrative costs. This guide shows how to design Multi-VRF solutions with a mixed stack.

MultiVRF_AlternateMixedStackTopologies.jpg

   Alternate Mixed Stack Topologies

 

Mixed stacking is a key component of the HyperEdge Architecture that provides a new approach to a traditional core/distribution/access topology by collapsing the distribution and access layers into an Edge. The result is a core/edge topology that reduces network layers for more agile and advanced network services while lowering TCO. With mixed stacks, network managers can configure advanced features, such as Multi-VRF, advanced routing, integrated wired & wireless NAC for BYOD applications, and IPv6 multicast, without the need for a forklift upgrade of the entire stack of switches.

 

Why Virtualize The Network?

Virtualization is not a new networking concept; VLAN technology has been used for many years to build secure, separate LAN segments on a single Ethernet switch. Multi-VRF extends virtualization to Layer 3 routing and network services: a physical router is segmented into multiple logical or virtual routers to provide isolation, increased security, and control over internal and external network traffic.

Virtualization allows an enterprise to apply unique security policies to different logical groups or entities within the campus. This provides traffic isolation per application, group, or service, and most importantly, allows the physical infrastructure to serve multiple entities or organizations. Logically separating the physical network into multiple logical instances provides a sizeable reduction in network cost and management complexity.

For several decades, technologies have been introduced that provide various methods of virtualization, but proprietary hardware locked customers into restrictive network designs, and upgrades could force a customer to tear down and replace the network hardware. These technologies have since grown in capability and ease of deployment, making them more efficient and attractive to customers. The following diagram depicts the development of different virtualization technologies over time.

MultiVRF_EvolutionOfNetworkVirtualization.jpg

   The Evolution of Network Virtualization

 

Possible Deployments

Multi-VRF has broad applications. One main deployment area is separating the network access of functional groups within the enterprise to increase security. Other use cases include company mergers, B2B (business-to-business) network access, and retail shops integrated under a common network infrastructure. Additional examples and reasons for traffic separation include:

  • Airports – different airlines, passenger public access, banks, and stores
  • Company mergers – overlapping IP address spaces
  • Financial institutions – regulation/compliance, credit card (PCI) requirements
  • Hospitals – health care regulations (HIPAA)
  • Multi-tenancy – retail stores and residential complexes
  • Isolation of functional groups or of outside vendors and guests within the enterprise
  • Guest VRF – a place for quarantined hosts that failed NAC posture validation or 802.1X authentication
  • VoIP deployment – voice and data VLAN separation

 

Brocade’s vision, The Effortless Network with the HyperEdge Architecture, frees the network designer to distribute advanced features, such as Multi-VRF, across premium and entry-level network gear.

 

The purpose of VRF is to virtualize the physical network. Multiple logical networks are defined, each with its own instance of routing tables and network services, on the same physical router or switch. This reduces the physical network footprint and lowers network cost.

 

Although VRF virtualizes Layer 3 in the network stack, the separation starts in the hardware at the ASIC’s Ternary Content Addressable Memory (TCAM). The TCAM route entry table is divided among the total number of VRF instances. Therefore, the designer has to consider the overall requirements of each individual VRF partition and the amount of TCAM space required for each VRF route table.

 

With mixed stacks, only the premium switches, such as the ICX 6610, have VRF configured, but traffic on the ICX 6450 entry-level switches can still use VRF. By deploying VRF on a mixed stack, network utilization is maximized with a minimum investment in network hardware, and a single point of management, the stack, simplifies configuration and administration, further reducing cost and complexity.

 

VRFs have been associated with Multi-Protocol Label Switching Virtual Private Networks (MPLS VPNs) for isolating multiple customers with overlapping address spaces so that they do not see or interfere with each other. VRF without an MPLS transport is known as Multi-VRF or VRF-Lite. MPLS/VRF is primarily used by Service Providers with dedicated networks built specifically to run MPLS; for the enterprise, this approach is not feasible because it requires campus customers to design and deploy a new MPLS network. Deploying a new MPLS network would imply more complexity, more administration tasks, new training, and, most importantly, new hardware, which would defeat the purpose of virtualizing the infrastructure.

 

Different routing protocols can run in different VRF partitions on the same device to exchange routes dynamically with directly connected devices participating in the same VRF instance. Thus, each instance of the routing protocol runs independently of the others creating an independent network with its own routing & forwarding tables. Each virtual network gets an allocation of the underlying hardware resources of the physical router/switch as shown in the following figure.

 

MultiVRF_VPNEnabledByMVRF.jpg

    Virtual Private Networks enabled by Multi-VRF

 

Benefits of Deploying Multi-VRF

The tangible benefits of Multi-VRF include:

  • Increased security in the enterprise network, by isolating entities with different requirements
  • The ability for different organizations to share the same infrastructure
  • The option to have overlapping IP address spaces in different VRF instances

Multi-VRF also has advantages over other techniques used to carry segregated traffic over the WAN/MAN/campus, such as VLANs with 802.1Q, virtual circuits, and Generic Routing Encapsulation (GRE) tunnels. Each of these has benefits, but each also has drawbacks and limitations.

 

VLANs isolate the broadcast domain and provide a separate virtual LAN to workgroups on the same network infrastructure. VLANs associate with one or more Layer 2 ports on a switch, run per-VLAN STP instances, and share the hardware resources with all other VLANs in the same device. With 802.1Q, VLANs extend their broadcast domain out to other network devices; spreading VLANs to other areas of the campus network weakens workgroup isolation and reduces their efficiency and value.

 

Virtual circuits lock networks into point-to-point connectivity scenarios, requiring network protocol enhancements to overcome the one-to-many connectivity restrictions imposed by the nature of the virtual circuits used in ATM, Frame Relay, TDM, and other connection-based technologies.

 

GRE tunnels bridge groups together across the campus but add administrative complexity while still sharing the same forwarding tables at the end nodes of the campus network; e.g., the IP address space of the default VRF instance.

 

Multi-VRF is transparent to Layer 2 virtualization, such as VLANs and broadcast domains, adding Layer 3 virtualization by separating the routing and forwarding resources into separate logical routers/switches. Multi-VRF instances associate with one or more Layer 3 interfaces in a network device. Each Multi-VRF instance maintains its own forwarding table regardless of the IP routing protocol used, e.g., OSPF, RIP, or BGP.

MultiVRF_VirtualSwitch&RouteCoExist.jpg

   Virtual Switching and Virtual Routing Can Co-exist

 

Separating and isolating the routing and forwarding resources in network devices truly virtualizes the network and provides some substantial benefits:

  • Isolation and Traffic Separation

For the purposes of company privacy and to protect network resources, the network administrator may need to keep some areas of the network unreachable by everyone else. Multi-VRF can do exactly that by keeping segments of the network running like ships passing in the night – they don’t see each other.

  • Network Efficiency

Increased utilization of the network infrastructure and reduced number of network devices simplifies network administration.

  • IP Address Reuse

Since the routing and forwarding tables are kept separate from each other, the network administrator can redeploy duplicate IP address spaces in the physical network.

  • Reduced Operational Complexity

Configuration becomes simpler by not having to implement various mechanisms to keep traffic from flowing to restricted areas or to increase security for sensitive applications.

  • Lower TCO

Reduces overall network cost and the number of touch points in the network by not increasing the physical network size, while providing multiple virtual private networks on a common physical network.

 

Overview of Brocade’s Multi-VRF

Before any configuration is applied to a switch, there is only one active instance of routing and forwarding.  No virtualization or separation of resources exists at this point. The only instance of Multi-VRF is the default VRF. All physical ports are assigned to the Default VRF including the out-of-band management port. The Default VRF is the root routing & forwarding instance that always exists in the switch. Any commands for router configuration not addressed to a specific VRF instance are handled by the Default VRF.

 

A Layer 3 interface can only be in one VRF instance at a time. When an administrator configures a new VRF instance and adds ports to it, the ports are automatically removed from the Default VRF.

 

In a switch such as the ICX 6610, only non-physical ports (VE interfaces on VLANs) can be part of a VRF instance; in a FastIron SX switch, both non-physical VE ports and physical ports can be included in a VRF instance. Allowing physical ports into a VRF instance is particularly important because it allows Brocade’s FSX switches to interoperate with switches running older FastIron releases and with non-Brocade switches that do not support Multi-VRF.
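
The sketch below illustrates the difference, assuming a user VRF named red has already been created; the interface numbers and IP addresses are illustrative only.

----------
! ICX 6610 / FCX: only a virtual (VE) interface can join a user VRF
interface ve 10
 vrf forwarding red
 ip address 10.1.10.1 255.255.255.0
!
! FastIron SX: a physical routed port can also join a user VRF (sketch),
! for example to connect a neighbor that does not support Multi-VRF
interface ethernet 1/1
 vrf forwarding red
 ip address 10.1.20.1 255.255.255.0
----------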

 

Some IP services are currently not VRF aware and are available only in the Default VRF instance. For example:

  • IP Policy-Based Routing
  • BGP+ for IPv6
  • RIPng

 

Management VRF Instance

A Management VRF is a special global VRF instance that must be explicitly configured; it does not exist by default. It provides secure access for management of the virtualized device. All outgoing management traffic is sent through the Management VRF via the out-of-band management port. The default behavior is to accept management traffic arriving on any VRF, including the Default VRF, since inbound management traffic is unaware of router/switch virtualization and the existence of other VRF instances.

 

The Management VRF is aware of the following IP services and applications:

  • SNMP server
  • SNMP trap generator
  • Telnet server
  • SSH server
  • Telnet client
  • Radius client
  • TACACS+ client
  • TFTP
  • SCP
  • Syslog
  • sFlow

 

Any VRF instance except the Default VRF can be configured as the Management VRF. The same behavior applies to both address families, IPv4 and IPv6.
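
As a minimal sketch, the following creates a dedicated VRF and designates it as the Management VRF. The VRF name, the RD value, and the use of a global management-vrf command are assumptions for illustration; consult the FastIron Configuration Guide for the exact syntax on your release.

----------
! Create a dedicated VRF for management traffic (name and RD are illustrative)
vrf mgmt
 rd 10.0.0.1:100
 address-family ipv4
 exit-address-family
exit-vrf
!
! Designate it as the Management VRF (assumes the global management-vrf command)
management-vrf mgmt
----------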

 

VRF Aware Services

All VRF instances are aware of IP services such as DHCP, DHCP snooping, IP Source Guard, the IP routing protocols, multicast, and IPv4 & IPv6. Each user VRF runs independently and receives system services from the system pool. The TCAM is shared among all configured user VRFs; therefore, it is important to understand how route entries are allocated to each user VRF.

MultiVRF_SystemResourceAlllocationToVRF.jpg

   System Resource Allocation to VRF Instances

 

These IP protocols and services are VRF-aware and use the hardware resources of the TCAM. Support for these features started in FastIron release 8.0.00, and they run independently within each user VRF:

  • IPv4, and IPv6 forwarding
  • OSPFv2, RIPv1/v2, and BGP for IPv4
  • OSPFv3 for IPv6
  • VRRP, and VRRP-e for both IPv4 and IPv6
  • ARP, and DHCP Relay (IPv4/IPv6)
  • Overlapping IP address space
  • IPv4, and IPv6 multicast forwarding
  • PIM-SM/DM for IPv4
  • PIM-SM for IPv6
  • MSDP for both IPv4/IPv6
  • GRE Tunnel (Tunnel carrier path must be on default-VRF. No support for keepalives)
  • IPv6 over IPv4 Tunnel (Tunnel carrier path must be on default-VRF)
  • sFlow with up to four collectors on the Default VRF

 

VRF Transport Options (Link virtualization)

FastIron 8.0.00 and later releases support IP GRE tunnels, Layer 2 trunks (802.1Q), and dedicated physical interfaces to interconnect virtualized network entities, expanding the virtualized network. The method used depends on the topology, the bandwidth required, and the availability of network links.

 

GRE tunnel transport

Secure networks can use point-to-point tunnels to transport Multi-VRF traffic. This relies on Layer 3 reachability to and from the end points of the tunnel. A tunnel interface (e.g., a GRE or IPv6 tunnel) can be configured in a VRF instance.

 

A tunnel is created for each Multi-VRF instance through the available Layer 3 path of the Default VRF so user VRF traffic is kept isolated from traffic in other VRF instances. The tunnel source and destination points belong to the Default VRF. A drawback is a more complex configuration. The tunnel itself resides in the Default VRF but the tunnel interface is in the specific user VRF instance.

 

The tunnel keep-alive function is supported for GRE tunnels associated with the Default VRF but is not supported on GRE tunnels in other VRF instances.

MultiVRF_GRETunnelsWithMVRF.jpg

   GRE Tunnels Deployed with Multi-VRF
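
The following is a minimal sketch of this arrangement: the tunnel source and destination are reachable in the Default VRF, while the tunnel interface itself is placed in a user VRF named red. The tunnel number, addresses, and VRF name are illustrative assumptions.

----------
! GRE tunnel carrying traffic for user VRF red over the Default VRF
interface tunnel 1
 vrf forwarding red
 tunnel mode gre ip
 tunnel source 10.0.0.1
 tunnel destination 10.0.0.2
 ip address 172.16.1.1 255.255.255.0
----------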

 

802.1Q Transport

VLANs provide Layer 2 segmentation, and through the use of 802.1Q, multiple VLANs can share the same physical port. Each VLAN interface (VE), not the physical port, can be in only one VRF instance at a time. This approach is the most common option because of its lower complexity and administrators’ familiarity with VLAN configuration.

MultiVRF_VLANTransportWithMVRF.jpg

   VLAN Transport with Multi-VRF
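
A minimal sketch of this transport is shown below: two VLANs are tagged on a single uplink port, and each VLAN’s VE interface is placed in a different user VRF. The port number, VLAN IDs, VRF names, and addresses are illustrative assumptions.

----------
! Two VRF instances carried over one 802.1Q uplink (port 1/1/1)
vlan 100 by port
 tagged ethe 1/1/1
 router-interface ve 100
!
vlan 200 by port
 tagged ethe 1/1/1
 router-interface ve 200
!
interface ve 100
 vrf forwarding red
 ip address 10.100.0.1 255.255.255.0
!
interface ve 200
 vrf forwarding blue
 ip address 10.200.0.1 255.255.255.0
----------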

 

Separate Interfaces Transport

Based on requirements such as bandwidth, a separate physical interface can be used per VRF instance. This isolates both the traffic and the traffic paths. It is more expensive since more physical interfaces are needed, but if the bandwidth requirements are large, it makes sense to use a dedicated physical link per VRF instance, as sketched after the figure below.

MultiVRF_SeparatePhysInterfaceWithMVRF.jpg  

Separate Physical Interfaces with Multi-VRF
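
In contrast to the shared 802.1Q uplink above, the sketch below dedicates one uplink port to each VRF by placing each port untagged in its own VLAN and assigning that VLAN’s VE to the VRF. The port numbers, VLAN IDs, VRF names, and addresses are again illustrative assumptions.

----------
! Dedicated uplink port per VRF instance
vlan 300 by port
 untagged ethe 1/3/1
 router-interface ve 300
!
vlan 400 by port
 untagged ethe 1/3/2
 router-interface ve 400
!
interface ve 300
 vrf forwarding red
 ip address 10.30.0.1 255.255.255.0
!
interface ve 400
 vrf forwarding blue
 ip address 10.40.0.1 255.255.255.0
----------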

 

Multi-VRF Special Considerations

The goal when virtualizing the router/switch is to share the physical hardware to reduce cost, both acquisition cost and operations and maintenance cost. However, running multiple VRF instances consumes the hardware resources quickly. For example, the maximum number of routes supported in hardware is shared by all of the Multi-VRF instances deployed, that is, the Default VRF, the Management VRF, and all the User VRF instances. The following requirements should be considered when designing a network with Multi-VRF.

  • Traffic engineering is required to optimize the use of TCAM and memory resources in hardware. For example, it is important to minimize the use of ACLs, routes, multicast, and other features that consume TCAM and memory. Route summarization and an efficient IP addressing scheme can help reduce route entries in TCAM.
  • Running both IPv4 and IPv6 requires even more fragmentation of the TCAM space; hence, it is important to maximize the total number of route entries in TCAM by keeping the number of VRF instances low. The ICX 6610 can be configured with up to 16 VRF instances, but at that point there would only be space for static route entries; hence, the recommendation is to deploy a maximum of 10 VRF instances.
  • In the case of the FastIron SX 800/1600, the maximum recommended number of VRF instances is 64, although the system allows up to 128 VRF instances. The difference leaves headroom for other CPU-intensive applications that may be running, such as MSTP, OSPF, or SNMP.
  • A Layer 2 port can be part of one or more VLANs (as a tagged or untagged port), but a Layer 3 interface can only be in a single VRF instance.
  • VRF is a Layer 3 feature, and only Layer 3 interfaces are configured in a VRF instance. No physical ports can be configured with Multi-VRF on the ICX 6610 and FCX product families. The FastIron SX 1600/800 allows external routers without VRF support to join a Multi-VRF instance via a physical interface without 802.1Q tagged ports configured.
  • The Default VRF cannot be configured as the Management VRF.
  • Multi-VRF scalability is limited by the hardware resources available in the platform and directly impacts the performance of the virtualized device.

 

Best Practices

It is important to define the business and technical requirements, and understand the expectations; this includes budgetary concerns, network services, and security implications.

 

Most of these aspects are addressed by the HyperEdge Architecture in terms of management touch points in the network, simplicity, future-proofing the initial investment in entry-level switches, and distribution of services across a mixed stack; see the HyperEdge Architecture Overview and Design Guide. Multi-VRF complements the solution by isolating and separating traffic within the network infrastructure without the need to build parallel networks for each type of traffic that needs to be separated.

 

FastIron 8.0.00 and later releases support the following three transport options, which should be carefully considered when scaling a virtualized network. The transport interconnects the virtualized network switches and routers and the mixed stack domains.

  • GRE tunnels are an excellent choice but restrict the path between the end points while increasing the administration and configuration complexity as the network grows.
  • Separate interfaces provide an excellent physical separation with increased bandwidth per VRF but increase the use of network resources and administration by having more links to terminate, configure, and maintain.
  • 802.1Q integrates all of the VLANs that are part of VRF instances into a single interface. This is likely the most common option because it is relatively easy to configure and to add bandwidth to.

 

A fixed configuration switch, such as the ICX 6610, can support up to 10 VRF instances; the software architecture allows for up to 16 VRF instances, but that is not practical due to the restriction on route entries available per VRF. The table below summarizes VRF instance support by hardware platform.

MultiVRF_VRFSupportByHW.jpg

   VRF Support by Hardware Platform

 

It is recommended to program the maximum forwarding table size for all of the VRF instances. Failure to do so may cause errors when IP routes are being added to the VRF instance routing tables; the error messages may include words to the effect of “not enough hardware resources”.
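
As a minimal sketch, the system-max commands below size the route tables for the Default VRF and for user VRF instances. The keyword forms follow the figure captions later in this guide and the example configuration; the values are illustrative assumptions that should be derived from the expected route count per VRF.

----------
! Size the Default VRF route table (value is illustrative)
system-max ip-route-default-vrf 2000
!
! Size the per-user-VRF route table (assumed keyword form; value is illustrative)
system-max ip-route-vrf 1024
!
! Corresponding ip6 forms exist for IPv6, per the tables in this guide.
! system-max changes typically require a write memory and reload to take effect.
----------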

 

The FastIron SX 1600/800 platform has ample hardware resources to support up to 64 VRF instances. On this platform, it is a good practice to program the maximum size for the route tables.

 

Forwarding table size should not be the only consideration; some routing protocols, such as OSPF, are CPU intensive, and a large number of neighbors can seriously impact performance. Therefore, the design must consider how to minimize the number of VRF instances. Characteristics to watch for are the number of routers/neighbors in an OSPF area, the type of links, and the likelihood of link flaps, which trigger the CPU to run the Dijkstra (SPF) algorithm and consume processing time.

 

It is a good practice to create a Management VRF to ensure that all management traffic is forwarded to the correct receivers of that traffic. By default, if a Management VRF is not created, the Default VRF and the out-of-band management port receive the management traffic. This may weaken security, since management traffic is forwarded on the same VRF as all of the ports and VLANs that are part of the Default VRF.

 

The Management VRF should not be configured on the default VRF. Create a separate VRF instance and leave the default VRF for the infrastructure traffic and all ports that do not belong to any VRF.

 

Multi-VRF Routing Protocol Support

The following IP routing protocols are supported:

MultiVRF_IPProtocolSupportVRF.jpg 

   IP Protocol Support with Multi-VRF

Configuring Multi-VRF

Configuring Multi-VRF requires only a few commands, as shown below. Please refer to the ICX and FastIron Configuration Guide, as well as the release notes, for more information.

 

  1. Create the VRF instances needed, based on requirements and expected performance:

vrf vrf_name
 rd IP_address:ID or ASN:ID

  2. Define the address family:

address-family ipv4 | ipv6

  3. Assign a virtual interface (ICX 6610, FCX) or a physical interface (FSX) to the VRF:

vrf forwarding vrf_name

  4. Configure the required IP routing protocol instance or multicast routing under each user VRF:

Static routes, static ARP, IPv6 neighbors, IPv4/IPv6 multicast, OSPFv2/v3, RIP, BGP

These commands are standard configuration commands under each VRF; a consolidated sketch follows.
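
The following is a consolidated sketch of steps 1 through 4 for a single user VRF. The VRF name, RD, interface, addresses, and the use of an OSPF instance (via the router ospf vrf form listed in the protocol-support table) are illustrative assumptions.

----------
! Steps 1 and 2: create the VRF and its IPv4 address family
vrf red
 rd 1.1.1.1:1
 address-family ipv4
 exit-address-family
exit-vrf
!
! Step 3: place a virtual interface in the VRF
interface ve 10
 vrf forwarding red
 ip address 10.10.10.1 255.255.255.0
!
! Step 4: run a routing protocol instance inside the VRF (assumed form)
router ospf vrf red
 area 0
----------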

 

 

Maximum Configurable Parameters

The system monitors the maximum hardware resources (routes) allocated to each VRF instance for both address families, IPv4 and IPv6, and will prevent a new VRF instance from being added if the available hardware resources for routes would be exceeded. To avoid being unable to create a VRF instance, the designer should estimate the maximum number of routes expected for each VRF instance. The table below shows how TCAM space is allocated before any user VRF is configured; the actual CLI commands are shown in parentheses.

 

MultiVRF_AvailTCAMSpaceBeforeMVRF.jpg

   Available TCAM Space before Adding New VRF Instances

 

 

The following table shows the TCAM space after two VRF instances for the IPv4 and IPv6 address families have been configured. Note that the Default VRF TCAM route table space is reduced accordingly. The required CLI is included in parentheses, and the configurable parameters are provided in a table later in this section.

MultiVRF_SystemMaxIPAfterConfi.jpg

   System-max IP/IP6-default-vrf and IP/IP6-route-vrf after configuration

 

 

The table below shows the range of configuration parameters for Multi-VRF based on the hardware platform.

MultiVRF_MVRFConfigParameterRanges.jpg

   Multi-VRF Configuration Parameter Ranges by Hardware Platform

 

Example Use Case

The following use case depicts a network at a shopping mall where two shops have exactly the same requirements:

  • Retain their existing IP address space (duplicate IP addresses)
  • Keep both networks isolated from each other
  • Never allow traffic to leak from one network into the other
  • No need for new hardware (keep costs down)
  • No additional devices to administer and maintain (reduce TCO)

 

The expected behavior is for customers to visit the shops and browse their respective web pages to inspect the product catalogs. Note that both stores use identical IP addresses for their servers.

 

MultiVRF_UseCaseConfiguration.jpg

   Multi-VRF Use Case Configuration

 

Tea Shop VLANs

 

----------

vlan 10 by port

      untagged ethe 3/1/11

      router-interface ve 10

vlan 11 by port

      untagged ethe 3/1/23

      router-interface ve 11

----------

 

Wine Shop VLANs

 

----------

vlan 20 by port

      untagged ethe 4/1/11

      router-interface ve 20

vlan 21 by port

      untagged ethe 4/1/23

      router-interface ve 21

----------

 

Note that clients on VLAN 10 and VLAN 20 use the same IP address space. Also, servers on VLAN 10 and VLAN 20 share the same IP addresses.

Set Maximum Route Table Size for the Default VRF

system-max ip-route-default-vrf 2000

Configure the RED VRF Instance:

vrf red

rd 1.1.1.1:1

address-family ipv4

exit-address-family

exit-vrf

Configure the BLUE VRF Instance:

vrf blue

rd 1.1.1.1:2

address-family ipv4

exit-address-family

exit-vrf

 

Assigning Layer 3 ports to a user VRF:

!

interface ve 10

vrf forwarding red

ip address 192.164.24.1 255.255.255.0

!

interface ve 11

vrf forwarding red

ip address 11.1.1.1 255.255.255.0

!

interface ve 20

vrf forwarding blue

ip address 192.164.24.1 255.255.255.0

!

interface ve 21

vrf forwarding blue

ip address 11.1.1.1 255.255.255.0

!

 

Notice that both businesses are using the same IP address space and both shops have their catalog server traffic isolated from each other.

Monitoring a VRF network

Show commands accept a VRF argument; without one, the output reflects the Default VRF.

----------

show vrf              Displays the configured user VRFs with RD and interfaces per address family

show vrf red          Details of the red VRF configuration

show vrf blue         Details of the blue VRF configuration

show vrf detail       Detailed information about the user VRFs, with router ID and interface status

show management vrf   Displays detailed statistics for management traffic

----------
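
Beyond the show vrf commands above, per-VRF routing tables can be checked directly; the sketch below assumes the vrf keyword is accepted by the route display command, using the red and blue VRF names from the example.

----------
FastIron-Router#show ip route vrf red
FastIron-Router#show ip route vrf blue
----------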

 

Also, check the status of the mixed stack:

----------

ICX6610-48 Router#show stack

alone: standalone, D: dynamic config, S: static config

ID   Type         Role Mac Address    Pri State   Comment

1  S ICX6610-48   active 748e.f890.2bf8 128 local   Ready

2  S ICX6610-24F  standby 748e.f834.8408   0 remote Ready

3  S ICX6450-48P  member 748e.f882.e9c0   0 remote  Ready

4  S ICX6450-48P  member 748e.f883.18a0   0 remote  Ready

    active       standby

     +---+        +---+

  2/1| 1 |2/6==2/1| 2 |2/6

     +---+        +---+

 

    active

      ---         +---+        +---+

     ( 1 )3/1--2/1| 3 |2/2--2/1| 4 |2/2

      ---         +---+        +---+

Standby u2 - No hitless failover. Reason: hitless-failover not configured

Current stack management MAC is 748e.f890.2bf8

Note: no "stack mac" config. My MAC will change after failover.

----------

 

The ping and traceroute commands are VRF-aware, which means they accept a VRF argument. If no VRF argument is given, the commands execute in the Default VRF. These commands can also help monitor the network.

----------

FastIron-Router#ping vrf customer-1 IP_Address

FastIron-Router#traceroute vrf customer-1 IP_Address

----------

 

Appendix A: Full Configuration for a Mixed Stack with Multi-VRF

 

The following shows a complete configuration of a mixed stack with two VRF instances configured.

----------

ver 08.0.00b226T7f3

!

stack unit 1

  module 1 icx6610-48-port-management-module

  module 2 icx6610-qsfp-10-port-160g-module

  module 3 icx6610-8-port-10g-dual-mode-module

  priority 128

  stack-trunk 1/2/1 to 1/2/2

  stack-trunk 1/2/6 to 1/2/7

  stack-port 1/2/1 1/2/6

  peri-port 1/3/1

stack unit 2

  module 1 icx6610-24f-sf-port-management-module

  module 2 icx6610-qsfp-10-port-160g-module

  module 3 icx6610-8-port-10g-dual-mode-module

  stack-trunk 2/2/1 to 2/2/2

  stack-trunk 2/2/6 to 2/2/7

  stack-port 2/2/1 2/2/6

stack unit 3

  module 1 icx6450-48p-poe-port-management-module

  module 2 icx6450-sfp-plus-4port-40g-module

  default-ports 3/2/1 3/2/2

  stack-port 3/2/1 3/2/2

  connect 1/3/1

  connect 4/2/1

stack unit 4

  module 1 icx6450-48p-poe-port-management-module

  module 2 icx6450-sfp-plus-4port-40g-module

  default-ports 4/2/1 4/2/2

  stack-port 4/2/1 4/2/2

  connect 3/2/2

stack enable

!

vlan 1 name DEFAULT-VLAN by port

!

vlan 10 by port

untagged ethe 3/1/11

router-interface ve 10

!

vlan 11 by port

untagged ethe 3/1/23

router-interface ve 11

!

vlan 20 by port

untagged ethe 4/1/11

router-interface ve 20

!

vlan 21 by port

untagged ethe 4/1/23

router-interface ve 21

!

system-max ip-route-default-vrf 2000

!

vrf red

rd 1.1.1.1:1

address-family ipv4

exit-address-family

exit-vrf

!

vrf blue

rd 1.1.1.1:2

address-family ipv4

exit-address-family

exit-vrf

!

no logging enable fan-speed-change

no logging enable ntp

ipv4-subnet-response

!

!

ipv6 max-mroute 64

!

!

interface ve 10

vrf forwarding red

ip address 192.164.24.1 255.255.255.0

!

interface ve 11

vrf forwarding red

ip address 11.1.1.1 255.255.255.0

!

interface ve 20

vrf forwarding blue

ip address 192.164.24.1 255.255.255.0

!

interface ve 21

vrf forwarding blue

ip address 11.1.1.1 255.255.255.0

!

end

----------
