Innovation: Multi-Chassis Trunking with VPLS

by pmoyer on 01-28-2012 01:24 PM

This is a great opportunity for me to introduce a really cool and highly anticipated feature that is part of the Brocade NetIron 5.3 Software Release. The official release date for this software is sometime next week, but because you are part of this awesome SP community, you get a sneak peek!

While 5.3 contains many new innovative features that our SP customers have been clamoring for, I thought I’d pick one in particular and write a bit about it here. The feature is Multi-Chassis Trunking integration with Virtual Private LAN Service, or MCT w/VPLS for short.

MCT

First, a short background refresher on what problem MCT solves.  (BTW: Brocade has been supporting MCT for well over a year now.)

Brocade developed MCT to provide a layer-2 “active/active” topology in the data center without the need to run a spanning-tree protocol (STP). STP has traditionally been used to prevent layer-2 forwarding loops when there are alternate paths in a layer-2 switched domain. However, STP has its issues in terms of convergence, robustness, scalability, etc. Orthogonal to STP, link aggregation (IEEE 802.3ad) is also often deployed to group or bundle multiple layer-2 links together. The advantages of link aggregation are:

  • Multiple physical links act as a single logical link
  • Bandwidth is the aggregate of all the links in the group
  • Traffic is shared across the links in the group (see the sketch after this list)
  • Failure of a link in the group is handled in a sub-second manner
  • Provides link and slot resiliency

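To make the traffic-sharing bullet concrete, here is a minimal sketch of how a switch might pick one LAG member link per flow. The hash inputs and the CRC-based hash are illustrative assumptions, not NetIron's actual hashing algorithm.

```python
# Illustrative sketch only: flow-based selection of a LAG member link.
# The hash fields and the CRC function are assumptions for this example.
import zlib

def select_lag_member(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Hash the frame's address pair onto one of the LAG's member links."""
    key = f"{src_mac}-{dst_mac}".encode()
    return zlib.crc32(key) % num_links

links = 4
# Every frame of a flow hashes to the same link, preserving ordering,
# while different flows spread across the bundle.
print(select_lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", links))

# If a member link fails, the same hash runs over the surviving links,
# which is why failover stays local to the LAG and is sub-second.
print(select_lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", links - 1))
```
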
So, MCT leverages standards-based link aggregation but is capable of providing this “across” two switch chassis instead of just one. This is shown below.

[Figure: An MCT cluster presenting a single LAG to the attached device]

As you can see, there are two chassis that act like a single logical switch. This is called an MCT pair or cluster. The devices on either side of the MCT logical switch believe they are connected to a single switch. Standard LAG is used between these devices and the MCT logical switch. The advantage of doing this is that both switches in the MCT cluster are now functioning at layer-2 in an “active/active” manner. Both can forward traffic, and if one chassis fails, standard failover mechanisms for the LAG bundle take effect. In addition, no layer-2 loops are formed by an MCT pair, so no STP is needed!
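
As a rough illustration of why the attached device sees a single switch, here is a small sketch of the idea that both MCT peers present the same LACP system ID to the CE. The system ID value and the class names are assumptions for illustration, not Brocade implementation details.

```python
# Conceptual model: the CE only bundles links whose LACP partner looks the
# same, so the two MCT chassis sharing one (assumed) system ID is what makes
# them appear as a single logical switch.
from dataclasses import dataclass

@dataclass(frozen=True)
class LacpPartner:
    system_id: str   # what the CE learns from LACP PDUs on each member link

# Two physical chassis in the MCT cluster advertise one cluster-wide ID.
mct_peer_1 = LacpPartner(system_id="02:e0:52:00:00:01")   # illustrative value
mct_peer_2 = LacpPartner(system_id="02:e0:52:00:00:01")

ce_links = [mct_peer_1, mct_peer_2]
can_form_single_lag = len({link.system_id for link in ce_links}) == 1
print("CE forms one LAG across both chassis:", can_form_single_lag)  # True
```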

VPLS

Now, for a short background refresher on what VPLS provides. (BTW: Brocade has been supporting VPLS for many years now.)

VPLS provides a layer-2 service over an MPLS infrastructure. The VPLS domain emulates a layer-2 switched network by providing point-to-multipoint connectivity across the MPLS domain, allowing traffic to flow between remotely connected sites as if the sites were connected by one or more layer-2 switches. The Provider Edge (PE) devices connecting the customer sites provide functions similar to a switch, such as learning the MAC addresses of locally connected customer devices, and flooding broadcast and unknown unicast frames to other PE devices in the VPLS VPN.
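
For intuition, here is a minimal sketch of the switch-like behavior a PE provides: learn source MACs, forward known unicast, and flood broadcast and unknown unicast. The port names are made up for illustration; a real PE learns MACs against attachment circuits and pseudowires.

```python
# Minimal MAC learning and flooding model of a VPLS instance on one PE.
# Port names ("local-ce", "pe-2", "pe-3") are illustrative placeholders.
mac_table: dict[str, str] = {}           # MAC address -> port it was learned on
all_ports = ["local-ce", "pe-2", "pe-3"]

def forward(src_mac: str, dst_mac: str, in_port: str) -> list[str]:
    mac_table[src_mac] = in_port          # learn where the source lives
    if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in mac_table:
        # Broadcast or unknown unicast: flood everywhere except the ingress port.
        return [p for p in all_ports if p != in_port]
    return [mac_table[dst_mac]]           # known unicast: send to one port

# The first frame from the local CE is flooded; the reply returns as unicast.
print(forward("aa:aa:aa:aa:aa:01", "ff:ff:ff:ff:ff:ff", "local-ce"))
print(forward("bb:bb:bb:bb:bb:02", "aa:aa:aa:aa:aa:01", "pe-2"))
```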

MCT with VPLS

Very frequently, a customer network needs to provide layer-2 connectivity between multiple data centers, to enable VM mobility, for instance. The MCT w/VPLS feature I’m describing provides this type of connectivity in a redundant and highly available fashion. MCT provides the “active/active” layer-2 connectivity from the server farm or access layer to the core layer of the data center. The customer then leverages VPLS on the core-layer data center routers to transport the layer-2 Ethernet frames between data centers. This is shown below.

[Figure: Data centers interconnected over the VPLS cloud via MCT clusters]

In the diagram above, the CE switch uses a standard LAG to connect to the redundant MCT cluster. The same NetIron routers that form the MCT cluster are also configured to support VPLS to connect to the backbone network. So, the connection from the CE switch in one data center to a remote CE switch in another data center is a layer-2 service. VM mobility between the data centers is now provided in a redundant end-to-end fashion.

Fast failover in the VPLS network is provided by using redundant Pseudo-Wires (PWs), based on IETF Internet Draft <draft-ietf-pwe3-redundancy-03>. As shown below, each PE router signals its own PW to the remote PE. The local PE routers determine, based on configuration, which PE signals an active PW and which PE signals a standby PW. There is also a spoke PW signaled between the PE routers. If the active PW fails, the primary PE router signals the secondary PE router to bring up its standby PW, so failover is rapid.

[Figure: Active and standby pseudowires from the MCT PE pair to the remote PE]
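
To sketch the selection logic described above, here is a small model of one active and one standby PW, with failover to the standby when the active PW goes down. The PE names, precedence values, and the “lower precedence wins” convention are assumptions for illustration; the real behavior follows the router configuration and the redundancy draft’s signaling.

```python
# Illustrative active/standby pseudowire model; names and values are assumed.
from dataclasses import dataclass

@dataclass
class Pseudowire:
    pe: str
    precedence: int      # lower value wins the active role (assumed convention)
    up: bool = True

def active_pw(pws: list[Pseudowire]) -> Pseudowire:
    """Pick the active PW among those that are up; the rest stay in standby."""
    return min((pw for pw in pws if pw.up), key=lambda pw: pw.precedence)

pws = [Pseudowire("PE-1", precedence=100), Pseudowire("PE-2", precedence=200)]
print("Active PW on:", active_pw(pws).pe)    # PE-1 under normal conditions

# The active PW fails: PE-1 signals PE-2, which brings its standby PW active.
pws[0].up = False
print("Active PW on:", active_pw(pws).pe)    # traffic shifts to PE-2
```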

The benefits of this solution are:

  • Provides an end-to-end layer-2 service between data centers
  • Eliminates single points of failure
  • CE/Client requires only standard LAG (LACP or static)
  • End-to-end protection with pre-established redundant paths

So, as you can see, this is a really awesome capability for SPs who need to integrate their data center infrastructure with their MPLS/VPLS backbone network. We expect this solution to become a very common data center network architecture going forward for providing inter-data center layer-2 connectivity. I should also note that this solution works with Virtual Leased Line (VLL), in addition to VPLS. And, on top of that, it integrates with Ethernet Fabrics in the data center extremely well!

Stay tuned to this forum for more blogs like this.
