Here we are again with another fantastic Brocade NetIron software release that, as usual, includes some critical features for the SP and REN (Research & Education Network) marketplace. This blog highlights the OpenFlow-related features in the NetIron R5.6 software, which is now shipping in a controlled release, with full GA coming very soon.
OpenFlow is rapidly being adopted as the de facto southbound API for the emerging Software Defined Networking (SDN) architecture. This should come as no surprise to our SP audience, as Brocade has been promoting the OpenFlow protocol and its place in SDN architectures for more than three years. We have been well ahead of this trend and are actively accelerating the adoption of OpenFlow and SDN technologies.
A bit of history to refresh the audience on where we have been with OpenFlow: initial OpenFlow v1.0 support became generally available in NI R5.4, over a year ago. This included support for Layer-2 or Layer-3 matching rules, a “hybrid switch mode” capability, and the ability to scale OpenFlow rule entries into the 4k range. In NI R5.5, Brocade upped the ante for OpenFlow/SDN solutions with a “hybrid port mode” capability. That capability is explained here and is a key enabler for the Internet2 Advanced Layer 2 Service (AL2S) architecture.
In NI R5.6 we are once again upping the ante by adding MPLS/VPLS support for hybrid port mode, dramatically increasing the scalability of OpenFlow entries to the 128k range, and increasing the matching capability to include simultaneous Layer-2 plus Layer-3 matching (L2 + L3) rules.
VPLS Hybrid Port Mode
MPLS/VPLS support in hybrid port mode allows VPLS endpoints to be included on an OpenFlow hybrid port; previously, only IP endpoints were supported on hybrid ports. Adding VPLS support allows network operators to provide OpenFlow and VPLS services on the same router port, letting these networking technologies co-exist on the same physical infrastructure, each in its own logical network overlay. Another way of viewing this is that the IP infrastructure is really the underlay network, with an OpenFlow logical overlay and a VPLS logical overlay sitting on top of that physical infrastructure. This hybrid port capability is also provided in a “protected mode” fashion, where the operator determines which VLANs on the port are subject to OpenFlow rules and which are not (i.e., they are protected). This allows incremental deployment of an OpenFlow-based SDN solution with no risk to the underlying IP infrastructure, which is already providing service to users and cannot be impacted.
The basic flow control works as follows: traffic arriving on a protected VLAN bypasses the OpenFlow flow table entirely and is handled by the normal forwarding pipeline, while traffic on the remaining VLANs is subject to the OpenFlow rules.
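The decision logic above can be sketched roughly as follows. This is an illustrative model only, not NetIron code; the VLAN numbers and function name are hypothetical:

```python
# Illustrative model of hybrid-port "protected mode" VLAN handling.
# Not NetIron code: VLAN numbers and names here are hypothetical.

PROTECTED_VLANS = {100, 200}      # VLANs carrying production IP/VPLS traffic
OPENFLOW_VLANS = {300, 301, 302}  # VLANs handed to the OpenFlow flow table

def classify_frame(vlan_id: int) -> str:
    """Decide which pipeline handles a frame arriving on a hybrid port."""
    if vlan_id in PROTECTED_VLANS:
        # Protected VLANs bypass OpenFlow entirely; the normal
        # IP/MPLS/VPLS forwarding pipeline handles the frame.
        return "legacy-pipeline"
    if vlan_id in OPENFLOW_VLANS:
        # Unprotected VLANs are matched against the OpenFlow flow table.
        return "openflow-table"
    # Anything else follows the port's default behavior (assumed drop here).
    return "drop"
```

The key point is that the protected check happens first, so production traffic is never exposed to the OpenFlow rule set.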
OpenFlow Entry Scaling
The increased scaling for OF entries depends on the platform and line card. On the MLX platform the scale increases to 64k OF entries per system, and on the XMR platform it increases to 128k OF entries per system. This includes L2 entries, L3 entries, and L2+L3 entries. Note that L2+L3 OF entries on the MLX and XMR are supported on the following line cards: 8x10GbE, 4x40GbE, and 2x100GbE.
On the CER platform, the scale for L2 entries goes to 32k and the scale remains the same on the CES platform, at 4k OF entries. On both the CES and CER platforms, the scale for L2+L3 rules is 2k OF entries per system. With the addition of L2+L3 support on the CES/CER platforms, there is no need to add explicit configuration for L3 only, as it is now implicit in the L2+L3 feature.
In addition to the per-system scaling numbers, the operator must be aware of per-line-card scaling numbers. For example, on the XMR platform with the 8x10-X GbE line card, the scale increases to 64k OF entries for L2 or L3 and 52k OF entries for L2+L3. Keep both the per-system and the per-line-card scale increases in mind to fully understand the level of scale your device will support. For a complete matrix of per-system and per-line-card scaling for OF entries, please refer to the Software Defined Networking Guide for R5.6.
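In other words, the effective capacity is bounded by both limits: the sum of the per-line-card limits and the per-system ceiling, whichever is lower. A minimal sketch, using the XMR L2-or-L3 figures quoted above (the helper function itself is hypothetical, not a NetIron sizing tool):

```python
# Effective OpenFlow entry capacity is bounded by the per-system limit
# and by the sum of the per-line-card limits of the installed cards.
# The figures are from the text (XMR platform, L2-or-L3 entries).

SYSTEM_LIMIT_L2_OR_L3 = 128_000           # XMR per-system limit
CARD_LIMIT_L2_OR_L3 = {"8x10-X": 64_000}  # per-line-card limit

def effective_capacity(installed_cards: list[str]) -> int:
    """Lower of the system ceiling and the summed line-card limits."""
    per_card_total = sum(CARD_LIMIT_L2_OR_L3[c] for c in installed_cards)
    return min(SYSTEM_LIMIT_L2_OR_L3, per_card_total)
```

With a single 8x10-X card the line card is the bottleneck (64k); with three such cards the per-system limit caps the total at 128k rather than 192k.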
A few use cases where this increased scale becomes particularly critical are Network Access Control (NAC) and Network Function Virtualization (NFV) solutions. In an SDN-based NAC solution deployed on a large university or enterprise campus, there may be tens of thousands of users, with one or more OpenFlow rule entries per user. For example, a user may initially be redirected to a captive portal using a single OF rule; once the user is successfully authenticated and granted access to network resources, there may be a single OF entry allowing the user to forward packets to all destinations, or the user may be granted access to only portions of the network resources using multiple OF entries (i.e., a walled garden). In an NFV example, a service provider may be virtualizing its Broadband Remote Access Server (BRAS) solution, which may require OpenFlow rule entries in access switches to forward packets (i.e., steer traffic) to the various Virtual Machines (VMs) providing the service. A typical BRAS provider has many tens of thousands of users, and each user may need more than one OF entry to properly steer packets to the appropriate services.
Simultaneous OpenFlow L2 + L3 Matching
Simultaneous Layer-2 + Layer-3 matching rules enable extended matching criteria on OpenFlow-enabled ports, whether they are dedicated OpenFlow ports or hybrid OpenFlow ports. Think of extended Access Control Lists (ACLs): you are no longer restricted to matching on only an L2 header or only an L3 header; fields from both headers can be combined in a single rule.
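For context, the standard OpenFlow v1.0 match set is a 12-tuple of header fields. Assuming NetIron exposes the conventional v1.0 fields (the original post listed the supported fields in a table that is not reproduced here), a rule can be classified by which groups its match touches:

```python
# Standard OpenFlow v1.0 match fields, split loosely into L2-side and
# L3-side groups (in_port and the transport ports are grouped with
# their nearest layer here for simplicity). Assumed, not NetIron-specific.
L2_FIELDS = {"in_port", "dl_src", "dl_dst", "dl_type", "dl_vlan", "dl_vlan_pcp"}
L3_FIELDS = {"nw_src", "nw_dst", "nw_proto", "nw_tos", "tp_src", "tp_dst"}

def rule_kind(match_fields: set[str]) -> str:
    """Classify a rule as L2, L3, or combined L2+L3 by its match fields."""
    has_l2 = bool(match_fields & L2_FIELDS)
    has_l3 = bool(match_fields & L3_FIELDS)
    if has_l2 and has_l3:
        return "L2+L3"
    if has_l2:
        return "L2"
    if has_l3:
        return "L3"
    return "none"
```

The R5.6 change is that the "L2+L3" case is now a single hardware rule rather than something that had to be approximated with separate L2-only or L3-only entries.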
This extended matching capability is useful in scenarios such as the NAC example described earlier, where rules may initially match on L2 fields and then, based on user authentication results, be changed to match on L2 + L3 fields for various forwarding actions. For example, match on the authenticated user's source MAC address in the L2 header plus one or more fields in the L3 header to determine the forwarding action. The use cases for this extended matching capability are numerous.
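That NAC rule lifecycle can be sketched as two rule-building helpers. Field names follow OpenFlow v1.0 conventions; the controller-side functions and action names are hypothetical illustrations, not a real controller API:

```python
# Sketch of the NAC rule lifecycle described above: an L2-only rule
# before authentication, replaced by an L2+L3 rule afterwards.
# Function and action names are hypothetical.

def pre_auth_rule(user_mac: str) -> dict:
    # Before authentication: match only the user's source MAC (L2 rule)
    # and redirect all of their traffic to the captive portal.
    return {"match": {"dl_src": user_mac},
            "action": "redirect-to-portal"}

def post_auth_rule(user_mac: str, allowed_subnet: str) -> dict:
    # After authentication: match source MAC *and* destination subnet
    # (a simultaneous L2+L3 rule) and forward normally. A "walled
    # garden" would install several such rules, one per permitted subnet.
    return {"match": {"dl_src": user_mac, "nw_dst": allowed_subnet},
            "action": "forward"}
```

The per-user multiplication here (one rule before auth, one or more after) is exactly why the 64k/128k entry scaling discussed earlier matters for campus-sized NAC deployments.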
Before wrapping up this short blog, I should (re)emphasize that all of these OpenFlow capabilities are hardware-based, so there is no performance impact whether you are doing IP, MPLS, or OF forwarding. They are all implemented in what some industry folks call the “fast path” forwarding pipeline. Speaking of which, there is actually no “slow path” forwarding pipeline for data-plane packets in the NetIron product portfolio. One of my colleagues recently joked that “slow path” really means “no path” in today's high-performance networking world.
Well, I hope this blog helped explain the OpenFlow-related features and benefits of NetIron Software Release 5.6. Networking is once again becoming “cool” with all the developments and activity around OpenFlow and Software Defined Networking!