on 11-01-2013 05:29 PM - last edited on 10-23-2014 04:27 PM by Bill Dominick
[Note: I wrote this piece last weekend, because I knew that I’d be too busy to work on it this week. This explains the prematurely skeptical reference to the Red Sox.]
It’s a beautiful Sunday afternoon here in Silicon Valley. It’s been a good weekend for sports: Manchester United won (finally!), Sebastian Vettel claimed his fourth F1 Driver's Championship, and the Patriots came from behind to thrash Miami. And then there’s the Red Sox; oh well, three out of four isn’t bad. But all of those events are sitting on my DVR, because for the next week I’m focussed on one thing: preparing for the upcoming OpenStack Summit.
OpenStack networking is complicated. This is mostly because data center networking is going through a period of massive disruption on several fronts at once, leading to a combinatorial explosion of complexity. Overlay architectures, different kinds of tunneled underlay, the replacement of dedicated network equipment by software running in VMs, the emergence of controller-based SDN such as the OpenDaylight project, and spectacular performance improvements in merchant silicon and x86 processors: all of these have produced innovative products from startups and established vendors alike, and all of those vendors are keen to participate in OpenStack. Another source of complexity is that the OpenStack mission has been expanding from a simple EC2-style IaaS to include legacy data center automation and carrier NFV. Public clouds emphasize abstraction and multi-tenant isolation, features which matter less to other users of the technology, and it's challenging to develop abstractions and APIs which address all of these use cases. There is still a lively debate about which parts of OpenStack are "core" elements of every OpenStack system. (Indeed, the original Nova networking system is still the default; deprecation is planned for the upcoming Icehouse cycle.)
In this exciting and unpredictable environment, my team has been working on a project to manage some of the diversity. In our Dynamic Network Resource Manager (DNRM) Blueprint, we’re proposing a framework for managing the pool of physical and virtual network resources from multiple vendors. It borrows an idea from the OpenStack Nova scheduler: the use of a policy-based resource allocator that abstracts away the complexity of resource management, and allows each cloud operator to choose the resource allocation policy which fits their environment.
We’re demonstrating a proof-of-concept implementation of DNRM that uses the Brocade Vyatta vRouter, probably the most widely used virtual networking appliance. The DNRM resource manager uses Nova to provision a number of Vyatta virtual machines. Then a modified API handler in Neutron intercepts each client request to create an L3 Router, calls the policy-based DNRM allocator to find the best resource instance, examines the type of resource, and calls the appropriate driver (in this case the Vyatta driver) which talks to the VM to configure the vRouter. All of this can be viewed in the OpenStack Horizon dashboard; we've added a new panel which displays the state of the resource pool.
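To make the flow of the proof-of-concept concrete, here is a minimal sketch of the allocation path in Python. All of the class, method, and driver names are illustrative inventions, not the actual Blueprint API; the real implementation lives inside Neutron and talks to live Vyatta VMs.

```python
class Resource:
    """One entry in the DNRM pool, e.g. a Vyatta VM provisioned via Nova."""
    def __init__(self, resource_id, resource_type):
        self.resource_id = resource_id
        self.resource_type = resource_type   # e.g. "vyatta_vrouter"
        self.in_use = False

class DnrmAllocator:
    """Policy-based allocator over a pool of pre-provisioned resources."""
    def __init__(self, pool, policy):
        self.pool = pool        # resources provisioned ahead of demand
        self.policy = policy    # operator-chosen allocation policy

    def allocate(self, request):
        candidates = [r for r in self.pool if not r.in_use]
        resource = self.policy(request, candidates)
        if resource is None:
            raise RuntimeError("no free resource matches the request")
        resource.in_use = True
        return resource

# Per-resource-type drivers; the Vyatta driver would configure the vRouter
# over its management interface.  Stubbed out here.
DRIVERS = {
    "vyatta_vrouter": lambda res, req: "configured " + res.resource_id,
}

def create_l3_router(allocator, request):
    """Modified Neutron API handler: intercept the router-create request,
    ask the DNRM allocator for a resource, then dispatch to its driver."""
    resource = allocator.allocate(request)
    driver = DRIVERS[resource.resource_type]
    return driver(resource, request)
```

With a trivial "first free" policy, a router-create request claims an idle vRouter from the pool and hands it to the driver; a different policy can be swapped in without touching the handler or the drivers.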
The Blueprint explores a range of use cases that are supported by the DNRM framework. Several of Brocade's customers are particularly interested in the ability to allocate virtual appliances for dev/test networks and physical systems for production traffic, without changing any code. Others focus on the way it supports resources from multiple vendors, or the ability to choose specific resources to meet compliance requirements.
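The dev/test-versus-production use case comes down to the pluggable policy alone. A hypothetical policy of that shape might look like this; the `kind` attribute and the request fields are assumptions for illustration, not part of the Blueprint:

```python
from collections import namedtuple

# A pool entry tagged as a virtual appliance or a physical system.
Resource = namedtuple("Resource", ["resource_id", "kind"])

def env_aware_policy(request, candidates):
    """Prefer physical systems for production traffic and virtual
    appliances for dev/test, with no change to the calling code."""
    wanted = "physical" if request.get("env") == "production" else "virtual"
    return next((r for r in candidates if r.kind == wanted), None)
```

Swapping this in for the default policy is the only change the operator makes; the Neutron handler and the drivers never see the difference.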
Inevitably, a mechanism as comprehensive as DNRM overlaps with several projects within Neutron, including the FWaaS, LBaaS, and VPNaaS work. In recent weeks we’ve been meeting with many of the other contributors to OpenStack to thrash out the details of what a final architecture should look like. I’m looking forward to the Design Summit sessions in Hong Kong, which should lead to agreement on a program of work for the Icehouse release of OpenStack. It’s going to be complicated, for the reasons I’ve already mentioned, but I think this increasing complexity underlines the need to provide cloud operators with policy-based automation tools.
And when I get back from Hong Kong on the 10th, I'll see which of those sporting events I still want to watch!