
What the World Wants from Networking: An Impregnable Cathedral

by Curt.Beckmann on 03-14-2013 03:41 PM

I spend most of my work time with my head down (figuratively, anyway) focused on SDN technology details and working with many others who are savvy about frame layouts, standard protocols and configuration management.  Then in my personal life, when my neighbors ask about work, I exuberantly ramble that SDN has made networking exciting again.  They smile and nod, then comment on the weather. Apparently networking is not something they spend much time thinking about.   I don’t often get to hear from folks who actually think a fair bit about networking in terms of a service or utility rather than at the level of bits-and-bytes.  So I was thrilled recently to be able to hear new perspectives.

A few weeks ago, I attended the US-Ignite “Developer’s Workshop” in San Leandro, which was all about identifying and building cool apps to take advantage of “unconstrained” network bandwidth.  Two weeks ago I attended the 2013 TED conference in Long Beach.  Both events offered opportunities to hear what the larger world wants from networks.

[Image: TED 2013 session photo]

The US-Ignite event was a forward-looking "BarCamp"-style event with attendees ranging from application developers to researchers to vendors.  Much of the focus was on finding and highlighting applications that clearly demonstrate the value of gigabit bandwidth.  Google Fiber was a recurring theme, and indeed two Code for America developers were in attendance mere hours before their flights to Kansas City.  Distance learning, telemedicine, and other cool applications were discussed.  Interestingly, SDN and OpenFlow (usually treated as synonyms) were routinely mentioned in a kind of offhand way whenever some new complex functionality was anticipated or desired.  As in, "Of course we're all going to be videoconferencing constantly from our desktops and mobile devices, and OpenFlow will ensure we get the quality of service we need."

I was pleased that application developers fully expected to use a network-aware API to express their needs to the network, but then surprised that they saw the interface as pretty much unidirectional.  That is, the application says "I need this much bandwidth now", and (if the app is really nice) maybe "I need the bandwidth for XYZ minutes" or "for QRS gigabytes."  The network's expected response seemed to be limited to "Okay, gotcha" or, at worst, "Sorry, missed that… can you repeat please?"  I asked what the network should say when it's out of resources (as when each house in the neighborhood is watching 2 separate HDTV movies at the same time).  Can the network say "Gosh, I'm sorry, but I've only got half of that right now, but should have it at 8:30pm, which would you prefer?"  That question drew frowns and responses like "next gen networks should solve those problems for us," as if ubiquitous "unconstrained" end-to-end bandwidth were just a year or two away, even though we all know that provisioning more bandwidth prompts deployment of new bandwidth hogs.  Will Barkis of Mozilla, with his neuroscience background, piped up that we will quickly exceed human sensory bandwidth.  But look out! Here comes the Internet of Things to consume a lot more!
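To make the contrast concrete, here is a minimal sketch of what a two-way exchange might look like.  Everything in it (the BandwidthRequest and BandwidthOffer types, the negotiate function) is hypothetical; no standard "network orchestration" API like this exists, and the point is only that the return channel exists at all.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical sketch only: these types and this negotiate() call are
# invented for illustration, not any real or standard API.

@dataclass
class BandwidthRequest:
    mbps: int                             # "I need this much bandwidth now"
    duration: Optional[timedelta] = None  # the polite app adds "for XYZ minutes"

@dataclass
class BandwidthOffer:
    granted_mbps: int                 # what the network can actually give right now
    full_rate_at: Optional[datetime]  # when the full request could be honored

def negotiate(request: BandwidthRequest, available_mbps: int) -> BandwidthOffer:
    """A network that answers with alternatives, not just 'okay, gotcha'."""
    if available_mbps >= request.mbps:
        return BandwidthOffer(granted_mbps=request.mbps, full_rate_at=None)
    # Out of resources: counter-offer what's available now plus a later slot,
    # i.e. "I've only got half of that right now, but should have it at 8:30pm."
    return BandwidthOffer(
        granted_mbps=available_mbps,
        full_rate_at=datetime.now().replace(hour=20, minute=30, second=0, microsecond=0),
    )

# The application, not the network, then decides: take the partial offer or wait.
offer = negotiate(BandwidthRequest(mbps=50, duration=timedelta(minutes=90)), available_mbps=25)
if offer.full_rate_at:
    print(f"Got {offer.granted_mbps} Mbps now; full rate at {offer.full_rate_at:%I:%M %p}")
```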

Here's where it got interesting.  Someone chimed in, "Why isn't this like electricity?  My utility company makes it easy to use more power!"  Ah, good point.  But when we use electricity we pay per kilowatt-hour, often at rates that vary by time of day.  If we paid our ISP that way, would we insist on guaranteed high bandwidth or bandwidth scheduling?  Could the ISP deliver that?  When will apps be smart enough to talk *with* networks and not just *over* them?  All this highlights our chicken-and-egg situation.  Providers would probably be happy to charge for premium services, but how can they charge for them when most connections extend beyond any individual provider's control?  How can scheduling work when apps don't ask for what they need?  How can apps ask for what they need when there's no standard "network orchestration" service defined?  How will providers and vendors develop that standardized service when demand is fuzzy and interoperability is required before the first dollar is spent?  Although I've had technical conversations with colleagues about many of these issues, those discussions usually miss the system-level economic and deployment challenges.  The workshop was very effective at getting each of the attendees thinking outside of our familiar brain boxes.
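For the electricity analogy, a quick back-of-the-envelope sketch shows why metered, time-of-day bandwidth pricing would change incentives.  The rates, peak hours, and usage figures below are all invented for illustration; no ISP billed this way.

```python
from typing import Dict

# Hypothetical time-of-day metering for bandwidth, by analogy with electricity.
# All rates and usage figures here are assumptions made up for this sketch.

PEAK_HOURS = range(17, 23)  # 5pm-11pm, when every house streams two HDTV movies
RATE_PER_GB = {"peak": 0.12, "off_peak": 0.04}  # assumed dollars per gigabyte

def metered_bill(usage_by_hour: Dict[int, float]) -> float:
    """Total a month's transfer (GB keyed by hour of day) at time-varying rates."""
    total = 0.0
    for hour, gigabytes in usage_by_hour.items():
        rate = RATE_PER_GB["peak" if hour in PEAK_HOURS else "off_peak"]
        total += gigabytes * rate
    return total

# Shifting the big transfers off-peak cuts this bill by over 40 percent,
# exactly the incentive that would make bandwidth scheduling worth asking for.
evening_heavy = {20: 300.0, 21: 300.0, 3: 50.0}   # GB moved at 8pm, 9pm, 3am
night_shifted = {20: 100.0, 21: 100.0, 3: 450.0}
print(f"evening-heavy: ${metered_bill(evening_heavy):.2f}")  # 600*0.12 + 50*0.04 = $74.00
print(f"night-shifted: ${metered_bill(night_shifted):.2f}")  # 200*0.12 + 450*0.04 = $42.00
```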

In contrast to US-Ignite, this year's TED Conference was not networking focused.  But, uncharacteristically, it did have more than one Internet-focused presentation.  Danny Hillis spoke on the vulnerability of the Internet to failure caused by either a cyber attack or inadvertent mistakes.  Given our world's growing dependence on the Internet, he asserted that we need a "Plan B" network that critical services (hospitals, first responders, etc.) could rely on in the event of an Internet outage.  When asked, he said that Plan B wouldn't be cheap, perhaps a few hundred million dollars, but worth it.  He also said, "Of all the problems at this conference, this is probably the very easiest to fix." (Well, maybe… or maybe not.)

Vint Cerf, recognized Internet ancestor, who participated in a session on interspecies Internet, got a chance to comment on Hillis' Plan B idea.  He essentially dismissed the concern as "unlikely," saying that the Internet had been around for more than 30 years without much trouble.  This came only minutes after Cerf had mentioned that IPv6 had launched just within the past year.

[Image: Sagrada Familia Cathedral]

My own sense is that both speakers, in compressing their comments for time, and simplifying them for a lay audience, glossed over a lot.  When Hillis suggested that Plan B would need all-new protocols, completely different from existing IP stacks, he skirted past the fact that dependency on IP as a protocol is ubiquitous.  A dozen implementation challenges popped up in my mind, and I quickly imagined that Hillis’ cost estimates might be low by a factor of ten or more (though perhaps some clever approach might mitigate this).  But in contrast to Cerf, I felt that Hillis was right about our risky (and growing) dependence.  A major Internet outage might be viewed as similar to a meteor impact or mutant supervirus pandemic: unlikely but potentially catastrophic and worthy of consideration.

It was clearly not the point of US-Ignite or TED to sort out all the issues that were raised.  Instead, these events served to raise awareness and catalyze innovation and progress.  I personally valued the big picture perspectives that I got from TED and US-Ignite, and the way that they reminded me that as we lay bricks we must not lose sight of the eventual cathedral.

(Image credits: TED session photo by James Duncan Davidson/TED; Sagrada Familia Cathedral by Bernard Gagnon via wikimedia.org)

Comments
on 03-15-2013 06:13 AM

I wonder about the feasibility of the 'application tells the network what it needs' model, in general. We've had lots of opportunities for applications to communicate with the network to alter its behavior, but the only example I can find that has really worked is NAT traversal, and that's a small, well-bounded problem space. It seems as though the complexity tradeoff always ends with the decision not to have the network and application depend on each other. The network becomes somewhat less likely to interfere with the application, usually by providing more bandwidth, occasionally by adapting itself in some reactive way, and the application improves its ability to deal with the network's limitations.

That's not to say we won't ever find a place where well-orchestrated interaction between the network and some set of applications would be beneficial to both.  I think that the availability of a cross-vendor, cross-platform standard for that interaction will lower the complexity hurdle by quite a bit.  But it's still complex, and regardless of how it's implemented, it provides yet another point of dependency and potential breakage.  The most robust system seems to be one in which the applications and the network each go their own way.