by bensons 12-01-2016 11:53 AM - edited 12-01-2016 12:04 PM
As the saying goes, “it is difficult to make predictions, especially about the future.” And yet, around this time of year, it has become a ritual of sorts for some of us in the IT industry to attempt to do exactly this. As the calendar approaches its end each year, we look back on the events that have unfolded like a whirlwind since the new year began, and we try to make sense of them all.
by David Meyer 08-19-2016 06:00 AM - edited 08-23-2016 05:53 AM
It is worth making the goal here explicit: We know that we can analyze the diverse data sets generated by networks using a variety of algorithms that will give us novel insights into the design and operation of end-to-end service stacks. In particular, by understanding the structure of our data sets, we can classify current (and past) events, understand hidden relationships, and to some extent predict the future. Among other things, we would like to use this new knowledge to enable a deeper level of automation, essentially automating what we currently think of as automation. Clearly these are worthy goals, but how might we accomplish this vision?
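As a toy illustration of the kind of unsupervised analysis described above — finding hidden structure in network data sets — the sketch below clusters invented per-flow metrics (latency in milliseconds, loss percentage) with a plain k-means loop. Everything here is hypothetical: the metric names, the synthetic data, and the deterministic initialization are assumptions for the example, not anything from the post; real telemetry would come from the network's own measurement pipeline.

```python
def kmeans(points, k, iters=20):
    """Basic k-means over tuples of floats: returns (centroids, labels).

    Initialization is deterministic (evenly spaced samples) so the toy
    example is reproducible; a real system would use something like
    k-means++ and a proper convergence test.
    """
    centroids = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid
        # by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(
                    sum(dim) / len(members) for dim in zip(*members)
                )
    return centroids, labels

# Two synthetic regimes of flows: "healthy" and "degraded".
healthy = [(10 + i % 3, 0.1) for i in range(10)]
degraded = [(80 + i % 5, 2.5) for i in range(10)]
centroids, labels = kmeans(healthy + degraded, k=2)
# Flows from the same regime should land in the same cluster,
# surfacing the hidden relationship without any labels.
print(len(set(labels[:10])), len(set(labels[10:])))  # → 1 1
```

The same shape of analysis — cluster first to discover regimes, then attach labels to classify future events — is one concrete reading of "classify current events, understand hidden relationships, and to some extent predict the future."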
We are now in the second iteration of NFV – maybe not from a standards perspective, but certainly in terms of vendor development and investment. I thought it made sense to share a blog on where NFV has been, where it is now, and where it is going from a technology standpoint, and to open the floor for debate.