I attended the Gartner Data Center conference in Las Vegas last week. I did the same in 2010 and 2011. Here’s a quick summary of what I observed then:
The conversation was no longer “what is cloud computing” or even “why cloud computing” but “what’s the best way to get there”.
SDN hadn’t yet become a term, but many IT organizations had enough experience with virtualization to be able to envision new possibilities based on a more “logical” type of IT…and the “how we get there” talk was pretty routinely multidisciplinary.
Converged infrastructure offerings (Cisco UCS, HP Matrix) had been in the market 18 months or less. Gartner had responded by coining the phrase “fabric-based infrastructure”. Brocade had just started shipping the first VDX switches for deploying Ethernet fabrics.
There were several “lessons learned” presentations given by large enterprises who were already underway with private cloud initiatives.
“Fabric-based infrastructure” offerings had matured and gained solid traction in several market segments. TRILL- and SPB-based Ethernet fabrics had debuted from a number of vendors…leading to considerable market confusion about what a fabric is.
Gartner articulated a view that in essence, a “fabric” is simply a high-speed, non-blocking network; there are no assumptions about architecture, topology, or protocols.
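Gartner's minimal definition leaves "non-blocking" doing most of the work, and that property is easy to make concrete. The sketch below uses a leaf-spine topology purely as one illustrative example (the definition itself is topology-agnostic); the port counts are hypothetical, not drawn from any vendor's spec.

```python
# A leaf switch is non-blocking when its uplink capacity toward the spines is
# at least equal to its downlink capacity toward servers, i.e. the
# oversubscription ratio is <= 1:1. Port counts below are illustrative only.

def oversubscription(server_ports: int, server_gbps: int,
                     uplinks: int, uplink_gbps: int) -> float:
    """Downlink-to-uplink capacity ratio for a single leaf switch."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 x 10G server ports over 4 x 40G uplinks: 480/160 = 3:1, blocking under load
print(oversubscription(48, 10, 4, 40))  # 3.0

# 16 x 10G server ports over 4 x 40G uplinks: 160/160 = 1:1, non-blocking
print(oversubscription(16, 10, 4, 40))  # 1.0
```

The point of the exercise: "fabric" in Gartner's sense is a capacity property you can check with arithmetic, not a protocol choice.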
Things seemed to be trending in a pretty clear direction in Gartnerland, then:
The future of personalized data delivery modules, c. 2011:
Fully converged infrastructure leveraging any past, present and future protocols to traverse the cloud.
But fast-forward two years, and instead, confusion reigned. Public cloud was consistently invoked as the bogeyman that might render enterprise IT irrelevant. IT Operations Management was discussed entirely through the lens of the incumbent Big 4 management vendors, with OpenStack mentioned only occasionally in passing and CloudStack not at all. (I’ve discussed these subjects in more detail elsewhere, for those interested.)
Better was a panel discussion/town hall on Software-Defined Data Centers. Here the Gartner analysts made the case to a rather skeptical audience that the foundations of SDDCs are now laid, and enterprises need to be planning pragmatically. Joe Skorupa took pains to distinguish between network automation and SDN: the latter requires control plane abstraction, and is not a new way to manage the same old things. He seemed excited about the possibilities for innovation offered by SDN, though elsewhere he expressed some frustration at the industry's lack of execution in defining and delivering concrete use cases or applications of SDN.

The panel also brought up the subject of writing hooks into enterprise applications to better inform the infrastructure of the apps’ needs. Skorupa made the excellent point that it wouldn’t make sense to rewrite a mature, mission-critical application for an SDN world: “Where your application is in its lifecycle will drive whether and when it fits into a software-defined context.” Tl;dr: “SDDCs” will emerge over time from a growing number of “software-defined” infrastructure pods allocated to specific types of applications.
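The "hooks" idea the panel raised can be sketched in a few lines. This is a hypothetical illustration only: no real SDN controller API is implied, and every name here (AppProfile, plan_pod, the field names) is invented for the example. The shape of the idea is that an application declares its needs once, and something downstream translates that declaration into a pod-level infrastructure policy.

```python
# Hypothetical sketch of "application hooks": the app describes itself, and a
# (purely illustrative) planner maps that profile onto a software-defined pod.
# None of these names correspond to a real product or API.

from dataclasses import dataclass


@dataclass
class AppProfile:
    """What an application declares to the infrastructure about itself."""
    name: str
    latency_sensitive: bool   # e.g. trade execution vs. batch analytics
    min_bandwidth_mbps: int
    isolation_required: bool  # needs its own segment/pod?


def plan_pod(profile: AppProfile) -> dict:
    """Toy policy: turn an app profile into a pod specification."""
    return {
        "app": profile.name,
        "qos_class": "low-latency" if profile.latency_sensitive else "best-effort",
        "reserved_mbps": profile.min_bandwidth_mbps,
        "dedicated_segment": profile.isolation_required,
    }


trading = AppProfile("trade-exec", latency_sensitive=True,
                     min_bandwidth_mbps=500, isolation_required=True)
print(plan_pod(trading))
```

This also illustrates Skorupa's lifecycle point: a new application can ship with such a declaration from day one, while retrofitting one onto a mature, mission-critical app may never be worth the risk.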
Perhaps the most interesting session was “5 Questions to Ask Your SDN Vendor”, which included many audience polling questions.
In an earlier, more general networking session, a polling question asked the audience of perhaps 200 people what their top criteria were in selecting networking solutions. In that session, “scalability and reliability” was the clear leader, with “management, agility and orchestration” coming in second with about half as many votes. Still, the analyst leading that session noted that management hadn’t even been on the radar two years earlier.
In the SDN session with an audience of about 60 people, it was the opposite: 45% voted for management and orchestration, with availability and scalability getting about 31%. Pricing got no votes and vendor relationship only 13%, which should hearten some smaller players while giving some pause to those betting on hardware commoditization.
A more interesting discussion was about who would lead the SDN charge within the enterprise. Skorupa pointed out that the best people to lead might not be those whose investments in certifications give them an incentive to favor incumbent vendors. He encouraged the audience to look seriously at 2-3 vendors. And then he asked the audience who within their own orgs would be the primary party to evaluate SDN solutions.
Network teams came out first, but not by a really large margin. The results may have been swayed slightly by the preceding commentary, but nonetheless, SDN is clearly understood to be an architectural transition with interdisciplinary ramifications, not a matter of new protocols.
And of course, on the small matter of the SDN rubber hitting the road:
You could really read this two ways: 60% of those few organizations interested enough in SDN to attend the session still see it as somewhere in the future. OR you could consider that for a technology trend barely two years old, the fact that 40% of session attendees are already working with it in some fashion is really extraordinary.
Overall, the conference presented a wildly uneven view of the state of enterprise IT, with both analysts and audiences all over the map in terms of awareness and embrace of emerging technology and architectural trends. Private conversations with a few analysts revealed clear differences of opinion on a number of key topics that have yet to be worked out within Gartner. The real weak point for me was the scant attention paid to the rise of open source in all areas of IT (beyond the realm of server OSes) and to the realm of data center orchestration. It seems these are being left to filter up from within existing infrastructure disciplines.
On the other hand, the last two years have seen an unusually rapid series of tectonic shifts in the overall technology landscape, and correspondingly, seemingly stable partnerships and business models are suddenly less certain. The area of IT that tends to be “stickiest”, or most resistant to change, is in fact the layer in which business processes are embedded, because humans are much harder to change than hardware. So perhaps it shouldn’t be surprising that discussions in the IT Operations arena lagged so far behind the rest. And SDDC lock-in, when it comes, will also be at the orchestration layer.