I was recently talking to a large retailer about how much IT infrastructure is wasted because they have to build to handle peak loads around the holidays. The percentages are staggering. Some retailers estimate that more than 50 percent of their infrastructure is built for these peak loads. Just imagine the cost savings if they could dynamically add resources during those peak periods.
While cloud bursting is often talked about in this context, I wonder if we will burst the cloud as every retailer vies for additional resources in the cloud at the same time. So, the question is “What can be done before enacting a full hybrid cloud model?”
A beginning step could be to redeploy data center resources to production applications and away from development during those peak loads. To support this type of resource redeployment, we will have to enable a broader range of virtual machine mobility than exists today. And to do that, we will need to build larger, flatter Layer 2 networks than current technologies such as Spanning Tree Protocol (STP) realistically allow.
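To make the STP limitation concrete, here is a minimal, hypothetical sketch (the topology and numbers are invented for illustration): in any redundantly cabled Layer 2 domain, classic STP prunes the active topology down to a spanning tree, so only N − 1 links forward traffic and every other link sits idle as a blocked backup.

```python
# Hypothetical illustration: fraction of link bandwidth STP leaves idle.
# In a Layer 2 domain with N switches and E physical links, classic STP
# keeps exactly N - 1 links forwarding (a spanning tree) and blocks the rest.

def stp_blocked_fraction(num_switches: int, num_links: int) -> float:
    """Return the fraction of links STP places in the blocking state."""
    active = num_switches - 1      # links that form the spanning tree
    blocked = num_links - active   # redundant links held in reserve
    return blocked / num_links

# Example: a full mesh of 8 switches has 8 * 7 / 2 = 28 links,
# but STP forwards on only 7 of them.
print(stp_blocked_fraction(8, 28))  # 0.75 -> three quarters of the links idle
```

The denser the fabric, the worse this gets, which is why flatter Layer 2 designs aim to keep all links forwarding rather than parking most of them in standby.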
Historically, data center networks have been deployed in a hierarchical, multitier fashion:
• Layer 2 terminated at the edge or in the access layer, for good reasons (apropos of my previous blog)
• Layer 3 at the distribution/aggregation layer
• Core routing protocols in the next layers of network infrastructure
This hierarchical, multitier approach has been the most prevalent and widely accepted way of designing, deploying, and managing data center networks.
While this approach provides the benefits of not having to deal with STP (for the most part), it imposes different challenges for the engineering and administration teams. First, this can be an expensive architecture as you continue to grow your data center network. One of the main reasons for this is that routing ports are more expensive than Layer 2 switching ports. It costs vendors more to build them and therefore more for customers to purchase them.
More compelling than the additional capital cost, however, is the ongoing operational expense of introducing Layer 3 in the edge/access layer, because it complicates network design, deployment, administration, and monitoring. Complexity equals ongoing administrative cost. For instance, each port in this hierarchical network can be running a number of finicky protocols, each with its own idiosyncrasies and associated best practices. This has two negative impacts on modern data centers:
First, this increases the number of management touch points, resulting in more administration required.
Second, it makes adding on-demand capacity a non-trivial exercise that must be carefully planned and choreographed. This limits the viability of building a truly dynamic data center, which is a cornerstone of private clouds and virtualized data centers.
The virtualized data center is one that requires an agile service delivery model, the ability to add network capacity and services on demand, and new levels of operational simplicity in network deployment, administration, and monitoring. It is no coincidence then that the notion of scaling out flatter Layer 2 networks resonates with network architects.
If IT organizations can create these flat Layer 2 networks with loop-free topologies, lightning-fast reconvergence times, and extremely efficient use of bandwidth, virtualized workloads will have a much larger range of mobility in the data center (remember, server virtualization clusters terminate at Layer 2 boundaries). Additionally, since converged storage traffic such as FCoE runs directly on Ethernet and is not routable over IP, a larger Layer 2 domain provides a larger domain for storage access, in which hundreds if not thousands of physical machines can access shared storage in a reliable and efficient manner.
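As a toy illustration of that mobility boundary (the host names and VLAN IDs below are invented, not any particular product's API): a workload can live-migrate only between hosts attached to the same Layer 2 broadcast domain, so the size of that domain directly bounds its migration range.

```python
# Hypothetical sketch: VM mobility is bounded by the Layer 2 domain.
# Each host is mapped to the L2 domain (e.g., a VLAN) its NICs attach to;
# a workload can move only among hosts in the same domain without
# re-addressing.

hosts = {  # hostname -> Layer 2 domain ID (illustrative values)
    "esx-01": "vlan-10",
    "esx-02": "vlan-10",
    "esx-03": "vlan-20",
    "esx-04": "vlan-10",
}

def migration_targets(src: str) -> set[str]:
    """Hosts the workload on `src` could migrate to within its L2 domain."""
    domain = hosts[src]
    return {h for h, d in hosts.items() if d == domain and h != src}

print(sorted(migration_targets("esx-01")))  # ['esx-02', 'esx-04']
print(sorted(migration_targets("esx-03")))  # [] -- stranded in its own domain
```

Flattening and enlarging the Layer 2 network effectively merges these domains, so every workload gains a larger set of valid migration targets.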
Brocade is striving to bring precisely these values to the virtualized data center. In my next blog, we will investigate the requirements posed by converged storage traffic and the value that these flat Layer 2 networks provide for shared storage access to servers in this new virtualized cloud data center.