
On-Demand Data Center Validation: What to Do with All that East-West Traffic

by mschiff on 09-26-2013 10:57 AM - last edited on 10-28-2013 01:23 PM by bcm1

OK, now let's start getting into the meat of the On-Demand Data Center. Over the course of this series I am going to break down several of the challenges that data center operators continually tell us keep them up at night, and then, more importantly, describe the validated real-world solutions that can make them rest a little easier. So what challenge are we looking at today? How about one caused by server virtualization?

 

“Wait, I thought server virtualization was supposed to solve my problems, not cause them. It’s the building block of the cloud!”

 

Yes, sure it is. Virtualization is great. It increases resource utilization, improves efficiency, reduces provisioning time… yadda yadda yadda… you have your cloud.  But if Seinfeld taught me anything, it’s that there’s a lot that can happen in the yadda.  In particular, server virtualization places several challenges on your data center network.  The focus of this entry is how to deal with the massive increase in east-west traffic that virtualization is causing.  So today, let’s go east-west, young man (or woman).

 

The traditional access/aggregation/core topologies have been around for a long time and are widely implemented in data centers everywhere.  However, with the rise in east-west traffic (~80% of traffic will be east-west by 2014¹, and about half of that crosses VLAN boundaries), the current model does not efficiently handle server-to-server routing.  Thanks to P2V migration, workloads are increasingly being spun up as VMs, consolidating the number of physical servers (~82% of server workloads will run in virtual environments in 2016²).  This presents an opportunity to significantly reduce core-bound traffic that was being routed within the data center to other servers, or in some cases back to the same server!  The extra hops needed for a VM to communicate with another VM in a different subnet can be greatly reduced by adding a router into the virtual environment alongside the VMs.
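To make the hop savings concrete, here is a minimal sketch comparing the two paths for two VMs in different subnets on the same physical server. The node names are hypothetical and the topology is the generic access/aggregation/core model described above, not any specific product's forwarding path:

```python
# Traditional model: inter-subnet traffic between two VMs on the SAME
# host hairpins up to the core (the Layer 3 boundary) and back down.
traditional_path = [
    "vm_a", "vswitch", "tor", "aggregation", "core",  # up to the L3 boundary
    "aggregation", "tor", "vswitch", "vm_b",          # and back down again
]

# Virtual-router model: an in-hypervisor router handles the subnet
# crossing, so the traffic never leaves the server.
virtual_router_path = ["vm_a", "vrouter", "vm_b"]

def hops(path):
    """Number of links traversed (nodes minus one)."""
    return len(path) - 1

print(f"traditional:    {hops(traditional_path)} hops")
print(f"virtual router: {hops(virtual_router_path)} hops")
```

Every hop the traffic avoids is latency saved and core/aggregation bandwidth freed up for the traffic that genuinely needs to traverse those layers.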

 

Let’s take a look at an example. The diagram below shows the traditional traffic pattern used for routing between VMs.  The green flow would be greatly improved by a virtual routing solution that allows intra-server routing, eliminating hops and the latency of sending the traffic all the way to the core.

 

[Figure: traditional traffic pattern for routing between VMs, with the green flow hairpinning through the core]

 

Now let’s say the traffic needs to cross server boundaries.  To fully optimize the traffic in this case, you will want a very efficient ToR solution with full Layer 1 multi-pathing capabilities, because you are doing the routing within the server layer. In the diagram below you can see both intra-server traffic flow with virtual routing (red) and inter-server traffic flow with virtual routing and multi-pathing at the ToR (blue).
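To illustrate the multi-pathing idea in general terms, here is a sketch of per-flow hashing across the members of a trunk group. Note this is only an approximation for illustration: per the discussion below, Brocade ISL Trunking balances at the frame level in hardware, while the common software approach shown here pins each flow to one link. All names and the trunk size are hypothetical:

```python
import hashlib

TRUNK_LINKS = 4  # hypothetical trunk group of four inter-switch links

def pick_link(src_ip, dst_ip, src_port, dst_port, links=TRUNK_LINKS):
    """Deterministically map one flow onto one trunk member by hashing
    its addressing fields, spreading different flows across all links."""
    key = f"{src_ip}:{dst_ip}:{src_port}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % links

# Every packet of a given flow lands on the same link, so packets are
# never reordered within a flow, while distinct flows spread out.
flow = ("10.1.1.10", "10.1.2.20", 49152, 443)
assert all(pick_link(*flow) == pick_link(*flow) for _ in range(5))
```

The trade-off is that per-flow hashing can leave links unevenly loaded when a few large flows dominate, which is exactly the imbalance that frame-level trunking is designed to avoid.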

 

[Figure: intra-server traffic flow with virtual routing (red) and inter-server traffic flow with virtual routing plus multi-pathing at the ToR (blue)]

 

Brocade is the only vendor that can deliver this complete solution for your data center today. The Vyatta vRouter delivers dynamic routing, Policy-Based Routing (PBR), stateful firewall, VPN support, and traffic management in a single package that is optimized to perform in virtualized environments. The Brocade VDX Series with VCS Fabric Technology supports 100% multi-pathing at all layers of the network: Layer 1, Layer 2, and Layer 3. Layer 1 multi-pathing is achieved via Brocade ISL Trunking, providing the industry’s best load balancing across a trunk group.  By leveraging these technologies, we were able to reduce core-bound traffic by 40%.
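As a rough illustration of the in-hypervisor routing piece, here is a minimal Vyatta-style configuration sketch that puts two VM subnets on one vRouter and advertises them to the rest of the fabric via OSPF. The interface names, addresses, and OSPF area are hypothetical, and exact syntax may vary by release, so treat this as a sketch rather than a tested configuration:

```
# Hypothetical: two VM-facing interfaces on one vRouter instance
set interfaces ethernet eth0 address 10.1.1.1/24
set interfaces ethernet eth1 address 10.1.2.1/24

# Advertise both VM subnets so the rest of the fabric can reach them
set protocols ospf area 0 network 10.1.1.0/24
set protocols ospf area 0 network 10.1.2.0/24

commit
save
```

With a setup along these lines, VMs in 10.1.1.0/24 use 10.1.1.1 as their gateway and VMs in 10.1.2.0/24 use 10.1.2.1, so traffic between the two subnets is routed inside the hypervisor instead of hairpinning through the core.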

 

Oh, and just last week we announced even more advancements for these solutions in support of the On-Demand Data Center. Check it out!

 

In the next entry we will stay on the virtualization theme, but look at ways to deal with the congestion it causes elsewhere in the network, specifically in the core and aggregation layers.

 

¹ Gartner, “Your Data Center Network Is Heading Toward Traffic Chaos,” April 2011

² Gartner, “Forecast Analysis: x86 Server Virtualization, Worldwide, 3Q12 Update,” November 2012