The amount of traffic crossing the data center continues to grow, as does the need for speed and capacity. If your data center network seems sluggish and is not delivering the response times your organization requires, the network infrastructure itself may be the cause. It might be time to consider upgrading to higher-capacity speeds at all tiers of your network: leaf, spine, and core.
Let’s start at the leaf layer. Server virtualization, while decreasing the overall number of physical servers, is increasing server density per rack as well as the number of virtual machines (VMs) per server. Organizations will require higher bandwidth to handle this growing server and network utilization. Today’s 1 Gigabit Ethernet (GbE) switches cannot provide the bandwidth and low latency needed to prevent bottlenecks and sluggish response times. If you haven’t already, now is a good time to upgrade to 10 GbE switches, for several reasons. In a virtualized data center, you will need 10 GbE to manage the increased capacity on the servers. Servers now ship with 10 GbE interfaces, so a 10 GbE switch helps with performance and latency at the edge. In addition, 10 GbE switches are no longer cost prohibitive: the price gap between 1 GbE and 10 GbE is narrowing, and when you also factor in the power, cooling, and performance costs of older systems, 10 GbE starts to look a lot more affordable.
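A quick back-of-the-envelope calculation shows why virtualization outgrows 1 GbE at the leaf. The figures below (servers per rack, VMs per server, average bandwidth per VM) are illustrative assumptions, not measurements from any particular environment:

```python
# Rough sizing sketch: estimate the traffic a top-of-rack (leaf) switch
# must carry as VM density grows. All inputs are hypothetical examples.

def rack_uplink_demand_mbps(servers_per_rack, vms_per_server, avg_mbps_per_vm):
    """Aggregate bandwidth demand (in Mbps) across one rack of virtualized servers."""
    return servers_per_rack * vms_per_server * avg_mbps_per_vm

# Example: 20 virtualized servers, 15 VMs each, ~100 Mbps average per VM.
demand = rack_uplink_demand_mbps(20, 15, 100)
print(demand)  # 30000 Mbps (30 Gbps) for the rack -- i.e. 1500 Mbps per
               # server, more than a single 1 GbE server link can carry
```

Even with modest per-VM traffic, each physical server quickly needs more than 1 Gbps, which is the case for moving server connections and leaf switches to 10 GbE.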
The move to 10 GbE at the leaf will, in turn, put pressure on the spine. Where you were once running 10 GbE, it is time to consider upgrading to 40 GbE. The increase in traffic will only continue: according to IDC (Market Analysis Perspective: Worldwide Datacenter Networks, 2012), data center traffic is projected to grow to six times its current rate by 2015. When you add voice, video, security, and always-on mobile devices on top of that data, you will need 40 GbE to handle the explosive growth in traffic coming from the leaf switches. If you have only 10 GbE switches in your spine, they will not be able to accommodate the multiple 10 GbE links from the leaf. Link aggregation has worked well in the past, but it is not a scalable solution; at some point, bonding too many 10 GbE connections together will slow the network.
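One reason link aggregation stops scaling is that most LAG implementations hash each flow onto a single member link, so no individual flow can exceed one member's speed, and an unlucky hash distribution can leave some members congested while others sit idle. The sketch below illustrates per-flow hashing with an arbitrary hash function and made-up flow identifiers; real switches use their own hash schemes:

```python
# Sketch of per-flow hashing in a LAG: each flow is pinned to one member
# link, so a single large flow is capped at that member's speed, and the
# spread across members is rarely perfectly even.
# The hash choice and flow identifiers here are illustrative assumptions.

import hashlib
from collections import Counter

def member_link(flow_id: str, num_links: int) -> int:
    """Pick a LAG member by hashing the flow identifier."""
    digest = hashlib.sha256(flow_id.encode()).digest()
    return digest[0] % num_links

# Spread 8 flows over a 4 x 10 GbE LAG (40 Gbps nominal aggregate).
flows = [f"10.0.0.{i}:10.0.1.{i}" for i in range(8)]
load = Counter(member_link(f, 4) for f in flows)
print(dict(load))  # flows per member link; an even 2/2/2/2 split is not guaranteed
```

A single elephant flow in this setup still tops out at 10 Gbps no matter how many members the LAG has, which is why a genuinely faster link beats ever-wider bundles.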
With the migration toward 10 GbE in the leaf and 40 GbE in the spine, it makes sense to start considering 100 GbE in the core. Granted, 100 GbE is a nascent technology, just arriving on the horizon. However, if you are overhauling your data center architecture, it is worth considering 100 GbE for your spine-to-core connections. Aggregating multiple 40 GbE switches will create the need for higher bandwidth in the core. For organizations that move a lot of data across the network and rely heavily on its performance, 100 GbE makes sense. Instead of configuring ten ports of 10 GbE or three ports of 40 GbE in a LAG configuration, it is easier to configure one 100 GbE port. You also get the benefit of a full 100 GbE pipe, rather than segmenting your connections into multiple pipes that may not use the full available bandwidth. In addition, cloud computing is shifting how networks are designed, creating a need for multiple high-bandwidth connections that can deliver applications and services more quickly.
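The port-count comparison above can be made concrete. Under the per-flow-hashing assumption, a LAG of smaller links can match a 100 GbE port in aggregate capacity, but a single flow is still capped at one member link's speed; only the native 100 GbE port offers the full pipe to one flow:

```python
# Aggregate capacity vs. best-case single-flow throughput for the three
# options named in the text. Assumes per-flow hashing on the LAG, so one
# flow cannot be striped across members.

def capacity_gbps(member_speed_gbps, num_members):
    """Return (aggregate capacity, max throughput for a single flow)."""
    return member_speed_gbps * num_members, member_speed_gbps

print(capacity_gbps(10, 10))   # (100, 10):  ten 10 GbE links in a LAG
print(capacity_gbps(40, 3))    # (120, 40):  three 40 GbE links in a LAG
print(capacity_gbps(100, 1))   # (100, 100): one native 100 GbE port
```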
Lastly, as you evaluate your data center infrastructure and determine which speeds make sense at which layer, consider purchasing a solution that includes an Ethernet fabric. Deploying Ethernet fabrics in the data center provides any-to-any connectivity, meaning traffic can travel in any direction, including east-west, a key requirement for server-to-server communication, and delivers the automation and simplicity data centers need.