04-15-2011 12:09 PM
As discussions about requirements for cloud-enabled IT infrastructures heat up, so, likely, will the requirements for the network infrastructure that enables the cloud.
Today, the front-side LAN, in the form of the new Data Center Bridging (DCB) Ethernet, is going through significant changes in order to handle future network requirements in new and innovative ways. The Ethernet network is becoming more Virtual Machine (VM) aware, through features enabled in Brocade VCS Ethernet Fabric solutions, such as Automated Migration of Port Profiles (AMPP), and through support of the Virtual Ethernet Port Aggregator (VEPA) standard being developed in the IEEE, which off-loads all switching activity from the VM hypervisor's "virtual switch" to the VCS switches.
On the SAN side, the innovations and advancements are more evolutionary than revolutionary, because the underlying network already works the way we want it to, providing lossless, in-order, low-latency delivery of frames. In fact, one of the biggest challenges facing SAN architects is building a network in which multiple generations of FC devices can be managed easily. When different generations of FC devices are incorporated into the enterprise SAN, slower devices can create congestion points in the network by consuming, and then only slowly returning, the FC buffer credits that are part of FC flow control. Brocade, in Fabric Operating System (FOS) version 6.4, introduced the ability to detect these slow-draining devices so that the admin can be alerted when one is discovered, and, when coupled with Brocade's FC Host Bus Adapter (HBA) technology, Brocade can isolate these devices through a feature called Target Rate Limiting.
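To see why a single slow-draining device hurts the whole fabric, it helps to model buffer-to-buffer credit flow control in miniature. The sketch below is a deliberately simplified, hypothetical model (the function name, credit counts, and delays are all illustrative assumptions, not Brocade specifics): a sender may only transmit a frame while it holds a credit, and a device that is slow to return credits stalls the sending port for everyone behind it.

```python
# Minimal, illustrative model of FC buffer-to-buffer credit flow control.
# All names and numbers here are assumptions for illustration only;
# real FC credit handling is considerably more involved.

def frames_sent(total_frames, credits, credit_return_delay):
    """Simulate transmission of frames under credit-based flow control.

    credits: buffer-to-buffer credits granted by the receiving port.
    credit_return_delay: time steps the receiver takes to return one
    credit (a "slow-draining" device has a large delay).
    Returns the per-step cumulative count of frames sent.
    """
    available = credits      # credits currently held by the sender
    pending = []             # time steps at which consumed credits return
    sent = 0
    timeline = []
    t = 0
    while sent < total_frames:
        # Credits returned by the receiver become available again.
        available += len([r for r in pending if r <= t])
        pending = [r for r in pending if r > t]
        # Send one frame per step if a credit is available; else stall.
        if available > 0:
            available -= 1
            sent += 1
            pending.append(t + credit_return_delay)
        timeline.append(sent)
        t += 1
    return timeline

fast = frames_sent(total_frames=20, credits=4, credit_return_delay=1)
slow = frames_sent(total_frames=20, credits=4, credit_return_delay=8)
# The slow-draining receiver forces the sender to sit idle waiting for
# credits, so delivering the same 20 frames takes nearly twice as long.
print(len(fast), len(slow))  # 20 36
```

The point of the toy model: the link itself is never the bottleneck; the sender's idle time while waiting on returned credits is, which is exactly the congestion pattern that slow-drain detection and Target Rate Limiting are designed to address.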
So this raises the question of what the future holds for Fibre Channel in the data center "cloud" infrastructure. In 2011, Brocade will introduce its new line of seventh-generation Fibre Channel switch technology, in the form of new switches (both fixed and modular) and HBA solutions, all built around a new 16 Gbps FC ASIC, internally referred to as Condor3. While advances in network technology might normally be associated with advances in bandwidth, most of the cloud-enabled capabilities of FC will be delivered through new features in the areas of manageability, scalability, and availability, such as new diagnostic and recovery features. We should expect the FC network architecture to evolve to not only detect error conditions that exist in the network, but to self-heal from these conditions as well.
We should also expect to be able to further flatten the already flat FC network architecture, thanks to the advancing bandwidth capabilities. Which is another way of saying that while it's hard to imagine any one device needing 16 Gbps of FC bandwidth to itself, it's not hard to imagine how that kind of bandwidth could be utilized in the core of a very large SAN, to make sure lots of devices get all the bandwidth they require. This will play an important role in advancing the scalability of FC networks.
One last note on network bandwidth. It is fairly well understood that if you know the type of workload on your network, you can accurately judge how much additional bandwidth you will require as you grow the network. However, if you introduce a brand new workload that wasn't part of your projections, you're likely to fall short of your bandwidth needs. Every network architect would rather have some excess capacity available, providing elasticity for these new kinds of workloads, than have to rebuild the network after running out of capacity.
It just so happened that as the cloud evolved, some new workloads did appear. One of these is Virtual Desktop Infrastructure (VDI), which seeks to do for the workstations in an enterprise what VM/hypervisor technology did for servers. It replaces the workstation computer with a device that transmits input to, and receives the display from, a remote VM running the desktop OS and applications on a server in the data center. A great advancement for the cloud infrastructure, to be sure, but it's a new workload that wasn't expected. Once the scale is appropriate, perhaps a hundred virtual desktops or so, when the morning rolls around and all 100 VDI sessions boot at once, they create a major traffic jam at the link between the VDI/hypervisor server and the storage device. A traffic jam on the SAN. This is one reason why many IT architects are counting on FC to enable VDI cloud infrastructures.
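Some back-of-the-envelope arithmetic shows why the boot storm lands so hard on that one link. The numbers below are hypothetical assumptions chosen only to illustrate the shape of the problem, not measurements from any real deployment:

```python
# Back-of-the-envelope boot-storm arithmetic for a VDI deployment.
# Every figure here is an illustrative assumption, not measured data.

desktops = 100            # VDI sessions booting at roughly the same time
mb_read_per_boot = 300    # MB each desktop OS reads while booting (assumed)
boot_window_s = 120       # window in which all desktops boot (assumed)

# Aggregate read demand on the server-to-storage link during the window.
total_mb = desktops * mb_read_per_boot
throughput_mbps = total_mb * 8 / boot_window_s  # megabits per second

print(f"{total_mb} MB total, ~{throughput_mbps:.0f} Mb/s sustained")
# 30000 MB total, ~2000 Mb/s sustained
```

Even with these modest assumptions, a single server-to-storage link must sustain roughly 2 Gb/s of reads for two solid minutes, on top of whatever else it is carrying; shrink the boot window or grow the desktop count and the demand scales linearly.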
And that's just one new workload that we know about. What about the ones we don't know about? There will certainly be more creative technologists coming along who will look for more bandwidth from their cloud network architecture. The good news is that the FC network architecture will have plenty to spare.
I'm curious what your thoughts are on the future of Fibre Channel in your own enterprise's data center infrastructure.