Storage Network Considerations for Data Centers

It's interesting how much attention the network part of the data center has been getting lately. The IT industry is alight with discussions about "cloud" infrastructures, typified by the easy movement of both applications and data. Of course, none of this would have been remotely possible without the data center network.

In fact, it's my contention that things got much more interesting circa 1997, when the data center network deepened and we began developing the IT infrastructure that everyone would eventually call the "cloud". Prior to that, the computer network existed at only one level: to connect computers together. Each computer had its own internal resources, such as CPU, RAM, and storage, and was connected to other computers via the Local Area Network (LAN). The communication model was straightforward enough: if you needed access to another computer's internal resources, you had to traverse the entire machine (kind of like doing all your shopping in one megastore). Once you got enough nodes connected to the network, that enabled a significant amount of movement of data storage (in the guise of file servers), printing (in the guise of print servers), and some types of application division (in the guise of, you guessed it, application servers).

Then came 1997, and on the back of a lot of work from folks in the ANSI T11.3 subcommittee (which included many Brocade engineers), Brocade introduced the SilkWorm I, the industry's very first Fibre Channel network switch, supporting FC Class 2 and Class 3 (connectionless) communications. It offered 16 ports of 1 Gbps FC and retailed for about US$80,000, or roughly US$5,000 per port. It was also the industry's very first SAN switch, enabling SCSI communications, carried as FCP frames, to move across a many-to-many switched network from server to storage device.

It was a remarkable achievement that completely changed the way things moved within the data center. In this new, deeper data center network infrastructure, you could have movement at both the LAN and SAN levels, independently of one another. This allowed very different models of computing to emerge. For the first time, IT architects could build a storage "layer" within the data center that could be accessed quickly and easily from any computer connected to the SAN. That led to the ability to allocate not only the data and application LUNs, but also the boot LUNs, from the SAN, creating a model in which physical servers could be installed into the data center, "given" a set of boot/app/data LUNs from the SAN, and be up and online in a fraction of the time required previously.
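To make that provisioning model concrete, here's a minimal Python sketch of "giving" a new server its boot/app/data LUNs out of a shared pool. The names (StoragePool, Lun, provision_server) are hypothetical and not tied to any real Brocade or storage-array API; it's just meant to show why racking a server no longer means installing and configuring local disks.

```python
# Illustrative only: a toy model of SAN-based LUN provisioning.
# StoragePool, Lun, and provision_server are hypothetical names,
# not a real array or fabric management API.

from dataclasses import dataclass, field


@dataclass
class Lun:
    name: str
    size_gb: int
    owner_wwpn: str = ""  # WWPN of the server HBA allowed to see this LUN


@dataclass
class StoragePool:
    capacity_gb: int
    allocated_gb: int = 0
    luns: list = field(default_factory=list)

    def carve(self, name: str, size_gb: int) -> Lun:
        """Carve a LUN out of the shared pool; no physical install required."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        lun = Lun(name, size_gb)
        self.allocated_gb += size_gb
        self.luns.append(lun)
        return lun


def provision_server(pool: StoragePool, hba_wwpn: str) -> dict:
    """Give a freshly racked server its boot/app/data LUNs from the SAN."""
    luns = {
        "boot": pool.carve(f"{hba_wwpn}-boot", 40),
        "app": pool.carve(f"{hba_wwpn}-app", 200),
        "data": pool.carve(f"{hba_wwpn}-data", 1000),
    }
    for lun in luns.values():
        lun.owner_wwpn = hba_wwpn  # LUN masking: only this HBA sees the LUN
    return luns


if __name__ == "__main__":
    pool = StoragePool(capacity_gb=50_000)
    new_server = provision_server(pool, hba_wwpn="10:00:00:05:1e:aa:bb:cc")
    for role, lun in new_server.items():
        print(role, lun.name, f"{lun.size_gb} GB")
```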

But the real impact of this deepening of the data center network was that you could take hypervisor technology, which virtualized the OS/application stack, move this new virtual server atomic unit from one physical computer to another, AND quickly switch access to all of its storage allocations (as opposed to having to copy all that data over the LAN before you were allowed to move the VM). This deep data center network architecture made moving VMs fast, efficient, and practical. It also meant that VMs could be created and allocated storage dynamically, without touching any server or storage hardware. It was this deep data center network that accelerated the development of the cloud. Once you could dynamically allocate your virtual servers (enabled by LANs and hypervisor technology) and dynamically allocate storage for those servers (enabled by SANs), you basically had 90% of your private cloud. The only thing missing was an overlay tool that allowed you to manage, meter, and bill for usage of the system.
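As a back-of-the-envelope illustration of why shared SAN storage matters for VM mobility, here's a small Python sketch comparing the two models: copying the virtual disks over the LAN versus re-pointing the destination host at the same LUNs and moving only the VM's memory state. The figures and function names are hypothetical, chosen purely to make the scale of the difference visible.

```python
# Illustrative comparison of two VM migration models.
# The figures and function names are hypothetical; the point is only
# to show why shared SAN storage makes live migration practical.

def migrate_copy_over_lan(disk_gb: float, ram_gb: float, lan_gbps: float) -> float:
    """Without shared storage: disks AND memory must cross the LAN."""
    gigabits_to_move = (disk_gb + ram_gb) * 8
    return gigabits_to_move / lan_gbps        # seconds, ignoring protocol overhead


def migrate_shared_san(disk_gb: float, ram_gb: float, lan_gbps: float) -> float:
    """With shared SAN LUNs: only memory state crosses the LAN;
    the destination host simply gains access to the same LUNs."""
    return (ram_gb * 8) / lan_gbps


if __name__ == "__main__":
    disk, ram, lan = 500.0, 16.0, 10.0   # 500 GB virtual disk, 16 GB RAM, 10 GbE
    print(f"copy-over-LAN : ~{migrate_copy_over_lan(disk, ram, lan) / 60:.1f} minutes")
    print(f"shared SAN    : ~{migrate_shared_san(disk, ram, lan):.1f} seconds")
```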

So it's clear by now that the FC network architecture was designed to be the storage area network, designed to carry SCSI communications from server to storage device. It does this by guaranteeing lossless, in-order delivery of frames from source to destination. This is something that most folks in the general network industry don't realize: FC communications are lossless. Every frame that is sent is received, and frames are received in the order they were sent. An appropriate analogy would be the difference between your newspaper being delivered intact, with all pages in correct numerical order (Fibre Channel), and your newspaper being delivered with some sections missing and some pages out of order, leaving you to reorder the pages yourself and contact your delivery person to have the missing section re-delivered (classic Ethernet).
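The mechanism behind that lossless guarantee is buffer-to-buffer credit flow control: a port may only transmit a frame while it holds a credit from its link partner, and the partner returns a credit (an R_RDY primitive) once it has freed a receive buffer, so frames are never dropped for lack of buffer space. Here's a deliberately simplified Python sketch of the idea; the class and method names are my own shorthand, not anything taken from the standard.

```python
# Simplified illustration of buffer-to-buffer credit flow control.
# Real FC ports negotiate BB_Credit at login and signal returned credits
# with R_RDY primitives; this toy model just shows why a receiver is
# never forced to discard a frame.

from collections import deque


class ReceivePort:
    def __init__(self, buffers: int):
        self.free_buffers = buffers
        self.queue = deque()

    def accept(self, frame: str) -> None:
        # A frame only arrives when the sender held a credit,
        # so there is always a free buffer waiting for it.
        assert self.free_buffers > 0
        self.free_buffers -= 1
        self.queue.append(frame)

    def drain_one(self) -> str:
        """Process a frame and free its buffer (i.e. return a credit)."""
        frame = self.queue.popleft()
        self.free_buffers += 1
        return frame


class TransmitPort:
    def __init__(self, bb_credit: int, peer: ReceivePort):
        self.credits = bb_credit      # negotiated at link initialization
        self.peer = peer

    def send(self, frame: str) -> bool:
        if self.credits == 0:
            return False              # hold the frame; never drop it on the wire
        self.credits -= 1
        self.peer.accept(frame)
        return True

    def credit_returned(self) -> None:
        self.credits += 1             # R_RDY received from the peer


if __name__ == "__main__":
    rx = ReceivePort(buffers=2)
    tx = TransmitPort(bb_credit=2, peer=rx)

    for i in range(4):
        if not tx.send(f"frame-{i}"):
            rx.drain_one()            # receiver frees a buffer...
            tx.credit_returned()      # ...and the returned credit lets sending resume
            tx.send(f"frame-{i}")

    print("still buffered at receiver, in order:", list(rx.queue))
```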

I'd point out here that it is possible to lose frames in an FC network when things are not working correctly. Given that your network is made up of many different devices, including lots of switches, cables, optical transceivers, and node devices, a problem with any component can result in lost frames, but such problems are typically discovered, analyzed, and fixed.

It's also clear that since its inception, Fibre Channel has been remarkably successful in carrying the world's server-to-storage communications. So that brings up the question of where to go from here. FC has evolved through several generations: 1 Gbps from 1997-2001, 2 Gbps from 2001-2004, 4 Gbps from 2004-2008, and 8 Gbps from 2008 to the present. This year Brocade will introduce the next generation of FC SAN switching technology, based on a 16 Gbps ASIC, and the current ANSI T11.3 standards define a 32 Gbps specification beyond that. It should be noted that with each generation, many of the network's new capabilities have less to do with overall bandwidth than with making sure the communication streams are delivered more reliably. To that end, successive generations of Brocade FC technology have included more management, diagnostic, and error-recovery technology, making it a safer network architecture in addition to adding bandwidth.
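For readers curious how those generation names map to actual payload rates, here's a quick back-of-the-envelope Python calculation. The line rates and encoding schemes are the commonly cited nominal figures (8b/10b for 1/2/4/8 Gbps FC, 64b/66b for 16 Gbps FC); treat the results as approximations, since real throughput also depends on frame headers, credits, and fabric conditions.

```python
# Rough payload-rate math for the FC generations mentioned above.
# Nominal line rates and encodings only; real-world throughput is lower
# once frame headers, primitives, and credit handling are accounted for.

GENERATIONS = [
    # name, line rate (Gbaud), encoding efficiency (payload bits / line bits)
    ("1GFC",  1.0625,  8 / 10),   # 8b/10b encoding
    ("2GFC",  2.125,   8 / 10),
    ("4GFC",  4.25,    8 / 10),
    ("8GFC",  8.5,     8 / 10),
    ("16GFC", 14.025, 64 / 66),   # 64b/66b encoding
]

for name, gbaud, efficiency in GENERATIONS:
    mbytes_per_s = gbaud * 1e9 * efficiency / 8 / 1e6
    print(f"{name:>6}: ~{mbytes_per_s:,.0f} MB/s per direction")
```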

I'd be curious to hear your thoughts on the future of FC within your data center, and what you feel your storage network infrastructure needs to deliver in order to meet the needs of your organization moving forward.

Jason
