Mainframe Solutions

FICON FABLES and FOLLIES: Chapter 1 - Mainframe Channel Cards

by Dave-Lytle on 02-04-2012 10:00 AM

In chapter one of this new blog series of mine, I am going to start with a FICON FABLE.

As you know, FICON Channel Features provide the System z with Fibre Channel port connectivity for FICON storage and FCP storage. Each of these connectivity cards has 1 (very old 1G FICON), 2 (old 1G, early 2G and new PCIe 8G) or 4 (last 2G, all 4G, first 8G) channel ports (CHPIDs) on the blade. On a channel card, all of the CHPIDs will contain either long wave optics (SFPs) or short wave optics, but never a combination of both. And on a System z there is essentially no cost difference between long wave channel cards and short wave channel cards. That is one of the principal reasons the vast majority of System z mainframes are ordered with FICON channel cards that contain long wave optics: there is more benefit to be gained from long wave connectivity, at no additional acquisition cost over short wave.

So those are the basics of mainframe channel cards. What, then, is the FABLE that I want to make you knowledgeable about?

The mainframe channel cards are known as FICON EXPRESSn cards where n is a digit representing the link rate. FICON EXPRESS4 indicates that the CHPIDs on the blade provide 4Gbps connectivity. FICON EXPRESS8 indicates that the CHPIDs on the blade provide 8Gbps of connectivity, and so forth. And if a 4G CHPID is directly attached to a 4G, 8G or 16G switch port or 4G/8G storage port then that link will autonegotiate to 4Gbps. So all seems fine at this moment…but…it simply is not the whole story.
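Speed auto-negotiation simply settles on the highest rate that both ends of the link support. Here is a minimal sketch of that logic; the rate sets are illustrative assumptions (16G switch ports, for instance, typically drop support for 1G and 2G):

```python
# Sketch of Fibre Channel speed auto-negotiation: the link settles on the
# highest rate supported by both ends. Rates are in Gbps.
def negotiate(channel_rates, port_rates):
    common = channel_rates & port_rates
    return max(common) if common else None  # None: no common rate, no link

chpid_4g = {1, 2, 4}        # rates a 4G CHPID can run (assumption)
switch_16g = {4, 8, 16}     # rates a 16G switch port can run (assumption)
print(negotiate(chpid_4g, switch_16g))  # -> 4
```

This is why a 4G CHPID attached to a 4G, 8G or 16G switch port always comes up as a 4Gbps link.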

Let’s take FICON EXPRESS8 at 8Gbps as an example. A FICON EXPRESS8 channel card should have four CHPIDs, each capable of connecting at 8Gbps (1) and delivering 8Gbps of throughput (2). Well, 1 out of 2 isn’t so bad I guess, but it isn’t what most customers are expecting. Why 1 out of 2? The link does negotiate to 8Gbps, but it cannot provide 8Gbps of throughput. And the same goes for FICON EXPRESS2 and FICON EXPRESS4. They will attach at the advertised link rate, but they have no capability of providing that much throughput – and therein lies the FABLE!

If you go onto the internet, you can find an IBM document that contains a chart about FICON channel performance. This chart indicates the performance a customer can expect from any specific FICON EXPRESS card used on a specific System z machine. The performance is broken down both by I/Os per second and by megabytes per second (MBps).

In the case of an 8Gbps CHPID running on a z10 or z196, for example, the IBM chart indicates that for standard FICON (Command Mode FICON) the maximum measured throughput for large sequential read and write data transfer I/O operations was 620 MBps. For the same card on the same machine types, but running in high performance FICON (zHPF) mode, the large sequential read and write data transfer I/O operations reached 770 MBps.

But let us take stock of the situation. An 8Gbps channel path is theoretically capable of transferring 800 MBps for reads and 800 MBps for writes concurrently, thereby providing a full duplex throughput of 1600 MBps per link. Yet neither Command Mode FICON (620 MBps) nor zHPF (770 MBps) comes anywhere close to that potential 1600 MBps. Command Mode FICON can do about 39% of real 8Gbps full duplex and zHPF can do about 48%. They actually act more like very capable 4Gbps links than 8Gbps links.
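A quick sanity check of those percentages, assuming roughly 800 MBps of data per direction on an 8Gbps link:

```python
# Back-of-the-envelope check of the full-duplex utilization figures.
# An 8Gbps link carries roughly 800 MBps in each direction, so full
# duplex is about 1600 MBps of aggregate throughput.
FULL_DUPLEX_MBPS = 2 * 800  # 1600 MBps for an 8Gbps link

measured = {
    "Command Mode FICON": 620,  # MBps, from the IBM chart
    "zHPF": 770,                # MBps, from the IBM chart
}

for mode, mbps in measured.items():
    pct = 100.0 * mbps / FULL_DUPLEX_MBPS
    print(f"{mode}: {mbps} MBps = {pct:.0f}% of 8Gbps full duplex")
```

That works out to 39% for Command Mode FICON and 48% for zHPF – the figures quoted above.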

The reason for their shortfall in throughput has to do with three factors: a) average frame size; b) microprocessor utilization; and c) PCI Bus utilization:

  • A. FICON never averages full frame sizes, for a number of reasons that I will not discuss in this article. For a typical 4K DASD read or write I/O exchange, the average frame size for Command Mode FICON is going to be from 850 to 950 bytes – nowhere near the 2,148 bytes of a full frame.
  • B. Each CHPID on a channel card is run by a microprocessor (MP). The MP creates the start I/Os that drive the I/O throughput. The smaller the average frame size the more start I/Os must be issued in order to maximize the utilization of a channel path link.

Imagine that we are running on a z196 processor using FICON EXPRESS8 channel cards, that the data on DASD for this example is blocked at 4K (very common), and that we want to drive the MP to 100% utilization. That will require the MP to issue 20,000 start I/Os per second, and that will be all it can do – the MP will be maxed out. Those 20,000 start I/Os will be able to drive a mixture of large sequential read and write data transfer I/O operations, in Command Mode FICON, up to a maximum of 620 MBps. This example makes it clear that small average frame sizes take many more start I/Os, and all of the MP’s processing power, to drive fewer MBps than if the average frame sizes were larger.

  • C. The MP creates the start I/Os that push frames out across the PCI Bus and onto the channel path link. The PCI Bus acts in direct correlation to the capability of the MP to give it frames: the larger the frames, the more throughput it can produce per second. So once again, small frame sizes, like those seen with FICON, simply cannot get enough start I/Os through the MP to drive the PCI Bus past about 48% of link utilization at best. Even though you must pay full price for these 8Gbps channel cards and CHPIDs, one simply cannot derive the full 8Gbps value out of those resources.
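A rough back-of-the-envelope model ties these factors together. The frame and throughput figures come from the discussion above; the model itself is only an illustration, not an IBM measurement:

```python
# Rough illustrative model of why small average frames cap throughput.
FULL_FRAME_BYTES = 2148   # a full FC frame, headers included
AVG_FRAME_BYTES = 900     # typical Command Mode FICON average (850-950)

# Frame efficiency: how much of a full frame each average frame carries.
efficiency = AVG_FRAME_BYTES / FULL_FRAME_BYTES
print(f"Average frame efficiency: {efficiency:.0%}")  # ~42%

# With the MP maxed out at a fixed start-I/O rate, throughput is bounded
# by (start I/Os per second) x (bytes moved per start I/O).
MAX_START_IOS_PER_SEC = 20_000   # the z196 example above
observed_mbps = 620              # Command Mode FICON ceiling from the chart
bytes_per_start_io = observed_mbps * 1_000_000 / MAX_START_IOS_PER_SEC
print(f"Implied data per start I/O: {bytes_per_start_io / 1024:.0f} KB")
```

The implied ~30 KB per start I/O is larger than the 4K block size because a single start I/O can chain multiple blocks; the point is that the MP's start-I/O rate, combined with small average frames, is what caps the link at 620 MBps.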

In summary, when architecting and deploying FICON on a mainframe you must take into consideration both the FICON EXPRESS channel card's link attachment rate and its real capability of delivering data over that link. In your fabric architecture and deployment, if you were to treat a FICON EXPRESS8 CHPID as a real 8Gbps link, then you might be tempted to do too much fabric Fan In – Fan Out. Fan In – Fan Out sized for 1600 MBps (8G full duplex) can be on a much larger scale than Fan In – Fan Out sized for only 620 MBps full duplex.
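To see why this matters for sizing, here is an illustrative fan-out calculation; the per-storage-port demand figure is purely an assumption for the sake of the example:

```python
# Illustrative fan-out sizing: how many storage ports one CHPID appears
# able to serve, depending on whether you size by the nominal link rate
# or by the realistic throughput ceiling. Demand figure is an assumption.
AVG_PORT_DEMAND_MBPS = 200  # assumed average demand per storage port

sizing = [
    ("nominal 8G full duplex", 1600),
    ("realistic Command Mode ceiling", 620),
]

for label, chpid_mbps in sizing:
    ports = chpid_mbps // AVG_PORT_DEMAND_MBPS
    print(f"Sizing by {label}: {ports} storage ports per CHPID")
```

Sizing by the nominal rate suggests 8 ports per CHPID; sizing by the realistic ceiling suggests only 3. Believing the nominal number would have you fan out more than twice as far as the CHPID can actually serve.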

Ahhhh, but there is a saving grace. And you should know about that as well.

FICON EXPRESS8S is the latest channel card released by IBM, and it is the first in a new series of PCIe-format channel blades. This new design, and very capable card, contains only 2 CHPIDs but now has enough MP processing power to push throughput across the PCI Bus at up to 1600 MBps full duplex when using zHPF workloads on the z196 and z114. In my opinion, this is without a doubt the very best FICON EXPRESS channel card that IBM has ever offered its customers. It provides true value against the cost of the resource purchased.

Try it, you’ll like it.

on 02-19-2013 02:25 PM

Currently we have several 4Gb FICON connections to long wave SFPs over multimode fiber. It has been working for a while with no apparent problems. Should this work?