
Avoid FICON Infrastructure Heartburn When Upgrading Your DASD Storage.

by Dave-Lytle on 02-29-2012 06:39 AM

I have been visiting customers in Asia and Europe recently who are in the midst of infrastructure and/or storage refreshes in their data centers. Typically the results of upgrading your Fibre Channel infrastructure are good news for all involved, but I have found one instance where these technology refreshes might not provide the value that the customer is seeking.

FICON performance is gated by a number of factors, one of the major ones being the maximum link speed that each connection point will accept. As you know, if one end of an optical cable is 8Gbps but the other end of that cable is 4Gbps, then the link is obviously auto-negotiated down to 4Gbps. Usually this is not a problem; it is just a single link, after all.

And if the customer (wisely) is upgrading their switched-FICON infrastructure to keep pace with their storage capabilities, then our current switching products will transmit data at a maximum of either 8Gbps or 16Gbps. Our DCX 8510 16Gbps Director family is becoming very popular for technology refreshes but, of course, there are no DASD storage arrays that connect at 16Gbps – the fastest currently being 8Gbps. So those 16Gbps connections will auto-negotiate down to 8Gbps or 4Gbps – the highest common link rate that both ends of the optical cable and their SFPs can provide. Of course, this is just what you would expect to happen.

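To make that negotiation concrete, here is a minimal sketch of the highest-common-rate rule (my own illustration, not switch firmware): each end advertises the rates its port and SFP support, and the link settles on the fastest rate the two ends have in common.

```python
def negotiate_link_rate(end_a_rates, end_b_rates):
    """Return the highest link rate (Gbps) that both ends support.

    Each argument is the set of rates one endpoint's port/SFP can run at.
    Returns None when there is no common rate (the link will not come up).
    """
    common = set(end_a_rates) & set(end_b_rates)
    return max(common) if common else None

# A 16Gbps switch port (16/8/4) cabled to an 8Gbps DASD port (8/4/2)
# settles at 8Gbps, just as described above.
print(negotiate_link_rate({16, 8, 4}, {8, 4, 2}))  # -> 8
```
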
Before I touch on my concern, let me lay just a little bit of groundwork.

DASD, regardless of vendor, is well known and widely used in mainframe shops. On DASD storage the typical use case is about 90% read I/O and about 10% write I/O. Every shop is different, and that is not my point anyway. It is my experience that there is still a lot of 2Gbps and 4Gbps DASD in mainframe shops, but many mainframe enterprises are realizing that they need to upgrade their DASD to 8Gbps performance.

And, although it is not always architected or thought out very well, across all of the links that make up the total path between CHPIDs and storage, we should never have the end of an I/O exchange that receives the data (the drain) be slower than the end that sends it (the source). I will show you diagrams of what I mean very soon.

So what is my concern?

Below is a graphic representing what I consider to be a good deployment for a switched-FICON I/O infrastructure.

Figure 1

This is actually the ideal model for DASD, since most DASD workloads are about 90% read and 10% write. So, in the case where the CHPID is reading DASD data, the "drain" on the I/O path will be the 8Gbps CHPID and the "source" on the I/O path is the 4Gbps storage port. The 4Gbps source port (the DASD) simply cannot send data fast enough to overrun the 8Gbps drain (the CHPID). Even if the DASD is upgraded to 8Gbps ports, the source will still not be able to overrun the drain. (And yes, I know this is a simple picture; in reality there could be a lot of fan-in/fan-out taking place.)

What concerns me is that customers have decided to upgrade DASD arrays to 8Gbps even though the mainframe CHPIDs are still at only 4Gbps (FICON Express4). I have spoken with several customers where that has occurred. So what does that look like?

Figure 2

It actually works very similarly regardless of whether cascaded links are in use or not, but cascaded links create a worse scenario for what I am discussing with you than switched fabrics without them. We will see that in a few paragraphs.

But my point here is that this is potentially a very poorly performing infrastructure!

In this case the "drain" on the I/O path is the 4Gbps CHPID and the "source" on the I/O path is now an 8Gbps storage port. In this simple example configuration the I/O source can out-perform the I/O drain. Even without ISLs this can cause local connectivity backpressure toward the highly utilized CHPID. When you include ISL links the problem potentially becomes even worse. Regardless, the 4Gbps CHPID (actually its switch port) now has the potential to become a slow-draining device.

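The source/drain comparison itself is a one-liner. A minimal sketch (my own illustration, using the numbers from Figures 1 and 2):

```python
def read_path_is_safe(source_gbps, drain_gbps):
    """For read-heavy DASD traffic: the path is safe when the drain
    (the CHPID) can absorb everything the source (the storage port)
    can send, so no sustained queuing builds up."""
    return drain_gbps >= source_gbps

print(read_path_is_safe(source_gbps=4, drain_gbps=8))  # Figure 1 -> True
print(read_path_is_safe(source_gbps=8, drain_gbps=4))  # Figure 2 -> False
```
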
Figure 3

Since the 4Gbps port on the local switch cannot keep up with the 8Gbps rate of the data that is being sent to it, the switch port servicing the 4Gbps CHPID will begin placing the data frames in its buffer credit queue (BCQ). Backpressure begins to build up within the infrastructure for access to that switch port.

Figure 4

The buffer credit queue on the switch egress port leading to the 4Gbps CHPID will fill up. Of course, other local switch ports that have I/O frames bound for that very busy CHPID switch port will have to hold as many frames in their own BCQs as possible and then finally stop trying to transmit data until buffer credits become available to them.

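To get a feel for how quickly an egress BCQ can be exhausted, here is some back-of-the-envelope arithmetic. The credit count and frame size are illustrative assumptions, not measurements from any particular switch:

```python
# Data arrives at ~8Gbps but drains at ~4Gbps, so the egress queue
# grows at roughly the 4Gbps difference.
excess_bytes_per_sec = (8 - 4) * 1e9 / 8         # 500,000,000 B/s

frame_bytes = 2048                               # assume ~2KB full-size frames
excess_frames_per_sec = excess_bytes_per_sec / frame_bytes   # ~244,000/s

buffer_credits = 40                              # assume 40 credits on the port
seconds_to_fill = buffer_credits / excess_frames_per_sec
print(f"BCQ full in ~{seconds_to_fill * 1e6:.0f} microseconds")  # ~164
```

Under a sustained 8Gbps inbound flow, the credits on a 4Gbps egress port can be gone in well under a millisecond; backpressure is not a gradual phenomenon.
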
In this case, once the switch egress port to the 4Gbps CHPID finally fills up its BCQ, that switch port cannot receive any additional data. However, the 8Gbps data flow continues. So now it is the ISL ingress port's turn to start having problems.

The ISL ingress port is on the same local switch as the 4Gbps CHPID switch port. Since it is transmitting the DASD data to the now fully queued-up CHPID switch port, it will have to start filling up its own BCQ. It will slowly pass frames from its BCQ to the CHPID switch port's BCQ as buffer credits become available. However, when the ISL ingress BCQ fills up – well, that is when really bad things start to happen.

Figure 5

ISLs are used to transmit I/O exchanges for many different storage devices and CHPID ports. If an ISL's BCQ fills up, then it affects not only the data flow of the slow-draining device but all of the data flows for all of the other CHPID–storage port pairs that use that ISL (or trunk).

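Here is a tiny discrete-time simulation of that head-of-line blocking (my own toy model with assumed credit counts, not a model of any real switch). Flow X heads to the slow 4Gbps CHPID, flow Y heads to a fast 8Gbps CHPID, and both share one ISL that forwards frames strictly in arrival order:

```python
from collections import deque

ISL_CREDITS = 8        # buffer credits on the shared ISL (assumed)
EGRESS_X_CREDITS = 4   # BCQ depth at the slow 4Gbps CHPID port (assumed)

isl = deque()          # FIFO of frames crossing the shared ISL
egress_x = deque()     # BCQ feeding the slow 4Gbps CHPID (flow X)
delivered_y = 0        # frames delivered for the innocent fast flow Y

for tick in range(1, 25):
    # Each storage port offers one frame per tick, but only if the ISL
    # still has a buffer credit; no credit means backpressure to storage.
    for flow in ("X", "Y"):
        if len(isl) < ISL_CREDITS:
            isl.append(flow)

    # The ISL can move two frames per tick, strictly in FIFO order. An X
    # frame at the head with a full egress BCQ blocks everything behind
    # it, including Y frames (head-of-line blocking).
    budget = 2
    while budget and isl:
        if isl[0] == "Y":
            isl.popleft()
            delivered_y += 1                  # Y's 8Gbps CHPID keeps up
        elif len(egress_x) < EGRESS_X_CREDITS:
            egress_x.append(isl.popleft())    # a credit is free: move X on
        else:
            break                             # head blocked; Y waits too
        budget -= 1

    # The slow 4Gbps CHPID drains only one frame every other tick.
    if tick % 2 == 0 and egress_x:
        egress_x.popleft()

    print(f"tick {tick:2}: ISL={len(isl)}  X-BCQ={len(egress_x)}  "
          f"Y-delivered={delivered_y}")
```

Run it and you will see the X egress BCQ fill first, then the ISL queue pin at its credit limit; from that point on the innocent flow Y delivers at roughly half its earlier rate, and the storage ports behind the ISL start stalling too, even though Y never touches the slow CHPID.
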
At this point we have full BCQs all over the local switch, negatively impacting throughput and performance on local switch ports. Some of the local storage might want to send data to local CHPIDs other than the one that is causing the problem. Unfortunately, if those storage ports are also servicing the highly utilized 4Gbps CHPID switch egress port, their BCQs will be full, and those storage ports cannot transmit data to anyone.

That is backpressure at work in the infrastructure, causing more and more problems for throughput and performance.

Of course, other storage ports on the same DASD array that are not transmitting data to the slow-draining 4Gbps CHPID, and are therefore not filling up their BCQs, would still be transmitting frames. So performance and throughput become erratic on a port-by-port basis.

But the worst situation here is that I/O queuing is now impacting all of the I/O traffic (from many storage ports on both switches) that is attempting to use the ISL link (or trunk) that has used up all of its buffer credits and cannot transmit any more data. And it will not transmit any more data until one or more buffer credits become available. Very inconsistent and erratic performance might now occur across the entire fabric, not just on the local switching device. Some or all of the ISL links (or trunks) become congested, and backpressure becomes intense across the entire fabric.

Keep in mind that this is a simple example. There are many things that I am ignoring in an effort to keep this posting simple – things like virtual channels and protocol intermix environments.

The real probability in this example (and in many shops worldwide) is that all of the mainframe CHPIDs are 4Gbps and are trying to service 8Gbps DASD. At that point the problems become orders of magnitude worse than the picture I have painted above!

So I think that there are a couple of things an enterprise can do to keep away from this kind of trouble.

The best course of action would be to upgrade the FICON Express channel cards to match the maximum link rate of the storage ports. If the storage uses 8Gbps ports, then FICON Express8 or FICON Express8S should be deployed on the mainframe. Of course, the FICON/FCP switching infrastructure elements also need to match the link rate capabilities of the storage and the CHPIDs. This helps the enterprise derive the full value of its investment in the technology refresh. Some customers, of course, are running earlier mainframe models that do not support FICON Express8/8S. If that is the case, and refreshing your mainframe is not possible, then my next suggestion is all that I can offer at this time as a way to overcome the backpressure issues that you will face.

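If you want to sanity-check link rates before (or after) a refresh, the comparison is mechanical enough to script. A hedged sketch with a hypothetical inventory (the names and rates below are made up for illustration, not pulled from any management interface):

```python
# Negotiated Gbps at each hop of a read path, listed from the source
# (storage port) to the drain (CHPID).
paths = {
    "CHPID 0.45 <- DASD LCU 12": [8, 8, 8, 4],   # 8G array, 4G CHPID
    "CHPID 0.52 <- DASD LCU 07": [4, 8, 8, 8],   # 4G array, 8G CHPID
}

for name, hops in paths.items():
    source, drain = hops[0], hops[-1]
    if drain < source:
        print(f"WARNING {name}: {source}Gbps source can overrun the "
              f"{drain}Gbps drain -- expect slow-drain backpressure")
    else:
        print(f"OK      {name}: the drain keeps up with the source")
```
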
The second, and in my opinion poorer, course of action would be to manually set the faster storage port link rate down to match the slower CHPID link rate. This would keep the source I/O ports from overrunning the target I/O ports. The FICON/FCP switching infrastructure elements would then just auto-negotiate to meet the demands of the attached ports. This might be a good temporary remedy until you have time to deploy higher-speed CHPIDs, but it should not be considered a permanent one. Even as a temporary remedy it has its problems, since changing port speed is not a trivial task and each port will take an outage as you adjust its link rate.

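Choosing the rate to pin each storage port to is, again, just a min() over the CHPIDs that the port serves. A small sketch with hypothetical names (the actual change is made per port in your switch or array management tooling, and each port takes a brief outage):

```python
# Hypothetical map of storage ports to the CHPID link rates (Gbps)
# that each port services.
storage_ports = {
    "array1:0a": [4, 4, 8],   # mixed CHPIDs: pin to the slowest, 4Gbps
    "array1:0b": [8, 8],      # all 8Gbps CHPIDs: leave at 8Gbps
}

for port, chpid_rates in storage_ports.items():
    print(f"{port}: fix the link rate at {min(chpid_rates)}Gbps "
          f"(the slowest CHPID this port serves)")
```
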
But if an enterprise is going to solve its problem by downgrading port speed, just what value does buying that new storage bring to that enterprise?

All things considered, I hope that you are not faced with the scenario that I have described above. But if you are seeing unexpected performance and throughput problems after upgrading your DASD farm, then maybe this article has helped explain what is going on and what you can do about it.

And if you are just considering upgrading your DASD to 8Gbps, or are in the initial stages of doing so, and were not thinking about making sure that your CHPID link rates match and that your FICON switching infrastructure matches as well, then maybe I have helped you keep your enterprise in tip-top shape.

I hope so.