
The Storage-Network Line of Demarcation Correlation

by Dr. Steve Guendert on 10-17-2012 12:55 AM

A line of demarcation is defined as a line marking the boundary of a buffer zone or area of limitation. A line of demarcation may also be used to define the forward limits of disputing or belligerent forces after each phase of disengagement or withdrawal has been completed. The term is commonly used to denote a temporary geopolitical border, often agreed upon as part of an armistice or ceasefire. The most famous line of demarcation in recent history is the Military Demarcation Line, also known as the Armistice Line, which forms the border between North Korea and South Korea. There was also the late Libyan leader Muammar Gaddafi's ironically named "Line of Death." But enough history; you get the point. The more important question is: how does this relate to your data center?


I meet with many of our customers across the globe, and one thing the vast majority have in common is this: lines of demarcation exist to separate responsibility among the various teams who manage the mainframe, FICON SAN, mainframe storage, and the network.  If you are running Linux on System z, there are likely more, but that is another story (and a future post).  I'd like to focus the rest of this post on the line of demarcation that exists in many business continuity architectures: the line between the team(s) that manage the mainframe, mainframe storage, and FICON directors/channel extension, and the team that manages the network for cross-site connectivity.  I like to call this the Storage-Network Line of Demarcation.


The majority of you have two sites, with a cascaded, multi-fabric FICON architecture connecting them. The distance between sites and your RPO/RTO will dictate whether you are doing synchronous or asynchronous DASD replication across the network between sites.  Many of you also have a form of mainframe virtual tape solution performing replication between sites via your network.  Some of you have more than two sites.  For example, you may have two sites located fairly close together, with synchronous DASD replication between them, and a third site located a long distance away, with asynchronous DASD and/or virtual tape replication to it.  If you are running IBM's Geographically Dispersed Parallel Sysplex, you have even more host-related connectivity traversing the network.

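To put rough numbers on that sync-versus-async decision, consider propagation delay alone: light in fiber covers roughly 200 km per millisecond, and a synchronous write cannot complete until the remote site acknowledges it. The sketch below is back-of-the-envelope only; the distances and the single-round-trip assumption are illustrative, not a statement about any particular replication product.

```python
# Back-of-the-envelope estimate of the latency synchronous DASD
# replication adds to every write. Assumptions (illustrative): light in
# fiber covers ~200 km per millisecond, and each write needs one round
# trip to the remote site before it can complete.

SPEED_IN_FIBER_KM_PER_MS = 200.0

def sync_write_penalty_ms(distance_km: float, round_trips: int = 1) -> float:
    """Latency added per write: the primary cannot post I/O complete
    until the secondary acknowledges the data."""
    return 2.0 * (distance_km / SPEED_IN_FIBER_KM_PER_MS) * round_trips

for km in (10, 100, 300, 1000):
    print(f"{km:>4} km -> ~{sync_write_penalty_ms(km):.2f} ms per write")
```

A tenth of a millisecond added to every write at metro distances is tolerable; ten milliseconds at 1,000 km is not, which is why the distant third site is the asynchronous one.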

A question for my mainframe and storage friends: who is responsible for the network connectivity/hardware for your data traffic between sites?


What I typically see when I meet with our customers is that a line of demarcation exists between mainframe/storage and network.  For example: a customer running XRC between sites and using a Brocade FX8-24 FCIP extension blade in their DCX 8510 FICON director.  That FX8-24 is connected to a network of some sort, which then connects to another FX8-24 and DCX 8510 at the remote site.  More often than not, the team responsible for the FX8-24s, and for the data traffic running between them, has no say at all when it comes to the network they are connected to.


And guess where the problems usually are (especially if DWDM is involved)?  And guess who typically is held responsible and gets the blame for the intersite data traffic performance problems (hint: not the network team and their equipment)?


Or worse yet, in addition to the line of demarcation, I often see that the network equipment, whether IP, DWDM, or otherwise, is old, slow, poorly performing hardware that simply cannot keep up with the FX8-24 and DCX 8510, creating a bottleneck.  What does this do to your RPO and RTO?

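Here is a rough sketch of what that kind of bottleneck does to your RPO. The write and link rates below are hypothetical, and it assumes a steady shortfall with no compression or write folding:

```python
# Hypothetical illustration: unreplicated data piling up when peak host
# write rates exceed what a legacy intersite link can sustain. The
# 400 MB/s and 100 MB/s figures are made up, not measurements.

def replication_backlog_gb(write_mb_per_s: float, link_mb_per_s: float,
                           peak_hours: float) -> float:
    """Data (GB) left unreplicated after a peak period of host writes
    outrunning the link; until it drains, this is your RPO exposure."""
    shortfall = max(0.0, write_mb_per_s - link_mb_per_s)  # MB/s
    return shortfall * peak_hours * 3600 / 1024

backlog = replication_backlog_gb(write_mb_per_s=400.0,
                                 link_mb_per_s=100.0,
                                 peak_hours=2.0)
print(f"~{backlog:.0f} GB unreplicated after a 2-hour peak")  # ~2109 GB
```

Until that backlog drains, it is your recovery point exposure, regardless of how fast the directors on either end of the link are.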

Finally, I have heard that working with the folks on the other side of this line of demarcation is often like dealing with the USA Prime Credit customer service team.


There is a better, less political way that gives you control of all the hardware in these networks: less infighting, a simpler network with better performance that is easier to manage, and an improved disaster recovery/business continuity posture for your enterprise.  I think your CIO and CTO will like it too.


IBM just published a Redguide (authored by Brocade) that introduces this idea of an integrated sysplex network.  I will be blogging, and tweeting, on this topic in more technical depth in the weeks ahead.  For now, please enjoy the brief Redguide.


Dr. Steve


Follow me on Twitter for all things related to the mainframe and Brocade's role in the enterprise data center: FICON SAN, business continuance, networks, and more.

@DrSteveGuendert