
Not Your Father’s or Grandfather’s Mainframe Any More!

by Dave Lytle on 05-14-2014 12:54 PM

The evolution away from the “my way or the highway” style of early-day mainframe computing has allowed the mainframe to morph into a platform for the multitudes, where legacy mainframe applications run happily alongside more distributed workloads such as Linux and UNIX – all on the same re-centralized, easier-to-manage computing platform. IBM has once again made the mainframe relevant across the spectrum of computer users that inhabit the globe today. In fact, IBM often refers to its “mainframes” as large servers and emphasizes that they can be used to serve distributed users and smaller servers in a computing network.

 

Here is a great example. In recent years the notion of “Cloud Computing” has emerged, and many customers want to move toward a cloud-based structure. This alone has re-energized interest in the mainframe, since it is the only platform really capable of providing completely robust, private cloud computing services.

 

For the distributed world, the primary stumbling block for cloud computing has been the hypervisor. All of the entities in the cloud must coordinate their capabilities, availability and resources, which requires a common hypervisor to collect and communicate this information before acting on it. That has proven to be more than difficult when using a distributed server farm.

 

But today’s goals and requirements for cloud computing can be met with the industry-leading RAS features that are built into the mainframe, along with its additional ability to distribute resources as demand ebbs and flows. One of the embedded mainframe services, the Resource Manager, already provides the necessary coordination functions, and the system resources are managed in a homogeneous manner. Consequently, the System z®/zEnterprise® (referred to as System z in the rest of this document) mainframe is really the closest implementation of the heralded cloud computing complex available today. Not a single distributed solution can match the cloud capabilities of the mainframe – and System z® can do it right now!

 

Beyond the high level notion of cloud computing, high-performance software solutions have also evolved that can leverage the performance, security and availability of the mainframe in this world of internet time, Web interfaces and Service-Oriented Architectures (SOA). An example of mainframe agility for distributed processing workloads would be the IBM WebSphere Portal for System z®. Today’s mainframes are designed to excel at business computing, which typically involves hundreds or thousands of transactions per second.

 

The mainframe has always had its strengths: its robust Reliability, Availability and Serviceability (RAS), which provide zero or near-zero downtime over a year or many years; Scalability, which is the ability of the hardware, software, or a system to continue to function optimally as it is changed in size or volume; Security, which provides protection against unauthorized access, transfer, modification, or destruction, whether accidental or intentional; and Virtualization, which builds on physical partitioning and offers the ability to simulate availability of hardware – CPU, memory and I/O – and operating system (OS) resources.

 

Of course, a distributed system has its strengths as well: speed of deployment; inherent distribution; decent (or “good enough”) reliability; perceived cost savings; and incremental scalability for growth. Overall, I believe that the traditional benefit of distributed computing has been that it enables a customer to optimize their computing resources for both responsiveness and economy. But neither of these technologies works in a vacuum, so some really good minds have looked at these various technologies and incorporated bits and pieces that would help their own systems work better.

 

As some of the mainframe technologies trickle down to distributed systems, those systems are getting better at hosting mainframe-class applications and are slowly beginning to achieve some of the traditional mainframe benefits like high availability, scalability on demand and improved overall utilization. But, at the same time, the mainframe is becoming more like distributed systems, with an ability to locally execute UNIX and Linux applications and also to link with IBM blade servers and manage AIX, Linux and Windows applications using the unique mainframe-based Unified Resource Manager. All of this can make it perplexing for a customer to decide which platform best meets their unique computing needs.

 

When customers are trying to understand the difference between distributed platforms and a mainframe platform, one of the significant differences is how their I/O subsystems work. I am sure that customers sometimes puzzle over the benefits and costs of running DB2, WebSphere, UNIX and Linux on the mainframe versus running them on an open systems platform. I also suspect that they often calculate total cost of ownership without understanding either the benefits of collapsing the different tiers into one much more easily managed system or the cost and performance benefits that can accrue from using a mainframe I/O subsystem. So I think it is important for a user to understand how differently I/O is accomplished on distributed processor systems compared to the mainframe.

 

Mainframe I/O Is Mainstream Functionality and It Is Very Robust:

 

On a mainframe, I/O is arguably just as important a task as the computing that is done for applications. In order to make certain that I/O is treated as a mainstream task, the mainframe I/O subsystem has several unique and very powerful design features that create a major differentiator between distributed computing systems and a mainframe computer environment.

 

First of all, virtualization is everywhere in the mainframe, and has been for decades, allowing it to mature into a very stable infrastructure. For example, through the use of virtualization, a DASD storage array can have as many as 256 Logical Control Units (LCUs), each with 256 devices, so a mainframe can address up to 65,536 total volumes within just one storage array. All of the information needed to reach any LCU and volume is contained within each frame of a mainframe Fibre Connection (FICON) I/O. The mainframe does not need any special services to make this happen. What do I mean by a special service? An example of a special service for distributed processing is Single Root I/O Virtualization (SR-IOV), which allows a single PCIe device to appear as multiple separate physical PCIe devices – in effect, a form of virtualization. An example of a special service on the mainframe is Node_Port ID Virtualization (NPIV), an FCP SCSI I/O service that allows Linux on the mainframe to capitalize on the same kind of channel virtualization that the mainframe has been providing to its legacy ESCON and FICON applications for 30+ years.
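To put a number on that addressing scale, here is a minimal Python sketch (the function name and layout are illustrative assumptions of mine, not a real z/OS or FICON API) that composes a four-hex-digit device number from an LCU and a unit address:

```python
# Illustrative sketch only -- not a real z/OS or FICON API.
# A mainframe device number is written as four hex digits: the high
# byte selects one of 256 Logical Control Units (LCUs) and the low
# byte selects one of 256 devices within that LCU.

def device_number(lcu: int, unit: int) -> str:
    """Compose a 4-hex-digit device number from an LCU and unit address."""
    if not (0 <= lcu <= 0xFF and 0 <= unit <= 0xFF):
        raise ValueError("LCU and unit address must each fit in one byte")
    return f"{(lcu << 8) | unit:04X}"

print(device_number(0x1F, 0xA3))  # -> "1FA3"
print(256 * 256)                  # -> 65536 addressable volumes per array
```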

 

One of the most important differences between how I/O is carried out by the mainframe and how it is carried out by distributed processors is how server-to-storage connectivity is initially created. In my experience, mainframe people tend to be Type A, control-oriented personalities, so they have historically desired to directly manage everything that happens on their mainframes. A mainframe systems programmer will use a tool called Hardware Configuration Definition (HCD) to describe the mainframe environment, including exactly the path(s) that every I/O will take from the CHPID out to a storage device. If it is not described in HCD, then the I/O just will not happen. Performance and predictability are king in the mainframe world, so mainframe technicians rely upon their own tools to create robust I/O delivery. This is being augmented to some degree by new capabilities such as z/OS FICON Discovery and Auto-Configuration (zDAC), but I/O connectivity is still up to the mainframe systems programmer to manage.

 

Distributed systems administrators, on the other hand, rely more on the Plug-n-Play model and are more casual about how I/O gets accomplished. In their world, ease-of-use and simplified management are king. Therefore they utilize all of the protocol stack capabilities of Fibre Channel, including the name server service, to identify I/O routes and connectivity. The systems administrators then leave it up to the FC protocol to find the I/O path(s) that provide server-to-storage connectivity. This Plug-n-Play methodology will generally be successful, but sometimes at the cost of poorer performance and less robust I/O frame delivery, since I/O path connectivity is left completely up to the FC protocol.

 

And mainframe systems programmers have other tools that aid in providing robust I/O delivery. The System z® operating system has a built-in capability known as a “Path Group”. On the mainframe, a user can group up to 8 physical connections between Channel Path IDs (CHPIDs), which are the mainframe I/O ports, and connected storage ports. It is the mainframe channel subsystem that decides which path in the path group will be used, based on which paths are operational, which path is least busy, and so on. Path Groups allow I/O to be automatically spread evenly and fairly across a number of physical channel paths without over-subscribing any given I/O path. Recent updates to this capability make it even more robust in delivering I/O traffic.
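As a toy model of that selection logic (the real algorithm lives in the channel subsystem’s hardware and firmware; the names below are mine), picking the least-busy operational path from a path group might look like this:

```python
# Toy model of Path Group path selection -- the real logic is internal
# to the z/OS channel subsystem; this only illustrates the idea of
# spreading I/O across up to 8 paths by picking a least-busy one.
from dataclasses import dataclass

@dataclass
class ChannelPath:
    chpid: int            # Channel Path ID (the mainframe I/O port)
    operational: bool     # whether the path is currently usable
    outstanding_ios: int  # current load on this path

def select_path(path_group: list[ChannelPath]) -> ChannelPath:
    """Pick the least-busy operational path from a path group (max 8)."""
    assert 1 <= len(path_group) <= 8, "a path group holds up to 8 paths"
    candidates = [p for p in path_group if p.operational]
    if not candidates:
        raise RuntimeError("no operational path in the path group")
    return min(candidates, key=lambda p: p.outstanding_ios)

group = [ChannelPath(0x40, True, 3), ChannelPath(0x41, True, 1),
         ChannelPath(0x42, False, 0)]
print(hex(select_path(group).chpid))  # -> 0x41 (operational, least busy)
```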

 

Mainframe Path Group functionality is not a capability that is provided for distributed processors and their data paths. Once again special services must be provided to balance I/O across multipath configurations between servers and storage (and this software is often left out of TCO calculations). Examples of this are EMC® PowerPath® Multipathing and IBM® System Storage® Multipath Subsystem Device Driver (SDD). Both are special purpose software applications installed on a distributed server to control and balance multipath I/O operations. Other vendors also have their own multipathing solutions.

 

Addressing, and particularly device addressing, is very robust on mainframe platforms. If a customer needs to do a great deal of I/O to online or tape files, the mainframe is a customer’s best choice. Mainframes utilize hexadecimal addressing (base 16), and a single mainframe channel can access device addresses from x”0000” to x”FFFF” (i.e. 0 to 65,535 in decimal).

 

A logical partition (LPAR) on a mainframe is allowed by the z/OS operating system to have up to 256 channels to access data. That is potentially 256 channel paths, each running today at 8Gbps, all available concurrently to just one LPAR. A z9, z10, z196, z114, zEC12 or zBC12 mainframe can have up to 60 LPARs running concurrently, and each of them can be using up to 256 channels (channels are often shared between LPARs) to access data. Mainframes can have as many as 1,024 physical channels that can be parceled out to as many LPARs as are running on the mainframe, but no LPAR can utilize more than 256 I/O channels. While a System z® processor complex may be capable of running thousands of applications simultaneously across as many as 60 logical partitions (LPARs), since each System z® can supply only 256 channels (paths) to any given LPAR, channel addresses are a precious commodity and must be used wisely. Even with this channel limitation per LPAR, mainframes can fairly easily access and make use of many thousands of the potential 65,536 addresses (data volumes) that are available per channel and per storage array.

 

Now consider a Parallel Sysplex (multiple mainframes working together) where, for very large enterprises, as many as 20 or 30 mainframes may be participating together in this clustered kind of environment! The scale obviously ramps up until it is just incredible.

 

Many customers agree that mainframes provide the most robust and secure I/O connectivity available. And today’s most modern mainframe channel can run at 8Gbps (an aggregate 1,600MBps full duplex), which is the same speed that is possible when using HBAs on distributed servers. But even with similar performance characteristics, the difference between mainframe I/O and distributed I/O is night and day.

 

Fibre Channel Protocol-oriented, distributed, online I/O (FCP) maps Small Computer System Interface (SCSI) commands into the payload of a frame at the FC-4 protocol layer, which is then sent over fiber cables to disk devices. At its core, SCSI provides an agreed-upon set of standards for physically connecting and transporting data between distributed server initiators (computers) and targets (peripheral devices). The target port is always responsible for making sure frames are received in sequential order, as well as making sure that all frames meet high requirements for data integrity. I/O is accomplished through I/O “exchanges”. Timing of I/O in SCSI environments is rather tolerant. Disk is parceled out in Logical Units (LUNs) that can be size-formatted in a variety of ways.

 

Fibre Channel Protocol-oriented, mainframe, online I/O (FC FICON) is done to “Count Key Data” (CKD) formatted Direct Access Storage Device (DASD) volumes. Input/Output (I/O) is accomplished through I/O “exchanges”. The major difference here is that mainframe FICON carries a standards-based FC-SB2, FC-SB3 or FC-SB4 payload in the frame at the FC-4 protocol layer, while FCP always carries a standards-based SCSI payload, which is incompatible with FICON payloads. The FICON receiving port is always responsible for making sure frames are received in sequential order, as well as making sure that all frames meet high requirements for data integrity. Timing of I/O in mainframe environments is very strict (2-second channel timer). DASD is parceled out in volumes that can be size-formatted in a variety of ways. Volumes are further sub-divided into data sets, and it is data sets that are used by mainframe applications.
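The layering idea – the same frame structure carrying either a SCSI payload or an FC-SB payload at the FC-4 layer – can be pictured with a small sketch (the field names are simplifications of mine, not the actual frame layouts):

```python
# Simplified picture of the FC-4 payload difference -- not the real
# Fibre Channel frame layout. The same frame structure carries either
# a SCSI payload (FCP) or an FC-SB payload (FICON); the two payload
# types are incompatible with each other.
from dataclasses import dataclass

@dataclass
class FCFrame:
    source_port: str
    dest_port: str
    fc4_type: str    # "FCP" (SCSI) or "FC-SB" (FICON)
    payload: bytes

fcp_frame   = FCFrame("server_hba", "scsi_target", "FCP",   b"<SCSI command...>")
ficon_frame = FCFrame("chpid_40",   "dasd_cu",     "FC-SB", b"<channel program...>")
```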

 

On the mainframe there are two modes of FICON I/O operation: Command Mode FICON (standard FICON) and High Performance FICON (zHPF). zHPF does a more effective job of building frames to meet the requirements of the I/O exchange it is transporting than does standard FICON. This results in a dramatic increase in the number of Start I/Os and MBps of data transferred for zHPF compared to Command Mode FICON. But both deliver I/O frames more successfully and robustly than any SCSI-based distributed server. An 8Gb mainframe channel running zHPF has the capability to deliver as many as 92,000 Start I/Os per second. And if a customer is looking at throughput as a metric, 8Gbps mainframe zHPF channels can deliver as much as 1,600MBps of throughput each. In the ultra-extreme and highly unlikely case that all of the 8Gbps mainframe channels were running at full speed, the zEC12’s 320 FICON Express8S channels being used by zHPF would provide 512,000 Megabytes per second of data movement (320 x 1,600MBps = 512,000) – a phenomenal 0.5 Terabytes per second of data throughput.
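The arithmetic behind that peak number is easy to check:

```python
# Back-of-envelope check of the zEC12 peak zHPF throughput claim.
channels = 320                 # FICON Express8S channels on a zEC12
mbps_per_channel = 1600        # full-duplex aggregate per 8Gbps channel
total_mbps = channels * mbps_per_channel
print(total_mbps)              # -> 512000 MBps
print(total_mbps / 1_000_000)  # -> 0.512, i.e. ~0.5 TBps
```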

 

From the mainframe outbound, FICON allows many different commands to be chained into one I/O stream, which is not the case for FCP SCSI I/O; in one operation a mainframe can execute many different I/O commands. And when using Command Mode FICON, a channel path can disconnect fairly easily from its destination port. zHPF, however, is more strict: when using zHPF, a path disconnect can only be done on the last command of a string of commands. Channel End/Device End (CE/DE) status signals the end of a FICON I/O operation.

 

From the storage array outbound, it is the storage control unit that actually chooses a return channel path after an I/O disconnect. This allows each storage vendor to have a different algorithm for sending I/O back to the mainframe. Although an I/O often uses the same path from CHPID to storage and then again from storage to CHPID, a storage adapter busy condition or some other condition might cause the storage Control Unit (CU) to pick a different channel path than the original for the I/O reconnect.

 

With all of the complexity inherent in a data processing complex today, it would seem appropriate that some form of coordination take place across common resources like CPU, I/O and storage so that all of the applications running on the computer systems benefit accordingly. Not too surprisingly, the distributed world is just beginning to develop common, cohesive, dedicated functions to provide this detailed level of resource coordination. Thankfully, the mainframe has had it for decades.

 

One of the strengths of the System z® mainframe and its z/OS operating system has been its mature ability to run multiple workloads (legacy and distributed) at the same time within one operating system image or across multiple images. Such workloads have different, often competing completion and resource requirements. These requirements must be balanced in order to make the best use of the resources of an installation, maintain the highest possible I/O throughput and achieve the best possible system responsiveness. The unique mainframe function that makes this possible is its dynamic workload management which is deployed via two synergistic functions – the Workload Management component of the z/OS operating system and the Unified Resource Manager (URM – sometimes called zManager).

 

With z/OS Workload Management (WLM), a customer defines performance goals and assigns a business importance to each goal. The customer defines the goals for legacy work in business terms, and the System z® decides how much resource, such as channel, CPU or storage, should be given to the work to meet the goal. Workload Management constantly monitors the system and adapts processing to meet the goals. The Unified Resource Manager, introduced in conjunction with the System z196, enables a customer to install, monitor, manage, optimize, diagnose, and service resources and workloads from a single point of control while extending System z® qualities of service across the entire infrastructure, including its distributed processing. It is important to recognize that the URM provides value to heterogeneous workloads running on the Central Electronics Complex (CEC) itself, meaning z/OS and z/Linux workloads. The use of Linux on System z® is growing, and the URM makes deploying a workload on a Linux server running on System z® much easier than ever before. And although it is beyond the scope of this paper to discuss, when the System z is connected to an IBM zEnterprise BladeCenter Extension (zBX), the URM can not only manage all of the System z® z/OS and z/Linux workloads, it can also manage Linux, AIX and Windows applications that are running on the zBX – a level of distributed processor blade center management never possible before.
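WLM policies are defined through ISPF panels rather than code, but the shape of a goal-mode service definition can be sketched as data (the names and values below are hypothetical examples of mine):

```python
# Sketch of the *shape* of a WLM goal-mode policy -- real service
# definitions are built in ISPF panels, not Python. Each service class
# gets a business importance and a performance goal; the system, not
# the administrator, decides how much CPU/channel/storage each
# workload receives in order to meet its goal.
service_classes = [
    {"name": "ONLINE_TRX", "importance": 1,   # 1 = most important
     "goal": {"type": "response_time", "seconds": 0.5, "percentile": 90}},
    {"name": "BATCH_RPT", "importance": 4,
     "goal": {"type": "velocity", "percent": 40}},  # execution velocity
]
```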

 

All of these System z® capabilities have led to easier but very robust storage management for mainframe system administrators. At the same time, storage management has been a rocky road for distributed processor administrators. Some analysts have projected that a non-mainframe storage administrator should be able to manage an average of 30 terabytes of disk storage. In comparison, the typical mainframe storage administrator, using powerful tools, effectively manages well over 100 terabytes of DASD storage. Mainframe environments simply require less manpower to manage.

 

I/O Is Taken So Seriously On The Mainframe That It Is A Specialized Function:

 

As I have already pointed out, many applications running on many LPARs may simultaneously traverse a relatively small number of I/O paths on a mainframe. This means that channel path bandwidth utilization is almost always significantly higher (i.e. more efficiently utilized) on System z® than on a typical distributed systems FCP path. The mainframe therefore must take care to feed the I/O appetite of its applications very carefully – and it does.

 

So another tremendous differentiator is that the mainframe, unlike its distributed server cousins, DOES NOT use its own compute processors to do application I/O!

 

The mainframe can make use of some specialized processors that are designed to enhance performance and hold down mainframe software costs. These special processors are: the IFL, or Integrated Facility for Linux, which is dedicated to Linux OS processing (and optionally used under z/VM); the zAAP, or System z® Application Assist Processor, which is currently limited to running only Java and XML processing; and the zIIP, or System z® Integrated Information Processor, which is dedicated to running specific workloads including DB2, XML, and IPSec.

 

The final specialized processor type is the I/O channel processor, the System Assist Processor (SAP), which is dedicated to handling I/O. Basically, when a mainframe wants to do an I/O, it writes that request to memory, alerts the SAP that the I/O is waiting for it in memory, and then moves on to other compute-oriented work. The SAPs take care of getting the I/O processed and sent down the appropriate channel path, and they do all of the waiting for devices to respond to commands, for the data, and so on. Once an I/O is complete, the SAP communicates that status back to the mainframe, which can then return to processing the application that issued the I/O in the first place.
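The division of labor resembles an asynchronous producer/consumer hand-off, roughly sketched below (the real mechanism is processor hardware and firmware, not threads; this is only an analogy):

```python
# Rough analogy for SAP-based I/O offload. A compute engine queues an
# I/O request in memory, signals the SAP, and returns to compute work;
# the SAP drives the channel, waits on the device, and posts status.
import queue
import threading

io_requests = queue.Queue()

def sap_worker():
    """Stands in for a System Assist Processor: drains queued I/O."""
    while True:
        req = io_requests.get()
        # ...drive the channel path, wait for the device, move data...
        req["done"].set()  # post completion status back to the requester

threading.Thread(target=sap_worker, daemon=True).start()

# The "compute engine": hand off the I/O and keep computing.
req = {"device": "1FA3", "op": "read", "done": threading.Event()}
io_requests.put(req)
# ...other compute-oriented work happens here instead of waiting...
req["done"].wait()  # later, pick up the completed I/O
```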

 

All of these specialized processors combined are why mainframes can get such a tremendous amount of compute work done. All that the mainframe engines do is “compute” work, while the lengthy and time-consuming I/O interactions are handled by the specialized I/O processors. This division of labor, with each processor doing what it does best, is unique to the mainframe!

 

What follows then is that I/O-centric, transaction-related workloads, such as data warehouses, run better on the mainframe than anywhere else. As long as the workload is I/O intensive and not CPU intensive, the mainframe is hands down the best platform for a customer to utilize. On the other hand, that is also why not all workloads will run as well on a mainframe as on a distributed server. A customer must choose these platforms wisely. There are many forums that will distinguish which workloads run best on mainframes and which run best on distributed servers.

 

On a distributed server, when a customer does I/O, they are doing it by utilizing the main CPU on that chassis. Of course, distributed CPUs can be very speedy but it is important to note that in conjunction with the introduction of the System z196, the mainframe now has the fastest-in-the-industry microprocessor and clock speed. All of that notwithstanding, I/O takes time and distributed processors must therefore have their CPU wait until the required I/O is complete before more computations can occur on that chassis. Since all processors wait at the same speed, when dealing with I/O intensive applications the customer using distributed servers can end up with marginal processor utilization (often distributed servers average only 20-30% busy). These customers will also receive much less than the full value of their fast server processor particularly during I/O operations. Obviously, per the discussion above, this is not true on mainframes.

 

There is a nice white paper titled “Why Your Organization Should Use Workload Optimized Servers”, written by Clabby Analytics (www.clabbyanalytics.com).

 

According to this paper, “Clabby Analytics has obtained benchmark information on System z® (mainframes), Power Systems® (POWER-based servers), and System x® (x86-based servers) from IBM’s software group project office located in Poughkeepsie, New York. This data compares how each environment handles workloads that involve heavy I/O, heavy data-intensive processing, and light workload processing. In each case, System z®, Power Systems®, and System x® servers were asked to handle the same workload and were given identical service level requirements.”

 

In the paper’s example, the cost of processing an online Linux banking application that completes 22 transactions per second while processing 1 MB of I/O per transaction varied significantly when comparing Mainframes to Power Systems® and to x86 servers.

 

It can be extrapolated from that specific study that a single System z196 32-way mainframe could run 240 of these Linux workloads when using 32 Integrated Facility for Linux (IFL) features. More powerful current mainframes can, of course, do more.

 

Compare that to a distributed-systems Intel® Xeon® 8-core blade, which was capable of running only 10 virtual machines handling the same application at the same service level. So it would require 24 Xeon® 8-core blades (and associated enclosures, networking components, and software) to handle the same application at the same service level as a mainframe.

 

And for the final comparison, an 8-way Power Systems® blade can run about 15 virtual machines running the same application at the same service level. This means that it would take 16 Power Systems® 8-core blades (and associated enclosures, networking components, and software) to handle the same workload as a mainframe. And keep in mind that the mainframe would be maximizing the value of its processors; would be able to utilize NPIV in a switched-FICON environment to drive high bandwidth utilization per channel path; would be much easier to manage; and would require less power, less cooling, and less floor space.
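The blade counts in these comparisons follow directly from the per-blade virtual machine densities:

```python
# Blade-count arithmetic behind the workload comparison above.
workloads = 240               # Linux workloads on one 32-IFL z196
xeon_vms_per_blade = 10       # VMs per Xeon 8-core blade at the same SLA
power_vms_per_blade = 15      # VMs per Power 8-core blade at the same SLA
print(workloads // xeon_vms_per_blade)   # -> 24 Xeon blades needed
print(workloads // power_vms_per_blade)  # -> 16 Power blades needed
```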

 

Let’s Discuss Why Switched-FICON I/O Provides The Best Value For Data Traffic:

 

Brocade Communications Systems, Inc. has a white paper that describes the benefits of doing mainframe I/O through Fibre Channel switching devices. It can be found at the following website: http://www.brocade.com/downloads/documents/white_papers/why-ficon-wp.pdf

 

What I want to do here is simply provide a few of the many, many reasons why it is prudent to deploy a FICON I/O infrastructure by utilizing switching devices rather than by direct attaching mainframe channels to storage ports.

 

Both Storage Area Networking (SAN with FC SCSI) and FICON Fabrics (FC FC-SB2/3/4) can and should make use of FC switching devices. But with the mainframe there are several capabilities that do not apply to SAN implementations and these are the capabilities that I want to mention here.

 

Since the delivery of the System z9 some years ago, IBM has been modifying the mainframe I/O subsystem to provide users with additional functionality. Some of this functionality can only be utilized when switched-FICON fabrics are deployed. In fact, IBM has announced a series of technology enhancements that require the use of switched-FICON infrastructures. These include: NPIV support for z Linux SCSI I/O; Dynamic Channel Path Management (DCM) for FICON; and z/OS FICON Discovery and Auto-Configuration (zDAC).

 

Node_Port ID Virtualization (NPIV), as discussed above, is an excellent special service available for Linux on the mainframe. NPIV allows many FCP I/O users to interleave their I/O across a single physical but virtualized channel path, which minimizes the total number of channel paths. For example, if a System z® is running 300 Linux guests, then perhaps 20 channels can be virtualized with NPIV such that each set of 15 Linux guests makes use of one of the 20 virtualized channel paths, driving each individual physical channel’s utilization toward peak performance. NPIV is only available when using switched-FICON environments.
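The consolidation in that example is just an even spread of guests over virtualized channels, as this small sketch shows (guest and channel names are illustrative):

```python
# Illustrative spread of Linux guests across NPIV-virtualized channels:
# 300 guests interleaved over 20 physical channels is 15 guests per
# channel, each guest using its own virtual N_Port ID on that channel.
guests, channels = 300, 20
assignment = {ch: [] for ch in range(channels)}
for g in range(guests):
    assignment[g % channels].append(f"linux_guest_{g:03d}")
print(len(assignment[0]))  # -> 15 guests sharing channel 0
```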

 

Dynamic Channel Path Management (DCM) for FICON provides the ability to dynamically add or remove channel resources at the Workload Manager’s discretion. This allows Workload Manager’s Goal Mode to effectively utilize mainframe channels to make sure that applications are completed on time. Use of DCM is available only in switched-FICON environments.

 

z/OS FICON Discovery and Auto-Configuration (zDAC) provides a simplified, discovery-oriented method for configuring new and/or changed FICON-connected DASD and tape configurations. zDAC is only available in switched-FICON environments, on z/OS V1.12 and later.

 

As IBM continues to introduce innovation onto the mainframe, customers will very likely see more and more I/O capabilities that are tied to the use of switched-FICON infrastructures. And the only way that customers will reap the benefits of these new functions is to deploy switched-FICON fabrics.

 

 

The Latest Generation of Mainframe Is Simply The Swiss Army Knife of Computers:

 

The latest System z® Business-class mainframe (zBC12) is a single-frame design about the size of a refrigerator; uses less energy than an American clothes dryer; heterogeneously runs legacy as well as UNIX and Linux distributed system applications; effectively manages legacy and distributed systems on its own CEC as well as AIX, Linux and Windows on attached zBXs; can handle about 30 Linux servers per processor core and about 300 Linux servers in total; can deploy a new virtual Linux server in just a few minutes and provision each of its Linux servers at a cost of about US$500 per year; yet, surprisingly, an entry-level zBC12 sells at an economical cost of around US$75,000. It is also important to note that the zBC12 uses a PCIe (Peripheral Component Interconnect Express) I/O infrastructure, which is similar to that used in UNIX and other distributed systems.

 

As for the latest Enterprise-class mainframe (zEC12), well, it can do everything the z196 could do and much more. The zEC12 is a two-frame design that uses the world’s fastest microprocessor, which clocks in at a blazing 5.5 GHz. According to IBM, a zEC12 mainframe is capable of executing more than 50 billion instructions per second; each of its processors uses less energy than an old-style 40-watt light bulb, so it provides up to 85% lower energy costs (when considering both power and cooling) than distributed systems; it can support up to 47 distributed servers (like Linux) on a single core and up to thousands on a single system; and it also can make use of the PCIe I/O infrastructure.

 

It is obvious to me that IBM is laser-focused on sustaining the mainframe’s momentum in providing computing to both large and medium market segments. All of the factors mentioned above will help almost any customer achieve better systems management, faster deployment and quicker response time. I also suspect that many industry observers, who once saw the mainframe as a fading dinosaur, must now concede that this new “Big Iron” is going to stick around and that IBM is working hard to keep it relevant.

 

In Summary:

 

IBM’s System z® mainframe draws on decades of innovation and collaboration with advanced customers in all segments of computing – customers who run the most complex computer operations on the planet. Executives all over the world are finding out that the System z® is simply the most powerful tool available to them to reduce cost and complexity and improve security and reliability in their enterprises. A telling point in that argument is the mainframe’s sustained adoption, over the past several decades, for solving the most complex business, governmental and academic challenges around the world.

 

When you go trawling around the internet you can do some searches on the amount of data hosted on mainframes. Dozens of entries will proclaim that “more than 70% of the world’s business-critical data resides on mainframes.” Since that data has to be processed, maybe some of the points in this article have made it clear why so much of the world’s business-critical data IS hosted and processed on mainframes. I hope so.
