
Don’t Believe the Myths

The facts about the mainframe outweigh the common misconceptions

7/17/2013 1:01:01 AM | Editor’s note: This article is based on content initially published in a series of SHARE President’s Corner blog posts.

Here at SHARE, we believe the mainframe is the most-secure, lowest-cost and best-performing mixed-workload computing platform on the planet. SHARE continues to serve the mainframe community, helping to show our members the best practices for managing the mainframe environment and optimizing the value that the mainframe delivers. It’s the core of the computing environment for many companies.

Still, misconceptions and myths persist—like the idea that few mainframes remain in use today. The reality is that SHARE represents more than 20,000 individuals from nearly 2,000 companies, including state and federal government agencies, universities, retailers, energy and manufacturing companies, banks and insurance companies. More specifically:

• Ninety-six of the world’s top 100 banks, 23 of the 25 top U.S. retailers, and nine of the world’s 10 largest insurance companies run System z.
• Seventy-one percent of global Fortune 500 companies are System z clients.
• Nine out of the top 10 global life and health insurance providers process their high-volume transactions on a System z mainframe.
• Mainframes process roughly 30 billion business transactions per day, including most major credit-card transactions, stock trades and money transfers, as well as manufacturing processes and ERP systems.

That doesn’t sound like a technology that’s no longer in use, or going away anytime soon. Other common myths about the mainframe include:

1. They’re old.
2. They don’t run modern applications.
3. Mainframes are expensive.
4. Mainframe skills aren’t available, or you need more people to manage them.

Mainframes Are Old?
The mainframe celebrates its 50th birthday in 2014, but there are generational differences between the machine introduced in 1964 and today’s mainframe. The automobile is more than 100 years old, yet no one suggests that automobiles are an old or outdated technology.

Are the cars of today different from the cars of 1964? Absolutely. Likewise, today’s mainframe is faster, has more capacity, and is more reliable and energy efficient than the mainframes of the 1960s, ’70s, ’80s, or even those delivered just three years ago, in 2010.

The modern mainframe, known as the zEnterprise System, delivered in 2010, improved single-system image performance by 60 percent while staying within the same energy envelope as previous generations. And the zEnterprise EC12, which shipped in 2012, has up to 50 percent more total system capacity, as well as availability and security enhancements. It uses 5.5 GHz hexa-core chips—hardly old technology—and is scalable to 120 cores with 3 TB of memory. It is clearly larger (more capacity) and faster than anything available in the ’60s, with a smaller physical footprint and better energy-consumption characteristics.

IBM has a corporate directive for every generation of mainframe: each successive model must be more reliable than the previous one. Incremental and breakthrough improvements have been made over 20 generations of mainframes. Fault tolerance, self-healing capabilities, and concurrent maintainability are characteristics of the mainframe that are lacking in many other systems. The integration of mainframe hardware, firmware and the OS enables the highest reliability, availability and serviceability capabilities in the industry.

Mainframes Don’t Run Modern Applications?
Mainframes have been running Linux workloads since 2000, and those workloads are growing. From IBM’s 2012 Annual Report: “The increase in MIPS (i.e. capacity) was driven by the new mainframe shipments, including specialty engines, which increased 44 percent year over year driven by Linux workloads.”

The mainframe also has a specialty processor specifically intended to run Java workloads. Consider Hoplon Infotainment, which runs its TaikoDom game hosted on System z.

You say green screens are ugly? There are graphical interfaces, and even iPhone and Android apps, that put a pretty face on the green screens for those who use business applications. More and more, interfaces the general public is familiar and comfortable with are being used, even in business contexts, to make access to the mainframe easier and more transparent. (How many people access a mainframe on a regular basis today and don’t know it? Most of them!)

Those who manage the mainframe often prefer the green screens. These are incredibly fast interfaces that can deliver sub-second response time. When is the last time you clicked your mouse and got sub-second response from a Java application?

What about cloud? The cloud is actually an online computer environment consisting of components (including hardware, networks, storage, services and interfaces) in a virtualized environment that can deliver online services (including data, infrastructure, storage and processes) just in time or based on user demand. By this definition of cloud computing, the System z platform has been an internalized cloud for more than 40 years!

Starting in 2007, IBM embarked on its own server-consolidation project called “Project Big Green.” The company consolidated 3,900 servers onto 16 mainframes, decreasing energy and floor space by more than 80 percent. The electrical power bill went from $600/day to $32/day, and required floor space dropped from 10,000 to 400 square feet. Cooling costs for those mainframes were also lower than those of distributed servers handling a comparable load. In addition, those mainframes required 80 percent less administration labor, dropping from more than 25 workers to fewer than five.
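The savings claimed above can be sanity-checked with quick arithmetic. The daily cost and floor-space figures come from the article itself; the percentage reductions are derived from them:

```python
# Project Big Green figures quoted in the article.
power_before, power_after = 600, 32        # electrical cost, USD per day
space_before, space_after = 10_000, 400    # floor space, square feet

def pct_reduction(before, after):
    """Percentage decrease from before to after."""
    return 100 * (before - after) / before

print(f"Power reduction: {pct_reduction(power_before, power_after):.0f}%")
print(f"Floor-space reduction: {pct_reduction(space_before, space_after):.0f}%")
```

Both reductions work out to roughly 95-96 percent, comfortably consistent with the article’s “more than 80 percent” claim.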

Mainframes Are Expensive?
Actual costs depend on what you’re looking at. In terms of hardware acquisition costs, certainly, a single mainframe costs more than a single server or even several servers. But you would need many more individual servers to match the computing capability of a mainframe. Add to that the fact that software and labor costs for servers grow linearly: the more servers you add, the more software licenses and systems administrators you need. And yet, the mainframe delivers higher utilization, lower overheads and the lowest total cost-per-user of any platform. When all cost factors are considered fairly, the mainframe is usually the lowest-cost alternative.
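The linear-growth argument can be sketched as a toy cost model. Every number below is a hypothetical placeholder, not real pricing; the point is only the shape of the curves: server-farm costs scale with the server count, while the mainframe’s cost is dominated by a large fixed acquisition price plus a small fixed staff.

```python
def server_farm_cost(n_servers, hw=5_000, license_per_server=2_000,
                     admins_per_10=1, admin_salary=80_000):
    """Total cost of n servers; software and labor scale linearly with count."""
    labor = (n_servers / 10) * admins_per_10 * admin_salary
    return n_servers * (hw + license_per_server) + labor

def mainframe_cost(fixed=1_000_000, admins=3, admin_salary=80_000):
    """Mainframe: large fixed acquisition cost, small fixed staff."""
    return fixed + admins * admin_salary

# Find the server count at which the mainframe becomes the cheaper option.
n = 1
while server_farm_cost(n) < mainframe_cost():
    n += 1
print(f"Crossover at roughly {n} servers")
```

With these made-up inputs the crossover lands in the low dozens of servers; the real crossover point depends entirely on actual pricing, but the linear-versus-fixed structure is what the article is describing.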

Often when considering the cost of the mainframe, people only look at the initial hardware purchase and overlook the ongoing maintenance costs. With 100 servers, you have 100 times more chances something will break. So you need an army that has to be ready at any time to fix hardware. Each of those servers has an OS on it, and all of them need patches, upgrades and applications deployed to them on a regular basis. So, you need another army for that. Then your applications are spread all over the place, so when the software fails or gets overloaded, it takes an army to monitor the applications and locate the problem. Servers are cheap to buy, but those savings are eaten away by all of the people required to run and monitor them.

Don’t forget electrical and air-conditioning costs also increase when you add servers. Then, you need to make sure that you count ALL the servers.

Mainframe Skills Aren’t Available or You Need More People?
As we’ve already seen, it takes fewer people to manage a mainframe than a set of servers delivering comparable capability. Do you need specialized skills to manage a mainframe? It depends. If you’re managing Linux on System z, you’ll find that Linux is Linux regardless of platform. So if you can manage Linux on Intel, you can manage Linux on the mainframe. This means students coming out of universities who know Linux can, with very little additional training, manage a Linux on System z environment.

In addition, IBM has been investing in increasing the available skills. The IBM System z Academic Initiative works to ensure that a System z and z/OS skills shortage does not develop. Since 2004, the program has worked with more than 1,000 schools to educate more than 50,000 students worldwide! Many people in the mainframe community are using the System z Academic Initiative to assist and enable schools to teach mainframe skills.

Then there is SHARE’s zNextGen community that connects more than 900 young mainframe professionals from more than 24 countries. And don’t forget the two annual SHARE conferences and year-round webcasts, which offer hundreds of hours of mainframe skills training and numerous opportunities for peer networking. The next event, SHARE Boston, is scheduled Aug. 11-16 at the Hynes Convention Center.

There’s plenty of access to the skills required to manage a mainframe and scores of experts who are happy to share their knowledge and experience. Being well versed in mainframe technologies is a pretty good career choice.

MIPS Don’t Lie
Are companies running from the mainframe? Certainly a few are contemplating or attempting to migrate off the platform. But the work being done by the platform is increasing, not decreasing. Looking at IBM’s Annual Reports: 2010 saw a 22 percent increase in MIPS shipped over the previous year, 2011 had a 16 percent increase, and 2012 saw 19 percent growth. Clearly, the truth about mainframes is out there—especially for those willing to look beyond the myths.
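Those year-over-year percentages compound. A quick calculation shows that annual MIPS shipments grew by roughly two-thirds over the three years:

```python
# Year-over-year MIPS shipment growth rates cited from IBM's Annual Reports.
growth = [0.22, 0.16, 0.19]  # 2010, 2011, 2012

total = 1.0
for g in growth:
    total *= 1 + g  # compound each year's growth

print(f"Cumulative MIPS growth, 2009-2012: {100 * (total - 1):.0f}%")
```

In other words, IBM shipped about 68 percent more MIPS in 2012 than in 2009.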

Janet L. Sun is the immediate past president of SHARE Inc.
