
The Much-Maligned Mainframe

Are companies mistakenly overlooking what might still be one of their biggest assets?

8/21/2013 8:56:46 AM

In their quest to build the right IT platform, many companies have determined their mainframe is no longer fashionable or capable of supporting the business on its own. Thus, off they go adopting a variety of new tools they believe are much better suited to meeting the enterprise’s current needs.

Unfortunately, in doing so, they often end up creating a monster: a tangled mess of technologies and best-of-breed packages that don’t work together and threaten to overwhelm the organization with its cost and complexity.

How has this happened? The root cause is the pursuit of change under the misguided perception that moving away from the existing mainframe will necessarily result in a better, lower-cost platform—which it often doesn’t. Many companies become mesmerized by the “latest and greatest” tools, believing those tools will deliver greater capabilities. In reality, they overlook the fact that their mainframe may still be more than up to the task, and they end up simply layering more complexity onto their architecture. Making matters worse, when they acquire these tools, companies often fail to think through the full cost of adoption beyond the purchase price—and are later surprised by the ripple effects the tools have on their IT cost structure.

In other words, many companies automatically assume their mainframes are passé and incapable, and that change will put them in a better place. But they fail to consider four key factors that may create a situation that’s actually worse than if they hadn’t done anything at all.

Network Performance
When pursuing cost reduction, the first thing a company might attempt is to move processing off the mainframe and the database to a dedicated database server. On paper, this might look like a simple line connecting the “new” world to the “old.” But it’s much more complicated than that.

Companies often underestimate the impact that moving large amounts of data across a distributed or cloud-based processing environment will have on network latency—and, consequently, on the ability of people who must access and use that data to do their jobs effectively.
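To see why, consider a back-of-envelope calculation. The sketch below (in Python, with purely illustrative round-trip times and call counts I have assumed, not measurements from any real system) shows how per-call latency compounds for a “chatty” application once the database moves across a network:

```python
# How per-call latency compounds when the database moves off-box.
# All figures are illustrative assumptions, not measurements.

SQL_CALLS_PER_TRANSACTION = 200  # a chatty screen may issue hundreds of calls

round_trip_ms = {
    "same box (mainframe)": 0.05,  # cross-memory call, no network hop
    "same data center LAN": 0.5,   # app server to database server
    "remote cloud region":  20.0,  # app on premises, database in the cloud
}

for placement, rtt_ms in round_trip_ms.items():
    total_ms = SQL_CALLS_PER_TRANSACTION * rtt_ms
    print(f"{placement:>22}: {total_ms:8.1f} ms per transaction")
```

Each individual round trip looks negligible, but a transaction that makes hundreds of them slides from roughly 10 milliseconds to several seconds once the database sits across a wide-area network—and that is before any encryption overhead is added.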

They also often don’t fully consider the security implications. As a company expands the application ecosystem across a physical network to enable ubiquitous processing, it also needs to have ubiquitous security. At a minimum, that means every time processing shifts from one place to another, the data needs to be encrypted (which also could further degrade network performance).

One company found this out the hard way. It decided to move a critical application from its mainframe to a distributed environment—putting the database on one server and the application on another. The resulting network performance was so bad it was unusable, and the company was forced to abandon the effort and return the application and database to the mainframe.

Operations and Management
As a company glues together more, and more distinct, technologies to create a “best of breed” computing solution, operating and managing these new physical and virtual platforms becomes exponentially more difficult and complex—especially if the application ecosystem extends to the cloud. And it becomes all but impossible for the company to create a more agile IT infrastructure and organization that can flex and change with the needs of the business.

For instance, one company was enticed by the potential new capabilities it could acquire by taking an application off its mainframe and rewriting it in Java. However, the company soon realized that it didn’t have the Java expertise in house and had to add those skills to its IT organization. The company ended up needing 50 percent more people to run the application in Java than on the mainframe.

Development Platforms
New development platforms can often help companies reduce their IT operating costs and increase the productivity of their IT staff. But when adopting a new development platform, many companies don’t stop to consider what makes the new platform lower-cost and more productive.

In many cases, lower costs are a function of being able to run on Linux regardless of the chip architecture, while productivity is typically a byproduct of a modern integrated development environment (IDE) and a flexible framework that reduces the amount of code that must be written to create applications. In both instances, as a company adds a new programming language to its repertoire, it must also add more—and more specialized—programmers. Unfortunately, greater specialization means less ability to share programming expertise across a range of applications and, thus, a company will end up needing more people to do the same amount of work.

Business Continuity
As companies extend their processing off the mainframe and into a model that includes distributed servers and external clouds, the amount of physical infrastructure capacity needed instantly doubles to handle required availability and ensure business continuity in the event of a failure. Unfortunately, that massive footprint of excess capacity must be paid for whether a company uses it or not—unlike in a mainframe environment, where a company typically doesn’t pay for the hardware it’s not using in its backup data center.
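A simple model shows how quickly this adds up. The dollar figures below are hypothetical assumptions, not vendor pricing; the structural point is that a distributed disaster-recovery mirror is paid for in full, while mainframe capacity-backup arrangements typically charge only a small standby fraction until the backup capacity is activated:

```python
# Standby-capacity cost comparison.
# All dollar figures are hypothetical assumptions, not vendor pricing.

# Distributed estate: the DR site mirrors production one-for-one,
# and every server is bought and powered whether it is used or not.
servers = 300
cost_per_server_year = 8_000
distributed_dr_cost = servers * cost_per_server_year

# Mainframe: capacity-backup models typically charge a small standby
# fraction until the backup engines are actually turned on.
mainframe_run_cost_year = 2_400_000
standby_fraction = 0.10
mainframe_dr_cost = mainframe_run_cost_year * standby_fraction

print(f"distributed DR mirror: ${distributed_dr_cost:,.0f} per year")
print(f"mainframe standby DR:  ${mainframe_dr_cost:,.0f} per year")
```

Under these assumed numbers the distributed mirror costs ten times as much per year as the mainframe standby arrangement, even before the operational effort of keeping hundreds of servers in sync is counted.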

Beyond these significant cost implications is the complexity that comes with having to coordinate the applications and data on hundreds of servers during a failure—as opposed to just one mainframe.

Consider the Whole Picture
So what should you do to avoid these scenarios? Four simple steps can help:

1. Truly understand what you already have in place and what you want to achieve. Many companies make platform decisions with incomplete information on their existing capabilities, or they assume that the change to a new platform will solve the perceived problems. In many cases, a company can actually upgrade its mainframe and get the same improvement for less money while avoiding additional complexity.

2. Calculate the true baseline costs of a planned extension of your platform. Many organizations think mainframes cost too much because they’re the largest single line item on the budget. But they don’t consider the number of applications or amount of business that single box supports. Typically, when a mainframe’s costs are normalized against the book of business, they are lower than most people expect. Comparing the mainframe’s true cost with the cost of the new tools lets a company build a more accurate business case for its actions (see the sketch after this list).

3. Be clear about what your existing technologies and IT staff do well. Many companies don’t realize the mainframe is the only platform within the enterprise that currently provides the 24x7x365 availability required for mission-critical applications. And they often fail to consider that they already have good people with sophisticated mainframe skills in place to support the existing platform. If a company already has capable technology and people with the right, relevant skills, why shouldn’t it continue to leverage them?

4. Get educated on the “art of the possible” on a modern mainframe. Rather than embarking on a costly and potentially risky adoption of new tools that might not be necessary or an improvement over what they already have, companies might be surprised to learn what their mainframes can do, and how they can enhance their mainframe’s capabilities and performance via a low-risk modernization initiative. For instance, modern tools such as IBM Rational Developer for System z revitalize legacy technologies by increasing developer productivity and helping programmers visualize the flow of the code (and thus become more self-sufficient).
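To make step 2 concrete, here is a minimal sketch of the normalization it describes. The platform names, run costs and transaction volumes are hypothetical placeholders; a real baseline would substitute figures from the organization’s own books:

```python
# Step 2: normalize each platform's cost against the work it carries.
# All numbers below are hypothetical placeholders.

platforms = {
    # name: (annual run cost in dollars, transactions carried per year)
    "mainframe":          (4_000_000, 2_000_000_000),
    "distributed estate": (1_500_000,   150_000_000),
}

for name, (annual_cost, tx_per_year) in platforms.items():
    cost_per_million_tx = annual_cost / (tx_per_year / 1_000_000)
    print(f"{name:>18}: ${annual_cost:>9,} total, "
          f"${cost_per_million_tx:,.0f} per million transactions")
```

On the budget line the mainframe is the biggest number, yet per unit of work it comes out far cheaper in this illustration—which is exactly why normalizing against the book of business changes the conversation.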

There’s no question companies can benefit from the adoption of new tools that can help the business run more effectively and efficiently and drive growth. Yet change for change’s sake is never a good strategy when it comes to building or extending one’s IT platform. When tempted to embrace new technologies, you should carefully think through the full operational, management and cost implications. You also should seriously question whether you can get the improvement you’re looking for from what you already have in house: the much-maligned mainframe that, although born in another era, still has much to offer today’s modern business.

Mark Neft is a managing director in Accenture’s Enterprise Architect and Application Strategy group with more than 30 years of IT experience. He has extensive experience across many industries and technologies and has achieved the highest technical certification level within Accenture: Master Technical Architect and Certified Solution Architect. Over the past 10 years, he has focused on portfolio modernization, work for which he has received several patents. Neft is a regular speaker at both SHARE and INNOVATE conferences.
