
A Contained Project

Docker deploys Linux applications easily on z Systems

2/1/2017

Having spent most of my life in Michigan, my family and I grew up on lakes. Both of my kids, Zach and Cayla, learned to waterski and tube at a young age. The one thing that was a given at the end of the day was the return to the dock. Docks are safe havens for boaters, offering all of the amenities that are needed. They tend to be self-contained, offering food, gas, facilities and a place to tie up and secure the boat. When you leave the dock, you can be assured that all is safe and well, and you don't have to be concerned with each individual service; it is all there, provided in one place.

Docker technology follows much the same way of thinking. A French platform-as-a-service company, dotCloud, developed Docker as an internal project; it represents an evolution of dotCloud's proprietary technology, built on other open-source projects. Docker was originally released in March 2013 and is written in the Go programming language, and it's currently one of the most starred projects on GitHub. Simply put, Docker is an open-source project that automates the deployment of Linux applications inside what are known as software containers.

Deploying Linux Applications

A software container can be thought of as software placed in a complete filesystem containing everything needed for execution. This includes code, a runtime, system libraries and system tools: anything you could install on a server. The major benefit is that the application is guaranteed to execute the same way no matter the environment.
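
To make that concrete, here is a minimal sketch of a Dockerfile. The application, file names and base image are hypothetical, but the pattern is the same: the image carries the code, its runtime and the supporting libraries together.

    # Hypothetical Dockerfile: package an application with its runtime and libraries
    FROM ubuntu:16.04                                    # base filesystem, system libraries and tools
    RUN apt-get update && apt-get install -y python3     # the runtime the application needs
    COPY app.py /opt/app/app.py                          # the application code itself
    CMD ["python3", "/opt/app/app.py"]                   # how the container starts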

Another major benefit is that Docker avoids the cycle of starting and maintaining VMs, because it provides an additional layer of abstraction and automation of OS-level virtualization on Linux. I visualize this as a box full of business applications, OS applications and anything needed to run and maintain the environment. Like I said, think of the dock at the marina: a self-contained environment.

To further explain the technology: as actions are completed against a Docker base image, union file system layers are created and recorded so that each independent layer fully describes how to reproduce that action. In this way, a Docker image is very lightweight, as only the changed layers need to be propagated. This is much easier than maintaining a full VM image.
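
A rough sketch of how this plays out at build time, assuming the kind of Dockerfile shown above (the image tag is hypothetical):

    # Each instruction in the Dockerfile produces its own layer
    docker build -t myapp .
    # List the layers and the instruction that created each one
    docker history myapp
    # Change only the application code and rebuild: the base and runtime layers are
    # reused from cache, and only the small changed layer has to be pushed or pulled.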

How It Works

As stated by the industry analyst firm 451 Research, “Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps enable flexibility and portability on where the application can run, whether on premise, public cloud, private cloud, bare metal, etc.”

Docker uses a very high-level API to provide lightweight containers in which processes run in complete isolation. This enables flexibility in almost any environment. As we know, well-designed, self-contained APIs make it much easier to develop programs; they provide the building blocks a programmer can use during code construction.
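
For example, the Docker Engine exposes this API over a local socket, and the docker CLI is a thin wrapper around it. A small sketch (exact paths and API versions vary by release):

    # Ask the Engine's REST API for the running containers (the same data "docker ps" shows)
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json
    # Start an isolated container through the CLI, which calls the same API underneath
    docker run -d --name web nginx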

The Linux kernel provides a wealth of facilities, and a Docker container, unlike a VM, does not need or include a separate OS. Instead, it uses the kernel's functionality for resource isolation of CPU, I/O, network and memory. Docker supplies its own libcontainer library to access these kernel facilities directly.
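
A sketch of what that isolation looks like in practice; the limits and image name are arbitrary, and the exact flags depend on the Docker release:

    # Cap a hypothetical container at one CPU and 512 MB of memory;
    # the kernel enforces the limits through cgroups, with no guest OS involved
    docker run -d --name capped --cpus="1.0" --memory="512m" myapp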

Because containers are extremely lightweight, a single VM or server can execute many containers at the same time. A Docker usage analysis completed in 2016 found that, on average, five containers run per host, and many organizations run 10 or more at any given time.

Using Docker containers, services can be controlled, system resources can be isolated and independent processes can be provisioned. Each container gets close to a complete private view of the OS, with its own process ID space, network stack and file system structure. Think of it as a system within a system.
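
This is easy to see from inside a running container; a quick sketch, assuming the base image ships the ps utility:

    # Inside the container, the process sees only its own process tree
    docker run --rm ubuntu:16.04 ps -ax
    # The listing shows only the container's own processes; the host's processes are invisible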

Using Docker to create and manage containers simplifies highly distributed systems. It allows many applications and processes to run independently on a single physical machine or across many VM images, and deployment is simplified as resources become available or as nodes are needed. This can be thought of as a platform-as-a-service style of deployment and scaling. Another task that becomes easier with Docker is workload queuing.
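
Swarm mode, built into the Docker engine since version 1.12, gives one sketch of this style of scaling (the service and image names are hypothetical):

    # Run a hypothetical worker image as a service with three replicas across the available nodes
    docker service create --name worker --replicas 3 myapp
    # Scale out as resources become available or demand grows
    docker service scale worker=10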

The Role of z Systems

What role does IBM z Systems play in a Docker implementation? As we know, Docker automates the deployment of applications inside software containers by providing that additional layer of abstraction and automation of OS-level virtualization on Linux. When these containers are spun up in an IBM z Systems environment, the performance is amplified, as is the container density per processor. Running on an IBM z13, you can put thousands of containers within the z Systems environment in an extremely safe and secure manner.
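
Linux on z Systems uses the s390x architecture, so containers there run from s390x-built images. A brief sketch, with the image name as an assumption about what is published on Docker Hub:

    # On a Linux on z host, pull an s390x image and run it like any other container
    docker pull s390x/ubuntu
    docker run --rm s390x/ubuntu uname -m    # prints s390x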

Docker shortens the development effort required to construct applications and move them between platforms. Using containers, services and solutions are decomposed into what are known as microservices. Breaking big solutions into smaller pieces, and then being able to replace and move those pieces, makes upgrades and the development of new overall solutions much simpler; this in itself is a DevOps best practice. So the fact that Docker can run on Linux on z Systems brings a whole new dimension to enterprise computing.
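
As a hypothetical illustration (the image and container names are invented), swapping out one microservice without touching the rest of the solution is just a few commands:

    # Pull the new version of a single microservice
    docker pull myorg/orders-service:2.0
    # Retire the old container and start the replacement in its place
    docker stop orders && docker rm orders
    docker run -d --name orders myorg/orders-service:2.0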

The availability of Docker for z Systems complements the capabilities already available on IBM z, allowing key platform attributes to carry over to newly developed Dockerized applications targeted for z Systems. These attributes are vital in large enterprise environments, and several fit perfectly with Docker:

  • Application Portability: Docker allows applications executing on distributed platforms to be moved directly to z Systems to take advantage of Linux on z
  • Virtualization: z Systems are truly king with virtualization capabilities
  • Workload Consolidation: IBM z is well known as a consolidation platform, thanks to the virtualization capability inherent to z
  • Security: z Systems LPAR technology carries the highest non-military certification for virtualization isolation. The z/VM hypervisor likewise provides a very high level of isolation, as does KVM on z Systems. This level of security and workload isolation is a critical concern for large enterprise clients today.

A Perfect Fit

Docker is a tremendous technology: an open-source approach that automates the deployment of Linux applications inside software containers. The ability to easily transport Dockerized containers is a huge advantage for operations and DevOps. z Systems will continue to play a huge role with this technology and is a perfect fit for large-scale, enterprise-grade solutions.


Patrick Stanard is a z Systems Architect Manager for IBM. He's a 34-year professional in the industry, with roles spanning systems programmer, developer, manager, adjunct faculty member and director of operations. He has a Bachelor of Science in CIS from Saginaw Valley State University and an MBA from Michigan State University.
