Cloud #3 – Containers


Introduction

In the first installment in this series, we covered the IT history leading up to what we now call the “Cloud”. In the second installment, we discussed the different usages of the term “Cloud”. These usages included IaaS, PaaS, and SaaS, as well as a new software stack (e.g. Containers). IaaS can be considered as Servers being provided in the Cloud. PaaS can be considered as Servers plus infrastructure software being provided in the Cloud. SaaS can be considered as Applications, with all of their supporting servers and software, being provided in the Cloud. In this installment, we will consider the “Cloud” from the point of view of the “Cloud Native” software stack. Cloud Native refers to Applications designed from the ground up to be provisioned and deployed within a “containerized” environment.


Virtualization

Containers are another step in the long history of computer virtualization. Virtualization began with “virtualizing” the CPU of a computer. This was not initially termed virtualization, but was called multi-programming or multi-tasking and was a service provided by the Operating System (OS). Whatever its name, it referred to sharing a single CPU across multiple software instances. Only one software instance (process or program), of course, could use the single CPU at any given time.

Virtualization was next directed towards RAM (Random Access Memory), with a given amount of physical RAM shared across multiple processes. Each process perceived that it had all of the available RAM. This Operating System approach was termed “Virtual Memory” and was implemented by swapping RAM and Disk storage transparently to the program. Both CPU and RAM virtualization date back to the 1960s.

Both Tape and Disk drives also became “virtualized”, although this was after a considerable gap in time. Generation 1 and 2 (see the first Installment in this series) disk drives had a hardware addressing scheme that reflected the physical geometry of the drives. The Cylinder, Head, and Sector (CHS) addressing scheme described the physical location of a block of data on the drive. With the release of the 3390 Disk Drive family in 1989, the logical geometry (CHS) of the preceding 3380 family was retained. This meant that the CHS addresses used no longer matched the physical reality of the 3390 drives. In effect, the 3380 Disk Drive architecture had become virtualized. Finally, virtualized tape was first introduced by IBM in 1997 (the Virtual Tape Server, or VTS).

The history of hardware virtualization, as seen through IBM Mainframe Operating Systems, is shown below:

  • TOS/360 (Tape Operating System; for systems with no disk drives)
  • DOS/360 (Disk Operating System)
  • OS/MFT (Multiprogramming Fixed Tasks)  –  First IBM “Virtual” CPU  –  1966
  • OS/MVT (Multiprogramming Variable Tasks)
  • OS/SVS (Single Virtual Storage)  –  First IBM Virtual Memory  –  1972
  • OS/MVS (Multiple Virtual Storage)
  • 3390 Disk Drives – First IBM Virtual Disk Drive (Used 3380 disk architecture)  –  1989
  • VTS (Virtual Tape Server)  –  First IBM Virtual Tape System  –  1997

Virtual Machines

As we have seen, the hardware components (CPU, RAM, and I/O hardware) of a computer system have become virtualized over time. This virtualization has been supported by a combination of both hardware and Operating System enhancements. Operating Systems themselves have also become virtualized. Just as an Operating System provided virtual CPU through multi-tasking, or virtual memory (with a hardware assist), so too could software provide an image of dedicated hardware resources when they were actually shared. The software providing this capability is termed a “Hypervisor”. The VMware product is one common example of a Hypervisor.

A Hypervisor presents an image of underlying hardware and executes software that would typically run directly on top of the hardware. The software executed by a Hypervisor is typically, but not restricted to, an Operating System. Hypervisors, however, may run either underneath an Operating System (i.e. directly on the hardware, or “bare metal”) or on top of an Operating System. A Hypervisor that runs on bare metal is called a Type 1 Hypervisor. A Hypervisor that runs on top of an Operating System is called a Type 2, or hosted, Hypervisor.

A running Operating System consists of a large number of processes. This can easily be seen on Microsoft Windows through the “Task Manager” or on UNIX through the “ps” command. Furthermore, most Operating Systems have a significant initial “boot” process. The boot process, or Initial Program Load (IPL), performs all necessary startup and initialization tasks. In addition, the initial program typically needs to be read from disk into memory prior to execution. Finally, the OS may need to launch a number of Application programs, each one of which may also take time to initialize. All of this takes time, ranging from a few seconds to many minutes.
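
To make this concrete, the short sketch below lists the processes of a running OS programmatically, much as Task Manager or “ps” would. It is a minimal sketch that assumes the third-party psutil Python library, used here purely for illustration.

    # Minimal sketch: enumerate running OS processes, similar to what
    # Task Manager or "ps" displays. Assumes the third-party psutil library.
    import psutil

    for proc in psutil.process_iter(['pid', 'name']):
        # proc.info holds only the requested attributes (PID and process name)
        print(proc.info['pid'], proc.info['name'])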


Containers

An Operating System running on top of a Hypervisor is termed a “Virtual Machine” or “VM”. A single piece of hardware, running a Hypervisor, is thus capable of supporting a number of “Virtual Machines”. All of the virtual machines running on a Hypervisor share the underlying hardware resources (CPU, Memory, Disk, etc.) managed by the Hypervisor. There is no theoretical limit to the number of virtual machines running on a Hypervisor, but there are practical limits. When competition for physical resources gets too intense, performance degrades across all of the virtual machines. The net effect of virtual machines is that each VM only has a fraction of the underlying resources dedicated to it. This allocation of resources (e.g. how many CPUs are assigned to this VM) is done through the Hypervisor console when the VM is initially set up.

The end result of the VM startup is a running OS, potentially running Application software, with some period of latency incurred during the startup process. The running OS provides a general purpose computing platform, capable of simultaneously running a number of different applications. The OS is capable of starting new programs and fulfilling many different service requests (Email, FTP, DNS lookup, Network Services, HTTP Services, etc.). Operating Systems are designed and architected to be long running, so the IPL time can be amortized over hours/days/weeks of uptime. Within an OS, individual applications are considered “Processes” and are assigned a Process ID (PID). The OS allocates resources (CPU and memory) to each individual Process (PID).

Containers are often compared or contrasted with Virtual Machines, but this is a faulty comparison. The purpose of an OS is to provide a platform to support some number of simultaneously executing processes. Thus, an OS is a “container” (but not in the sense used here) for executing multiple Applications and processes. A Container (in the sense used in the Cloud) is a platform for executing exactly one process. A Container is started by a process already running in the OS and therefore incurs only the overhead of process startup. A Container is therefore considered much more “lightweight” than an Operating System.
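
To illustrate the “exactly one process” idea, here is a minimal sketch using the Docker SDK for Python; the image tag and command are illustrative assumptions. It launches a container whose only process is “ps” and prints what that process sees.

    # Minimal sketch, assuming Docker is installed locally and the Python
    # "docker" SDK is available. The container's sole process is "ps", and
    # its output shows how little else is running inside the container.
    import docker

    client = docker.from_env()          # connect to the local Docker daemon
    output = client.containers.run(
        "alpine:3.19",                  # illustrative image tag
        "ps",                           # the single process this container runs
        remove=True,                    # clean up the container when it exits
    )
    print(output.decode())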

Containers do, however, represent another layer of virtualization, which is why they are compared to VMs. A Container does represent a virtual OS, but supplies only a part of that OS. The other part of the OS, the kernel, is either shared with the underlying OS or is provided by VM software. This is why a Container is considered “lighter weight” than a VM. If the container is a “mini VM”, then why have this partial VM on top of a real VM? Why have it at all?

The answer is that the purpose of the container is not to provide a VM, but to provide a stable VM: an immutable and constant platform in which to execute software processes. Since the container only uses the kernel from the host Operating System, it is immune to most other OS settings. Applications deployed inside of a container can be assured of the same runtime environment every time they are launched, on every server, regardless of the host OS and version. This stability is the reason why containers exist. They exist to isolate Application software from external variations. All necessary environment settings are made within the container. Therefore, every time the container executes, it executes in an identical environment. Application stability in deployment is the goal of containerization.

From the Application perspective, a container is much more similar to a Java Virtual Machine (JVM) than it is to a true VM. Both technologies strive to provide a virtualized environment to support software execution. A Java Runtime Environment (JRE), of course, only provides a virtual runtime for software written in the Java programming language. Furthermore, in practice, Java has proved to be quite brittle, with many configuration dependencies. Note that containers provide a perfect environment into which to deploy a stable JRE. A containerized Java environment would be correctly configured for its Java Application every time, as it would be for any other programming language. Containers are thus far more versatile than JVMs.

One final caveat regarding containers. While most of the OS settings reside within the container itself, and so travel with the container, the Linux kernel upon which the container depends is external to the container. The entire edifice of containerization is built upon the stability of the Linux kernel! If, at some point in the future, the Linux kernel were to change substantially enough to invalidate legacy containers, then significant re-engineering could be required.


Container Technologies

There are currently a number of different software product layers that support containerization. These layers include: Container software (e.g. Docker), Container Orchestration software (e.g. Kubernetes), Container Package Management software (e.g. Helm), and Service Mesh software (e.g. Istio). For each of these layers, there are multiple competing products and open source communities available. This is a very dynamic time for containerization, with both the layers and the products continuously changing. This is symptomatic of both rapid growth and tremendous interest. It is important to note that while capabilities are expanding, significant consolidation is also occurring. What is emerging is a community consensus rather than a set of competing vendor technologies.

The purpose of the Container software is to provide a stable virtualized runtime for application software. Containers provide this runtime in a very efficient and scalable manner. The scaling capacity is dramatic and has created an entirely new software development paradigm. The target software environment is called “Cloud Native” and is based upon an extremely dynamic, ephemeral, and highly scalable environment. This kind of dynamism in a runtime environment is something new in IT and has triggered a cascade of related capabilities.

There are a number of Container providers, but by far the most dominant is Docker. Docker is supported by IBM and all other Cloud providers. In addition to Docker, there is also the rkt (“rocket”) Container service. Finally, there is the Open Container Initiative (OCI). The OCI is a community driven initiative and all of the major vendors, including IBM and Docker, are participating. The goal of the OCI is to achieve an open and vendor neutral container environment. Since Docker will be OCI compliant, there is currently little reason to use another option. Look for a following Installment in this series to cover Docker in depth.
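
As a small preview of that installment, the sketch below drives Docker from the Docker SDK for Python; the image name and the presence of a local Dockerfile are assumptions made only for illustration.

    # Minimal sketch, assuming a Dockerfile exists in the current directory
    # and the Python "docker" SDK is installed. All names are illustrative.
    import docker

    client = docker.from_env()

    # Build an image from the local Dockerfile and tag it
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Launch a container from that image in the background
    container = client.containers.run("myapp:1.0", detach=True)
    print(container.short_id, container.status)

    container.stop()                    # shut the container down when done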

The purpose of Container Orchestration software (e.g. Kubernetes) is to manage Containers. There are a number of Container Orchestration products. These products include Kubernetes, Docker Swarm, and Apache Mesos. Kubernetes, sometimes abbreviated as k8s, seems to be the leading Orchestration software. It is also the Container Orchestration provider used in the IBM Cloud.

Container Orchestrators provide a number of services that support container operations. These services include:

  1. Cluster Management (Clustered deployment, Rolling updates, Canary deployments, Red/Green deployments)
  2. Scheduling (Starting containers across different processing nodes)
  3. Service Discovery (Providing a registry of what services are running in which containers)
  4. Replication (Ensuring that the correct number of container instances are running)
  5. Health management (Detecting and replacing containers no longer functioning correctly).

The overall goal of Container Orchestration is to automate the management of groups (aka Clusters) of containers rather than individual containers, as the brief sketch below illustrates. Look for a following Installment in this series to cover Kubernetes in depth.
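
As an illustration of the Replication and Scheduling services listed above, the sketch below asks Kubernetes for a three-replica Deployment using the official Kubernetes Python client; the application name, image, and namespace are illustrative assumptions.

    # Minimal sketch, assuming access to a cluster through a local kubeconfig
    # and the official "kubernetes" Python client. All names are illustrative.
    from kubernetes import client, config

    config.load_kube_config()           # use local kubeconfig credentials
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="myapp"),
        spec=client.V1DeploymentSpec(
            replicas=3,                 # the orchestrator keeps 3 instances running
            selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="myapp", image="myapp:1.0")]
                ),
            ),
        ),
    )

    # Kubernetes schedules the pods across nodes and replaces failed instances
    apps.create_namespaced_deployment(namespace="default", body=deployment)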

The purpose of Container Package Management is to manage the creation and deployment of Container Orchestration components. The Container Package Manager for Kubernetes is Helm. The deployment of an Application in Kubernetes generally requires the creation and configuration of multiple Kubernetes objects. Helm groups these objects into a “set” that is defined in a “Helm Chart”. These Helm charts also separate the manifest portion of the configuration from the environment configuration settings. This allows for a single deployment artifact that is easily modified to support changing environments (i.e. Dev, Test, Prod). Look for a following Installment in this series to cover Helm in depth.
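
As a minimal sketch of that idea (driven from Python only to stay consistent with the other examples in this installment), the snippet below installs the same hypothetical chart twice, once with Dev settings and once with Prod settings; the chart path, release names, and values files are assumptions.

    # Minimal sketch, assuming the Helm CLI is installed and a chart exists
    # in ./myapp-chart with per-environment values files. Names are illustrative.
    import subprocess

    def helm_install(release, values_file):
        # "helm install <release> <chart> -f <values>" deploys one configured set
        subprocess.run(
            ["helm", "install", release, "./myapp-chart", "-f", values_file],
            check=True,
        )

    helm_install("myapp-dev", "values-dev.yaml")     # same chart, Dev settings
    helm_install("myapp-prod", "values-prod.yaml")   # same chart, Prod settings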

Finally, the ephemeral nature of a containerized environment creates its own challenges. With the location of Service instances constantly changing as containers are started and stopped, Service Discovery becomes an essential capability. Mechanisms designed for a far more static world, like DNS, are insufficient to cope with the demands of a containerized environment. A new category of software, called a Service Mesh, has been created to fill this need. While there are multiple Service Mesh providers, the two leading ones are Netflix OSS and Istio. Istio is used within the IBM Cloud.

The purpose of a Service Mesh like Istio is to:

  1. Provide a Service Registry (where ephemeral services are recorded)
  2. Provide a Service Discovery mechanism (using the Registry) to locate and route to service instances
  3. Provide automatic “Circuit Breaker” processing (stop calls to a non-responsive service)
  4. Provide automatic “Bulkhead” processing (ensure access to open connections)
  5. Support automated failure testing within the service environment.

As part of the Service Discovery mechanism, several additional capabilities are required. These include (a configuration sketch follows the list):

  1. Dynamic TLS certificate processing for endpoints
  2. Dynamic routing to endpoints
  3. Load balancing across endpoints.
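
The sketch below illustrates the dynamic routing and load balancing capabilities with a hypothetical Istio VirtualService, created here through the Kubernetes client’s custom-objects API; the host, subsets, and traffic weights are illustrative assumptions, and the v1/v2 subsets would normally be defined by a companion DestinationRule.

    # Minimal sketch, assuming Istio is installed in the cluster and the
    # official "kubernetes" Python client is available. All names, subsets,
    # and weights are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    virtual_service = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "myapp"},
        "spec": {
            "hosts": ["myapp"],
            "http": [{
                "route": [
                    # Split traffic across two versions of the service
                    {"destination": {"host": "myapp", "subset": "v1"}, "weight": 80},
                    {"destination": {"host": "myapp", "subset": "v2"}, "weight": 20},
                ],
            }],
        },
    }

    custom.create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1beta1",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )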

Finally, a Service Mesh also provides instrumentation, dashboards, etc. for the management of the service environment.  Look for a following Installment in this series to cover Istio in depth.


As can be seen, the “Cloud” creates an entirely new runtime environment, built on top of an existing Linux Operating System.  This new runtime environment can execute traditional runtimes and languages such as Java running in Liberty.  It can also support new runtimes and languages such as JavaScript in Node.js and Golang.  Furthermore, the new “Cloud Native” runtimes are designed to scale massively and to support ephemeral and dynamic connections to the Internet of Things (IoT).  The result is an entirely new computing environment with new sets of challenges.  The diagram below shows the new Cloud Native runtime stack.

[Diagram: the Cloud Native runtime stack]




Note: This Whitepaper was first published by this author in the IBM Middleware User Community (May 2018).
