Cloud #2 – Cloud Overview

Welcome to the second in a series of Whitepapers about the “Cloud”.  This series of discussions is designed to provide a somewhat comprehensive overview of what the Cloud is.  See the navigation buttons at the bottom of this paper to travel forwards and backwards through these Cloud presentations.  Be sure to join in the discussion!  The depth of the conversation enriches us all.

The meaning of the word “Cloud”, as it has come to be used in IT circles, has become as nebulous and diffuse as an actual Cirrocumulus cloud.  There are a number of reasons for this.  One reason is that the term “Cloud” has come to represent a number of distinct concepts, and it is not always clear which concept is being referred to.  Another reason is that the “Cloud” has been an area of rapid technological change, and hence hard to keep up with.  Finally, there is a great deal of overlap, confusion, and incomplete understanding surrounding “Cloud” concepts.  The purpose of this presentation series is to clear up some of that misunderstanding.

Cloud Overview – Infrastructure as a Service (IaaS)

The Cloud has evolved quite rapidly over a relatively short time.  This change began, in fact, as more evolutionary than revolutionary.  It started with hardware outsourcing.  That outsourcing was, in turn, a natural progression from the growing geographical distribution of Data Centers, which was necessary both for Resiliency and Disaster Recovery (DR).  Finally, economics came into play.

As the size and footprint of hardware requirements continued to decrease, it became feasible to put multiple tenants into a single Computer Center.  This had already become quite common for DR locations.  As organizations outgrew their existing Computer Centers, it became possible to lease computing power at a third party site.  Virtualization accelerated this capability by allowing hardware to be deployed in small increments.

The ability to “lease versus buy” became an important scaling capability.  SoftLayer, founded in 2005 and now part of IBM, is a world leader in this area.  As of 2018, SoftLayer has nearly 60 data centers in 19 countries worldwide.  In keeping with the Service Oriented Architecture (SOA) terminology of the time, this was termed “Infrastructure as a Service” (IaaS).  This was the first step into what we now call the Cloud.  IaaS can refer to either the provisioning of bare servers or servers plus Operating Systems (OS).

Note that IaaS provides a number of valuable features, even at this basic Cloud entry point.  These features include: (1) Facilities, (2) HVAC, (3) Fire suppression, (4) Network, (5) Storage, and (6) Server hardware.  Typically the server hardware is virtualized, but both bare metal and virtual machines can be offered.  This lets IaaS users provision additional capacity at very attractive marginal rates.  It also lets users purchase only the computing power currently required while having a built-in upgrade path.  These two factors have made IaaS an important part of infrastructure management.

In the software arena, there is increasingly a “buy versus build” option.  The “buy” part of this dynamic can take a variety of forms, with payment options running the full gamut of the “own versus lease” dynamic.  Virtualization technologies have also enabled a “pay for use” pricing model, which allows infrastructure to be deployed dynamically and billed as it is used.  IaaS thus brings the “own versus lease” decision, with all of its various options and trade-offs, into the server infrastructure world.
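The economics of “own versus lease” can be made concrete with a little arithmetic.  The sketch below, using purely hypothetical rates, contrasts a fixed monthly lease against a “pay for use” model for a lightly used server:

```python
# Illustrative only: compares a fixed monthly lease against pay-for-use
# pricing for a single server.  All rates here are hypothetical.

def lease_cost(months, monthly_rate=500.0):
    """Fixed cost: you pay the full rate whether or not the server is busy."""
    return months * monthly_rate

def pay_for_use_cost(hours_used, hourly_rate=1.25):
    """Usage-based cost: you pay only for the hours actually consumed."""
    return hours_used * hourly_rate

# A server busy 100 hours per month for 12 months:
print(lease_cost(12))              # 6000.0
print(pay_for_use_cost(100 * 12))  # 1500.0
```

At low utilization the usage-based model wins; as utilization approaches 100%, the fixed lease becomes the cheaper option.  This is exactly the trade-off an IaaS consumer must weigh.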

Finally, it is important to note that IaaS also involves a shift in computing responsibility.  The responsibility for maintaining the computer infrastructure moves from the consumer (the purchasing organization) to the provider (the Cloud IaaS provider).  Thus, IaaS can be seen as part of the larger trend of organizations outsourcing their peripheral activities and focusing instead on their core “mission critical” areas.

Cloud Overview – Platform as a Service (PaaS)

The natural evolution of IaaS is to provide additional software support on top of the OS level support.  Application run-times (e.g. WebSphere Application Server), Development Toolkits, Databases (Oracle, Db2, Information Server, etc.), Middleware (Messaging, Message Broker), and domain specific tools (WebSphere Commerce Server, WebSphere Portal) can all be managed by the Cloud provider in addition to the Operating System.  This extension of the scope of the IaaS offering is referred to as “Platform as a Service” (PaaS).  PaaS should properly be seen as an evolution of IaaS.  The transformation of IaaS providers into PaaS providers was simply a gradual, but steady, increase in the number of software services offered.

The PaaS market began in 2008 with Google’s initial release of the “App Engine”, a software development environment.  It is interesting to note that this initial PaaS offering was not connected to an IaaS offering.  Instead, it was an entirely separate Google offering.  Google would not introduce its IaaS service (Google Compute Engine) until 2013, another five years.  The two most well-known PaaS providers are probably Amazon (AWS) and IBM (IBM Cloud, fka Bluemix).

PaaS rapidly expanded from its initial software development platform beginnings to become a rich and varied offering.  As with IaaS, PaaS can be seen as a shift in responsibility from the consumer to the provider. With PaaS, the Cloud provider is responsible for the installation, maintenance, and security of the managed software.  The service consumer is responsible for any data used by the software (e.g. Database content).  There will normally be some shared configuration responsibility (e.g. User provisioning) between the consumer and provider.  The goal of any PaaS shared responsibility is to provide the consumer rapid access to their software resources while, at the same time, minimizing the corresponding administrative burden to the consumer.


Obviously, PaaS software sits on top of the same servers that provide IaaS.  With PaaS, the consumer is purchasing the infrastructure software services which sit on those servers rather than the servers themselves.  It follows that the consumer is relatively uninterested in the underlying infrastructure configuration.  They may be concerned with the Quality of Service (QoS) provided by the PaaS in terms of availability, capacity, security, etc., but not directly interested in the details of how that QoS is provided.  This leads very naturally to a “Pay for Use” pricing model.  Usage can be assessed in terms of usage hours, volume of work, or other metrics.

In addition to seeing PaaS as another step in the transition from “Managing to Using” software, it is important to keep in mind the pricing implications.  These pricing implications are potentially transformative for Independent Software Vendors (ISVs).  Historically, software often had what economists call a high “Barrier to Entry”.  The costs to purchase, design, install, and configure the software all had to be paid before any initial benefit could be realized.  This made the acquisition and deployment of expensive enterprise grade software difficult.  The “Pay for Use” pricing model thus has the potential to significantly expand the market saturation of individual Enterprise software products.

Cloud Overview – Software as a Service (SaaS)

Software as a Service (SaaS) is closely related to PaaS and can be seen as an extension of that type of computing.  The origins of SaaS, however, evolved independently of IaaS and PaaS.  This evolution can be traced all the way back to the 1960s, when Service Bureaus provided this type of capability.  Salesforce, which is generally considered a canonical example of SaaS, was launched in 1999.

With PaaS, the Cloud provider supplies a “Platform” upon which the consumer can build and deploy a business solution.  With SaaS, it is the business solution itself that is provided.  The consumer, other than for QoS, has no interest in the underlying server infrastructure nor in the technologies deployed on top of the servers.  Instead, the consumer’s business (rather than IT) users interact directly with the software itself.

SaaS generally leverages the increases in communication embodied by the internet to deliver business functionality directly to an individual’s workstation.  The software itself, as well as the supporting IT infrastructure for that software, is hidden from the consumer.  SaaS clients simply interact with the software directly from their web browser, completely eliminating the need for an IT infrastructure to support the application.  With SaaS, the only required IT support is: (1) standard Desktop support, and (2) standard networking support.

By its nature, SaaS can support a number of different pricing models.  SaaS providers may provision their own IT or use PaaS to externalize their infrastructure.  Because the underlying infrastructure is transparent to the consumer, SaaS providers are normally multi-tenanted.  Note that this is NOT a SaaS requirement, simply a natural side effect of the underlying business model.

One important point to consider about SaaS regards the consumer’s application data.  With SaaS, while the consumer owns the data, the SaaS provider normally manages the data.  This is no different than a PaaS database solution, for instance.  With PaaS, however, the consumer has the ability to architect the overall solution, placing what is desired into the Cloud and keeping what is necessary on-premises.  With SaaS, such architectural flexibility is not available unless designed into the offering by the vendor.

Cloud Overview – Function as a Service (FaaS)


Function as a Service (FaaS) is the most recent of the Cloud offering types, introduced between 2014 and 2016.  FaaS is also referred to as “Serverless computing”.  This is obviously a misnomer, as there are underlying servers!  The intent of the term is to indicate that the entire infrastructure is completely transparent to the developer.  Apache OpenWhisk is the basis of the IBM Cloud FaaS offering.

While all of the software components in FaaS could be provided in a PaaS environment, the real difference between these two concepts/services lies not in the underlying software but in the operational management of that software.  FaaS is designed for a “Cloud Native” computing environment.  See the next section for an overview of this environment.  Cloud Native environments have a common standard runtime and a common approach to automation, and a great deal of it.  The expectation is that deployment is both simple and rapid.

It is this operational aspect of the computing environment that really separates FaaS from PaaS.  You can think of PaaS as a “Full Service” Cloud, capable of running anything and meeting any and all consumer requirements.  You can think of FaaS as an absolutely standardized Cloud with a fixed QoS.  FaaS environments are not capable of handling all workloads.  They are not capable, for example, of handling Online Transaction Processing (OLTP) or Online Analytical Processing (OLAP) workloads.  Thus, FaaS is designed to complement, not replace, some core existing workloads.  These workloads are envisioned to continue executing either on-premises or in a PaaS Cloud.
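To make the FaaS model concrete, the sketch below shows the shape of a Python action as OpenWhisk defines it: a main() function that receives a dictionary of parameters and returns a JSON-serializable dictionary.  The greeting logic itself is purely illustrative:

```python
# A minimal OpenWhisk-style Python action.  The platform invokes main()
# with a dictionary of parameters and expects a dictionary back, which
# it serializes to JSON for the caller.
def main(args):
    name = args.get("name", "stranger")
    return {"greeting": "Hello, " + name + "!"}
```

Such an action would typically be deployed with the OpenWhisk CLI (e.g. wsk action create hello hello.py); the platform handles all provisioning, scaling, and request routing.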

Cloud Overview – Cloud Native Computing Foundation (CNCF)

The Cloud Native Computing Foundation (CNCF) is an Open Source community supported by virtually every Cloud or technology component provider.  The purpose of the CNCF is to define the “Cloud Native Architecture” so that consumers, developers, and providers all have a standardized runtime.  To quote from the CNCF charter: “The Foundation’s mission is to create and drive the adoption of a new computing paradigm that is optimized for modern distributed systems environments capable of scaling to tens of thousands of self healing multi-tenant nodes”.  The charter goes on to state the primary properties of Cloud Native Architecture (CNA):

  1. Container packaged.
  2. Dynamically managed.
  3. Micro-services oriented.

The Microservices aspect of Cloud computing is, in my opinion, suffering from both hype and misleading comparisons.  Many descriptions of a Microservice will describe the design goals as “loosely-coupled”, “highly-cohesive”, and “business centric”.  These descriptions are made in a misguided attempt to differentiate a Microservice from a SOA service.  However, these are EXACTLY the design criteria for a SOA service.  Furthermore, these are the same design principles originally promulgated in the 1970s.  As foundational principles or laws of software, they remain true.  The real conversation should be about poorly designed versus well designed software.

Cloud Native applications are designed from their inception to execute in a Cloud environment.  This has a number of implications.  First of all, the underlying Operating System is Linux.  No exception to this rule can be allowed.  Second, the processes will execute in a Container.  While this standard may evolve over time, any changes will have to be backward compatible.  Currently this means that the services will be executed in Docker containers.

Another important Cloud Native concept is an operational one.  The Cloud assumes that Continuous Integration/Continuous Deployment (CI/CD) processes are in place.  The CI/CD processes automate the tasks of building, testing, and deploying the microservices.  This automation can be performed in the Cloud by software like Jenkins or urban{code}.  None of these concepts is, of course, new.  What is different is the rapidity with which these processes are expected to execute.  The Cloud Native environment envisions a breakdown in the separation of duties between development and operational staff.
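The stages such a pipeline automates can be sketched, in purely illustrative Python, as a chain of build, verify, and deploy steps.  The function names and artifact format here are hypothetical; a real pipeline shells out to actual build tools, test runners, and deployment scripts:

```python
# Hypothetical sketch of the stages a CI/CD tool such as Jenkins automates.
def build(commit):
    # A real stage would compile and package; here we just name an artifact.
    return {"commit": commit, "artifact": commit + ".tar.gz"}

def verify(artifact):
    # A real stage would run the full test suite against the artifact.
    return {"artifact": artifact, "tests_passed": True}

def deploy(result):
    # Deploy only artifacts whose tests passed.
    if result["tests_passed"]:
        return "deployed " + result["artifact"]
    return "halted"

def pipeline(commit):
    built = build(commit)
    checked = verify(built["artifact"])
    return deploy(checked)

print(pipeline("abc123"))  # deployed abc123.tar.gz
```

The Cloud Native expectation is simply that this chain runs automatically, on every commit, in minutes rather than weeks.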

There is also a strong bias for some, and against other, languages within a Microservices environment.  This is due, in large part, to the way in which the language interacts with a containerized runtime environment.  Java, for instance, does not launch quickly and so is considered less desirable in a containerized runtime.  The primary microservice development languages are: Go, JavaScript, and Python.

To improve service design, the “Twelve Factor” methodology was developed.  The twelve factors a microservice should exhibit are as follows:

  1. Codebase. Use a Version Control System. (Standard Practice)
  2. Dependencies. Explicitly declare and isolate library dependencies. (Standard Practice)
  3. Config. Store configuration information in the environment. (Decoupling)
  4. Backing Services. Treat backing services as attached resources. (Decoupling).
  5. Build, Release, Run. Strictly separate build and run stages. (Standard Practice)
  6. Processes. Execute the Application as one or more stateless processes. (Decoupling)
  7. Port Binding. Export service via binding directly to a port. (Decoupling)
  8. Concurrency. Enable horizontal scaling. (Decoupling)
  9. Disposability. Fast startup and graceful shutdown.
  10. Dev/Prod parity. Keep environments similar. (Standard Practice)
  11. Logs. Treat logs as event streams.
  12. Admin processes. Run administrative tasks as one-off processes.

As can be seen from the preceding Twelve Factor list, most of these factors are either already standard practice, can be derived from basic principles, or represent a Best Practice tailored for the Cloud environment.  Nonetheless, they serve as a useful guide for the development of microservices.  Note also that the Twelve Factors all concern being properly designed for the Cloud; they do not provide any guidance on how to design software to solve business problems.  The Twelve Factors are thus Cloud design criteria to keep in mind IN ADDITION TO the standard development practices and methodologies in use.
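Factor 3 (Config) is perhaps the easiest of the twelve to illustrate.  The sketch below, with hypothetical variable names, reads its configuration from the environment so that the identical codebase can run unchanged in development, test, and production:

```python
# Factor 3 (Config) in practice: read configuration from the environment
# rather than hard-coding it.  The variable names here are hypothetical.
import os

def get_database_url():
    # The same container image runs in every environment; only the
    # environment variables differ between dev, test, and production.
    return os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")

os.environ["DATABASE_URL"] = "postgres://db.example.com/orders"
print(get_database_url())  # postgres://db.example.com/orders
```

This is what decouples the build artifact from its deployment target: promotion from test to production changes the environment, not the code.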

Cloud Overview – The Cloud Software Stack


As has been previously described, the Cloud comprises an entirely new software stack.  The foundation of this stack is the Linux Operating System.  This is a requirement, since the next layer (Docker) requires Linux.  On top of the Linux Operating System, software is deployed into a Containerized environment.  While container standards are currently being developed, for all practical purposes this means Docker.

On top of Linux, and as peers to Docker, the Cloud has a number of new Cloud Native software components.  These include Cloud friendly data storage systems.  There are different categories of these systems, from simple key/value stores all the way through sophisticated noSQL databases.  Some of these products include Redis (Key/Value), CouchDB (noSQL), and MongoDB (noSQL).  IBM has a significantly enhanced version of CouchDB named Cloudant.
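The key/value model these stores share is easy to see in miniature.  The sketch below is purely conceptual; a real store such as Redis adds persistence, expiry, replication, and network access on top of this core interface:

```python
# Conceptual sketch of the key/value model offered by stores such as Redis.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        # Associate an arbitrary value with a string key.
        self._data[key] = value

    def get(self, key, default=None):
        # Look the value up by key; return the default if absent.
        return self._data.get(key, default)

    def delete(self, key):
        # Remove the key if present; deleting a missing key is a no-op.
        self._data.pop(key, None)

cache = KeyValueStore()
cache.set("session:42", {"user": "alice"})
print(cache.get("session:42"))  # {'user': 'alice'}
```

The simplicity of this interface is precisely why key/value stores scale so well in Cloud environments: there are no joins, schemas, or transactions to coordinate across nodes.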

Container management and scaling is handled by a layer above Docker: either Docker Swarm or Kubernetes.  IBM uses Kubernetes.  Kubernetes can be managed using Helm, which uses Helm Charts to manage the Kubernetes configuration.  Finally, Istio provides a service mesh on top of Kubernetes.  These products combine to support the dynamic, resilient, and scalable features that define Cloud Native computing.

On top of all of the container software are the application runtimes.  These runtimes support Java as well as more Cloud friendly languages like Go, JavaScript (Node.js), and Ruby.  They are extended by OpenWhisk, which provides a FaaS environment.  The automation to provide CI/CD behavior for these languages is provided by Jenkins and urban{code}.  Version control is normally performed through GitHub.

There are also a number of products that provide support for logging.  These include the ELK stack (Elasticsearch, Logstash, and Kibana) used by IBM, as well as other products.  Finally, there are a number of new operational tools at the top of the Cloud software stack.  All of this adds up to a lot of new moving pieces, as well as gaps when compared to current capabilities.

Cloud Overview – Public, Private, and Hybrid Clouds

As we have seen, the “Cloud” began as a public IaaS offering.  What “public” meant, in this case, was that the Cloud facilities were offered by a private company.  A consumer’s resources, hosted by that company, were co-located with other consumers’ resources in a multi-tenanted environment.  Perhaps a better term for this situation would have been quasi-private.  In addition to the shared tenancy, consumers also needed to access their compute resources over a public network (the internet).  The term public Cloud emphasizes the shared nature of this type of Cloud.

Cloud providers have telecommunications networks, storage, physical servers, and virtual servers.  Large Cloud providers have multiple data centers, generally spread out around the world.  This means that these providers have a large number of potential configuration options.  Virtual servers could potentially be either shared or dedicated. For practical purposes, these are not usually shared.  Physical servers could potentially be either shared or dedicated.  Storage can be either shared or dedicated.  Network (e.g. subnets) can either be shared or dedicated. The sharing of any of these resources is normally labeled “multi-tenanting”.

The trade-offs are obvious.  Real resources dedicated to a customer are more expensive.  Therefore, services using dedicated hardware must be more expensive. Shared resources can have a less expensive price point, but come with a reduced QoS.  When dedicated resources are deployed in a “Public” Cloud, the resulting infrastructure is termed a “Private” Cloud.  A Private Cloud can therefore be just as secure as an externally managed Data Center.

The Cloud software can also be deployed on-premises.  This type of Cloud deployment is also called a Private Cloud.  So we can have three different deployment models:

  1. Public Cloud – Cloud provider uses multi-tenanted resources.
  2. Private Cloud – Cloud provider uses dedicated resources.
  3. On-premises Private Cloud – Cloud provider manages configuration on your resources.

There are some final complicating factors in discussing these Cloud options.  The first factor is that there are multiple Cloud providers!  The three major Cloud providers are Amazon (AWS), IBM (IBM Cloud), and Microsoft (Azure).  In addition to the major players, there are dozens of smaller Cloud providers.  The second factor is that each of the different Cloud configuration options has a different set of advantages and disadvantages.  Finally, there is no real reason why a consumer should be restricted to either a single Cloud provider or a single Cloud model.  Therefore, they won’t!  Virtually all consumers will end up with a “Hybrid” Cloud environment, comprising:

  • Zero or more in-house Data Centers.
  • Zero or more vendor managed Data Centers.
  • One or more Cloud providers.

Cloud Overview – IBM Cloud Offerings

  Year    IBM Cloud Offering
  2010    IBM Cast Iron (SaaS support for Cloud).
  2011    IBM SmartCloud (IaaS).
  2013    IBM SoftLayer (IaaS).
  2014    IBM Bluemix (IaaS/PaaS).
  2017    IBM Cloud (IaaS/PaaS/FaaS) branding replaces Bluemix.  IBM Cloud Private (ICP).

Cloud Overview – Summary

So, the Cloud.  IaaS, PaaS, SaaS, and FaaS.  Public versus Private.  On-premises versus Off-premises.  An entirely new software stack from top to bottom.  There’s really not that much to it is there?

Actually, there is quite a bit to it, isn’t there?  That’s what makes the usage of the term “Cloud” so challenging.  Which of these concepts is the user referring to?  Do they fully understand the concepts?  Does the listener?  Using some of the more specific terminology introduced in this discussion may help clarify meaning.  After all, we’ve developed all of these terms as short cuts to represent different ideas.  Let’s use them.

  Year    Cloud event
  1999    Salesforce launched.  Start of SaaS.
  2005    SoftLayer founded.  Start of IaaS.
  2006    AWS launched (IaaS).
  2008    Google launches “App Engine” as a software development environment.  Start of PaaS.
  2010    Microsoft launches Azure (IaaS).
  2013    IBM acquires SoftLayer.  Google Compute Engine (IaaS).
  2014    Docker available on Amazon.  IBM commits to Docker.  Amazon AWS Lambda (initially only Node.js).  Start of FaaS.
  2015    Google releases Kubernetes to the CNCF.
  2016    IBM/Apache OpenWhisk (Multiple languages) (FaaS).
  2017    Google/Lyft/IBM launch Istio (Microservices Mesh).

Cloud Overview – Final Thoughts

I hope that from this discussion you have seen that the term “Cloud” embraces a number of different concepts.  It is important to gain some kind of understanding of these concepts because the “Cloud” is here to stay.  That does not, by any means, imply that all computing will move into the Cloud.  Indeed, the truth is far from that simplistic viewpoint.  Just as the advent of distributed computing (IBM i, UNIX/Linux, and Windows) did not spell the end of the mainframe (although many predicted it), so the advent of the Cloud will not spell the end of distributed computing.

The reasons that the Cloud is here to stay are varied.  The “Cloud” has arisen to serve needs that were not being well served in the existing IT environment.  See the previous installment in this series for a deeper dive into the historical trends driving these changes.  As communications and the Internet of Things (IoT) have rapidly evolved, software development and deployment have not kept up.  Service Oriented Architecture (SOA) was designed to make businesses more agile in business terms.  Some organizations were more successful than others in this transition.  The major driver of the Cloud is again an increased need for agility in the marketplace.  Those that understand and embrace this change will keep up.  Those that do not will fall further behind.  The choice is ours.


First in Series.    Previous.    Next.    Last in Series.

Note: This Whitepaper was first published by this author in the IBM Middleware User Community (April 2018).

