Deployment platform recommendations

To deploy Reactive microservices and Reactive streaming applications in the most scalable and resilient way, Lightbend recommends a Kubernetes-based platform such as OpenShift, GKE, or IBM Cloud Pak.

This page provides an overview of the Kubernetes concepts that are foundational to our recommendations that follow. If you are already familiar with Kubernetes-based platforms, skip to Factors to consider when planning cluster resources.

Images, pods, and containers

Kubernetes enables you to deploy applications to a cluster of nodes, either on premises or in the cloud, using images, pods, and containers. A Docker image packages the application with all its data, configuration, and dependencies. Kubernetes uses the image to instantiate a pod at runtime. A pod can contain one or more containers in which an instance of the application and its required resources run. The following figure illustrates this basic Kubernetes architecture. In this example, three instances of the same application are running on different nodes.

Basic Kubernetes architecture

A pod’s containers share network connections and storage. Pod contents are always co-located and co-scheduled, and run in a shared context: a set of Linux namespaces, cgroups, and other facets of isolation (the same things that isolate a Docker container). A pod context can define further levels of isolation for individual applications.

Containers isolate application processes from each other. Containers within a pod share an IP address and port space and can find each other via localhost. They can also communicate using standard inter-process communication (IPC) mechanisms such as System V semaphores or POSIX shared memory. Containers in different pods have distinct IP addresses and cannot communicate by IPC without special configuration; they usually communicate with each other via their pods’ IP addresses.
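
To make this concrete, here is a minimal sketch, using the Go client-go API types, of a pod declaring two containers; the names and images are hypothetical. Because both containers share the pod’s network namespace, the sidecar can reach the application at localhost:8080.

// Hypothetical two-container pod: the sidecar reaches the app via localhost.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "app-with-sidecar"},
    Spec: corev1.PodSpec{
      Containers: []corev1.Container{
        {
          // Main application container listening on port 8080.
          Name:  "app",
          Image: "example/app:1.0",
          Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
        },
        {
          // Sidecar in the same pod; it can call the app at localhost:8080.
          Name:  "metrics-sidecar",
          Image: "example/metrics:1.0",
        },
      },
    },
  }
  fmt.Println("pod", pod.Name, "has", len(pod.Spec.Containers), "containers")
}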

Pods model the pattern of multiple cooperating processes that form a cohesive unit of service. They simplify application deployment and management by providing a higher-level abstraction than the set of their constituent images. Pods are the unit of deployment, horizontal scaling, and replication in Kubernetes. Kubernetes automatically handles colocation (co-scheduling), shared fate (e.g. termination), coordinated replication, resource sharing, and dependency management for containers in a pod.

A key goal of Kubernetes is to provide an easy-to-use platform that promotes reproducible deployments at any scale. It is therefore very common for Kubernetes to be coupled with CI/CD (continuous integration/continuous deployment). In the most common scenario, the user builds a number of containers, specifies their deployment characteristics using an application description, and CI/CD tooling deploys the application within the Kubernetes cluster. Kubernetes goes beyond simply offering deployment options for applications: it also provides a number of services that make application development and deployment easier, such as load balancing, service discovery, and self-healing.

Controllers

Kubernetes components are categorized as master or node components. Much of the work of managing a cluster is handled by Controllers, which are master components. Controllers provide advanced options for continuously managing applications, such as keeping a specified number of replicas running, rolling out updates, and replacing failed pods.

OpenShift ships with a set of default controllers; the OpenShift documentation lists them.
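
As an illustration of controllers at work, the following is a minimal Go sketch (using the client-go API types, with hypothetical names and images) of a Deployment whose controller keeps three replicas of an application running and rolls out updates to them.

// Hypothetical Deployment: the controller reconciles toward three replicas.
package main

import (
  "fmt"

  appsv1 "k8s.io/api/apps/v1"
  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  replicas := int32(3)
  labels := map[string]string{"app": "hello"}
  deploy := appsv1.Deployment{
    ObjectMeta: metav1.ObjectMeta{Name: "hello"},
    Spec: appsv1.DeploymentSpec{
      Replicas: &replicas, // the controller keeps three pods running
      Selector: &metav1.LabelSelector{MatchLabels: labels},
      Template: corev1.PodTemplateSpec{
        ObjectMeta: metav1.ObjectMeta{Labels: labels},
        Spec: corev1.PodSpec{
          Containers: []corev1.Container{{Name: "hello", Image: "example/hello:1.0"}},
        },
      },
    },
  }
  fmt.Println("deployment", deploy.Name, "requests", *deploy.Spec.Replicas, "replicas")
}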

The operator pattern

Kubernetes-based platforms do a great job of simplifying and automating the deployment and management of containerized applications. However, with stateful applications, adding or removing instances may require preparation and/or post-provisioning steps: for instance, changes to internal configuration, communication with a clustering mechanism, and interaction with external systems such as DNS. This is both an added burden for administrators and an opportunity for errors.

Operators remove this burden by encapsulating an application’s operational considerations and ensuring that all aspects of its lifecycle, including upgrades, monitoring, and failure-handling, are integrated into the Kubernetes framework and invoked when needed. The operator pattern was first announced in a CoreOS blog post by Brandon Philips and later in a talk at CloudNativeCon called Writing a custom controller: Extending the functionality of your cluster by Aaron Levy. CoreOS went on to introduce the Operator Framework.

The operator pattern is modeled after the way Kubernetes built-in resource definitions are designed and implemented. An operator-based project usually consists of a custom Kubernetes operator (or controller) application and an accompanying CustomResourceDefinition (CRD). The operator is an application, written in any language, that interacts with the Kubernetes API server to handle all requests associated with its CRD.

The operator uses an active reconciliation process: it watches instances of its custom resource and calculates what actions are required to reach the desired state. This includes initial instantiation and deployment as well as updates. Once it has calculated a plan for reaching the desired state, the operator interacts with the API server to execute it. If you are familiar with Apache Mesos, you can think of an operator as analogous to a second-level scheduler implementation that manages an Apache Mesos framework and its tasks.

For example, to scale out Kafka you would update the Strimzi Kafka custom resource by incrementing its replica count. The operator would detect this change, compute that the Kafka StatefulSet needs to grow by one pod, scale the StatefulSet accordingly, and provide the relevant Kafka configuration to the new pod (its broker ID, ZooKeeper cluster information, and so on).
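
The following is a rough sketch of the user-facing side of that flow: patching the replica count on a Strimzi Kafka custom resource with the Kubernetes dynamic client in Go. The group/version/resource, field path, namespace, and resource name are assumptions based on Strimzi conventions and should be checked against the CRD version installed in your cluster.

// Hypothetical scale-out request: bump spec.kafka.replicas on a Strimzi Kafka CR.
package main

import (
  "context"
  "fmt"

  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "k8s.io/apimachinery/pkg/runtime/schema"
  "k8s.io/apimachinery/pkg/types"
  "k8s.io/client-go/dynamic"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
  if err != nil {
    panic(err)
  }
  client, err := dynamic.NewForConfig(cfg)
  if err != nil {
    panic(err)
  }

  // Assumed group/version/resource for the Strimzi Kafka custom resource.
  kafkaGVR := schema.GroupVersionResource{Group: "kafka.strimzi.io", Version: "v1beta2", Resource: "kafkas"}

  // Merge-patch the desired broker count; the operator notices the change
  // and reconciles the underlying pods to match.
  patch := []byte(`{"spec":{"kafka":{"replicas":4}}}`)
  _, err = client.Resource(kafkaGVR).Namespace("kafka").Patch(
    context.TODO(), "my-cluster", types.MergePatchType, patch, metav1.PatchOptions{})
  if err != nil {
    panic(err)
  }
  fmt.Println("requested 4 Kafka replicas")
}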

The function of the operator can be summarized as the active reconciliation between current and desired state and is best illustrated with the following pseudo-code for an infinite loop:

for {
  desired := getDesiredState()  // read the spec of the managed custom resource
  current := getCurrentState()  // observe what is actually running in the cluster
  makeChanges(desired, current) // issue API calls to move current toward desired
}

Managing storage

Managing storage in Kubernetes-based platforms is a distinct task from managing computational resources. There are two main approaches to managing storage today: local storage and persistent volumes.

Local storage

Local storage is allocated on-the-fly and has the same lifecycle as the pod it’s allocated for. If the pod is lost, its local storage is lost. If the pod is rescheduled, it starts with empty local storage. This suits specific transient use cases, such as storage for a local cache. Due to its simplicity, it is also useful for local testing and debugging.
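
One common form of such pod-scoped storage is an emptyDir volume. The following minimal Go sketch (client-go API types, hypothetical names and image) declares an emptyDir volume and mounts it into a container as a scratch cache; the volume is created when the pod is scheduled and deleted when the pod goes away.

// Hypothetical pod using an emptyDir volume as transient scratch space.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  pod := corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "app-with-cache"},
    Spec: corev1.PodSpec{
      Volumes: []corev1.Volume{{
        // Allocated when the pod is scheduled; removed when the pod goes away.
        Name:         "cache",
        VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
      }},
      Containers: []corev1.Container{{
        Name:         "app",
        Image:        "example/app:1.0",
        VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
      }},
    },
  }
  fmt.Println("pod", pod.Name, "mounts", pod.Spec.Volumes[0].Name, "at /cache")
}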

Kubernetes persistent volumes

The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. It defines storage that is not part of the image or the container. It separates disk management from pod management, thus allowing you to preserve data even if a pod dies or completes its lifecycle.

The Kubernetes PersistentVolume subsystem is based on the following API resources:

  • PersistentVolume (PV): A PV is an instance of a volume. Cluster administrators can offer a variety of PVs that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented.

  • PersistentVolumeClaim (PVC): A PVC is a request for storage by a user. It is analogous to a pod: pods consume node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); PVCs can request specific storage sizes and access modes, e.g., they can be mounted once read/write or many times read-only.

  • StorageClass: A StorageClass provides a way for administrators to describe the types of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to other, arbitrary policies determined by the cluster administrators. Kubernetes is not opinionated about what these classes represent. This concept is sometimes called “profiles” in other storage systems.

    The most common usage of a StorageClass is for dynamic provisioning. When a PVC that specifies a StorageClass is created, Kubernetes provisions storage in the form of a PV that satisfies the requirements declared in the StorageClass. You don’t know exactly when this will happen, although it is usually quick, and it can also fail. In a way, it is analogous to a promise in asynchronous programming.
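
For illustration, here is a minimal Go sketch (client-go API types) of a StorageClass an administrator might define; the class name, provisioner, and parameters are hypothetical and depend on the cluster’s storage backend.

// Hypothetical StorageClass for dynamically provisioned SSD-backed volumes.
package main

import (
  "fmt"

  storagev1 "k8s.io/api/storage/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  sc := storagev1.StorageClass{
    ObjectMeta:  metav1.ObjectMeta{Name: "fast"},
    Provisioner: "kubernetes.io/gce-pd",              // volume plugin that provisions the PVs
    Parameters:  map[string]string{"type": "pd-ssd"}, // backend-specific options
  }
  fmt.Println("storage class", sc.Name, "uses provisioner", sc.Provisioner)
}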

PVs can be provisioned statically or dynamically:

  • Static provisioning: A cluster administrator creates a number of PVs. They carry the details of the real storage which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.

  • Dynamic provisioning: When none of the static PVs the administrator created matches a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses; the PVC must request a StorageClass and the administrator must have created and configured that StorageClass in order for dynamic provisioning to succeed.
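
The following minimal Go sketch (client-go API types, hypothetical names; exact field types can vary slightly between client-go releases) shows a PVC that requests 10Gi of the hypothetical "fast" StorageClass. If no static PV matches, the cluster attempts to dynamically provision one and bind it to the claim.

// Hypothetical PVC: requests 10Gi from the "fast" class, to be bound to a PV.
package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  "k8s.io/apimachinery/pkg/api/resource"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
  class := "fast"
  pvc := corev1.PersistentVolumeClaim{
    ObjectMeta: metav1.ObjectMeta{Name: "data"},
    Spec: corev1.PersistentVolumeClaimSpec{
      // Mountable read/write by a single node.
      AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
      StorageClassName: &class,
      Resources: corev1.ResourceRequirements{
        Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("10Gi")},
      },
    },
  }
  fmt.Println("claim", pvc.Name, "requests the", *pvc.Spec.StorageClassName, "storage class")
}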

The OpenShift documentation lists the currently supported PV types. We describe PVCs and PVs in more detail when we discuss specific use cases.