Deploying Lagom Microservices on Kubernetes

Lagom is an opinionated microservices framework that makes it quick and easy to build, test, and deploy your systems with confidence. Kubernetes, an open-source solution for container orchestration, provides features that complement running Lagom applications in production. This guide will cover the configuration required to run your Lagom-based system on Kubernetes, taking advantage of many of its standard features.

The Challenge

Deploying a Lagom service on Kubernetes presents the following challenges:

  • Lagom’s Persistent Entity API leverages Akka Cluster and this has its own set of considerations when deploying to Kubernetes.
  • Lagom applications make use of a Service Locator that must tie in with the facilities that Kubernetes provides.
  • Running an application on Kubernetes requires containerization and Lagom systems, being composed of many microservices, will require many Docker images to be created.

The Solution

This guide covers the steps required to deploy a Lagom microservices system to Kubernetes. It provides an overview on the strategy for deploying to a Kubernetes cluster and then dives into the commands and configuration required. It specifically covers deploying to your local Kubernetes cluster, by way of Minikube, deploying to IBM Bluemix, a cloud platform as a service (PaaS) built on Kubernetes, as well as IBM Bluemix Private Cloud, an on-prem Bluemix deployment. Other Kubernetes environments can be used with minimal adjustment.

The Setup

This guide demonstrates the solution using the Chirper Lagom example app. Before continuing, make sure you have the following installed and configured on your local machine:

  • A Java Development Kit (JDK)
  • Maven or sbt
  • Docker
  • kubectl
  • Minikube, or access to another Kubernetes cluster
  • A clone of the Chirper repository

About Chirper

Chirper is a Lagom-based microservices system that aims to simulate a Twitter-like website. It’s configured for both Maven and sbt builds, and this guide will demonstrate how artifacts built using both build tools are deployed to Kubernetes. Chirper has already been configured for deployment on Kubernetes. The guides below detail this configuration so that you can emulate it in your own project.

Kubernetes Resources

Because Chirper uses advanced features such as Akka clustering, service discovery, and ingress routing, deploying it on Kubernetes involves several different types of resources. Reference the following to discover what kinds of resources are used and why they’re necessary.

Pod: The basic unit of execution in Kubernetes. A Pod includes one or more co-located and co-scheduled containers. While Chirper doesn’t define Pods directly, Pods are created through the use of other resources such as StatefulSet.
StatefulSet: A controller that provides a stable, unique identity to each of a set of Pods. This guide will cover how Chirper uses StatefulSet to bootstrap its Akka Clusters with a seed node referenced by environment variables. Chirper defines a StatefulSet resource for each of its services: friendservice, activityservice, chirpservice, and web.
Service: Exposes TCP and UDP ports to other Pods within the Kubernetes cluster, and integrates with DNS so that services can be discovered via DNS SRV records.
Ingress: A collection of rules that allow external traffic to reach services running inside Kubernetes. This enables, for example, requests to /api/users to be routed to the friendservice while requests for / are routed to web. It also provides a central place to terminate TLS. In this example, Chirper is configured to use NGINX as the ingress controller and to terminate TLS.

Refer to Chirper’s resources at deploy/kubernetes/resources for more details.
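To make the StatefulSet identity guarantee concrete, the sketch below shows why a stable hostname matters for Akka Cluster bootstrap: because StatefulSet Pods are named with a predictable ordinal suffix, Pod 0 can always be derived and used as the seed node. The hostname, Service name, and namespace here are illustrative assumptions, not values taken from Chirper’s actual scripts.

```shell
# Hypothetical sketch: a StatefulSet gives each Pod a stable hostname of the
# form <name>-<ordinal> (e.g. friendservice-1). Pod 0 can therefore serve as
# a predictable Akka Cluster seed node. All names below are assumptions.
hostname="friendservice-1"                 # assumed Pod hostname
service="${hostname%-*}"                   # strip the ordinal suffix
seed="${service}-0.${service}-akka-remoting.default.svc.cluster.local"
echo "$seed"   # prints friendservice-0.friendservice-akka-remoting.default.svc.cluster.local
```

In a real deployment the seed address would be supplied to the container through the StatefulSet’s environment configuration rather than computed from a hard-coded hostname; the point is only that the ordinal naming makes the seed address deterministic.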

Service Location

Lagom makes use of a Service Locator to call other services within the system. Chirper is configured to use the service-locator-dns project to provide a service locator that takes advantage of Kubernetes Service Discovery.

Because the names used in service lookups will not exactly match the DNS SRV addresses, service-locator-dns has the ability to translate them. Chirper uses this feature to ensure, for example, that a service lookup for friendservice is translated into a DNS SRV lookup for _http-lagom-api._tcp.friendservice.default.svc.cluster.local. Chirper is configured with the following in each of its services’ application.conf:

service-locator-dns {
  name-translators = [
    "^_.+$" = "$0",
    "^.*$" = "_http-lagom-api._tcp.$0.default.svc.cluster.local"
  ]

  srv-translators = [
    "^_http-lagom-api[.]_tcp[.](.+)$" = "_http-lagom-api._http.$1",
    "^.*$" = "$0"
  ]
}
Refer to the various application.conf files in the Chirper repository for more details.
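The behaviour of the two name-translators above can be approximated in shell for illustration. The function below is a sketch, not the library’s implementation: a name that already begins with an underscore matches "^_.+$" and passes through unchanged, while any other name is expanded into a full DNS SRV name, as the "^.*$" rule does.

```shell
# Illustration only: approximate the two name-translators in shell.
# service-locator-dns itself applies the regexes from application.conf;
# this case statement just mirrors their effect for the common inputs.
translate() {
  case "$1" in
    _*) echo "$1" ;;                                                # "^_.+$" -> "$0"
    *)  echo "_http-lagom-api._tcp.$1.default.svc.cluster.local" ;; # "^.*$" -> SRV name
  esac
}
translate friendservice   # prints _http-lagom-api._tcp.friendservice.default.svc.cluster.local
```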

Manual Deployment

Now that all the resources required for deployment have been described, this guide will walk through the process of deploying them to Kubernetes manually.

Deploying Chirper requires the following actions:

  1. Set up Kubernetes
  2. Deploy Cassandra
  3. Build Chirper Docker images
  4. Deploy Chirper
  5. Deploy NGINX
  6. Verify Deployment

Let’s take a look at how these tasks can be performed from your own terminal. Make sure you’ve cd’d into your clone of the Chirper repository before proceeding.

1. Setting up your Kubernetes Cluster

You can deploy Chirper to any number of Kubernetes environments. Below, you’ll find information on how to do this on your own local cluster via Minikube, as well as on IBM Bluemix. If you have access to a different Kubernetes environment, ensure that you’ve set up kubectl and docker to point at your cluster and Docker registry. The sections below offer some information on getting both of these environments set up.


Minikube

Minikube provides a way for you to run a local Kubernetes cluster. The command below will reset your Minikube and ensure that kubectl and docker can communicate with it.

Note that the following commands will reset any existing Minikube session.

(minikube delete || true) &>/dev/null && \
minikube start --memory 8192 && \
eval $(minikube docker-env)

IBM Bluemix

IBM Bluemix offers Kubernetes clusters that can be used in production environments. To use your Bluemix cluster, follow the instructions on their website. The IBM Bluemix console will guide you through creating a cluster, installing the bx tool, and using that to configure kubectl.

Because this example makes use of Ingress, it requires a Standard cluster in Bluemix and will not work with a Lite cluster. For more on the differences between Standard and Lite clusters, see the Bluemix documentation.

You’ll then need to setup the Container Registry. Consult the Getting started guide for more details.

IBM Bluemix Private Cloud

IBM Bluemix Private Cloud is an on-prem deployment of IBM Bluemix. To deploy to your Bluemix Private Cloud cluster, you’ll need a working deployment of IBM Bluemix Private Cloud and access to a Docker Registry.

Once you’ve configured your environment, you should be able to verify access with the following command:

kubectl get nodes

2. Deploy Cassandra

To deploy Cassandra to Kubernetes, the requisite resources must be created. The command below will create the resources, wait for Cassandra to start up, and show you its status.

kubectl create -f deploy/kubernetes/resources/cassandra && \
deploy/kubernetes/scripts/kubectl-wait-for-pods && \
kubectl exec cassandra-0 -- nodetool status
service "cassandra" created
statefulset "cassandra" created
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
UN  99.45 KiB  32           100.0%            9f5ffc06-ba53-4f7d-8fbb-c4a522ae3ef8  Rack1-K8Demo

Refer to the files in the Chirper repository at deploy/kubernetes/resources/cassandra for more details.
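The kubectl-wait-for-pods script itself isn’t reproduced here, but the core check such a script needs is simple: keep polling until every pod reports fully Ready and Running. The function below is a hedged sketch of that check, exercised against canned kubectl get pods output rather than a live cluster; it is not the script from the Chirper repository.

```shell
# Hedged sketch of the check a script like kubectl-wait-for-pods performs:
# succeed only when every pod is fully Ready (e.g. 1/1) and Running.
all_pods_ready() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") bad = 1 }
       END { exit bad }'
}

# Exercise the check with canned output instead of a live `kubectl get pods`:
sample='NAME          READY   STATUS    RESTARTS   AGE
cassandra-0   1/1     Running   0          5m'
if printf '%s\n' "$sample" | all_pods_ready; then echo ready; else echo waiting; fi
# prints ready
```

A real wait script would run this check against `kubectl get pods` in a loop with a sleep between attempts and a timeout.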

3. Build Chirper Docker images

Applications must be packaged as Docker images to be deployed to Kubernetes. This can be accomplished with either the sbt or Maven build tool; both are covered below.

For general assistance on setting up your Lagom build please refer to “Defining a Lagom Build” in the Lagom documentation.


Maven

By using fabric8’s docker-maven-plugin, these images will be built and published to the Minikube repository. The command below will build Chirper and the Docker images using Maven and this plugin.

Note that if you see an [ERROR] DOCKER> Unable to pull... error, you may need to update your Java version, as per a known issue with Java TLS.

mvn clean package docker:build

Refer to the various pom.xml files in the Chirper repository for more details.


sbt

Chirper is configured to build Docker images by way of sbt-native-packager. The command below will build Chirper and the Docker images using sbt and this plugin.

sbt -DbuildTarget=kubernetes clean docker:publishLocal

Refer to build.sbt in the Chirper repository for more details.

Next, inspect the images that are available. Note that the various Chirper services all have their own image. These will be deployed to the cluster.

docker images

REPOSITORY                                             TAG                 IMAGE ID            CREATED              SIZE
chirper/front-end                                      1.0-SNAPSHOT        717a0d320d9b        56 seconds ago       132MB
chirper/front-end                                      latest              717a0d320d9b        56 seconds ago       132MB
chirper/load-test-impl                                 1.0-SNAPSHOT        db537c9eb880        About a minute ago   143MB
chirper/load-test-impl                                 latest              db537c9eb880        About a minute ago   143MB
chirper/activity-stream-impl                           1.0-SNAPSHOT        cef7df4abf64        About a minute ago   143MB
chirper/activity-stream-impl                           latest              cef7df4abf64        About a minute ago   143MB
chirper/chirp-impl                                     1.0-SNAPSHOT        c9f353510b73        About a minute ago   143MB
chirper/chirp-impl                                     latest              c9f353510b73        About a minute ago   143MB
chirper/friend-impl                                    1.0-SNAPSHOT        2c7aa5d29ce8        About a minute ago   143MB
chirper/friend-impl                                    latest              2c7aa5d29ce8        About a minute ago   143MB
openjdk                                                8-jre-alpine        c4f9d77cd2a1        2 weeks ago          81.4MB
(remaining rows, showing Minikube’s own system images, omitted)

4. Deploy Chirper

To deploy Chirper, the requisite resources must be created. The command below will create the resources, wait for all of them to start up, and show you the cluster’s pod status.

kubectl create -f deploy/kubernetes/resources/chirper && \
deploy/kubernetes/scripts/kubectl-wait-for-pods && \
kubectl get all
service "activityservice-akka-remoting" created
service "activityservice" created
statefulset "activityservice" created
service "chirpservice-akka-remoting" created
service "chirpservice" created
statefulset "chirpservice" created
service "friendservice-akka-remoting" created
service "friendservice" created
statefulset "friendservice" created
service "web" created
statefulset "web" created
NAME                READY     STATUS    RESTARTS   AGE
activityservice-0   1/1       Running   0          20s
cassandra-0         1/1       Running   0          5m
chirpservice-0      1/1       Running   0          20s
friendservice-0     1/1       Running   0          20s
web-0               1/1       Running   0          20s

Refer to the files in the Chirper repository at deploy/kubernetes/resources/chirper for more details.

5. Deploy NGINX

Now that Chirper has been deployed, deploy the Ingress resources and NGINX to serve the application. The command below will create these resources, wait for all of them to start up, and show you the cluster’s pod status.

kubectl create -f deploy/kubernetes/resources/nginx && \
deploy/kubernetes/scripts/kubectl-wait-for-pods && \
kubectl get pods
ingress "chirper-ingress" created
deployment "nginx-default-backend" created
service "nginx-default-backend" created
deployment "nginx-ingress-controller" created
service "nginx-ingress" created
NAME                                        READY     STATUS    RESTARTS   AGE
activityservice-0                           1/1       Running   0          52s
cassandra-0                                 1/1       Running   0          5m
chirpservice-0                              1/1       Running   0          52s
friendservice-0                             1/1       Running   0          52s
nginx-default-backend-1298687872-bmhdc      1/1       Running   0          21s
nginx-ingress-controller-1705403548-pv36b   1/1       Running   0          21s
web-0                                       1/1       Running   0          52s

Refer to the files in the Chirper repository at deploy/kubernetes/resources/nginx for more details.

6. Verify Your Deployment

Chirper and all of its dependencies are now running in the cluster. Use the following command to determine the URLs to open in your browser. After registering an account in the Chirper browser tab, you’ll be ready to start Chirping!

echo "Chirper UI (HTTP): $(minikube service --url nginx-ingress | head -n 1)" && \
    echo "Chirper UI (HTTPS): $(minikube service --url --https nginx-ingress | tail -n 1)" && \
    echo "Kubernetes Dashboard: $(minikube dashboard --url)"
# The URLs below will be different on your system. Be sure to
# run the commands above to produce the correct URLs.

Chirper UI (HTTP):
Chirper UI (HTTPS):
Kubernetes Dashboard:

Note that the HTTPS URL is using a self-signed certificate so you will need to accept it to bypass any browser warnings.

Automated Deployment

This guide has covered the steps required to manually deploy your resources to Kubernetes. In a production setting, you’ll often wish to automate this. Chirper includes an install script that will take care of creating the resources for you. You can find it in the Chirper repository at deploy/kubernetes/scripts/install. Use it as a template for automating your own deployments.

Note that for both solutions described below, you’ll need to ensure your environment is configured for access to a Docker registry (if applicable) and that kubectl has access to your Kubernetes environment.

Deploying using Minikube

For environments that don’t use a registry, such as Minikube, simply launch the script to start the process.

deploy/kubernetes/scripts/install --all --minikube

Deploying using a Docker registry

For production environments, you’ll need to use a Docker registry. The install script takes an optional argument that specifies the Docker registry to use. When provided, the script pushes your images there and ensures that the resources point to them. You’ll need to reference the documentation for the registry you choose; if running on IBM Bluemix, the Container Registry is a natural fit, while for IBM Bluemix Private Cloud deployments you’ll need to configure your own Docker registry. For example, the following can be used to deploy to a registry namespace my-namespace that has been set up on IBM Bluemix.

deploy/kubernetes/scripts/install --all --registry <your-registry>/my-namespace
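To make the registry argument concrete, the sketch below shows how an install script might retag a locally built image for a remote registry before pushing it. The registry host and namespace here are placeholder assumptions, not values taken from Chirper’s install script.

```shell
# Illustration only: retag a locally built image for a remote registry.
# The registry value is a placeholder assumption for this sketch.
registry="my-registry.example.com/my-namespace"
image="chirper/friend-impl:1.0-SNAPSHOT"
target="${registry}/${image#chirper/}"   # swap the local prefix for the registry
echo "$target"   # prints my-registry.example.com/my-namespace/friend-impl:1.0-SNAPSHOT
# A real script would then run:
#   docker tag "$image" "$target" && docker push "$target"
```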


Conclusion

Kubernetes provides many features that complement running a microservices system in production. By leveraging the StatefulSet, Ingress, and Service resources, a Lagom-based microservices system can easily be deployed into your Kubernetes cluster. The service-locator-dns project can be used to integrate with Kubernetes Service Discovery. Maven users can containerize their applications with fabric8’s docker-maven-plugin, and sbt users can do the same with sbt-native-packager. Chirper can be referenced by any developer wishing to deploy their Lagom or Akka Cluster application to Kubernetes. It’s the perfect example for learning how to deploy your microservices system into Kubernetes and take advantage of advanced features like Ingress TLS termination, service location, and more!