Deploying Lightbend applications to OpenShift

This guide describes how to deploy Lagom applications to OpenShift. In the future, we hope to expand its scope to include Akka and Play applications; in the meantime, much of this guide will be applicable to Akka and Play, but will need to be adapted.

It is intended to be used by people who have a cursory understanding of OpenShift or Kubernetes - you should know what Kubernetes and OpenShift are, have a basic understanding of what pods, services and containers are, and you should have interacted with the oc or kubectl commands before. You are not, however, expected to be an expert.

While this guide is targeted at OpenShift, much of it will be applicable to Kubernetes in general. Where the guide depends on OpenShift-specific features, we will generally point this out. Although this guide uses the OpenShift client command, oc, in most cases it can be substituted with kubectl, since oc for the most part provides a superset of the commands supported by kubectl.
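For example, the following two commands do the same thing and are interchangeable against an OpenShift cluster:

```shell
# List the pods in the current project with the OpenShift client
oc get pods

# The equivalent kubectl command
kubectl get pods
```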

Following this guide

There are multiple configurations that this guide documents, such as using Scala or Java, using Maven or sbt, and using Kafka or a relational database. It is up to you to follow the parts that are relevant to you, and skip over the parts that are not. The sample application referenced by this guide is offered in both Scala and Java, builds with either Maven or sbt, and uses Postgres and Kafka.

There are two ways to use this guide. The first is to follow along using the sample applications that we have provided. This is great if you are evaluating the technologies, or just want to get a feel for deployment to production before you deploy your own apps. The second is to follow along with your own application, applying the steps we document to your application. Careful attention will need to be paid to ensuring that all configuration, in particular names, gets updated to match your application.

Ideally, you should follow along using a realistic OpenShift cluster, something deployed to AWS, GCP or Azure for example, as this will provide a more realistic demonstration of the technologies, allowing you to see many services running across a cluster. However, due to the hosting cost of running such a cluster, this may not always be feasible, and perhaps you are just evaluating these technologies with no budget for hosting them yet. In that case, you can follow this guide using Minishift, running on your local machine.

Running this guide in Minishift has some significant limitations, primarily around resources such as memory and CPU. In some cases, you will have to deploy things with only one replica, when in production you should really use at least three. And often you will have to assign only small fractions of CPU resources to an application, especially if you are running many, and this will make the application very slow to start up.

Installing OpenShift

We will not actually document installing OpenShift or Minishift in this guide, since there are already resources on the web for doing this. You may already have an OpenShift installation that you can use, in which case, you can simply use that.

Installing a full cluster

If you wish to install a full OpenShift cluster from scratch, you can follow one of the following:

  • OKD - These are instructions for installing OKD, the open source distribution of OpenShift.
  • OpenShift Container Platform - These are instructions for installing OpenShift Container Platform, Red Hat's commercially supported OpenShift distribution. It requires a Red Hat license to run.

In this guide, we will assume that you have created a project called myproject, and will use this as the default namespace that all applications get deployed to. For convenience, all the commands we use that need the namespace will use a variable called NAMESPACE, so if you set this in your shell, like so:

NAMESPACE=myproject

Then you’ll just be able to copy and paste all the commands. That said, there are some configuration files that will have myproject hard coded and may need to be updated, so if you’re not using the myproject project, you’ll need to update these.
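For example, with NAMESPACE set as above, a command like the following can be pasted as-is (oc get pods is a standard command; it simply lists the pods in the project):

```shell
# List all pods in the project, using the NAMESPACE variable
oc get pods -n $NAMESPACE
```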

Setting up docker

You will need to ensure that you set your environment up to be able to push docker images to your OpenShift installation. This requires exposing your OpenShift installation's internal registry to the outside world and then logging in to it. For more information, see here for how to expose the registry, and here for how to log in once the registry is exposed.

Typically, once this is done, your docker registry will be available at a URL like docker-registry-default.<your cluster's domain>.
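As a sketch of what logging in typically looks like once the registry is exposed (the hostnames here are placeholders for your cluster's API server and registry route; oc whoami -t prints your current session token):

```shell
# Log in to the OpenShift cluster first
oc login https://your-cluster.example.com:8443

# Then authenticate docker against the exposed registry using your session token
docker login -u $(oc whoami) -p $(oc whoami -t) docker-registry-default.your-cluster.example.com
```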

Installing Minishift

Minishift can be installed following these instructions.

Before starting Minishift, we recommend you configure it with more RAM, and potentially more CPUs (by default, it gets 2GB of RAM and 2 CPUs). If you use Minishift for other purposes, you may want to create a custom profile so that you don’t interfere with them; a profile called lightbend can be created and switched to by running:

minishift profile set lightbend

Now you can configure how much RAM should be allocated to Minishift; let’s allocate it 6GB:

minishift config set memory 6GB

The memory can also be set by passing an argument when you start Minishift, but by setting it in the config, you ensure the setting is remembered whenever you delete and recreate your Minishift instance.
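The number of CPUs can be persisted the same way; for example, to allocate 3 CPUs (assuming the default cpus config key):

```shell
minishift config set cpus 3
```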

Now you can start Minishift:

minishift start

Once Minishift is started, you need to ensure that the oc binary is on your path, and that your environment is configured to use the Minishift VM's docker host. To do this, run:

eval $(minishift oc-env)
eval $(minishift docker-env)

The first command modifies your PATH to ensure the oc binary is on it; the second sets some DOCKER_* environment variables that tell Docker which host to use and how to authenticate with it when building images. Since these commands just modify environment variables in your current shell session, they will need to be rerun every time you open a new terminal window, or any time you delete and then restart your Minishift instance.
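As a quick sanity check (not part of the original setup, just a way to confirm the environment variables took effect):

```shell
# Should print a tcp:// address pointing at the Minishift VM
echo $DOCKER_HOST

# Should list the containers running inside the Minishift VM
docker ps
```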