Install Console

Lightbend Console installs as a Helm chart with many configurable parameters. We provide a script that simplifies Console installation in development and production environments. The script verifies the environment before and after the install to help troubleshoot any issues. If you cannot run Helm Tiller in your cluster, there is a section about installing without Tiller. It is also possible to install using Helm directly, which can be useful in cases where the install script can't be used. We also provide an experimental Operator installation.


Prior to installing the Console, you should have already:

  • Started a cluster in a distributed or local environment
  • Set up Helm
  • Set up Storage
  • Set up Credentials
  • Made Python 2.x available in your PATH
  • Created a namespace to install Lightbend Console into. For the purposes of this document, lightbend is assumed.

Windows support

The install script can be used on Windows 10 with the OpenShift client. Running against local clusters such as Minikube or Minishift is not supported.

To use the install script on Windows, make sure you have these programs available in your PATH:

Both PowerShell and Command Prompt can be used. Be aware that the example commands in the rest of this document need to be altered slightly on Windows by removing the leading ./ characters. Instead of:

./ <commands>

write:

<commands>

Download the Install Script

Download the script and make it executable.

curl -O
chmod u+x

The script checks your environment for platform dependencies and their versions, installs Lightbend Console into your cluster, verifies existing Console installations, and helps with debugging problems by gathering logs and diagnostic data. The script uses Helm, allowing you to pass in chart values, and offers other install configuration options.

For fine-grained control over the installation process, including when you aren’t running Tiller on your cluster, see Installing without Helm Tiller.

Performing the install

In some Kubernetes environments, you can simply run the install subcommand and specify the Console version:

./ install --namespace=lightbend --version=1.2.2

We recommend always specifying a version when using the install subcommand. If the version argument is not provided, Helm will try to get the newest Lightbend Console version, including release candidates. The current version is 1.2.2.

It can take a few minutes for all components to initialize. Once that is done, you can verify that Lightbend Console is running:

./ verify --namespace=lightbend

The following platforms require more specific install commands:

Setting chart values and Helm arguments

Any arguments you provide after -- are passed directly to Helm.

Chart values can be set in a YAML file and passed to Helm with the --values <values.yaml> parameter:

usePersistentVolumes: true
defaultStorageClass: gp2

It is also possible to pass ad-hoc values on the command line with --set setting=value, but it is strongly recommended to use a values.yaml file to preserve your settings for upgrades.

The following table describes the available chart values and lists their defaults.

exposeServices (default: false): Set to NodePort or LoadBalancer to generate services accessible from outside the cluster (e.g. http://$(minikube ip):30080 when used with NodePort) for interacting with Lightbend Console.
esConsoleExposePort (default: 30080): Port on which the Console will be exposed when exposeServices is used.
createClusterRoles (default: true): Set to true to create a ClusterRole for Prometheus and kube-state-metrics. Set to false to not create them, in case you would like to define your own.
usePersistentVolumes (default: true): Use Persistent Volumes by default. If false, the Console will use emptyDir for all volumes. See Set up Storage for more information.
defaultStorageClass (default: <none>): Name of the StorageClass to use for persistent volumes. The default uses the cluster's DefaultStorageClass.
prometheusVolumeSize (default: 256Gi): Size of the Prometheus volume. Used for storing Prometheus data and custom monitors.
esGrafanaVolumeSize (default: 32Gi): Size of the Grafana volume. Used for saving custom dashboards, plugins, and users.
prometheusDomain: Domain for scrape annotations.
alertManagers (default: alertmanager:9093): Comma-separated list of Alertmanager addresses. When installing with createAlertManager=false, this is used to specify existing Alertmanagers to connect to.
esConsoleURL (default: n/a): External URL for access to the Console. Currently used by Prometheus and Alertmanager in alerts.
defaultCPURequest (default: 100m): Default container CPU request.
defaultMemoryRequest (default: 50Mi): Default container memory request.
prometheusMemoryRequest (default: 250Mi): Prometheus container memory request.
<container>CPURequest (default: n/a): Container-specific CPU request. Replace <container> with one of esConsole, esMonitor, prometheus, or grafana.
<container>MemoryRequest (default: n/a): Container-specific memory request. Replace <container> with one of esConsole, esMonitor, prometheus, or grafana.
consoleUIConfig.isMonitorEditEnabled (default: false): Set to true to enable monitor editing.
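For example, a values.yaml that exposes the Console externally and tunes storage could combine several of these keys. The specific values shown here are illustrative, not recommendations:

```yaml
# Illustrative values.yaml; adjust each value for your own cluster.
exposeServices: NodePort
esConsoleExposePort: 30080
usePersistentVolumes: true
defaultStorageClass: gp2
prometheusVolumeSize: 256Gi
esGrafanaVolumeSize: 32Gi
```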

Install configuration

The install subcommand itself has several options which affect how it calls Helm. Pass the --help option to view them all:

./ install --help

Available arguments are:

--dry-run (default: false): Just print the commands that the script would run instead of executing them.
--skip-checks (default: false): Skip checking for dependency tools, credentials validity, etc.
--wait (default: false): Wait for the install or upgrade to finish before returning.
--namespace (required): Namespace into which to install Lightbend Console.
--force-install (default: false): Set to true to delete the installed chart first.
--delete-pvcs (default: unset): Override any warnings about possible data loss when uninstalling. USE WITH CAUTION.
--set key1=val1,key2=val2 (default: unset): Set Helm chart values. Can be repeated.
--es-chart (default: enterprise-suite): Chart name to install from the repository.
--export-yaml (default: unset): Export resource YAMLs to stdout instead of installing. Set to creds for credentials, console for everything else.
--helm-name (default: enterprise-suite): Helm release name.
--local-chart (default: unset): Location of a local chart (tarball). Overrides --repo.
--repo: Helm chart repository to use.
--creds (default: $HOME/.lightbend/commercial.credentials): Credentials file in property format with username/password.
--version (default: unset): Version of the Helm chart to install. The latest stable version is 1.2.2.

In addition to commandline arguments, the install subcommand supports the following environment variables:

LIGHTBEND_COMMERCIAL_USERNAME: Credentials username. If specified in conjunction with LIGHTBEND_COMMERCIAL_PASSWORD, the pair overrides the --creds argument.
LIGHTBEND_COMMERCIAL_PASSWORD: Credentials password. If specified in conjunction with LIGHTBEND_COMMERCIAL_USERNAME, the pair overrides the --creds argument.
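The precedence between the environment-variable pair and --creds can be sketched as follows. This is illustrative logic only, not the script's actual code:

```shell
# Illustrative sketch (not the script's actual code): when BOTH
# environment variables are set, they win; otherwise the --creds
# file is used.
cred_source() {
  if [ -n "$LIGHTBEND_COMMERCIAL_USERNAME" ] && [ -n "$LIGHTBEND_COMMERCIAL_PASSWORD" ]; then
    echo "environment"
  else
    echo "creds-file"   # fall back to the file given by --creds
  fi
}

LIGHTBEND_COMMERCIAL_USERNAME=jdoe LIGHTBEND_COMMERCIAL_PASSWORD=secret cred_source
# prints: environment
```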

Platform Details

This section provides install tips for specific platforms.


Minikube

Run the script and, in addition to specifying the version, include the --set exposeServices=NodePort argument to enable external access.

./ install --namespace=lightbend --version=1.2.2 --set exposeServices=NodePort

You should now be able to open the Console in your browser by running:

minikube service expose-es-console --namespace lightbend

To just print the URL instead of opening it in a browser, add the --url argument to the above command.


Minishift

Unlike Minikube, Minishift doesn't provide a StorageClass. Disable persistent volumes when installing the Console:

./ install --namespace=lightbend --version=1.2.2 --set usePersistentVolumes=false

Kubernetes 1.6 / OpenShift 3.6

To install Lightbend Console on older Kubernetes versions (1.6 and earlier, or OpenShift 3.6 and earlier), you may need to specify the older Kubernetes API versions of our resources.

First, create a values.yaml file:

deploymentApiVersion: apps/v1beta1
daemonSetApiVersion: extensions/v1beta1

Then install Console:

./ install --namespace=lightbend --version=1.2.2 -- --values=values.yaml

Container metrics graphs will be empty since these metrics are not available in OpenShift 3.6.

Upgrading the console version

The install subcommand will automatically upgrade the chart if it detects it is already installed.

Make sure to read the release notes in case any actions are required.

As Helm doesn’t remember the parameters you used for the initial install, you need to pass the same set of values you originally used.

Specify a new version and the values.yaml file you used for the initial install:

./ install --namespace=lightbend --version=1.2.2 -- --values=values.yaml

If the upgrade fails, you can do a force install to purge the previous install:

./ install --namespace=lightbend --version=1.2.2 --force-install -- --values=values.yaml

Installing without Helm Tiller

By default, the install script uses Helm client commands, some of which require the Helm Tiller server component to be installed. There are situations where this may not be appropriate. For example:

  • You do not or cannot run Tiller on your cluster.
  • You want to manage the Kubernetes resource yaml yourself.
  • You are attempting to install the product on a non-supported platform.

You can use the script to generate the yaml and deploy directly.

Generating YAML from the install script

The script can be used to export the resource yaml definitions that Helm would use. You can use these definitions to manage the installation of the Console into your Kubernetes infrastructure as you see fit. This still requires you to have Helm installed in your local environment, because it uses Helm to render chart templates into Kubernetes resources. Tiller, however, does not need to be installed in your cluster.

The --export-yaml commandline flag enables the yaml export.

First export the Lightbend commercial credentials as a Secret resource:

./ install --namespace=lightbend --version=1.2.2 --export-yaml=creds | \
    kubectl --namespace=lightbend apply -f -

Then export all the resources that make up an install:

./ install --namespace=lightbend --version=1.2.2 --export-yaml=console > console.yaml
kubectl --namespace=lightbend apply -f console.yaml

Any exported credentials yaml will include your credentials. They are base64 encoded but not encrypted, so they should be considered plaintext and managed carefully. For example, do not commit them to a code repository.
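To see why the exported Secret must be treated as plaintext, note that base64 is a reversible encoding, not encryption. No key is needed to recover the original value:

```shell
# Base64-encode a sample password, then recover it: anyone who can
# read the Secret YAML can read the password.
encoded=$(printf 'password123' | base64)
echo "$encoded"                              # cGFzc3dvcmQxMjM=
printf '%s' "$encoded" | base64 --decode     # password123
```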

Backwards compatibility and further customization

We provide backwards compatibility of the chart parameters based on the semantic version. For example, 1.x.x releases should not change existing parameter names.

While the chart itself is public, we provide no guarantee of maintaining its structure.

If you need further customization beyond what the Helm parameters provide, it is recommended to create a support ticket for Lightbend to add it. This gives you the greatest guarantee of compatibility across versions.

Verifying an install

The verify subcommand can be used to check whether an existing Lightbend Console installation is running correctly:

./ verify --namespace=lightbend

Available arguments are:

--skip-checks (default: false): Skip checks for dependency tools (kubectl).
--namespace (default: lightbend): Namespace in which to look for Lightbend Console.


Uninstalling

To remove the Console from your cluster, use the uninstall subcommand:

./ uninstall

To avoid losing your data, the uninstall will warn and stop if it detects that it would risk the loss of Persistent Volumes containing your monitors and other data.

Available arguments are:

--dry-run (default: false): Just print the commands that the script would run instead of executing them.
--skip-checks (default: false): Skip checks for dependency tools (helm, kubectl).
--helm-name HELM_NAME (default: enterprise-suite): Helm release name to uninstall.
--delete-pvcs (default: unset): Ignore warnings about PVs and proceed anyway. USE WITH CAUTION!

Debug data for diagnostics

To help diagnose problems in Lightbend Console, there is a debug-dump subcommand:

./ debug-dump --namespace=lightbend

When no other arguments are given, it looks for an existing Lightbend Console installation and gathers Kubernetes descriptions of the Console namespace, as well as logs for all the containers running there. It does not gather data from outside the Console namespace (lightbend by default). Everything is placed into a zip file in the working directory with a name like

Supported arguments are:

--skip-checks (default: false): Skip checks for dependency tools (kubectl).
--namespace (default: lightbend): Namespace in which to look for Lightbend Console.
--print (default: false): Print gathered data to stdout instead of making a zip file.

Installing without the install script

It is possible to install Lightbend Console into your cluster without using the install script. We advise that you use the install script if at all possible, but there may be reasons why you can't. Manual installation consists of two steps: setting up credentials, then using Helm to install the Console chart. The instructions below assume that you have a Kubernetes cluster up and running with Helm installed.

Setting up credentials

Your Lightbend credentials are used to pull the Console Docker images, which are only available to authenticated users. Credentials consist of a username and password, usually kept in the ~/.lightbend/commercial.credentials file. When installing manually, you will have to create a credentials.yaml file containing your username and password, and pass the location of the file as a parameter to Helm. The yaml file should look like the following:

    username: johndoe
    password: password123

Be aware that this file contains your credentials in plain text. Do not commit it to your version control repository and consider where you are storing it.
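If you already keep your credentials in the property-format file, a few lines of shell can generate credentials.yaml from it. The property key names used here (username, password) are an assumption about the file format; check your own file before relying on this sketch:

```shell
# Sketch: convert a property-format credentials file into the
# credentials.yaml values file described above. The property key
# names are assumptions; adjust the sed patterns to match your file.
cat > commercial.credentials <<'EOF'
username = johndoe
password = password123
EOF

user=$(sed -n 's/^username *= *//p' commercial.credentials)
pass=$(sed -n 's/^password *= *//p' commercial.credentials)
printf 'username: %s\npassword: %s\n' "$user" "$pass" > credentials.yaml
cat credentials.yaml
```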

Installing the Helm chart

After you have prepared your credentials, add the Lightbend Helm charts repository and update the list of known charts:

helm repo add es-repo
helm repo update

When the repository is updated, install the Console as follows:

helm install es-repo/enterprise-suite --name enterprise-console --namespace lightbend --version 1.2.2 --values credentials.yaml

The previous command has quite a few parameters. Here they are explained in order:

  • install es-repo/enterprise-suite - Specifies the Lightbend Console Helm chart name; do not change this value.
  • --name enterprise-console - name of the Helm release (a local name given to the installed chart). Can be any name you like.
  • --namespace lightbend - Kubernetes namespace where the resources are installed. Can be an existing or new namespace.
  • --version 1.2.2 - version of the Console.
  • --values credentials.yaml - a file where your Lightbend credentials are stored in yaml format.

For more information on how to use Helm and its install command, please refer to the official Helm docs.

Operator Installation


The Console Operator is currently an experimental feature. It only supports simple installs and doesn’t handle upgrades yet.

An Operator install works by installing the Console operator, then creating a Console custom resource.

After creating a Console custom resource (CR), the operator will install Console using configuration provided in the CR. If you update values in the CR, the operator will update the Console installation accordingly, and if you delete the CR, Console will be uninstalled. Currently, the operator doesn’t support upgrading between Console versions. In order to upgrade you have to completely uninstall the Console and its operator, then do a fresh install using a new operator version. This should be addressed in future versions of the operator.


We provide a kustomization.yaml to make the install simpler, and also to better integrate with existing tooling. In order to use it, you need kubectl version 1.14 or newer. You can also use the standalone kustomize CLI tool. Alternatively you can apply the Kubernetes resources manually.

Install Operator

First, download the latest version of console-charts from the releases page, unarchive it, and switch to the operator directory:

curl -LO
tar xzvf v1.2.2.tar.gz
cd console-charts-1.2.2/operator

Edit the kustomization.yaml file to your liking (such as specifying the namespace the operator should be installed in). Then, install the operator:

kubectl apply -k .

It is suggested that you create your own kustomization.yaml and import our console-operator folder into your own version control repository. This lets you maintain your configuration separately from the console operator. The website has more information on how to do this.
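A minimal custom kustomization.yaml along these lines could reference the unpacked operator directory as a base. The apiVersion/kind header is standard kustomize; the relative path and namespace shown are assumptions about your layout:

```yaml
# Hypothetical kustomization.yaml; adjust the base path and namespace.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: lightbend
bases:
  - ../console-charts-1.2.2/operator
```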

Install Console

Create a console.yaml with a Console resource:

kind: Console
  name: console
    username: johndoe
    password: password123
  exposeServices: NodePort

Then apply it to the same namespace you created the operator in:

kubectl apply -n lightbend -f console.yaml

The operator will see this resource and create an instance of Lightbend Console for you in the same namespace. If you are using Minikube, you should be able to access it at port 30080 of the minikube ip.

See the console-charts repository for a full example of the Console resource.