Install Console
Lightbend Console installs as a Helm chart with many configurable parameters. We provide a script that simplifies Console installation in development and production environments. The script verifies the environment before and after the install to help troubleshoot any issues. It is also possible to install using Helm directly, which may be useful in cases where the install script can’t be used. We also provide an experimental Operator installation.
Prerequisites
Prior to installing the Console, you should have already:
- Started a cluster in a distributed or local environment
- Set up Helm
- Set up Storage
- Set up Credentials
- Python 2.x available in your path
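You can quickly confirm that a suitable Python is on your path; if this reports a 3.x version, point the script at a Python 2 interpreter instead:
python --version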
Windows support
The install script can be used on Windows 10 with OpenShift client. Running on local clusters like Minikube or Minishift is not supported.
To use the install script on Windows, make sure you have these programs available in your PATH:
- Python 2.7
- OpenShift client
- kubectl
- Helm
Both PowerShell and Command Prompt shells can be used. Be aware that the example commands in the rest of this document need to be slightly altered by removing the leading ./ characters. So on Windows, instead of:
./lbc.py <commands>
Write this:
lbc.py <commands>
Download the Install Script
Download the script and make it executable.
curl -O https://raw.githubusercontent.com/lightbend/console-charts/master/enterprise-suite/scripts/lbc.py
chmod u+x lbc.py
The script checks your environment for platform dependencies and their versions, installs Lightbend Console into your cluster, verifies existing Console installations, and helps debug problems by gathering logs and diagnostic data. The script uses Helm, allowing you to pass in chart values, and offers other install configuration options.
Performing the install
You need a namespace to install Lightbend Console into. For the purposes of these instructions, lightbend will be assumed.
Create the namespace with:
CONSOLE_NAMESPACE=lightbend
kubectl create namespace "${CONSOLE_NAMESPACE}"
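You can confirm the namespace exists before proceeding:
kubectl get namespace "${CONSOLE_NAMESPACE}"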
In some Kubernetes environments, you can simply run the install subcommand and specify the Console version:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17
We recommend always specifying a version when using the install subcommand. If a version argument is not provided, Helm will try to get the newest Lightbend Console version, including release candidates. The current version is 1.2.17.
The command for installing on Kubernetes 1.15 or 1.16-1.19 is slightly different. See the Kubernetes 1.15 installation notes or the Kubernetes 1.16-1.19 installation notes as appropriate.
In particular, if you miss this for Kubernetes 1.16-1.19, the installation will fail with an error such as:
$ helm install enterprise-suite es-repo/enterprise-suite --namespace lightbend --version 1.2.17 --values /var/folders/_l/q8t5_1gj5yx49v31fk6mr1p40000gn/T/tmpZ8X0d1
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2"
It can take a few minutes for all components to initialize. Once that is done, you can verify that Lightbend Console is running:
./lbc.py verify --namespace="${CONSOLE_NAMESPACE}"
Some platforms require more specific install commands; see the Platform Details section below.
Setting chart values and Helm arguments
Any arguments you provide to lbc.py after -- are passed directly to Helm.
Chart values can be set in a YAML file and passed to Helm with the --values <values.yaml> parameter:
usePersistentVolumes: true
defaultStorageClass: gp2
It is also possible to pass ad-hoc values on the command line with --set setting=value, but it is strongly recommended to use a values.yaml file to preserve your settings for upgrades.
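For example, to pass a values.yaml file through to Helm during an install (note the -- separator, described above):
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 -- --values=values.yaml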
The following table describes the available chart values and lists their defaults.
Value Key | Default | Description |
---|---|---|
exposeServices | false | Set to NodePort or LoadBalancer to generate services accessible from outside of cluster (eg. http://$(minikube ip):30080 when used with NodePort ) for interacting with Lightbend Console. |
esConsoleExposePort | 30080 | Port on which Console will be exposed when value exposeServices is used. |
createClusterRoles | true | Set to true to create a ClusterRole for Prometheus and kube-state-metrics . Set to false to not create them, in case you would like to define your own. |
usePersistentVolumes | true | Use Persistent Volumes by default. If false , Console will use emptyDir for all volumes. See Set up Storage for more information. |
defaultStorageClass | <none> | Name of the StorageClass to use for persistent volumes. The default uses the cluster’s DefaultStorageClass . |
prometheusVolumeSize | 256Gi | Size of the Prometheus volume. Used for storing prometheus data and custom monitors. |
esGrafanaVolumeSize | 32Gi | Size of the Grafana volume. Used for saving custom dashboards, plugins, and users. |
prometheusDomain | prometheus.io | Domain for scrape annotations. For example, prometheus.io/scrape . |
alertManagers | alertmanager:9093 | Comma separated list of Alertmanager addresses. When installing with createAlertManager=false this is used to specify existing Alertmanagers to connect to. |
esConsoleURL | n/a | External URL for access to the console. Currently used by Prometheus and Alertmanager in alerts. |
defaultCPURequest | 100m | Default container CPU request. |
defaultMemoryRequest | 50Mi | Default container memory request. |
kubeStateMetricsScrapeNamespaces | "" | (Experimental Feature) Comma-separated list of namespaces to constrain the scope of what is scraped as kube-state-metrics. Note that this has the effect of constraining which deployments/workloads are visible in Lightbend Console. "" means all namespaces. Suggested to set it in values.yaml. See Opt-in namespaces to scrape |
prometheusMemoryRequest | 250Mi | Prometheus container memory request. |
containerCPURequest | n/a | Container specific resource request. Replace container with one of esConsole , esMonitor , prometheus , or grafana . |
containerMemoryRequest | n/a | Container specific resource request. Replace container with one of esConsole , esMonitor , prometheus , or grafana . |
consoleUIConfig.isMonitorEditEnabled | false | Set to true to enable monitor editing. |
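As an illustration, a values.yaml combining several of the keys above might look like the following; the specific values are examples, not recommendations:
exposeServices: NodePort
esConsoleExposePort: 30080
usePersistentVolumes: true
defaultStorageClass: gp2
esGrafanaVolumeSize: 32Gi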
Install configuration
The install subcommand itself has several options which affect how it calls Helm. Pass the --help option to view them all:
./lbc.py install --help
Available arguments are:
Argument | Description | Default |
---|---|---|
--namespace | Namespace into which to install Lightbend Console. | unset (required) |
--dry-run | Just print the commands that the script would run instead of executing them. | false |
--skip-checks | Skip checking for dependency tools, credentials validity, etc. | false |
--wait | Wait for install or upgrade to finish before returning. | false |
--force-install | Set to true to delete the installed chart first. | false |
--delete-pvcs | Override any warnings about possible data loss when uninstalling. USE WITH CAUTION. | unset (false) |
--set key1=val1,key2=val2 | Set Helm chart values. Can be repeated. | unset |
--es-chart | Chart name to install from the repository. | enterprise-suite |
--export-yaml | Export resource YAMLs to stdout instead of installing. Set to creds for credentials, console for everything else. | unset |
--helm-name | Helm release name. | enterprise-suite |
--local-chart | Location of a local chart (tarball). Overrides --repo. | unset |
--repo | Helm chart repository to use. | https://repo.lightbend.com/helm-charts |
--creds | Credentials file in property format with username/password. | $HOME/.lightbend/commercial.credentials |
--version | Version of the Helm chart to install. The latest stable version is 1.2.17. | unset |
--keep-chart | Does not delete the downloaded chart. | false |
--tiller-namespace | Namespace into which Tiller was installed. | $TILLER_NAMESPACE / kube-system |
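For example, to preview the commands the script would run without touching the cluster:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 --dry-run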
In addition to commandline arguments, the install subcommand supports the following environment variables:
Variable | Description |
---|---|
LIGHTBEND_COMMERCIAL_USERNAME | Credentials username. If specified in conjunction with LIGHTBEND_COMMERCIAL_PASSWORD the pair overrides --creds argument. |
LIGHTBEND_COMMERCIAL_PASSWORD | Credentials password. If specified in conjunction with LIGHTBEND_COMMERCIAL_USERNAME the pair overrides --creds argument. |
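For example, to supply credentials through the environment instead of a credentials file (the username and password shown are placeholders):
export LIGHTBEND_COMMERCIAL_USERNAME=johndoe
export LIGHTBEND_COMMERCIAL_PASSWORD=password123
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17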
Platform Details
This section provides install tips for specific platforms.
Azure AKS
Although Lightbend Console is functional when running on Azure AKS, there are some caveats; for example, the following metrics are missing:
container_network_transmit_bytes_total
container_network_receive_bytes_total
Kubernetes 1.16-1.19
To install Lightbend Console on Kubernetes 1.16-1.19, you need to configure use of the newer Kubernetes API versions and use a particular version of kube-state-metrics. For that, create a values.yaml file with the lines:
kubeStateMetricsImage: quay.io/coreos/kube-state-metrics
kubeStateMetricsVersion: v1.9.7
deploymentApiVersion: apps/v1
daemonSetApiVersion: apps/v1
Then install Console:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 -- --values=values.yaml
Kubernetes 1.15
When installing Lightbend Console on Kubernetes 1.15, you need to use a particular version of kube-state-metrics. For that, create a values.yaml file with the line:
kubeStateMetricsVersion: v1.8.0
Then install Console:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 -- --values=values.yaml
Kubernetes 1.6 / OpenShift 3.6
To install Lightbend Console on older Kubernetes versions (1.6 and earlier, or OpenShift 3.6 and earlier), you may need to specify the older Kubernetes API versions of our resources.
Do the following to install Lightbend Console on Kubernetes 1.6 / OpenShift 3.6:
Create a values.yaml file with the lines:
rbacApiVersion: authorization.openshift.io/v1
deploymentApiVersion: apps/v1beta1
daemonSetApiVersion: extensions/v1beta1
apiGroupVersion: authorization.openshift.io/v1
Then install Console:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 -- --values=values.yaml
Container metrics graphs will be empty since these metrics are not available in OpenShift 3.6.
Minikube
Run the script and, in addition to specifying the version, make sure to include the --set exposeServices=NodePort argument to enable external access.
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 --set exposeServices=NodePort
You should now be able to open the Console in your browser by running:
minikube service expose-es-console --namespace="${CONSOLE_NAMESPACE}"
To just print the URL instead of opening it in a browser, add the --url argument to the above command.
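For example:
minikube service expose-es-console --namespace="${CONSOLE_NAMESPACE}" --url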
Minishift
Unlike Minikube, Minishift doesn’t provide a StorageClass. Disable persistent volumes when installing Console:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 --set usePersistentVolumes=false
Upgrading the console version
The install subcommand will automatically upgrade the chart if it detects it is already installed.
Make sure to read the release notes in case any actions are required.
As Helm doesn’t remember the parameters you used for the initial install, you need to pass the same set of values you originally used. Specify the new version and the values.yaml file you used for the initial install:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 -- --values=values.yaml
If the upgrade fails, you can do a force install to purge the previous install:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 --force-install -- --values=values.yaml
Installing without Helm Tiller
Helm 3 no longer uses Tiller, so migrating to Helm 3 is one way to install without Tiller.
By default, lbc.py uses Helm client commands, some of which require the Helm Tiller server component to be installed. There are situations where this may not be appropriate. For example:
- You do not or cannot run Tiller on your cluster.
- You want to manage the Kubernetes resource yaml yourself.
- You are attempting to install the product on a non-supported platform.
You can use the lbc.py script to generate the yaml and deploy it directly.
Generating YAML From lbc.py
The lbc.py script can be used to export the resource yaml definitions that Helm would use. You can use these definitions to manage the installation of the Console into your Kubernetes infrastructure as you see fit. This still requires you to have Helm installed in your local environment, because the script uses Helm to render chart templates into Kubernetes resources. Tiller, however, does not need to be installed in your cluster.
The --export-yaml command line flag enables the yaml export.
First export the resource definitions that make up an install:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 --export-yaml=console > console.yaml
For installation onto Kubernetes 1.16 or later, change that to:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 --export-yaml=console -- --set deploymentApiVersion=apps/v1 > console.yaml
Apply the yaml on the cluster:
kubectl --namespace="${CONSOLE_NAMESPACE}" apply -f console.yaml
Then export the Lightbend commercial credentials as a Secret resource:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 --export-yaml=creds | \
kubectl --namespace="${CONSOLE_NAMESPACE}" apply -f -
Any exported credentials yaml will include your credentials. They are base64 encoded but not encrypted, so treat them as plaintext and manage them carefully. For example, do not commit them to a code repository.
Backwards compatibility and further customization
We provide backwards compatibility of the chart parameters based on the semantic version. For example, 1.x.x releases should not change existing parameter names.
While https://github.com/lightbend/console-charts is public, we provide no guarantee of maintaining its structure.
If you need further customization beyond what the Helm parameters provide, it is recommended to create a support ticket for Lightbend to add it. This gives you the greatest guarantee of compatibility across versions.
Verifying an install
The verify subcommand can be used to check whether an existing Lightbend Console installation is running correctly:
./lbc.py verify --namespace="${CONSOLE_NAMESPACE}"
Available arguments are:
Argument | Description | Default |
---|---|---|
--namespace | Namespace in which to look for Lightbend Console. | unset (required) |
--skip-checks | Skip checks for dependency tools (kubectl). | false |
Uninstalling
To remove the Console from your cluster, use the uninstall subcommand:
./lbc.py uninstall --namespace="${CONSOLE_NAMESPACE}"
To avoid losing your data, this will warn and stop if it detects that the uninstall would risk the loss of Persistent Volumes containing your monitors and other data.
Available arguments are:
Argument | Description | Default |
---|---|---|
--namespace | Namespace into which Lightbend Console was installed. | unset (required) |
--dry-run | Just print the commands that the script would run instead of executing them. | false |
--skip-checks | Skip checks for dependency tools (helm, kubectl). | false |
--helm-name HELM_NAME | Helm release name to uninstall. | enterprise-suite |
--delete-pvcs | Ignore warnings about PVs and proceed anyway. USE WITH CAUTION! | unset (false) |
Debug data for diagnostics
To help diagnose problems in Lightbend Console, there is a debug-dump subcommand:
./lbc.py debug-dump --namespace="${CONSOLE_NAMESPACE}"
When no other arguments are given, it will look for an existing Lightbend Console installation and gather Kubernetes descriptions of the Console namespace as well as logs for all the containers running there. It will not gather any data from outside of the Console namespace (lightbend by default). Everything is placed into a zip file in the working directory with a name like console-diagnostics-2018-11-12-11-32-15.zip.
Supported arguments are:
Argument | Description | Default |
---|---|---|
--namespace | Namespace in which to look for Lightbend Console. | unset (required) |
--skip-checks | Skip checks for dependency tools (kubectl). | false |
--print | Print gathered data to stdout instead of making a zip file. | false |
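For example, to inspect the gathered data directly instead of producing a zip file:
./lbc.py debug-dump --namespace="${CONSOLE_NAMESPACE}" --print | less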
Installing without lbc.py
It is possible to install Lightbend Console into your cluster without using the lbc.py script. We advise that you use the install script if at all possible, but there may be reasons why you can’t. Manual installation consists of two steps: setting up credentials and then using Helm to install the Console chart. The instructions below assume that you have a Kubernetes cluster up and running and have the Helm client installed.
Setting up credentials
Your Lightbend credentials are used to pull the Console Docker images, which are only available to authenticated users. Credentials come in the form of a username and password, usually kept in the ~/.lightbend/commercial.credentials file. When installing manually, you will have to create a credentials.yaml file containing your username and password, and pass the location of the file as a parameter to Helm. The yaml file should look like the following:
imageCredentials:
username: johndoe
password: password123
Be aware that this file contains your credentials in plain text. Do not commit it to your version control repository and consider where you are storing it.
Installing the Helm chart
After you have prepared your credentials, add the Lightbend Helm charts repository and update the list of known charts:
helm repo add es-repo https://repo.lightbend.com/helm-charts
helm repo update
When the repository is updated, install the Console as follows:
helm install enterprise-console es-repo/enterprise-suite --namespace "${CONSOLE_NAMESPACE}" --version 1.2.17 --values credentials.yaml
The previous command has quite a few parameters. Here they are explained in order:
- enterprise-console - the name of the Helm release (a local name given to the installed chart). Can be any name you like.
- es-repo/enterprise-suite - the Lightbend Console Helm chart name; do not change this value.
- --namespace "${CONSOLE_NAMESPACE}" - the Kubernetes namespace where the resources are installed. Can be an existing or new namespace.
- --version 1.2.17 - the version of the Console.
- --values credentials.yaml - the file where your Lightbend credentials are stored in yaml format.
For more information on how to use Helm and its install command, please refer to the official Helm docs.
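If you later need to change chart values on a manually installed release, a sketch using helm upgrade (Helm 3 syntax; values.yaml here stands in for any additional chart values file you use):
helm upgrade enterprise-console es-repo/enterprise-suite --namespace "${CONSOLE_NAMESPACE}" --version 1.2.17 --values credentials.yaml --values values.yaml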
Operator Installation
The Console Operator is currently an experimental feature. It only supports simple installs and doesn’t handle upgrades yet.
An Operator install works by installing the Console operator, then creating a Console custom resource.
After you create a Console custom resource (CR), the operator installs Console using the configuration provided in the CR. If you update values in the CR, the operator updates the Console installation accordingly, and if you delete the CR, Console is uninstalled. Currently, the operator doesn’t support upgrading between Console versions: to upgrade, you have to completely uninstall the Console and its operator, then do a fresh install using a new operator version. This should be addressed in future versions of the operator.
Kustomize Prerequisite
We provide a kustomization.yaml to make the install simpler and to better integrate with existing tooling. To use it, you need kubectl version 1.14 or newer. You can also use the standalone kustomize CLI tool. Alternatively, you can apply the Kubernetes resources manually.
Install Operator
First, download the latest version of console-charts from the releases page, unarchive it, and switch to the operator directory:
curl -LO https://github.com/lightbend/console-charts/archive/v1.2.17.tar.gz
tar xzvf v1.2.17.tar.gz
cd console-charts-1.2.17/operator/manifests/
Edit the kustomization.yaml file to your liking (such as specifying the namespace the operator should be installed in). Then, install the operator:
kubectl apply -k .
It is suggested you create your own kustomization.yaml and import our console-operator folder as a directory into your own version control repository. This will let you maintain your configuration separately from the console operator. The https://kustomize.io website has more information on how to do this.
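As a rough sketch, such a kustomization.yaml might look like the following, assuming you have copied the operator manifests into a console-operator directory next to it (with older kustomize versions, directories may need to be listed under bases: rather than resources:):
# namespace and resource path are examples to adapt
namespace: lightbend
resources:
- console-operator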
Install Console yaml
Create a console.yaml with a Console resource:
apiVersion: app.lightbend.com/v1alpha1
kind: Console
metadata:
name: console
spec:
imageCredentials:
username: johndoe
password: password123
exposeServices: NodePort
Then apply it to the same namespace you created the operator in:
kubectl apply --namespace="${CONSOLE_NAMESPACE}" --filename=console.yaml
The operator will see this resource and create an instance of Lightbend Console for you in the same namespace. If you are using Minikube, you should be able to access it at port 30080 of the minikube ip, e.g. http://192.168.99.100:30080.
See the console-charts repository for a full example of the Console resource.
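Since deleting the Console CR uninstalls the Console, as described above, removal is simply:
kubectl delete --namespace="${CONSOLE_NAMESPACE}" --filename=console.yaml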
Opt-in namespaces to scrape
The opt-in namespaces facility is currently an experimental/alpha feature, ready to be tested by developers for initial feedback.
By default, Lightbend Console scrapes all namespaces. To limit the namespaces that are scraped, use the install parameter kubeStateMetricsScrapeNamespaces. For example, if you only want to scrape namespace1 and namespace2, this can be accomplished in the following two ways.
Method 1: set it in values.yaml (suggested)
Step 1: create a values.yaml file with the content:
kubeStateMetricsScrapeNamespaces: "namespace1,namespace2"
Step 2: pass values.yaml to the installer:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 -- --values=values.yaml
Method 2: set it as an install parameter
The drawback is that you need to escape the quotes and the comma:
./lbc.py install --namespace="${CONSOLE_NAMESPACE}" --version=1.2.17 -- --set kubeStateMetricsScrapeNamespaces=\"namespace1\\,namespace2\"
Notice that in both cases you need to put --values or --set after --, which means that it is passed directly to Helm.