Set up Storage

Lightbend Console requires persistent storage for saving the state of its various components:

  • Prometheus requires storage for the time series data it scrapes.
  • The Console requires storage for saving custom monitors.
  • Alertmanager requires storage for saving silences.
  • Grafana requires storage for saving custom dashboards, plugins, and users.

Persistent volumes should be used in any production environment. The Console supports using Kubernetes Persistent Volumes to make storage persistent. The Helm chart bundles PersistentVolumeClaim resources, which request a certain amount of storage for each of the bundled components. A PersistentVolumeClaim depends on an externally defined StorageClass which specifies parameters for backing storage. The user is responsible for providing a suitable storage class.
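Once the Console is installed, you can inspect the claims the chart creates and the volumes they are bound to. A sketch, assuming the Console runs in the lightbend namespace used elsewhere in this guide:

```shell
# List the PersistentVolumeClaims created by the Console chart, along with
# their bound PersistentVolumes, capacity, and storage class.
kubectl get pvc -n lightbend
```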

Setting up and using persistent volumes

When installed, the Console uses Persistent Volumes provisioned through a storage class; by default it uses the cluster's DefaultStorageClass. To use a different storage class, specify its name when installing Lightbend Console:

./lbc.py install --set defaultStorageClass=MyStorageClass

Whenever the Console is uninstalled, the Persistent Volume Claims (PVCs) are deleted and the binding between the PVCs and their associated Persistent Volumes (PVs) is broken. What happens to the data in a PV is then determined by the PV's reclaim policy. If the policy is Delete, the data will (eventually) be deleted. To ensure data is preserved, the reclaim policy should be set to Retain: either provide a storage class with a Retain reclaim policy, or modify the reclaim policy on the PVs after the Console is installed but before it is uninstalled.
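The reclaim policy can be changed on already-provisioned volumes with kubectl patch. A sketch, assuming the default Console PV names used later in this guide (adjust the names if your cluster generated different ones):

```shell
# Switch each Console PV to the Retain reclaim policy so its data survives
# deletion of the corresponding PVC. PersistentVolumes are cluster-scoped,
# so no namespace flag is needed.
for name in alertmanager-storage es-grafana-storage prometheus-storage; do
  kubectl patch pv "$name" \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done
```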

The lbc.py script will warn and halt when uninstalling the Console if it determines there is a risk of losing data. If you wish to proceed anyway, add the --delete-pvcs flag to the lbc.py command. Use this flag with caution and only if you understand the implications.
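For example, assuming the script exposes an uninstall subcommand (the subcommand name is not shown above and may differ in your version):

```shell
# Deletes the PVCs despite the data-loss warning -- use with caution.
./lbc.py uninstall --delete-pvcs
```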

Use the kubectl CLI to get information about the storage classes. For example:

$ kubectl get storageclass
NAME                PROVISIONER               AGE
glusterfs-storage   kubernetes.io/glusterfs   128d
gp2 (default)       kubernetes.io/aws-ebs     128d

# gp2 is the default storage class
# Check the reclaim policy...

$ kubectl describe storageclass gp2
Name:                  gp2
IsDefaultClass:        Yes
ReclaimPolicy:         Delete
...

Warning

By default, the Console uses the DefaultStorageClass defined for the cluster, which usually has a reclaim policy of Delete.

Use the official Kubernetes documentation as a reference to set up a suitable StorageClass. See also the Kubernetes documentation on Dynamic Provisioning and Storage Classes for a general overview.
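As a concrete sketch, a StorageClass that retains data might look like the following; the gp2-retain name is illustrative, and you should substitute the provisioner and parameters appropriate for your platform:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-retain                   # illustrative name
provisioner: kubernetes.io/aws-ebs   # substitute your platform's provisioner
parameters:
  type: gp2
reclaimPolicy: Retain                # keep PV data after the claim is deleted
EOF
```

Such a class can then be passed to the installer with ./lbc.py install --set defaultStorageClass=gp2-retain.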

Using existing persistent volumes

If an installation of Lightbend Console already exists, subsequent installs or upgrades using the lbc.py script will reuse the existing PVs.

If the Lightbend Console was installed and then uninstalled (and the PVs had a reclaim policy of Retain), you may wish to reuse the previous PVs (and thus data) with a new installation of Console. In order to do so, you must first change the phase of the volumes to Available (typically from Released). The following script is one way to accomplish this.

for name in alertmanager-storage es-grafana-storage prometheus-storage; do
  kubectl patch pv $name -p '{"spec":{"claimRef":{"resourceVersion":null,"uid":null}}}'
done

Clearing the uid and resourceVersion in each claimRef releases the binding to the old (deleted) claim while leaving the rest of the claimRef intact, so the volume can be bound again by a new claim with the same name and namespace. PersistentVolumes are cluster-scoped, so no namespace flag is required; adjust the PV names above if your installation uses different ones.
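After patching, you can confirm that the volumes have returned to the Available phase before reinstalling the Console:

```shell
# The STATUS column should show Available rather than Released.
kubectl get pv alertmanager-storage es-grafana-storage prometheus-storage
```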

Using emptyDir volumes

For development or testing, you can use emptyDir volumes, which don’t require a StorageClass. To do this, pass --set usePersistentVolumes=false, but note that these volumes are not persistent: any data is lost when a pod restarts.

./lbc.py install --set usePersistentVolumes=false