Lightbend Console uses Prometheus Alertmanager for routing the alerts that the Console generates. Alertmanager can route alerts to many different integration points, including Slack, PagerDuty, and others. It can also route based on severity and workload. Console installs an Alertmanager by default, but you can use your own instead.
The Alertmanager UI is available at /service/alertmanager, relative to the Lightbend Console URL. On a minikube install, this will be at http://192.168.99.100:30080/service/alertmanager.
For more details on Alertmanager and other alert integrations, see the official Prometheus Alertmanager documentation.
The default installation of Lightbend Console comes with an Alertmanager deployment. To make it more useful for your project, you will have to do some additional configuration. The Alertmanager bundled in Console is bound to a ConfigMap for its configuration file.
To set up Alertmanager for your own use case, you need to:
- Create an Alertmanager configuration file.
- Load it into Kubernetes as a ConfigMap.
- Pass the ConfigMap name to the Console installer script.

Once done, Alertmanager automatically picks up any updates to the ConfigMap. The following sections describe these steps in detail.
First, create an alertmanager.yml file with your desired configuration. The config file must be named alertmanager.yml, since the Alertmanager in Lightbend Console has been configured to use this name. We provide an example as a starting point. See the official documentation for more details on configuring this file.
Lightbend Console will generate alerts based on the configured monitors. These alerts have labels that can be used for the routing rules. The example illustrates how to use these labels. The available labels are:
| Label | Description |
|---|---|
| es_workload | The workload in Lightbend Console. This can be used to route alerts to different groups based on workload. |
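As a starting point, a minimal alertmanager.yml sketch that routes on these labels might look like the following. The receiver names, channels, and webhook URLs are placeholders for illustration only, not Console defaults; replace them with your own integration settings:

```yaml
# Illustrative alertmanager.yml. All receiver names, channels, and
# URLs below are placeholders -- substitute your own values.
global:
  resolve_timeout: 5m

route:
  receiver: default-receiver
  group_by: ['alertname', 'es_workload']
  routes:
    # Send alerts for a particular workload to its own team channel.
    - match:
        es_workload: my-app
      receiver: my-app-team

receivers:
  - name: default-receiver
    slack_configs:
      - api_url: https://hooks.slack.com/services/REPLACE/ME
        channel: '#alerts'
  - name: my-app-team
    slack_configs:
      - api_url: https://hooks.slack.com/services/REPLACE/ME
        channel: '#my-app-alerts'
```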
Next, create a Kubernetes ConfigMap with the configuration file. Copy alertmanager.yml and any required .tmpl files into a new directory:

mkdir alertmanager-config
cp alertmanager.yml alertmanager-config/
cp *.tmpl alertmanager-config/

Then create a ConfigMap in the Lightbend Console namespace. Here we use kubectl with --dry-run piped into kubectl apply, so the same command works whether you are creating the ConfigMap or updating it:

kubectl -n lightbend create configmap my-alertmanager-config --from-file=alertmanager-config/ --dry-run -o yaml | kubectl apply -f -
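For reference, the manifest generated by this command looks roughly like the following sketch (data contents abbreviated; kubectl creates one data key per file in alertmanager-config/):

```yaml
# Sketch of the generated ConfigMap manifest (abbreviated).
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-alertmanager-config
  namespace: lightbend
data:
  alertmanager.yml: |
    route:
      receiver: default-receiver
    # ... rest of your configuration ...
```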
The last step is to configure Lightbend Console’s Alertmanager to use the new ConfigMap. Pass the argument --set alertManagerConfigMap=my-alertmanager-config to lbc.py. The installer script is safe to call multiple times, so if you’ve already installed Lightbend Console you can re-run it.
To confirm Alertmanager is using the new configuration, check the Status link in the Alertmanager UI.
This step is optional.
Alerts generated by Alertmanager (e.g. email alerts) typically contain a link into the Alertmanager UI that describes the alert that fired. The Alertmanager pages also contain “Source” links back to the Prometheus data that triggered the alert. These links assume that both Alertmanager and Prometheus are externally accessible, so they will be broken for typical installations of the Console.
If you want to make these links work, you need to make the Console externally accessible (e.g. using a proxy) and set the esConsoleURL Console config accordingly. Consider the security implications of making the Console externally accessible before doing so.
If used, the esConsoleURL config should be set to the URL used for external access to the Console. For example, if external access to the Console via an ingress is available at console.mycorp.com:8080, then pass the argument --set esConsoleURL=http://console.mycorp.com:8080 to lbc.py.
Once you’ve configured the Console to use your ConfigMap, any future changes will be automatically picked up by Alertmanager. There may be a small delay before Alertmanager reloads, because we rely on Kubernetes to periodically synchronize the ConfigMap contents. This can take up to a minute.
You can repeat the steps described in 2. Create the ConfigMap to update the ConfigMap as many times as you’d like. Another approach is to edit the ConfigMap directly with:

kubectl edit -n lightbend configmap my-alertmanager-config
If you already have an Alertmanager installed somewhere, you can use it instead of the Console’s default Alertmanager. This allows running Alertmanagers in different namespaces or outside of Kubernetes. In addition, you can use a high-availability Alertmanager cluster to improve reliability.
To use an existing Alertmanager, you need to pass values to the Console lbc.py script. Specifically, set the createAlertManager Helm chart value to false and provide an address with a port number where your Alertmanager can be reached:
./lbc.py install --version=$es.version$ --set createAlertManager=false --set alertManagers=my-alertmanager:9093
In the above example, my-alertmanager is the Kubernetes service name given to an Alertmanager already running in the lightbend namespace. To connect to an Alertmanager in a different namespace, use a full Kubernetes service DNS name like my-alertmanager.monitoring.svc.cluster.local:9093. More about this can be found in the Kubernetes docs.
Console connects to the specified Alertmanager over HTTP by doing a DNS lookup for the given name. That’s why you can use Kubernetes service names: they get registered in the internal cluster DNS. It also means you can use an Alertmanager running outside of the cluster where Console is installed. It’s even possible to specify an IPv4 or IPv6 address:
./lbc.py install --version=$es.version$ --set createAlertManager=false --set alertManagers=10.2.0.64:9093
The Prometheus documentation describes how to set up a high-availability cluster of Alertmanagers. Internally, an Alertmanager cluster takes care of deduplicating, silencing, and triggering configured behaviours just as if it were a single instance. To use such a cluster, provide the addresses of all its instances to the Console install script:
./lbc.py install --version=$es.version$ --set createAlertManager=false --set alertManagers=alertmanager-00:9093,alertmanager-01:9093,alertmanager-02:9093
It’s possible to run your own Alertmanager in many different configurations, so consult the Prometheus docs for the best way to set it up.
If your Alertmanager is running in the same Kubernetes cluster, you can use a ConfigMap to configure it, as described in the Configuration overview. This requires modifying the Deployment resource to reference the right ConfigMap and to reload it when changes occur. While a detailed description is out of scope for this document, you can look at the included Alertmanager Deployment template as an example.
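As a rough sketch of what that modification involves (the container name, image, and mount path below are illustrative, not taken from the actual Console chart), the Deployment mounts the ConfigMap as a volume and points Alertmanager at the mounted file:

```yaml
# Illustrative fragment of an Alertmanager Deployment spec.
# Names and paths are placeholders -- adapt them to your Deployment.
spec:
  template:
    spec:
      containers:
        - name: alertmanager
          image: prom/alertmanager
          args:
            - --config.file=/etc/alertmanager/alertmanager.yml
          volumeMounts:
            - name: config
              mountPath: /etc/alertmanager
      volumes:
        - name: config
          configMap:
            name: my-alertmanager-config
```

Kubernetes keeps the mounted files in sync with the ConfigMap, but Alertmanager itself must still be told to reload (e.g. via its reload endpoint or a sidecar); the included Deployment template shows one way to wire this up.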