Release Notes

1.1.0

March 30, 2019

Changes

  • Pipelines UI release.
  • UI performance improvements and stability.
  • Update Spark, Strimzi, and Cinnamon Grafana dashboards.
  • Windows support for the installer script.
  • Add Lightbend Pipelines Grafana panels for the dynamic dashboard.

Bug Fixes

  • Fix default monitors not loading on startup (1.0.2 regression).

1.0.2

March 18, 2019

Changes

  • Added initial support for Spark and Kafka for Lightbend Pipelines.

Bug Fixes

  • Fix installation to work with Helm 2.10+ (it should also work on older versions, but no guarantees are provided).
  • Clean up some redundant labeling (kubernetes_namespace re-labeling removed in many cases).

1.0.1

March 6, 2019

Actions required if upgrading from 1.0

  • You must specify defaultStorageClass when upgrading if you were using the default value in 1.0 (standard). This may affect you if you are using Minikube or GKE.
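As a sketch, an upgrade that preserves the old 1.0 default might look like the following. The release and chart names are placeholders, not taken from these notes; only the defaultStorageClass parameter is real.

```shell
# Sketch only: keep the old 1.0 default storage class ("standard") on upgrade.
# <release-name> and <console-chart> are placeholders; substitute your own.
helm upgrade <release-name> <console-chart> \
  --set defaultStorageClass=standard
```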

Changes

  • PersistentVolumeClaims no longer have the helm.sh/resource-policy: keep annotation, which simplifies PVC configuration. Instead, the Helm parameter usePersistentVolumes controls whether PVCs are created. If it is set to false, any previously created PVCs will be removed, although the installer will attempt to detect this and warn the user.
  • The default value for defaultStorageClass is now unset, which means the default storage class of your cluster will be used. However, this requires some action on upgrade - see Actions required above.
  • Kubelet metrics are now scraped by default, so they will be available in Grafana and for new monitors.
  • Add standard labels for Kubernetes deployments to better conform with Kubernetes guidelines.
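The PVC behavior described above can be sketched as follows. The release and chart names are placeholders; usePersistentVolumes is the real Helm parameter.

```shell
# Sketch only: disable PVC creation. Per the notes above, this removes any
# previously created PVCs; the installer attempts to detect this and warn you.
helm upgrade <release-name> <console-chart> \
  --set usePersistentVolumes=false
```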

Bug Fixes

  • Fix internal handling of namespace and deployment names to support leading numbers or dots, e.g. “0”, “v1.2.3”.
  • Fix an issue with console upgrades not triggering reloads.
  • Improve default monitors so they do not fire for a long period (10m) when pods have only a few errors at startup.
  • Fix the Grafana links to correctly use the namespace of the focused workload.
  • Handle a missing trailing slash correctly - e.g. https://myconsole.company.com/monitoring should now work.
  • Fix pod grouping color in cluster overview sometimes being incorrect.
  • Fix some minor visual bugs.

1.0.0

February 2019

Changes

  • Various security fixes for older browsers.
  • Update containers and dependencies. We now bundle our own versions of third-party dependencies to fix security vulnerabilities where needed.
  • Prometheus updated to 2.7.1.
  • Alertmanager updated to 0.16.0.
  • Grafana updated to 5.3.4.
  • Updated Cinnamon Grafana dashboards to 2.10.13.
  • Add support for Grafana configuration in the Helm chart.
  • Improved self-monitoring - a monitor will fire if something is going wrong on the backend.
  • Node exporter has been removed, as Console doesn’t support node-based monitors. You can install it from the official chart if needed.

Bug Fixes

  • Improvements in UI network performance and memory usage.

1.0.0-rc.6

December 2018

Actions required

  • If the installer script fails during an upgrade, pass --force-install to the script to perform a clean install.
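A hypothetical invocation, assuming the script is run from the directory containing lbc.py and accepts an install subcommand (check the script’s --help for the exact syntax of your version):

```shell
# Sketch only: fall back to a clean install when the upgrade path fails.
./lbc.py install --force-install
```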

Changes

  • Fix a bug where, when PVs were present, upgrades would leave the console unable to fully start. Due to an issue in Helm, you may need to force install to upgrade.
  • Many improvements to the lbc.py install script.
  • Multiple external Alertmanagers can now be specified.
  • Only the console is exposed if exposeServices is set.
  • Updated Grafana version.
  • Improved units for axes in Grafana graphs.
  • Improved UI.

Bug Fixes

  • Fixed issues with monitor aggregates.
  • Various monitor fixes.
  • Various UI fixes.

1.0.0-rc.5

Changes

  • Make image names fully configurable to support air-gapped installs.
  • To prevent data loss, PVCs are no longer removed if the Helm chart is uninstalled.
  • Support multiple console installs in a single cluster, with the Helm parameter createClusterRoles=false.
  • New install script lbc.py, which provides better pre-flight checks and additional diagnostic commands - verify and debug-dump.
  • Add version info to UI in a tooltip when hovering over the Lightbend Console icon in the upper left.
  • Links to Alertmanager and Prometheus in the UI control panel.
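A hypothetical session combining two of the changes above: a second Console install in the same cluster, followed by the new diagnostic commands. Release, chart, and namespace names are placeholders, and the Helm 2-era --name/--namespace flags are assumed; only createClusterRoles, verify, and debug-dump come from these notes.

```shell
# Sketch only: install a second Console without recreating cluster roles.
helm install <console-chart> --name <second-release> \
  --namespace <second-namespace> \
  --set createClusterRoles=false

# New diagnostic commands provided by lbc.py:
./lbc.py verify       # check the installation
./lbc.py debug-dump   # collect diagnostic information
```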

Bug Fixes

  • Documentation links fixed in the UI.
  • Robustness improvements for UI monitor view.
  • Robustness improvements for UI cluster page.
  • General performance improvements related to data loading and page resizing behavior.
  • Container (cadvisor) and generated metrics (up, scrape_duration_seconds) now work in custom monitors.

1.0.0-rc.4

Changes

  • The persistent volumes for prometheus-server have been consolidated into a single volume. This allows the console to work in clusters with multiple availability zones. This is a breaking change, however: existing data will no longer be accessible.
  • bootstrap-monitors.json has been renamed to default-monitors.json in the es-monitor-api ConfigMap and is now applied at every startup. This allows Lightbend to add and improve the default monitors in the future.

Bug Fixes

  • Fix example slack template.
  • Fix install script to work with the local chart when exporting resources.
  • Fix persistent volumes to work on OpenShift and GKE:
      • Fixed permission issues so Prometheus can write to the PV.
      • Consolidated the two PVs on the prometheus-server into a single one to support multi-AZ clusters.

1.0.0-rc.3

Changes

  • Enable emptyDir volumes by default to ease exploratory installations. Add a new option, usePersistentVolumes, to enable persistent volume storage. The old useEmptyDirVolumes is deprecated and will be removed in a future release.

Bug Fixes

  • Reload Prometheus configuration automatically on upgrades.

1.0.0-rc.2

Bug Fixes

  • Restore node metrics to support cluster dashboard.
  • Restore Lightbend Telemetry (Cinnamon) Grafana dashboards.

1.0.0-rc.1

October 2018

New Features

  • Support for persistent volumes.
  • Support Play service/metrics.
  • Support for Minishift.

Bug Fixes

  • Fix Alertmanager not starting up in DC/OS clusters.
  • Fix health sampling problem.
  • Performance enhancements.

0.10.3

September 2018

New Features

  • Rename to Lightbend Console.
  • Improved monitor edit page that adds finer grained monitor tuning.
  • Installation script to make it easier to install Lightbend Console.
  • Play & Lagom monitors provided out of the box.
  • Minishift and OpenShift compatibility.
  • Support for Alertmanager, accessible at /service/alertmanager.
  • Support for Ingress paths.

Bug Fixes

  • Performance improvements for the Console’s monitor rendering.
  • Cache busting for the UI. Prior to this, parts of the UI may have been cached across versions, leading to broken dialogs and links.
  • Prometheus UI properly renders now at /service/prometheus.

Breaking Changes

  • Lightbend Console now requires commercial credentials to use. This is detailed in the installation guide.
  • The Helm chart repo has moved to https://repo.lightbend.com/helm-charts. This uses Google Cloud Storage to provide better performance for our customers. We have also moved to an official lightbend domain name.

0.8.0

July 2018

  • The Lightbend Console is currently in beta and is therefore not intended for production monitoring. However, most of the key functionality is available to give you visibility into, and let you monitor, your application on a development or test basis. For development in particular, it can be a handy tool for observing the behavior of your application in order to validate difficult-to-test scenarios, such as node failover or cluster partitions.

Key Features

  • Cluster View of running workloads
  • Auto monitoring of application health for Akka/Play/Lagom applications
  • Preconfigured Grafana dashboards for your applications
  • Preinstalled Lightbend Telemetry Grafana dashboards

Browsers and Platforms

  • The Lightbend Console currently supports the Chrome and Safari browsers on MacOS, Windows, and Ubuntu.
  • The beta release of Lightbend Console is targeted at Minikube running on MacOS, Windows, and Ubuntu. Some users have had success installing it on additional Kubernetes platforms, but we’d like to do more extensive testing on those platforms before officially supporting them.