Installing Helm

Lightbend Platform uses Helm for deploying Lightbend Console, Spark, Kafka, and other services and applications. Helm is made up of two parts: a client and a server called Tiller. You typically install the Helm client on workstations and the Tiller service on the cluster.

The same version of Helm and Tiller must be installed on the cluster and on all workstations that use the helm command to manage installations.
The forthcoming v3.0 release of Helm will replace Tiller with an alternative approach that addresses known security vulnerabilities in Tiller.

To learn more about Helm and Helm charts, see the very comprehensive Helm documentation.

Install the Helm Client

Install the helm client based on platform-specific instructions from the Helm documentation, Installing Helm.

For OpenShift, do not attempt to initialize the Tiller server with helm init. Instead, see Install Tiller on OpenShift.

Install Tiller on Kubernetes

Install Tiller using these instructions. In privileged environments installation is as simple as running the helm init command. To learn more about securing your Tiller installation in Kubernetes read this section in the Helm documentation, Securing your Helm Installation.
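
A common pattern in privileged clusters is to create a dedicated service account for Tiller and initialize Helm with it. The following is a minimal sketch; the tiller account name and the cluster-admin binding are assumptions here, so adjust them to your cluster's RBAC policy:

```shell
# Create a service account for Tiller in kube-system (assumed namespace and name)
kubectl --namespace kube-system create serviceaccount tiller

# Grant it cluster-admin; a narrower role may be preferable in locked-down clusters
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Initialize Tiller using that service account
helm init --service-account tiller
```

These commands require access to a running Kubernetes cluster with sufficient privileges.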

Install Tiller on OpenShift

The Helm Tiller server requires additional RBAC role configuration when installed in OpenShift. The following instructions are based on those in the OpenShift blog post Getting started with Helm on OpenShift.

The referenced OpenShift template will create a ClusterRoleBinding of the cluster-admin role to the tiller service account used to run Tiller. Ideally this would not be required, but if you install any charts that themselves create a ClusterRoleBinding, then cluster-admin is required. In future versions of Helm, Tiller will no longer be used; instead, installs will rely on the roles associated with the user performing the install.
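
For reference, the binding the template creates is conceptually similar to the following sketch. The names are taken from the oc create output shown in step 3; the template itself is authoritative and may differ in detail:

```yaml
# Sketch of the cluster-admin binding created for the tiller service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller
```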
  1. Create a new project (Kubernetes namespace) for deploying Tiller. The following example uses the project name tiller, where $OPENSHIFT_HOST is the access host for your cluster:

    $ oc new-project tiller
    Now using project "tiller" on server "https://$OPENSHIFT_HOST/".
    ...
  2. Set up parameters for the tiller OpenShift deployment. For TILLER_NAMESPACE, use the project you created in the previous step (e.g., tiller). Assign the most recent version of Helm to HELM_VERSION. At the time of this writing, 2.13.1 is the latest version.

    export TILLER_NAMESPACE=tiller
    export HELM_VERSION=v{helm-version}
  3. Create and apply the Lightbend Tiller template with the parameters specified in the previous step.

    $ oc process -f https://developer.lightbend.com/docs/fast-data-platform/2.1.1-OpenShift/resources/helm-tiller-template.yaml \
      -p TILLER_NAMESPACE="${TILLER_NAMESPACE}" \
      -p HELM_VERSION=${HELM_VERSION} | oc create -f -
    serviceaccount "tiller" created
    role.authorization.openshift.io "tiller" created
    rolebinding.authorization.openshift.io "tiller" created
    clusterrolebinding.authorization.openshift.io "tiller-clusterrolebinding" created
    deployment.extensions "tiller" created

    Wait for the tiller deployment to complete. The rollout status command blocks until the rollout finishes. You should see output similar to the following:

    $ oc rollout status deployment tiller
    Waiting for rollout to finish: 0 of 1 updated replicas are available...
    deployment "tiller" successfully rolled out
    $ oc get po --namespace=tiller
    NAME                      READY     STATUS    RESTARTS   AGE
    tiller-<id>-4sxlt         1/1       Running   0          8m

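On workstations that will manage this cluster, the Helm client also needs to know which namespace Tiller runs in. A minimal sketch, assuming the tiller project created above:

```shell
# Point the Helm client at the Tiller deployment in the "tiller" project;
# alternatively, pass --tiller-namespace tiller to individual helm commands
export TILLER_NAMESPACE=tiller
helm version
```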
Test Your Helm Installation

Assuming that your Helm client and server are the same version, follow these steps to test whether both are successfully installed:

  1. Verify that your helm client is able to connect to the Tiller server:

    $ helm version
    Client: &version.Version{SemVer:"v{helm-version}", GitCommit:"...", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v{helm-version}", GitCommit:"...", GitTreeState:"clean"}
  2. From your client installation, run a quick smoke test by creating a minimal chart:

    helm create mychart
    rm -rf mychart/templates/*.*
    touch mychart/templates/ConfigMap.yaml
  3. Add the following to the ConfigMap.yaml file:

    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mychart-configmap
    data:
      myvalue: "Hello World"
  4. Now install it:

    $ helm install ./mychart
    NAME:   eponymous-chipmunk
    LAST DEPLOYED: Fri Sep 21 11:18:52 2018
    NAMESPACE: tiller
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ConfigMap
    NAME               DATA  AGE
    mychart-configmap  1     0s

    Helm generated a name, eponymous-chipmunk. Normally, you’ll want to pass the --name argument to specify a release name yourself.

  5. Use helm list to see if it is installed:

    $ helm list
    NAME              	REVISION	UPDATED                 	STATUS  	CHART        	APP VERSION	NAMESPACE
    eponymous-chipmunk	1       	Fri Sep 21 11:18:52 2018	DEPLOYED	mychart-0.1.0	1.0        	tiller
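
Once the smoke test succeeds, you may want to remove the test release and the scaffolded chart. The release name below is the generated example from above; use whatever name helm list reports for your release:

```shell
# Delete the release and purge its history (Helm 2 syntax)
helm delete --purge eponymous-chipmunk

# Remove the local chart scaffold
rm -rf mychart
```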