Cinnamon 2.5 with Akka HTTP support

By Björn Antonsson (@bantonsson), August 14, 2017

A new version of Lightbend Telemetry has been released: Cinnamon 2.5, with support for Akka HTTP.

Akka HTTP implements a full HTTP server-side and client-side stack on top of Akka using Akka Streams, giving you end-to-end backpressure between your services. The new Akka HTTP support in Cinnamon gives you both server and client metrics, as well as support for OpenTracing and distributed logging with a Mapped Diagnostic Context (MDC), using context propagation. This gives you all the tools necessary to diagnose server and client health, optimize end-user latency, and perform root cause analysis of errors across multiple services.

HTTP metrics

Metrics can be collected for both the Akka HTTP server and the client, giving you full visibility into request and response rates as well as response times.

HTTP server metrics

At the server level, there are metrics for connection rate and request rate, plus response rates and latencies grouped by response code class, such as 2xx, 4xx, and so on.

Here is an example Grafana dashboard showing server level metrics.

HTTP server endpoint metrics

For server endpoints, there are metrics for request rates and response latencies. If you are using the Akka HTTP routing DSL, then endpoint metrics are automatically named to match the name in the DSL.

Below is an example Grafana dashboard showing server endpoint metrics.

The automatic naming of endpoint metrics from the DSL also handles parameter types. This means that an endpoint described as path("user" / Segment / "tweet" / IntNumber) will have its metrics named /user/String/tweet/Integer, as can be seen in the dashboard detail image below.
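For reference, an endpoint like that could be declared as follows in the Akka HTTP routing DSL (a minimal sketch; the handler body is a placeholder):

```scala
import akka.http.scaladsl.server.Directives._

// An endpoint whose Cinnamon metrics would be named /user/String/tweet/Integer
val route =
  path("user" / Segment / "tweet" / IntNumber) { (user, tweetId) =>
    get {
      complete(s"Tweet $tweetId for user $user")
    }
  }
```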

There is also an API for the programmatic naming of endpoints, as can be seen in the detail image above. Programmatic endpoint naming works with any type of Akka HTTP request handler. Simply wrap the API around your response completion like this Endpoint.withName("myNamedEndpoint") { ... } and the metrics pick up that name. This also allows you to group several endpoints under the same name if you would like. More information is available in the Cinnamon endpoint naming documentation.
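Putting that together, a named endpoint might look like the sketch below. The Endpoint.withName call is taken from the text above; the import path and the surrounding route are assumptions for illustration:

```scala
import akka.http.scaladsl.server.Directives._
import com.lightbend.cinnamon.akka.http.Endpoint // import path is an assumption

val route =
  path("users") {
    get {
      // Metrics for this endpoint are reported under "myNamedEndpoint"
      Endpoint.withName("myNamedEndpoint") {
        complete("All users")
      }
    }
  }
```

Because the name is just a string, several different routes can pass the same name to group their metrics together.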

HTTP client metrics

On the client side, there are metrics for client pool connections, request rates, and response latencies. The client metrics can be grouped and named via configuration, as described in the Cinnamon client metrics documentation.

Here is an example Grafana dashboard showing client metrics.

There is also an API for the programmatic naming of client requests. You simply use the API around your request like this Request.withName("AccountService") { ... }, and the metrics will pick up that name. This also allows you to group several requests under the same name if you would like. More information is available in the Cinnamon client request naming documentation.
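A named client request might look like the sketch below. The Request.withName call is taken from the text above; the import path, actor system setup, and URI are assumptions for illustration:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import com.lightbend.cinnamon.akka.http.Request // import path is an assumption

implicit val system: ActorSystem = ActorSystem("client")

// Client metrics for this request are grouped under "AccountService"
val response = Request.withName("AccountService") {
  Http().singleRequest(HttpRequest(uri = "http://accounts.example.com/balance"))
}
```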

HTTP context propagation

Context propagation allows you to do request tracing either through the use of OpenTracing or by adding request identifiers to a logging MDC that can be used in log statements to identify the request being processed. This can be an invaluable tool to understanding how your distributed system interacts and reacts to load and failures.


OpenTracing is a recently developed, open standard for distributed tracing. OpenTracing provides a common API for instrumenting distributed systems without binding to a particular tracing vendor. OpenTracing is based on earlier work in tracing, including Dapper and Zipkin.

The OpenTracing support that Cinnamon adds to Akka HTTP, as well as Akka actors and Scala futures, allows you to do full tracing through multiple services over Akka HTTP client calls as well as Akka remoting. The following trace image is from a small multi-node sample that has an Akka HTTP frontend making an HTTP request to an Akka HTTP service, which communicates with an Akka backend via actor messaging. The actor backend does a database call using a Scala future.

In this sample, the database call in the Scala future shows up as a span named database. This is because the future has been named with the Cinnamon future naming API, like this: FutureNamed("database") { ... }. Scala futures propagate traces by default, but they do not add a new span unless they are named, so their time becomes part of the enclosing span. There are also methods in the API for naming the individual operations on a future, such as mapNamed.
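A named future could be set up like the sketch below. The FutureNamed("database") call is taken from the text above; the import path and the queryDatabase helper are assumptions for illustration:

```scala
import scala.concurrent.{ExecutionContext, Future}
import com.lightbend.cinnamon.scala.future.FutureNamed // import path is an assumption

implicit val ec: ExecutionContext = ExecutionContext.global

def queryDatabase(): String = ??? // hypothetical database call

// The work inside this future appears in the trace as its own span named "database"
val result: Future[String] = FutureNamed("database") {
  Future {
    queryDatabase()
  }
}
```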


Here is the detail view of the service span showing, among other things, the hostname, port, and URL of the HTTP request that came in from the frontend. You can also see that 440 milliseconds into the span, an actor message is sent to the backend service via Akka remoting.

Read more about how to enable and configure OpenTracing in the Cinnamon 2.4 release announcement and the Cinnamon OpenTracing documentation.

Logging MDC

Using logging with a Mapped Diagnostic Context to transport information about a request is a common way to correlate requests in a distributed system. The example here uses the SLF4J MDC API described in the Logback MDC documentation. By simply enabling the Cinnamon SLF4J logging MDC support, the information in the MDC will propagate across Akka HTTP client and server requests, as well as actor messaging, remote actor messaging, and Scala futures.

Adding a value to the MDC is a simple call.

MDC.put("Correlation-ID", "abc123")

Finally, make sure that your logging configuration contains a pattern using that MDC value, as in this example.

<appender name="FOO" class="some.logging.Appender">
    <Pattern>%X{Correlation-ID} - %m%n</Pattern>
</appender>

Now all your log statements will include the corresponding correlation ID in front of the log message.

Getting started

You can find out how to get started with telemetry in the Cinnamon getting started guide and how to enable it for your Akka HTTP applications in the Cinnamon documentation for Akka HTTP.

Other features in this release

These are some of the other new features and fixes in the Cinnamon 2.5 release.

Try out Cinnamon 2.5. Feedback, questions, and ideas are all welcome.