Addressing Streaming Data Challenges
Akka Data Pipelines and Cloudflow allow you to quickly build and deploy large, distributed stream processing applications by removing the need for you to develop connections between different incoming, processing, and outgoing flows. They spare you from understanding how to configure multiple technologies to make them work together. Instead, you can concentrate on your own business logic. A Cloudflow application represents a self-contained distributed system of data processing services connected together by data streams.
A Cloudflow application includes:
Streamlets, which contain the stream processing logic.
Blueprints, which define how streamlets are composed and configured.
Each streamlet represents a discrete chunk of stream processing logic with data being safely persisted at the edges using pre-defined schemas. Streamlets can be scaled up and down to process partitioned data streams. Streamlets can be written using multiple streaming runtimes, such as Akka Streams and Spark. This exposes the full power of the underlying runtime and its libraries while providing a higher-level abstraction for composing streamlets and expressing data schemas.
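To make this concrete, a minimal streamlet sketch using Cloudflow's Akka Streams runtime might look like the following. This is an illustrative example only: the `SensorData` Avro class, its `deviceId` and `value` fields, and the `SensorFilter` name are hypothetical, and the exact API surface may differ across Cloudflow versions.

```scala
import cloudflow.akkastream._
import cloudflow.akkastream.scaladsl._
import cloudflow.streamlets._
import cloudflow.streamlets.avro._

// Hypothetical streamlet: reads SensorData from its inlet, keeps only
// positive readings, and writes the result to its outlet. Data at the
// inlet and outlet is persisted using the SensorData Avro schema.
object SensorFilter extends AkkaStreamlet {
  val in  = AvroInlet[SensorData]("in")
  val out = AvroOutlet[SensorData]("out", s => s.deviceId.toString)

  // The shape declares the streamlet's inlets and outlets; blueprints
  // connect these named ports to other streamlets.
  val shape = StreamletShape(in).withOutlets(out)

  override def createLogic = new RunnableGraphStreamletLogic() {
    def runnableGraph =
      sourceWithCommittableContext(in)       // read, tracking offsets
        .via(FlowWithCommittableContext[SensorData].filter(_.value > 0))
        .to(committableSink(out))            // write and commit offsets
  }
}
```

Because the logic is an ordinary Akka Streams graph, the full power of the runtime's operators is available inside the streamlet, while the inlet/outlet declarations stay at the higher level of abstraction described above.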
You compose streamlets into larger systems using application blueprints, which specify how streamlets are connected together. Cloudflow takes care of deploying the individual streamlets and ensures that the declared connections are translated into data flowing between the streamlets at runtime.
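A blueprint is a small configuration file that names the streamlets in the application and wires their outlets to inlets. The sketch below shows one common shape of a `blueprint.conf`; the streamlet class names (`sensors.SensorIngress`, `sensors.SensorFilter`, `sensors.SensorEgress`) are hypothetical, and the exact syntax varies between Cloudflow versions.

```
blueprint {
  streamlets {
    // Logical name = fully qualified streamlet class
    ingress = sensors.SensorIngress
    filter  = sensors.SensorFilter
    egress  = sensors.SensorEgress
  }
  connections {
    // outlet of one streamlet -> inlets of the next
    ingress.out = [filter.in]
    filter.out  = [egress.in]
  }
}
```

At deployment time, each connection becomes a durable, schema-checked data stream between the two streamlets, so the wiring never has to appear in the streamlets' own code.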
Cloudflow provides tooling for developing streamlets, composing them into applications, and deploying those applications to your clusters. Akka Data Pipelines provides insight and visibility into deployed applications.
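A typical workflow with this tooling might look like the session below. This is a hedged sketch: it assumes an sbt-based project and the `kubectl cloudflow` plugin, and the image name and application name (`sensor-pipeline`) are placeholders, not values from this document.

```shell
# Build the application image and verify the blueprint (sbt project assumed)
sbt buildApp

# Deploy the published image to the cluster via the kubectl cloudflow plugin
kubectl cloudflow deploy my-registry/sensor-pipeline:0.1.0

# Inspect the deployed application and its streamlets
kubectl cloudflow status sensor-pipeline
```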
Akka Data Pipelines includes Lightbend Console, which enables you to view important performance metrics and monitor the health of your application, as shown in the examples below: