A Nussknacker deployment consists of the following architectural components; the paragraphs below briefly explain the role each of them plays.
Nussknacker - lets you author, deploy and monitor event processing scenarios. More on its capabilities can be found here.
Schema Registry - stores versioned information about the structure of the data transported in Kafka streams; in a sense it performs a role similar to that of a relational database schema. Without it, scenarios that write to and read from a Kafka stream would not speak a common language.
Once authored, scenarios are deployed to Flink for processing; Flink in turn interacts with Kafka streams:
- Kafka - a distributed data streaming platform that allows publishing and subscribing to streams of records. A typical scenario authored in Nussknacker starts by reading data from a Kafka stream and finishes by writing data to a Kafka stream.
- Flink - a distributed processing engine for stateful computations over data streams. Flink can read from many types of data sources; in the Nussknacker case it reads from Kafka streams. Flink can process millions of events per second without losing a single one, and offers a rich Scala/Java API for filtering, enrichment and sophisticated aggregations.
Based on the scenario design, Nussknacker configures Flink to post its runtime metrics to InfluxDB:
- InfluxDB - a high-performance time series database; the metrics generated by Flink are stored here.
- Grafana - an open-source analytics and interactive visualization web application; the scenario metrics stored in InfluxDB are visualized on a Grafana dashboard.
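The read-filter-aggregate-write flow that a deployed scenario performs on Flink can be sketched in plain Python. This is only an illustration under stated assumptions: the event tuples, field names and window length are made up, an in-memory list stands in for a Kafka topic, and the tumbling-window sum mimics (in miniature) the kind of stateful aggregation Flink executes continuously and at scale.

```python
# Hypothetical sketch of a Nussknacker-style scenario: filter incoming
# events, then sum amounts per client in 10-second tumbling windows.
# A plain list stands in for a Kafka stream; Flink would do this
# continuously over an unbounded stream with fault-tolerant state.
from collections import defaultdict

events = [  # (timestamp_sec, client_id, amount) - would arrive from a Kafka topic
    (1, "c1", 10), (2, "c2", 5), (8, "c1", 7), (11, "c1", 3), (14, "c2", 20),
]

WINDOW = 10  # tumbling window length, in seconds

def tumbling_sum(events, window):
    agg = defaultdict(int)
    for ts, client, amount in events:
        if amount <= 0:                      # filtering step
            continue
        win_start = (ts // window) * window  # assign event to its window
        agg[(win_start, client)] += amount   # per-key, per-window aggregation
    return dict(agg)

print(tumbling_sum(events, WINDOW))
# {(0, 'c1'): 17, (0, 'c2'): 5, (10, 'c1'): 3, (10, 'c2'): 20}
```

Each resulting `(window, client)` sum is the kind of record a scenario would write back to a Kafka topic, while Flink's own runtime metrics about the job flow into InfluxDB and Grafana as described above.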