Nussknacker provides a drag-and-drop visual authoring tool (Nussknacker Designer) that allows you to define decision algorithms – we call them scenarios – without the need to write code. This document is intended for those who will use Nussknacker Designer to configure processing scenarios. Nussknacker is a low-code platform; prior knowledge of SQL, JSON and concepts like variables and data types will help you master authoring of data processing scenarios in Nussknacker.
Please try the Demo to quickly understand how to move around Nussknacker Designer, create a simple scenario and see SpEL in action.
Nussknacker scenario diagram
A Nussknacker scenario represents a decision algorithm in the form of a graph. The decisions are based on data, which can be anything from clicks on a website, through bank transactions, to readings from sensors. These data are processed by Nussknacker according to the scenario (the algorithm). The results generated by Nussknacker are also data; this time they communicate the results or decisions.
Every scenario starts with a source of data, because we have to know what kind of data we want to work with. The rest of the scenario is a sequence (a Directed Acyclic Graph, or DAG, to be more precise) of different nodes:
- flow control functions: filter, switch, split etc.
- data enrichments from external sources (JDBC, OpenAPI)
- aggregates in different types of time windows (available with Flink engine)
- custom, tailor-made components, which extend default functionality
- and more
The nodes affect the data records as they flow through the scenario. In a typical scenario, you first check whether a particular situation (data record) is of interest to you (you filter out the ones that aren't). Then you fetch the additional information needed to make the decision (enrich the event) and add some conditional logic based on that information (choice). If you want to explore more than one alternative, you can at any point split the flow into parallel paths. Every scenario ends with a sink node (or nodes, if there are parallel paths which haven't been merged).
In the Streaming processing mode the data records processed by a scenario are called events. They are read from Kafka topics and processed by the engine of choice: Flink or Lite. Events enter the scenario "via" a source node. The nodes process events; once a node finishes processing an event, it hands it over to the next node in the processing flow. If there is a split node, the event gets "multiplied" and two or more events now "flow" in parallel through the branches of the scenario. There are also other nodes which can "produce" events; for example the for-each node or time aggregate nodes. Finally, some nodes may terminate an event - for example the filter node. The important takeaway here is that a single event which entered the scenario may result in zero, one or many events leaving the scenario (being written to a Kafka topic).
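The zero-one-or-many behaviour described above can be sketched in plain Python (this is only an illustration, not Nussknacker code; the node functions and event fields below are invented for the example):

```python
# Illustrative sketch: nodes as functions over a list of events, showing how
# one incoming event may leave the scenario as zero, one, or many events.

def filter_node(events, predicate):
    """A filter node terminates events that fail the predicate."""
    return [e for e in events if predicate(e)]

def for_each_node(events, key):
    """A for-each-style node "produces" one event per element of a list field."""
    return [dict(e, item=item) for e in events for item in e[key]]

# One event enters the scenario...
incoming = [{"value": 5, "items": ["x", "y"]}]

# ...and may leave as zero events (terminated by a filter)...
kept = filter_node(incoming, lambda e: e["value"] > 10)

# ...or as many events (multiplied by a for-each node).
fanned = for_each_node(incoming, "items")

print(len(kept), len(fanned))  # → 0 2
```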
In the Request-Response processing mode it is the request data record which enters the scenario. The best and easiest way to understand how a request will be processed by a Nussknacker scenario is to think of it as the Streaming mode with just one event entering the scenario. All the considerations from the previous paragraph apply. The most important trait of a Request-Response scenario is that it is synchronous: some other computer system sends a request to Nussknacker and waits for a response. That request is the input to the scenario and the output - the decision - is the response. Since the other system is waiting for the response, there has to be exactly one. The natural question to ask is what happens if there are nodes in the scenario which "produce" additional data records - for-each or split. How to handle such situations is covered here.
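Conceptually, a Request-Response scenario behaves like a synchronous function: exactly one request in, exactly one response out. A minimal Python sketch (again, only an analogy - the field names and decision logic are invented for illustration):

```python
# Illustrative sketch: a Request-Response scenario as a function from one
# request to exactly one response, with filter-like and enrichment-like steps.

def decide(request: dict) -> dict:
    """Process one request and return exactly one response."""
    # Filter-like check: reject obviously invalid requests early.
    if request.get("amount", 0) <= 0:
        return {"decision": "reject", "reason": "non-positive amount"}
    # Enrichment-like step: here just a lookup in a local table standing in
    # for an external data source.
    risk = {"low": 0.1, "high": 0.9}.get(request.get("segment"), 0.5)
    return {"decision": "accept" if risk < 0.5 else "review", "risk": risk}

print(decide({"amount": 100, "segment": "low"})["decision"])  # → accept
```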
Configuring Nussknacker nodes is, to a large degree, about using SpEL; knowing how to write valid SpEL expressions is an important part of using Nussknacker.
SpEL (Spring Expression Language) is a powerful expression language that supports querying and manipulating data objects. What exactly does the term expression mean, and why is SpEL an expression language? In programming language terminology, an expression is a combination of values and functions that are combined to create a new value. SpEL allows writing only expressions; that is why it is an expression language. A couple of examples:
| Expression           | Result                         |
|----------------------|--------------------------------|
| {1,2,3,4}            | a list of integers from 1 to 4 |
| {john:300, alex:400} | a map (name-value collection)  |
| 2 > 1                | true                           |
| 2 > 1 ? 'a' : 'b'    | 'a'                            |
| 42 + 2               | 44                             |
| 'AA' + 'BB'          | 'AABB'                         |
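To make the values concrete, here are Python analogues of the SpEL expressions above (Python is used purely for illustration; the syntax differs - SpEL writes list and map literals with braces and uses the `?:` conditional operator):

```python
# Python analogues of the SpEL examples; each produces the same value.

numbers = [1, 2, 3, 4]             # SpEL: {1,2,3,4}
ages = {"john": 300, "alex": 400}  # SpEL: {john:300, alex:400}

print(2 > 1)                       # SpEL: 2 > 1             → True
print("a" if 2 > 1 else "b")       # SpEL: 2 > 1 ? 'a' : 'b' → a
print(42 + 2)                      # SpEL: 42 + 2            → 44
print("AA" + "BB")                 # SpEL: 'AA' + 'BB'       → AABB
```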
SpEL is used in Nussknacker to access data processed by a node and expand node's configuration capabilities. Some examples:
- create boolean expressions (for example in filters) based on logical or relational (equal, greater than, etc.) operators
- access, query and manipulate fields of the data record
- format data records written to sinks
- provide helper functions like date and time, access to system variables
- and many more.
The SpEL Cheat Sheet page provides an exhaustive list of examples of how to write expressions with SpEL.
Every SpEL expression returns a value of one of the predefined SpEL data types, like integer, double or boolean, map, etc. Data types in Nussknacker can be a confusing aspect at the beginning, as depending on the context in which data are processed or displayed, different data type schemes are in use - please refer to the SpEL Cheat Sheet page for more information.
In some contexts data type conversions may be necessary - conversion functions are described here.
Nussknacker uses variables as containers for data. Variables have to be declared; the record-variable component is used for this. Once declared, a hash sign "#" is used to refer to a variable from a SpEL expression. Variables are attributes of the data record; they do not exist by themselves.
There are three predefined variables:
In the Streaming processing mode the #input variable is associated with the event which originally came from the Kafka topic. In the Request-Response processing mode the #input variable carries the request data of the REST call which invoked the Nussknacker scenario. Both in the Streaming and the Request-Response case, some nodes not only terminate the input events but also create new ones. As a result, the #input data record is no longer available after such a node, while the newly created data record (and the variable associated with it) is available "downstream".
If the event which arrived at some node originally came from a Kafka topic, the metadata associated with this event are available in the #inputMeta variable. The following meta information fields are available in #inputMeta:
- topic

Consult the Kafka documentation for the exact meaning of those fields.
The #meta variable carries meta information about the currently executed scenario. The following meta information elements are available:
- processName - name of the Nussknacker scenario
Check the Basic Nodes page for examples of how to use variables.