Documentation

Trigger & Ingestion Nodes

Trigger nodes are the entry points for your workflows. A workflow remains idle until its trigger node receives an event or is run manually. Every workflow must have exactly one trigger node to function.

The platform provides three primary methods for triggering a workflow execution.


Datasource Trigger

The Datasource Trigger connects your workflow directly to a configured data intake channel, such as an RTSP stream, an MQTT topic, or an API webhook.

When the connected datasource receives new data (e.g., a new video frame or a JSON payload), it automatically triggers the workflow and passes the parsed data into the execution context.

Configuration

  • Select Datasource: A dropdown to select which pre-configured datasource should trigger this workflow.
  • Filter Criteria: Optional JSON object; when set, the workflow triggers only if the incoming data matches the specified conditions (e.g., {"device_id": "camera-01"}).
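
The filter behavior can be sketched as a simple subset match: the workflow fires only when every key in the criteria appears in the incoming payload with an equal value. The helper name below is illustrative, not part of the platform API.

```python
def matches_filter(payload: dict, criteria: dict) -> bool:
    """Hypothetical filter check: every criteria key must be present
    in the payload with an equal value."""
    return all(payload.get(key) == value for key, value in criteria.items())

criteria = {"device_id": "camera-01"}
print(matches_filter({"device_id": "camera-01", "temp": 21.5}, criteria))  # True
print(matches_filter({"device_id": "camera-02"}, criteria))                # False
```

With this semantics, extra payload fields (like `temp` above) never block a match; only the keys named in the criteria are compared.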

Typical Use Case

This is the standard trigger for production pipelines, ensuring real-time processing of incoming sensor or camera data.


Generator

The Generator node triggers the workflow automatically at a fixed time interval. It does not require external data input, making it ideal for polling or scheduled maintenance tasks.

Configuration

  • Interval: The time in seconds between executions (e.g., 60 to run every minute).
  • Start Time: Optional ISO-8601 timestamp for when the generator should begin. If left empty, it starts immediately upon deployment.
  • Repeat Count: The number of times to run. Set to 0 for infinite repetition.
  • Enabled: A toggle to quickly pause or resume the schedule.
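
Taken together, Interval, Start Time, and Repeat Count define a simple schedule. The sketch below assumes that interpretation (the function name and `limit` parameter are illustrative, not platform API):

```python
from datetime import datetime, timedelta

def next_fire_times(interval: int, start_time: datetime, repeat_count: int, limit: int = 5):
    """Yield the scheduled execution times: one run every `interval`
    seconds from `start_time`, for `repeat_count` runs (0 = infinite,
    capped here at `limit` for demonstration)."""
    runs = repeat_count if repeat_count > 0 else limit
    for i in range(min(runs, limit)):
        yield start_time + timedelta(seconds=interval * i)

start = datetime.fromisoformat("2024-01-01T00:00:00")
for t in next_fire_times(interval=60, start_time=start, repeat_count=3):
    print(t.isoformat())
```

Under these assumptions, an Interval of 60 with a Repeat Count of 3 produces three runs, one minute apart, starting at the given Start Time.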

Typical Use Case

Use the Generator to poll the status of an external API periodically or to run a daily data aggregation script.


Manual Trigger

The Manual Trigger allows developers to execute a workflow on demand using a simulated data payload. It is primarily used during the design and testing phases.

Configuration

  • Test Payload: A JSON code editor where you define the exact data structure that the downstream nodes will receive when the trigger is fired.
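
A Test Payload typically mirrors the structure a real datasource would deliver, so downstream nodes behave the same in testing as in production. The field names below are illustrative only, not a fixed platform schema:

```python
import json

# Hypothetical payload imitating a detection event from a camera datasource.
test_payload = {
    "device_id": "camera-01",
    "timestamp": "2024-01-01T12:00:00Z",
    "detections": [{"label": "person", "confidence": 0.92}],
}
print(json.dumps(test_payload, indent=2))
```

Pasting a payload like this into the editor lets you exercise the same branching and model logic that real camera data would reach.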

Typical Use Case

Used in the Visual Editor to test logic paths and verify model outputs without waiting for real data to arrive from a physical datasource.