Workflows

Workflows are the execution backbone of SolutionEngine.

A workflow is a node graph that defines how incoming data is transformed, validated, enriched, and delivered.

Workflows are event-driven. They start when a trigger fires, run as isolated executions, and stop when there are no more nodes to process.

Use workflows for production automation, not just prototyping. A good workflow is observable, testable, and safe to redeploy.


What a Workflow Contains

Every workflow definition includes:

  • Workflow metadata: name, project scope, status, version
  • Node definitions: configuration for each processing step
  • Edges: explicit paths between nodes
  • Runtime references: models, buckets, datasources, and environment bindings
  • Optional grouping: organizational grouping within a project

At runtime, this definition is loaded and executed node by node, following the graph's connections and branching conditions.


Lifecycle and States

A workflow can move through these operational states:

  • stopped: Defined but not executing
  • active: Deployed and ready to process triggers
  • error: Entered a failure state due to runtime or configuration issues
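The three states and their permitted transitions can be modeled as a small state machine. This is an illustrative sketch; the transition table below is an assumption (for example, it assumes an errored workflow must be stopped before reactivation) and should be adjusted to match your deployment rules.

```python
from enum import Enum

class WorkflowState(Enum):
    STOPPED = "stopped"
    ACTIVE = "active"
    ERROR = "error"

# Assumed transition rules -- adapt to your platform's actual behavior.
TRANSITIONS = {
    WorkflowState.STOPPED: {WorkflowState.ACTIVE},
    WorkflowState.ACTIVE: {WorkflowState.STOPPED, WorkflowState.ERROR},
    WorkflowState.ERROR: {WorkflowState.STOPPED},
}

def can_transition(current: WorkflowState, target: WorkflowState) -> bool:
    """Return True if the state change is permitted."""
    return target in TRANSITIONS.get(current, set())
```

Guarding state changes this way makes invalid operations (such as deploying a workflow that is still in an error state) fail fast instead of surfacing later as runtime faults.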

Typical lifecycle:

  1. Create workflow in a project
  2. Build graph and configure nodes
  3. Validate with manual or test triggers
  4. Deploy to target environment
  5. Monitor runs and logs
  6. Update and redeploy

Treat updates as versioned changes. Validate in a non-production environment before redeploying to production.


Triggering and Execution Model

A workflow execution starts from a trigger node. Common trigger patterns:

  • Datasource trigger: external input (camera, MQTT, webhook, stream)
  • Manual trigger: controlled test execution from the editor
  • Generator/time trigger: interval-based execution

Each trigger event creates an independent execution context.

Execution behavior:

  • Data moves from node to node through edges
  • Each node reads input, applies logic, emits output
  • Branches execute based on node conditions
  • Failures are recorded with node-level context
  • Execution completes when the queue is drained

This model enables deterministic processing and predictable troubleshooting.
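The execution behavior above can be sketched as a queue-drained graph walk. This is a minimal illustration, not the engine's actual implementation: node callables, the edge map, and the pruning convention (a node returning None ends its branch) are all assumptions for the sketch.

```python
from collections import deque

def run_workflow(nodes, edges, trigger_id, event):
    """Minimal sketch of the queue-drained execution model.

    nodes: node id -> callable(payload) -> payload, or None to prune the branch
    edges: node id -> list of downstream node ids
    """
    queue = deque([(trigger_id, event)])   # each trigger event gets its own context
    results = []
    while queue:                           # execution completes when the queue drains
        node_id, payload = queue.popleft()
        output = nodes[node_id](payload)   # node reads input, applies logic, emits output
        if output is None:
            continue                       # branch condition not met: prune this path
        downstream = edges.get(node_id, [])
        if not downstream:
            results.append((node_id, output))  # terminal delivery/persistence node
        for nxt in downstream:
            queue.append((nxt, output))        # data moves along explicit edges
    return results
```

Because each trigger event builds its own queue, concurrent executions never share mutable state, which is what makes failures reproducible from a captured payload.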


How to Connect Nodes in the Editor

Use this flow when designing graphs in the visual editor:

  1. Place exactly one trigger node as the entry point.
  2. Connect trigger output to validation or transformation nodes first.
  3. Add logic nodes (Condition, Switch, Filter) before expensive inference calls.
  4. Connect inference outputs to transformation nodes for normalization.
  5. End each branch with delivery or persistence nodes.

Connection design rules:

  • Keep one clear direction of flow from left to right.
  • Avoid hidden coupling between distant branches.
  • Keep branch depth manageable so failures are easy to trace.
  • Use node descriptions to document assumptions at branch points.

For exact per-node configuration fields, see the Nodes section.


Data Object Flow

The runtime data object is the shared payload moving through the graph.

Typical payload shape:

{
  "timestamp": 1707567890123,
  "source": "datasource_id",
  "dataType": "image",
  "payload": {
    "frame": "base64_or_reference"
  },
  "metadata": {
    "projectId": "1707456789123456",
    "workflowId": "1707456789000001"
  }
}

Design guidance:

  • Keep a stable payload contract between node groups
  • Add metadata early for traceability
  • Use explicit field names for branch conditions
  • Persist critical outputs to buckets for replay and audits
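A stable payload contract is easiest to enforce with an explicit validation step early in the graph. The sketch below checks the example payload shape shown above; the required field names mirror that example, and any additional contract rules are assumptions to adapt.

```python
REQUIRED_FIELDS = {"timestamp", "source", "dataType", "payload", "metadata"}

def validate_payload(obj: dict) -> list:
    """Return a list of contract violations (empty list means valid).

    Field names follow the example payload in this section; extend the
    checks for your own contract.
    """
    errors = ["missing field: %s" % f for f in REQUIRED_FIELDS - obj.keys()]
    meta = obj.get("metadata", {})
    for key in ("projectId", "workflowId"):   # traceability identifiers
        if key not in meta:
            errors.append("missing metadata: %s" % key)
    return errors
```

Running this as the first node after a trigger turns malformed inputs into a single, well-labeled failure instead of scattered errors deeper in the graph.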

Node Categories in Practice

Production workflows combine nodes from several categories rather than relying on any single node type in isolation.

Trigger and Ingestion

  • Generator
  • Datasource Trigger
  • Manual Trigger

Logic and Control

  • Condition
  • Switch
  • Iterator
  • Delay/Gate controls

Transformation and Data Shaping

  • Expression
  • Filter
  • Context/Variables
  • Mapping and merge operations

AI and Inference

  • Run Model
  • Vision/model post-processing nodes
  • AI-agent related nodes for LLM-driven flows

Network and Delivery

  • HTTP Request/Response
  • MQTT Publish
  • Bucket save nodes (media, metadata, timeseries)
  • Log and preview nodes

A production workflow usually combines all five layers.


Deployment to Environments

Deployment binds a workflow to an execution target.

Typical process:

  1. Select target environment (cloud or edge)
  2. Deploy workflow definition
  3. Validate heartbeat and runtime status
  4. Observe first execution logs

Deployment checks should confirm:

  • Referenced models are available
  • Datasource connectivity is valid
  • Bucket permissions and paths are correct
  • Required secrets/environment variables exist
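The deployment checks above can be gathered into a single preflight step that runs each check and reports every failure at once. This is a generic sketch; the check names and the convention that each check is a zero-argument callable returning True are assumptions, not a platform API.

```python
def preflight(checks: dict) -> dict:
    """Run named deployment checks; each value is a zero-arg callable
    returning True on success. Returns {name: reason} for every failure;
    an empty dict means the deployment is safe to proceed."""
    failures = {}
    for name, check in checks.items():
        try:
            ok = check()
        except Exception as exc:           # a crashing check counts as a failure
            ok = False
            failures[name] = str(exc)
        if not ok and name not in failures:
            failures[name] = "check returned False"
    return failures
```

Collecting all failures, rather than stopping at the first, shortens the fix-and-retry loop when several bindings (models, buckets, secrets) are misconfigured at once.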

Failure Handling and Recovery

For robust workflows, design for failure explicitly.

Recommended patterns:

  • Guard conditions before expensive model calls
  • Branch-specific fallback paths
  • Retry logic for external HTTP/MQTT calls
  • Dead-letter style persistence to bucket on failure
  • Log key identifiers at critical nodes (request IDs, datasource IDs)
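Two of the patterns above, retries with backoff and dead-letter persistence, can be combined in one wrapper around an external call. This is a hedged sketch: the function names and parameters are illustrative, and the `on_give_up` hook stands in for whatever bucket-save node your workflow uses.

```python
import time

def call_with_retry(call, attempts=3, base_delay=0.5, on_give_up=None):
    """Retry an external HTTP/MQTT call with exponential backoff.

    call:        zero-arg callable performing the external request
    on_give_up:  optional callable(exc) invoked after the final failure,
                 e.g. to persist the payload dead-letter style for replay
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception as exc:
            if attempt == attempts - 1:
                if on_give_up:
                    on_give_up(exc)        # last resort: record for later replay
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Keeping the retry policy in one place also makes it easy to log the same identifiers (request ID, datasource ID) on every attempt.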

Operational recovery playbook:

  1. Inspect failed node in execution logs
  2. Validate input payload shape at previous node
  3. Re-run with Manual Trigger using captured payload
  4. Patch workflow and redeploy
  5. Verify with a controlled test event

Workflow Design Standards

Use these standards to keep large graphs maintainable:

  • Name nodes by intent, not by tool name
  • Keep branch depth shallow where possible
  • Separate validation, inference, and delivery blocks
  • Prefer explicit transformation steps over hidden side effects
  • Store intermediate artifacts needed for incident analysis
  • Document assumptions in node descriptions

Example Production Pattern

Datasource Trigger
  -> Pre-Validation
  -> Run Model
  -> Confidence Filter
      -> Save Media (Bucket)
      -> Save Metadata (Bucket)
      -> HTTP Alert

Why this works:

  • Early validation prevents noisy failures
  • Confidence gating improves output quality
  • Dual persistence supports analytics and audits
  • External alerting creates real-time operational response
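The confidence-gating step at the heart of this pattern can be sketched as a single fan-out function. The detection shape, the 0.8 default threshold, and the sink callables are all assumptions for illustration; in the editor this is a Filter node wired to the three delivery nodes.

```python
def confidence_filter(detection, threshold=0.8, sinks=()):
    """Fan a model result out to delivery sinks only when its confidence
    clears the threshold. Returns True if the detection passed the gate.

    sinks: callables such as save_media, save_metadata, http_alert
           (names are illustrative).
    """
    if detection.get("confidence", 0.0) < threshold:
        return False                       # gated out: no persistence, no alert
    for sink in sinks:
        sink(detection)                    # dual persistence + external alert
    return True
```

Because every sink receives the same gated detection, analytics, audits, and alerting stay consistent with each other by construction.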

Related Pages