AI & Inference Nodes
Inference nodes are the core intelligence units of the SolutionEngine platform. They let you integrate trained models directly into your data pipelines to perform tasks such as object detection, image classification, or text inference.
Run Model
The Run Model node dynamically loads and executes a machine learning model against incoming workflow data.
To use this node, you must first have a model configured in the Model Library.
Configuration
- Model Source: Select whether the model is provided natively by the platform ("Built-in") or imported by you ("Custom").
- Model ID / Connection ID: Select the specific model to run. You must also select a Connection Preset if the model requires specific external connectivity or authorization.
- Model Config: A JSON object for model-specific overrides (e.g., overriding the default confidence threshold for a detection model).
- Input Data Path: The precise path in the workflow payload where the target data resides. For example, if your camera stream outputs to data.frame, set this value to data.frame so the model processes the raw image bytes.
- Output Data Path: The destination key for the model's predictions.
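To make the path options concrete, here is a minimal sketch of how dotted data paths such as data.frame could resolve against a nested workflow payload. The helper names (`get_path`, `set_path`) and the payload shape are illustrative assumptions, not part of the SolutionEngine API.

```python
def get_path(payload: dict, path: str):
    """Walk a dotted path (e.g. "data.frame") through nested dicts."""
    node = payload
    for key in path.split("."):
        node = node[key]
    return node

def set_path(payload: dict, path: str, value) -> dict:
    """Write a value at a dotted path, creating intermediate dicts as needed."""
    *parents, leaf = path.split(".")
    node = payload
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value
    return payload

# Hypothetical payload from a camera stream node:
payload = {"data": {"frame": b"raw-image-bytes"}}
frame = get_path(payload, "data.frame")  # what the model would receive
set_path(payload, "data.detections", [{"label": "car", "score": 0.91}])
```

With Input Data Path set to data.frame and Output Data Path set to data.detections, the node reads the frame bytes and writes predictions alongside them without disturbing the rest of the payload.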
Data Flow
- The node waits for an upstream process to deliver an execution payload.
- It extracts the data specified by the Input Data Path.
- The backend routes this data to the designated inference worker (often utilizing GPU acceleration if available).
- The model executes, generating a prediction object (e.g., bounding boxes or classification labels).
- The prediction object is appended to the original payload at the Output Data Path and sent to downstream nodes.
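The steps above can be sketched end to end with a stub model standing in for the real inference worker. The function names (`run_model_node`, `detect`) and the prediction shape are assumptions for illustration only.

```python
def detect(image_bytes: bytes) -> list[dict]:
    """Stand-in for the inference worker; a real model returns boxes/labels."""
    return [{"label": "vehicle", "bbox": [40, 60, 180, 220], "score": 0.88}]

def run_model_node(payload: dict, input_path: str, output_path: str) -> dict:
    # 1-2. Receive the execution payload and extract the input data.
    node = payload
    for key in input_path.split("."):
        node = node[key]
    # 3-4. Route the data to the worker and execute the model.
    predictions = detect(node)
    # 5. Append the predictions at the output path; downstream nodes see both
    #    the original data and the new prediction object.
    *parents, leaf = output_path.split(".")
    target = payload
    for key in parents:
        target = target.setdefault(key, {})
    target[leaf] = predictions
    return payload

result = run_model_node({"data": {"frame": b"..."}}, "data.frame", "data.detections")
```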
Typical Use Case
Passing an RTSP frame through an Object Detection model (e.g., YOLOv8) to identify the coordinates of vehicles, then passing those coordinates downstream to a Script node that calculates their speed.
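The downstream Script node in this use case might estimate speed from the detected bounding boxes in consecutive frames, roughly as below. The frame rate, pixel-to-meter calibration, and bounding-box format ([x1, y1, x2, y2] in pixels) are made-up assumptions; a real deployment would calibrate against the camera's geometry.

```python
def centroid(bbox):
    """Center point of a bounding box given as [x1, y1, x2, y2] pixels."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def estimate_speed(bbox_prev, bbox_curr, fps=30.0, meters_per_pixel=0.05):
    """Centroid displacement between frames -> meters/second -> km/h."""
    (px, py) = centroid(bbox_prev)
    (cx, cy) = centroid(bbox_curr)
    pixels = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    return pixels * meters_per_pixel * fps * 3.6  # km/h

# Same vehicle detected 12 px further along one frame later:
speed = estimate_speed([100, 200, 160, 260], [112, 200, 172, 260])
```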