Run Model Node
The Run Model node executes a configured model and writes predictions into the workflow payload.
Configuration
- Model Source: Whether the model is built in or loaded from a custom model registry.
- Model ID: The identifier of the model to execute.
- Input Data Path: The payload key that holds the inference input, for example "data.frame".
- Output Data Path: The payload key where the prediction is written.
- Model Config: Optional JSON overrides, such as confidence threshold values.
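As an illustration, a Model Config override might look like the following. The key names here (confidence_threshold, max_detections) are hypothetical; the accepted keys depend on the specific model:

```json
{
  "confidence_threshold": 0.5,
  "max_detections": 10
}
```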
Input and Output Contract
Input example:
{
  "data": {
    "frame": "base64_or_reference"
  }
}
Output example:
{
  "data": {
    "frame": "base64_or_reference",
    "prediction": {
      "labels": ["vehicle"],
      "scores": [0.94],
      "boxes": [[120, 88, 380, 310]]
    }
  }
}
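The Input and Output Data Paths are dotted keys into the payload. A minimal sketch of how such paths could be resolved, assuming the payload is a plain JSON-style dictionary (the helper names below are illustrative, not part of the node's API):

```python
def get_path(payload: dict, path: str):
    """Walk a dotted path like 'data.frame' through nested dicts."""
    value = payload
    for key in path.split("."):
        value = value[key]
    return value


def set_path(payload: dict, path: str, new_value) -> None:
    """Create intermediate dicts as needed, then set the final key."""
    keys = path.split(".")
    target = payload
    for key in keys[:-1]:
        target = target.setdefault(key, {})
    target[keys[-1]] = new_value


payload = {"data": {"frame": "base64_or_reference"}}
frame = get_path(payload, "data.frame")  # read the inference input
set_path(payload, "data.prediction", {"labels": ["vehicle"], "scores": [0.94]})
```

Writing the prediction back under "data" (as in the output example above) keeps both the original input and the result available to downstream nodes.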
Best Practices
- Validate required input fields before calling inference.
- Keep prediction output at a stable path for downstream nodes.
- Apply confidence filtering before delivery actions.
- Persist critical predictions to bucket storage for audits.
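The input-validation and confidence-filtering practices above can be sketched as follows. The field names mirror the example payload; the default threshold of 0.5 is an assumption, not a documented default:

```python
def validate_input(payload: dict, input_path: str = "data.frame") -> None:
    """Raise if the required inference input is missing or empty."""
    node = payload
    for key in input_path.split("."):
        if not isinstance(node, dict) or key not in node:
            raise ValueError(f"missing required input field: {input_path}")
        node = node[key]
    if not node:
        raise ValueError(f"empty input at: {input_path}")


def filter_by_confidence(prediction: dict, threshold: float = 0.5) -> dict:
    """Keep only detections whose score meets the threshold."""
    kept = [i for i, s in enumerate(prediction["scores"]) if s >= threshold]
    return {
        "labels": [prediction["labels"][i] for i in kept],
        "scores": [prediction["scores"][i] for i in kept],
        "boxes": [prediction["boxes"][i] for i in kept],
    }


payload = {"data": {"frame": "base64_or_reference"}}
validate_input(payload)  # passes; raises ValueError if the frame is absent

prediction = {
    "labels": ["vehicle", "person"],
    "scores": [0.94, 0.31],
    "boxes": [[120, 88, 380, 310], [10, 20, 60, 150]],
}
filtered = filter_by_confidence(prediction, threshold=0.5)
# only the 0.94 "vehicle" detection survives the filter
```

Filtering before delivery actions keeps low-confidence detections from triggering notifications, while the unfiltered prediction can still be persisted for audits.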
