Model Library
The Model Library is the model control plane for each project.
Every model used in workflow inference must exist here first, whether it is built-in or imported.
The library centralizes model availability, metadata, and runtime configuration so workflows remain consistent across environments.
Treat the Model Library as production inventory. If a model is not in the library with correct metadata, workflows should not depend on it.
What the Model Library Manages
The library is responsible for:
- Model registration and identity
- Model metadata and configuration schema
- Availability for workflow nodes
- Compatibility with deployment environments
- Consistent selection in model execution nodes
In practice, this is where model governance starts before deployment.
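The responsibilities above can be pictured as a small registry keyed by model identity. This is a hedged sketch only: the class, field names, and `register` helper are illustrative assumptions, not the platform's actual schema or API.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a library entry. Field names are illustrative,
# not the product's real metadata schema.
@dataclass
class ModelEntry:
    name: str                                         # registration identity
    version: str                                      # version intent, e.g. "1.0.0"
    source: str                                       # "built-in" or "custom"
    metadata: dict = field(default_factory=dict)      # configuration schema values
    environments: list = field(default_factory=list)  # environments validated against

registry: dict = {}

def register(entry: ModelEntry) -> None:
    """Register a model under a (name, version) key so workflow nodes can select it."""
    registry[(entry.name, entry.version)] = entry

register(ModelEntry("ocr-base", "1.0.0", "built-in"))
```

Keying on both name and version is one way to make "replace or retire safely" tractable later: two versions of the same model can coexist during a staged rollout.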
Model Types
Built-In Models
Built-in models are provided by the platform and available immediately.
Use built-in models when:
- You need a fast baseline
- Your use case matches common vision/text tasks
- You want lower onboarding complexity
Custom Models
Custom models are imported into your project and managed alongside built-ins.
Use custom models when:
- You need domain-specific accuracy
- You require architecture/version control for internal ML assets
- Built-in model behavior is insufficient for production constraints
Model Lifecycle
A practical model lifecycle in SolutionEngine:
- Select source model (built-in or external import)
- Register/import into project library
- Validate metadata and configuration schema
- Test in non-production workflow path
- Deploy workflow using target model
- Monitor output quality and runtime behavior
- Replace or retire model safely when needed
This lifecycle prevents model drift from silently breaking downstream workflows.
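One way to keep the lifecycle honest is to model it as explicit states with allowed transitions, so a model cannot jump from "registered" straight to "deployed". The state names and transition table below are assumptions for illustration, not platform terminology.

```python
# Illustrative lifecycle states and allowed transitions. Enforcing transitions
# prevents, for example, deploying a model that was never validated or tested.
LIFECYCLE = {
    "selected":   {"registered"},
    "registered": {"validated"},
    "validated":  {"tested"},
    "tested":     {"deployed"},
    "deployed":   {"monitored"},
    "monitored":  {"deployed", "retired"},  # redeploy an update, or retire
    "retired":    set(),
}

def advance(state: str, target: str) -> str:
    """Move to the next lifecycle state, rejecting illegal shortcuts."""
    if target not in LIFECYCLE[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

state = "selected"
for nxt in ["registered", "validated", "tested", "deployed", "monitored"]:
    state = advance(state, nxt)
```

A guard like this is what "prevents model drift from silently breaking downstream workflows": the only path to production runs through validation and testing.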
Importing Custom Models
Custom import currently centers on Kaggle-based model onboarding.
High-level flow:
- Provide model source URL
- Platform downloads artifacts
- Wrapper/metadata preparation is generated
- Model is added to library for node selection
For implementation details, continue with Model Import.
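The high-level flow above can be sketched as a single function with stubbed stages. Everything here is a hypothetical illustration: the function name, the URL check, and the artifact/wrapper shapes are assumptions, since the platform performs these steps internally.

```python
# Hedged sketch of the import flow; stages 2 and 3 are stubbed placeholders
# for work the platform does for you.
def import_custom_model(source_url: str, library: dict) -> dict:
    # 1. Provide model source URL (only superficially validated here)
    if not source_url.startswith("https://"):
        raise ValueError("expected an https model source URL")
    # 2. Platform downloads artifacts (stubbed)
    artifacts = {"weights": f"{source_url}#weights"}
    # 3. Wrapper/metadata preparation is generated (stubbed)
    entry = {"source_url": source_url, "artifacts": artifacts, "wrapper": "generated"}
    # 4. Model is added to the library for node selection
    library[source_url] = entry
    return entry

lib: dict = {}
entry = import_custom_model("https://www.kaggle.com/models/example", lib)
```

The useful property to preserve from this flow is that nothing reaches node selection (step 4) without artifacts and wrapper metadata already attached.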
Model Configuration in Workflows
Models are consumed through model execution nodes (for example, Run Model nodes).
Node-level configuration usually includes:
- Selected model
- Input mapping
- Inference parameters
- Output formatting choices
Design recommendations:
- Keep input mapping explicit
- Record inference thresholds in node config
- Normalize output shape before branching logic
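The node-level configuration and the three design recommendations might look like the following. The config keys, the mapping expression, and the `normalize_output` helper are illustrative assumptions, not the product's actual node schema.

```python
# Illustrative configuration for a model execution node.
node_config = {
    "model": {"name": "defect-detector", "version": "2.1.0"},  # selected model
    "input_mapping": {"image": "$.upstream.frame"},            # explicit input mapping
    "inference": {"confidence_threshold": 0.6},                # thresholds live in config
    "output": {"format": "normalized_detections"},
}

def normalize_output(raw: list, threshold: float) -> list:
    """Normalize output shape before branching logic: one schema downstream."""
    return [
        {"label": r["label"], "score": r["score"]}
        for r in raw
        if r["score"] >= threshold
    ]

results = normalize_output(
    [{"label": "scratch", "score": 0.8}, {"label": "dust", "score": 0.3}],
    node_config["inference"]["confidence_threshold"],
)
```

Recording the threshold in node config rather than in code makes later tuning auditable, and normalizing before branching means downstream nodes never see raw model-specific output shapes.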
Environment Readiness Checks
Before deploying workflows that depend on models, validate:
- Model exists in project library
- Required artifacts are present
- Target environment has access to required files
- Runtime dependencies are compatible
Do not assume that a model available in one environment is automatically healthy in every environment. Validate each environment separately.
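The per-environment checklist above can be run as a preflight gate before deployment. The function and the shapes of `library` and `env` are hypothetical sketches, assuming the checklist maps one-to-one onto boolean checks.

```python
# Hedged preflight sketch mirroring the four readiness checks above.
def preflight(model_name: str, library: dict, env: dict):
    entry = library.get(model_name, {})
    checks = {
        "in_library": model_name in library,
        "artifacts_present": bool(entry.get("artifacts")),
        "env_has_access": model_name in env.get("accessible_models", []),
        "deps_compatible": env.get("runtime") in entry.get("supported_runtimes", []),
    }
    return checks, all(checks.values())

library = {"ocr-base": {"artifacts": ["weights.bin"], "supported_runtimes": ["py3.11"]}}
env = {"accessible_models": ["ocr-base"], "runtime": "py3.11"}
checks, ok = preflight("ocr-base", library, env)
```

Returning the full check dictionary, not just a pass/fail flag, tells the operator which readiness condition failed in which environment.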
Operational Best Practices
- Name models with clear version intent
- Keep model metadata complete and consistent
- Separate experimentation models from production models
- Use staged rollout workflows before full cutover
- Track output quality trends after model updates
These practices make model-driven workflows maintainable at scale.
Troubleshooting Model Issues
When model inference fails, check in this order:
- Model selection in node configuration
- Input payload shape and data type
- Missing artifacts or broken references
- Environment compatibility/runtime constraints
- Output parsing assumptions in downstream nodes
Common symptoms:
- Node runtime errors: model artifact or dependency mismatch
- Empty outputs: input mapping mismatch
- Poor quality output: threshold/config mismatch or wrong model selection
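The check order above can be encoded as an ordered triage that reports the first failing layer, since a failure upstream (bad model selection) usually explains the symptoms downstream. The check names and the flags on `node` are illustrative assumptions.

```python
# Sketch of ordered triage: return the first failing layer to investigate.
def triage(node: dict) -> str:
    checks = [
        ("model_selection", node.get("model") is not None),
        ("input_payload", node.get("input_valid", False)),
        ("artifacts", node.get("artifacts_ok", False)),
        ("environment", node.get("env_ok", False)),
        ("output_parsing", node.get("parse_ok", False)),
    ]
    for name, passed in checks:
        if not passed:
            return name
    return "ok"
```

Walking the layers in order avoids, for example, debugging output parsing when the real problem is a missing artifact two layers up.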
