AI Agents

In SolutionEngine, an agent is a reusable LLM configuration you create inside a project and then reference from workflows.

AI agents in SolutionEngine are not a separate runtime product. They are project-scoped resources that store the model, prompt, and optional tools an LLM-powered workflow should use.

Today, an agent in the app consists of:

  • a name and description
  • an LLM provider
  • a model from that provider
  • system instructions
  • a maximum reasoning step limit
  • optional attached agent tools

Create agents first, then reference them from workflows through the AI Agent node.
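The fields above can be pictured as a small configuration record. The sketch below is illustrative only: the field names and defaults are assumptions, not SolutionEngine's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentConfig:
    # Hypothetical field names -- not the real SolutionEngine data model.
    name: str                          # display name
    provider: str                      # LLM provider id
    model: str                         # model discovered for that provider
    system_prompt: str                 # main instruction set
    max_reasoning_steps: int = 10      # upper bound on agent steps
    description: Optional[str] = None  # optional role description
    tools: list[str] = field(default_factory=list)  # attached agent tools

agent = AgentConfig(
    name="Support Triage",
    provider="ollama",
    model="llama3",
    system_prompt="Classify incoming tickets by urgency.",
    tools=["http_request"],
)
```

The point is the shape, not the values: everything a workflow needs from the LLM side lives in this one record.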


What You Can Do With Agents

  • Reuse one agent configuration across multiple workflows in the same project.
  • Choose from the currently available LLM providers exposed by the backend.
  • Discover provider models dynamically, including local Ollama models.
  • Attach built-in agent tools such as HTTP requests or terminal access.
  • Control behavior with a system prompt and a maximum step limit.

Current Agent Configuration

When creating or editing an agent in the app, the current UI lets you configure the following:

Identity

  • Name: display name for the agent
  • Description: optional short description of its role

Model Setup

  • Provider: selected from the providers returned by the backend
  • Model: selected from models discovered for that provider
  • Provider-specific fields: for example API key fields or an Ollama base URL

Behavior

  • System Prompt: the main instruction set for the agent
  • Max Reasoning Steps: upper bound for the number of agent steps during execution

Tools

  • Attached Agent Tools: optional built-in tools you add from the capability selector
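The Max Reasoning Steps setting acts as a hard budget on the agent's execution loop. SolutionEngine's actual loop is not public, so the following is only a hedged sketch of how such a bound typically works.

```python
def run_agent(step_fn, max_reasoning_steps):
    """Run an agent loop, stopping when the step function signals
    completion or the step budget is exhausted. Hypothetical sketch,
    not SolutionEngine's real execution engine."""
    for step in range(1, max_reasoning_steps + 1):
        done, result = step_fn(step)
        if done:
            return result
    return None  # budget exhausted without a final answer

# Toy step function that "finishes" on step 3, within a budget of 5.
result = run_agent(lambda s: (s == 3, f"answer at step {s}"),
                   max_reasoning_steps=5)
```

Setting the limit too low cuts off tool-using agents mid-task; too high lets a confused agent burn steps, so start modest and tune.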

Supported Provider Flow

The current app fetches providers and models dynamically from the backend, which is why this page does not hardcode a provider or model list.

What this means in practice:

  • the list of providers comes from the backend
  • some providers require an API key to discover models
  • Ollama requires a base URL before model discovery works
  • the first available model can be selected after discovery

If your provider returns no models, verify credentials or connection settings in the agent form before saving.
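The discovery rules above can be summarized as a small precondition check before the model list is fetched. The provider names and required fields below are assumptions for illustration, not the real backend API.

```python
# Hypothetical per-provider requirements, mirroring the bullets above.
PROVIDER_REQUIREMENTS = {
    "openai": {"needs_api_key": True},
    "ollama": {"needs_base_url": True},
}

def discover_first_model(provider, api_key=None, base_url=None, fetch=None):
    """Validate provider prerequisites, then return the first discovered
    model (or None if the provider returns no models)."""
    reqs = PROVIDER_REQUIREMENTS.get(provider, {})
    if reqs.get("needs_api_key") and not api_key:
        raise ValueError(f"{provider}: API key required before discovery")
    if reqs.get("needs_base_url") and not base_url:
        raise ValueError(f"{provider}: base URL required before discovery")
    models = fetch(provider)  # backend call, stubbed in this sketch
    return models[0] if models else None

first = discover_first_model(
    "ollama",
    base_url="http://localhost:11434",
    fetch=lambda p: ["llama3", "mistral"],
)
```

An empty result here corresponds to the "verify credentials or connection settings" advice above.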


How To Create An Agent

Follow this sequence in the current app:

  1. Open your project.
  2. Go to AI Agents and choose Create Agent.
  3. Select an LLM provider.
  4. Wait for model discovery, then choose a model.
  5. Fill in the system prompt.
  6. Set the maximum reasoning steps.
  7. Attach agent tools if the agent needs them.
  8. Save the agent.

Recommended first setup:

  • start with one provider
  • choose one model that is already reachable
  • write a narrow system prompt
  • attach only the tools the agent truly needs

Do not describe tools in the prompt that are not actually attached to the agent. The saved tool configuration is what defines the agent's available capabilities.
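The creation sequence above amounts to a pre-save checklist. As a hedged sketch (the tool ids and validation rules are assumptions, not the app's actual checks), it might look like:

```python
AVAILABLE_TOOLS = {"http_request", "terminal"}  # assumed built-in tool ids

def validate_agent_form(form):
    """Return a list of problems blocking save; empty means OK.
    Illustrative only -- not SolutionEngine's real validation."""
    errors = []
    if not form.get("name"):
        errors.append("name is required")
    if not form.get("model"):
        errors.append("select a model after discovery completes")
    if not form.get("system_prompt"):
        errors.append("system prompt is required")
    if form.get("max_reasoning_steps", 0) < 1:
        errors.append("max reasoning steps must be at least 1")
    unknown = set(form.get("tools", [])) - AVAILABLE_TOOLS
    if unknown:
        errors.append(f"unknown tools: {sorted(unknown)}")
    return errors

ok = validate_agent_form({
    "name": "Support Triage",
    "model": "llama3",
    "system_prompt": "Classify tickets.",
    "max_reasoning_steps": 5,
    "tools": ["http_request"],
})
```

The last check reflects the warning above: only tools actually attached to the agent count, regardless of what the prompt claims.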


How Agents Are Used In Workflows

Agents are consumed through the AI Agent Node in a workflow.

Typical flow:

Trigger -> Prepare Input -> AI Agent Node -> Process Result -> Output

A workflow does not redefine the provider, prompt, and tools each time. Instead, it references an existing agent and passes input into it.

This keeps prompt and tool configuration reusable and easier to maintain.
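The reuse pattern is a reference plus a lookup: the workflow node stores only an agent id, and the shared configuration is resolved at execution time. The structures below are illustrative, not SolutionEngine's real workflow format.

```python
# A hypothetical project-scoped agent registry.
agents = {
    "triage-agent": {
        "provider": "ollama",
        "model": "llama3",
        "system_prompt": "Classify tickets.",
        "tools": ["http_request"],
    },
}

# The workflow node carries a reference, not a full configuration.
workflow_node = {"type": "ai_agent", "agent_id": "triage-agent"}

def resolve_agent(node, registry):
    """Look up the shared agent configuration for an AI Agent node."""
    return registry[node["agent_id"]]

config = resolve_agent(workflow_node, agents)
```

Editing the agent in one place then updates every workflow that references it, which is the maintenance benefit described above.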


When To Use An Agent

Use an agent when:

  • the task benefits from natural language instructions
  • tool use may be needed during reasoning
  • you want one reusable LLM configuration for multiple workflows

Prefer a deterministic workflow without an agent when:

  • fixed logic already solves the task
  • the action path must be fully predictable
  • the use case does not need language-model reasoning

Related Pages