Agents

Agents are AI assistants that can chat with users and execute tools.

Defining Agents

```
agent "assistant" {
  model       = models.anthropic.claude_sonnet_4
  personality = "Friendly, helpful, and concise"
  role        = "General purpose assistant"

  tools = [
    builtins.http.get,
    tools.weather
  ]
}
```

Attributes

| Attribute     | Type      | Description                                                 |
| ------------- | --------- | ----------------------------------------------------------- |
| `model`       | reference | Model reference (e.g., `models.anthropic.claude_sonnet_4`)  |
| `personality` | string    | Personality traits for the agent                            |
| `role`        | string    | Description of the agent's purpose                          |
| `tools`       | list      | Tools available to the agent (optional)                     |

Tools

Agents can use four types of tools:

Built-in Tools

```
tools = [
  builtins.http.get,    # HTTP GET
  builtins.http.post,   # HTTP POST
  builtins.http.put,    # HTTP PUT
  builtins.http.patch,  # HTTP PATCH
  builtins.http.delete, # HTTP DELETE
]
```

Custom Tools

Reference tools defined in `tool` blocks:

```
tools = [
  tools.weather,
  tools.create_todo
]
```

External Plugin Tools

Reference tools from loaded plugins:

```
tools = [
  plugins.slack.send_message,
  plugins.github.create_issue
]
```

MCP Server Tools

Reference tools from a declared `mcp "name"` server through the `mcp.<name>.*` namespace. Use `.all` to expose every tool the server provides:

```
tools = [
  mcp.filesystem.read_text_file, # single tool
  mcp.remote_api.all,            # every tool from that server
]
```

See MCP Tools for how to declare consumer-side MCP servers.

Mission-Scoped Agents

Agents can be defined inside a mission block, making them available only to that mission. This is useful for specialized agents that don’t make sense as global definitions.

```
mission "research" {
  commander {
    model = models.anthropic.claude_sonnet_4
  }

  agent "specialist" {
    model       = models.anthropic.claude_opus_4
    personality = "Deep domain expert"
    role        = "Research specialist with access to specialized tools"
    tools       = [plugins.shell.exec]
  }

  agents = [agents.global_helper, agents.specialist]

  task "gather" {
    objective = "Research the topic"
    agents    = [agents.specialist]
  }
}
```

Mission-scoped agents use the same syntax and attributes as global agents. They must be listed in the mission's `agents = [...]` to be available, and can be assigned at the task level.

Rules:

  • A scoped agent name must not conflict with any global agent name
  • Two different missions can each define an agent with the same name (they are independently scoped)
  • Multiple scoped agents per mission are supported
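
To illustrate the second rule, here is a minimal sketch in which two missions each define their own scoped agent named `"summarizer"`. The mission names, roles, and personalities are illustrative; the syntax follows the mission-scoped agent example above.

```
mission "news_digest" {
  commander {
    model = models.anthropic.claude_sonnet_4
  }

  # Scoped to this mission only.
  agent "summarizer" {
    model       = models.anthropic.claude_sonnet_4
    personality = "Terse and factual"
    role        = "Condenses articles into bullet points"
  }

  agents = [agents.summarizer]
}

mission "meeting_notes" {
  commander {
    model = models.anthropic.claude_sonnet_4
  }

  # Same name, different mission — no conflict, since each
  # scoped agent is visible only inside its own mission.
  agent "summarizer" {
    model       = models.anthropic.claude_sonnet_4
    personality = "Structured and thorough"
    role        = "Turns transcripts into action items"
  }

  agents = [agents.summarizer]
}
```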

Example: Specialized Agents

```
agent "coder" {
  model       = models.anthropic.claude_sonnet_4
  personality = "Precise and methodical"
  role        = "Software development assistant"
  tools       = [builtins.http.get]
}

agent "researcher" {
  model       = models.openai.gpt_4o
  personality = "Curious and thorough"
  role        = "Research and information gathering"
  tools       = [builtins.http.get]
}

agent "writer" {
  model       = models.anthropic.claude_sonnet_4
  personality = "Creative and articulate"
  role        = "Content writing and editing"
  tools       = [] # No tools, just conversation
}
```

Automatic Result Tools

All agents automatically have access to result tools for handling large data:

| Tool           | Purpose                                  |
| -------------- | ---------------------------------------- |
| `result_info`  | Get type/size of a stored large result   |
| `result_items` | Get items from a large array             |
| `result_get`   | Navigate large objects with dot paths    |
| `result_keys`  | Get keys of a large object               |
| `result_chunk` | Get chunks of large text                 |

When a tool returns a result larger than the configured threshold (default: ~16,000 tokens), it’s automatically stored and a sample is shown. The agent can use these tools to access the full data without overwhelming context.

In mission context, `result_to_dataset` is also available to promote arrays to datasets.

Tool Response Limits

You can configure the maximum token count for tool call responses before they’re truncated and stored for paged access:

```
agent "data_processor" {
  model = models.anthropic.claude_sonnet_4
  role  = "Processes large datasets"
  tools = [builtins.http.get]

  tool_response {
    max_tokens = 32000 # override the default 16,000-token limit
  }
}
```

| Attribute    | Type   | Default | Description                                                                                   |
| ------------ | ------ | ------- | --------------------------------------------------------------------------------------------- |
| `max_tokens` | number | 16000   | Approximate max token count before a tool response is truncated/sampled. Hard maximum: 64000. |

When a response exceeds `max_tokens`, it's stored in memory and the LLM receives a preview with metadata. The agent can then use `result_*` tools to access the full data.

The same setting is available on the mission commander:

```
mission "example" {
  commander {
    model = models.anthropic.claude_sonnet_4

    tool_response {
      max_tokens = 32000
    }
  }
}
```