
Built-in Tools Reference

Otto's AI agent has access to a set of built-in tools that it calls autonomously while working on tasks. These tools handle communication, file operations, email, memory, document processing, subagent dispatch, and skill script execution.

In addition, any tools provided by configured MCP servers are available alongside these built-in tools.


Agent Tools

ask_human

Sends a question or message to a human user and optionally pauses the task to wait for a response.

| Parameter | Type | Description |
|---|---|---|
| query | string | The question or message to send |
| human_name | string (optional) | Name of the specific person to contact |
| wait_for_answer | boolean (default: true) | Whether to pause the task and wait for a response |

When wait_for_answer is true, the task enters a "waiting_input" state and the agent resumes only after the user replies. When false, the message is delivered as a notification without pausing.
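The pause decision can be sketched as a tiny helper (hypothetical; "waiting_input" is the documented state name, but "running" is an assumed name for the non-paused state):

```python
def next_task_state(wait_for_answer: bool = True) -> str:
    """Return the task state after an ask_human call.

    "waiting_input" is documented; "running" is an assumed
    name for the state when the message is fire-and-forget.
    """
    return "waiting_input" if wait_for_answer else "running"
```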

Messages are routed through Otto's communication system to the user's preferred channel (Slack DM, web UI notification, etc.).

Key file: backend/app/tools/human_tool.py


send_email

Sends an email via SMTP with support for multiple recipients, CC, HTML body, and file attachments.

| Parameter | Type | Description |
|---|---|---|
| to | string | Comma-separated recipient email addresses |
| subject | string | Email subject line |
| body | string | Email body (HTML supported) |
| cc | string (optional) | Comma-separated CC addresses |
| attachment_filenames | list[string] (optional) | Files from the sandbox to attach |

Attachments are loaded from the agent's sandbox storage. Maximum file size per attachment: 10 MB. The tool retries up to 3 times with a 5-second delay between attempts.
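The retry behavior can be sketched as a generic helper (a sketch only; the actual implementation in email_tool.py may structure this differently):

```python
import time

def send_with_retry(send_fn, attempts: int = 3, delay: float = 5.0):
    """Call send_fn(), retrying up to `attempts` times with a fixed
    delay between failures; re-raise the last error if all fail."""
    last_error = None
    for attempt in range(attempts):
        try:
            return send_fn()
        except Exception as exc:
            last_error = exc
            if attempt < attempts - 1:
                time.sleep(delay)
    raise last_error
```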

Requires SMTP configuration. See the email integration guide for setup.

Key file: backend/app/tools/email_tool.py


read_file

Reads a text file from the agent's sandbox storage.

| Parameter | Type | Description |
|---|---|---|
| filename | string | Name of the file to read |

Restricted to .txt, .md, and .json file types.
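The extension restriction shared by read_file and write_file can be sketched as an allowlist check (illustrative; the real check lives in file_tools.py):

```python
from pathlib import Path

# Documented allowlist for sandbox text files.
ALLOWED_EXTENSIONS = {".txt", ".md", ".json"}

def is_allowed_file(filename: str) -> bool:
    """Check a sandbox filename against the allowed extensions."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS
```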

Key file: backend/app/tools/file_tools.py


write_file

Writes content to a text file in the agent's sandbox storage.

| Parameter | Type | Description |
|---|---|---|
| filename | string | Name of the file to write |
| content | string | Content to write |

Restricted to .txt, .md, and .json file types. The storage quota is checked before each write.

Key file: backend/app/tools/file_tools.py


list_directory

Lists files and directories within the agent's sandbox storage.

| Parameter | Type | Description |
|---|---|---|
| path | string (default: "") | Subdirectory path to list |

Returns file names, sizes, and modification timestamps.

Key file: backend/app/tools/file_tools.py


attach_file

Marks a file from the sandbox as an attachment to be delivered to the user when the task completes.

| Parameter | Type | Description |
|---|---|---|
| filename | string | Name of the file to attach |

The tool outputs a sentinel marker that downstream delivery systems detect. Attached files are sent to the user via their preferred channel (Slack upload, web UI download, etc.).
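The sentinel mechanism can be sketched as follows (the marker format shown is invented for illustration; Otto's real marker string is internal):

```python
import re

# Hypothetical sentinel format -- the real marker is an implementation detail.
ATTACH_MARKER = "[[OTTO_ATTACH:{name}]]"

def mark_attachment(filename: str) -> str:
    """Produce the sentinel a delivery system would scan for."""
    return ATTACH_MARKER.format(name=filename)

def extract_attachments(tool_output: str) -> list[str]:
    """Collect attachment filenames from a tool's output text."""
    return re.findall(r"\[\[OTTO_ATTACH:([^\]]+)\]\]", tool_output)
```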

Key file: backend/app/tools/file_tools.py


wait

Pauses agent execution for a specified duration. Useful for waiting on external processes or respecting rate limits.

| Parameter | Type | Description |
|---|---|---|
| seconds | integer | Duration to wait (1-300 seconds) |
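The documented range can be enforced with a simple guard (a sketch; whether the real tool raises or clamps out-of-range values is not specified here):

```python
def validate_wait_seconds(seconds: int) -> int:
    """Reject durations outside the documented 1-300 second range."""
    if not 1 <= seconds <= 300:
        raise ValueError("seconds must be between 1 and 300")
    return seconds
```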

Key file: backend/app/tools/file_tools.py


read_document

Converts PDF and Word (.docx) documents to Markdown text so the agent can read and analyze them.

| Parameter | Type | Description |
|---|---|---|
| filename | string | Name of the document file |

PDFs are first converted to DOCX via pdf2docx, then to Markdown via pandoc. DOCX files are converted directly to Markdown. File access is restricted to the agent's sandbox.
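The two-step pipeline can be sketched as a conversion plan (illustrative only; the command names and flags are assumptions about how document_tools.py shells out, not the real invocation):

```python
from pathlib import Path

def conversion_plan(filename: str) -> list[list[str]]:
    """Return the sequence of conversion steps for a document,
    mirroring the PDF -> DOCX -> Markdown pipeline described above."""
    path = Path(filename)
    steps = []
    if path.suffix.lower() == ".pdf":
        # pdf2docx would perform this step in-process.
        docx_name = str(path.with_suffix(".docx"))
        steps.append(["pdf2docx", filename, docx_name])
        path = Path(docx_name)
    # pandoc handles DOCX -> Markdown.
    md_name = str(path.with_suffix(".md"))
    steps.append(["pandoc", str(path), "-t", "markdown", "-o", md_name])
    return steps
```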

Key file: backend/app/tools/document_tools.py


convert_markdown_to_docx

Converts Markdown content to a Word (.docx) document.

| Parameter | Type | Description |
|---|---|---|
| markdown_content | string | Markdown text to convert |
| output_filename | string | Name for the output .docx file |

Uses pandoc for conversion. Supports an optional Word template file (word.docx) for custom styling. The output is written to the agent's sandbox.

Key file: backend/app/tools/file_tools.py


memory_search

Searches Otto's persistent vector memory store using semantic similarity.

| Parameter | Type | Description |
|---|---|---|
| query | string | Search query |
| project_id | string (optional) | Scope search to a specific project |

Results are ranked by cosine similarity with a boost for keyword tag matches. Memory is scoped by project, so the agent retrieves context relevant to the current channel or workspace.
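The ranking can be sketched in pure Python (a simplified sketch; the real store uses simsimd for the similarity computation, and the boost weight here is an assumption):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Plain cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_memories(query_vec, query_terms, memories, tag_boost=0.1):
    """Score each memory by cosine similarity, with an additive boost
    when a stored tag matches a query term. Boost weight is assumed."""
    scored = []
    for mem in memories:
        score = cosine(query_vec, mem["vector"])
        if set(mem.get("tags", [])) & set(query_terms):
            score += tag_boost
        scored.append((score, mem["content"]))
    return [content for _, content in sorted(scored, reverse=True)]
```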

Key file: backend/app/tools/memory_tools.py


memory_save

Saves content to Otto's persistent vector memory store for future retrieval.

| Parameter | Type | Description |
|---|---|---|
| content | string | Information to remember |
| tags | list[string] (optional) | Keywords for boosting future search matches |

Use this to store decisions, preferences, and learned context that should persist across tasks.

Key file: backend/app/tools/memory_tools.py


dispatch_subagent

Dispatches a focused subtask to a background worker agent. Non-blocking -- the parent agent is automatically paused after dispatch and resumed when all subtasks complete.

| Parameter | Type | Description |
|---|---|---|
| task_description | string | What the subtask should accomplish |
| context | string | Background context for the subtask |
| agent_type | string | Agent specialization: general, research, or tool-specialist |
| max_iterations | integer | Maximum iterations (1-15, default 10) |
| token_budget | integer (optional) | Token limit for the subtask |

Results from all subtasks are collected and returned to the parent agent when they complete.
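Parameter validation against the documented ranges can be sketched like this (illustrative; the real checks live in subagent_tools.py):

```python
VALID_AGENT_TYPES = {"general", "research", "tool-specialist"}

def validate_subagent_params(agent_type: str, max_iterations: int = 10) -> dict:
    """Validate dispatch_subagent parameters against the documented
    agent types and the 1-15 iteration range."""
    if agent_type not in VALID_AGENT_TYPES:
        raise ValueError(f"unknown agent_type: {agent_type}")
    if not 1 <= max_iterations <= 15:
        raise ValueError("max_iterations must be between 1 and 15")
    return {"agent_type": agent_type, "max_iterations": max_iterations}
```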

Key file: backend/app/tools/subagent_tools.py


run_skill_script

Executes a script bundled inside a skill's scripts/ directory.

| Parameter | Type | Description |
|---|---|---|
| skill_name | string | Name of the skill containing the script |
| script_name | string | Name of the script file to run |
| args | string (optional) | Arguments to pass to the script |

Scripts run in a sandboxed subprocess with a minimal set of environment variables. The default timeout is 120 seconds (configurable via OTTO_SCRIPT_TIMEOUT). Scripts must be declared in the skill's YAML frontmatter and require a shebang line.
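The sandboxed-subprocess behavior can be sketched as follows (a sketch only; exactly which environment variables Otto preserves is an assumption):

```python
import os
import subprocess

def run_script_sandboxed(argv: list[str], timeout: int = 120) -> str:
    """Run a script in a subprocess with a minimal environment and a
    timeout, mirroring run_skill_script's documented behavior."""
    # Assumed minimal variable set -- the real allowlist may differ.
    minimal_env = {
        "PATH": os.environ.get("PATH", "/usr/bin:/bin"),
        "HOME": os.environ.get("HOME", "/tmp"),
    }
    result = subprocess.run(
        argv,
        env=minimal_env,
        capture_output=True,
        text=True,
        timeout=timeout,  # raises subprocess.TimeoutExpired if exceeded
    )
    result.check_returncode()
    return result.stdout
```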

Key file: backend/app/tools/script_tool.py


Disabled Tools

These tools are defined in the codebase but are not currently wired into the agent:

  • finished -- A task completion signal tool. Defined in backend/app/tools/finished_tool.py but not active in the agent graph.
  • convert_markdown_to_pptx -- Defined in backend/app/tools/file_tools.py but disabled in favor of the Gamma MCP integration for presentation creation.

System Tools in the Docker Container

The Otto backend Docker image includes several command-line utilities that the agent's tools depend on or that can be used by skill scripts.

Included System Tools

| Tool | Purpose |
|---|---|
| pandoc | Universal document converter. Used by read_document and convert_markdown_to_docx for format conversion between Markdown, DOCX, HTML, and other formats. |
| node / npx | Node.js 20.x runtime. Required for launching MCP servers that use the stdio transport (e.g., npx -y tavily-mcp@latest). |
| python | Python 3.11 with all backend dependencies. Available to skill scripts. |

Available Python Packages

The container includes Python packages that the agent's tools use internally:

| Package | Purpose |
|---|---|
| pdf2docx | PDF to DOCX conversion (used by read_document) |
| python-docx | Word document manipulation |
| openpyxl | Excel spreadsheet creation |
| python-pptx | PowerPoint generation (available but tool is currently disabled) |
| simsimd | Fast cosine similarity for vector memory search |

Adding New System Tools

To add command-line tools to the Docker image:

  1. Edit the Dockerfile in the backend directory. Add packages to the apt-get install command in the final stage:
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    pandoc \
    curl \
    your-new-tool \
    && rm -rf /var/lib/apt/lists/*
  2. Optionally add a verification step:
RUN your-new-tool --version
  3. Rebuild the image:
docker compose build backend

New tools are available to skill scripts (via run_skill_script) and can be used in custom MCP servers.

Testing System Tools

Test tools inside a running container:

docker exec otto-backend pandoc --version
docker exec otto-backend node --version
docker exec otto-backend python --version

How Tools Work with the Agent

Otto's agent is built on a LangGraph state graph. At startup, all built-in tools and MCP tools are assembled and bound to the agent's LLM. During task execution:

  1. The LLM receives the task description, system prompt (including any matched skills), and the list of available tools.
  2. The LLM decides which tool to call and generates the parameters.
  3. The tool executes and returns its result to the LLM.
  4. The LLM processes the result and decides the next action -- call another tool, ask a human, dispatch a subagent, or complete the task.

Tool results above a configurable character threshold are automatically summarized to manage context window usage. This is controlled by MEDIUM_RESULT_THRESHOLD (default 4,000 characters) and LARGE_RESULT_THRESHOLD (default 12,000 characters).
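The threshold logic can be sketched as a size classifier (thresholds are the documented defaults; the bucket names and what summarization each bucket triggers are assumptions):

```python
# Documented default thresholds (characters).
MEDIUM_RESULT_THRESHOLD = 4_000
LARGE_RESULT_THRESHOLD = 12_000

def classify_result(text: str) -> str:
    """Bucket a tool result by length to decide how aggressively
    to summarize it before returning it to the LLM."""
    if len(text) >= LARGE_RESULT_THRESHOLD:
        return "large"
    if len(text) >= MEDIUM_RESULT_THRESHOLD:
        return "medium"
    return "small"
```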

The agent can execute up to MAX_PARALLEL_TOOL_CALLS (default 5) tools simultaneously when the LLM requests parallel tool calls.

Skill-Based Tool Restrictions

When a skill is matched to a task and that skill declares an allowed-tools list, only those tools (plus ask_human, which is always allowed) are made available. Pattern matching with fnmatch syntax is supported (e.g., gamma_* to allow all Gamma MCP tools). See the Skills Writing Guide for details.
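The filtering can be sketched with the standard-library fnmatch module (illustrative; the real restriction logic lives in Otto's skill loader):

```python
from fnmatch import fnmatch

def filter_tools(available: list[str], allowed_patterns: list[str]) -> list[str]:
    """Keep only tools matching an allowed-tools pattern.
    ask_human is always permitted, per the documented behavior."""
    patterns = list(allowed_patterns) + ["ask_human"]
    return [
        tool for tool in available
        if any(fnmatch(tool, pattern) for pattern in patterns)
    ]
```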