Configuration

All Otto configuration lives in a single .env file at the project root. The interactive wizard (./otto configure) handles the most common settings, but you can edit the file directly for full control.

After changing any values in .env, restart Otto for them to take effect:

./otto restart

LLM Provider

Otto requires one configured LLM provider. You choose one during initial setup and can switch at any time by editing .env.

Gemini

Gemini offers strong performance at a competitive price. Get an API key at aistudio.google.com/apikey.

USE_GEMINI=true
GOOGLE_API_KEY=AIza...your-key
GEMINI_MODEL=gemini-3.1-pro-preview-customtools

The GEMINI_MODEL default is gemini-3.1-pro-preview-customtools. You can use any model available through the Gemini API.

OpenAI

Get an API key at platform.openai.com/api-keys.

USE_GEMINI=false
OPENAI_API_KEY=sk-...your-key
OPENAI_MODEL=gpt-5-mini

The OPENAI_MODEL default is gpt-5-mini. Set OPEN_AI_BASE if you need to point at an OpenAI-compatible proxy:

OPEN_AI_BASE=https://your-proxy.example.com/v1

Ollama (Local, Free)

Run an LLM on your own hardware with no API costs. Install Ollama from ollama.com, then pull a model:

ollama pull llama3.2

Configure Otto to use it:

USE_OLLAMA=true
OLLAMA_BASE_URL=http://host.docker.internal:11434
OLLAMA_MODEL=llama3.2

The OLLAMA_BASE_URL uses host.docker.internal so the Docker containers can reach Ollama running on your host machine. If Ollama runs elsewhere on your network, use that machine's IP instead.

Note: Local model quality varies significantly. For production use, Gemini or OpenAI is recommended.

Embeddings

Otto automatically uses the embedding model that matches your LLM provider. You can override this if needed:

| Variable | Default | Description |
|---|---|---|
| EMBEDDING_MODEL | models/gemini-embedding-001 (Gemini) or text-embedding-3-small (OpenAI) | Embedding model for vector memory |
| OLLAMA_EMBED_MODEL | nomic-embed-text:latest | Embedding model when using Ollama |
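
As a sketch of how these defaults might compose (the helper name and resolution order are assumptions, not Otto's actual code):

```python
def resolve_embedding_model(env: dict) -> str:
    """Pick the embedding model from env settings (illustrative sketch).

    Mirrors the table above: Ollama has its own override variable, while
    EMBEDDING_MODEL overrides the Gemini/OpenAI defaults.
    """
    if env.get("USE_OLLAMA") == "true":
        return env.get("OLLAMA_EMBED_MODEL", "nomic-embed-text:latest")
    if "EMBEDDING_MODEL" in env:  # explicit override wins
        return env["EMBEDDING_MODEL"]
    if env.get("USE_GEMINI") == "true":
        return "models/gemini-embedding-001"  # Gemini default
    return "text-embedding-3-small"  # OpenAI default
```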

Web Search (Tavily)

Tavily gives Otto the ability to search the web for current information. The free tier includes 1,000 searches per month. Get a key at tavily.com.

TAVILY_API_KEY=tvly-...your-key

When configured, web search is available as an MCP tool that Otto can invoke during task execution.

Slack Integration

Connect Otto to your Slack workspace so it can read channels, send messages, and receive tasks via Slack.

Setup

  1. Create a Slack App at api.slack.com/apps
  2. Under OAuth & Permissions, add these bot token scopes:
    • channels:history
    • channels:read
    • chat:write
    • users:read
    • users:read.email
    • files:read
  3. Install the app to your workspace and copy the Bot User OAuth Token

Environment Variables

SLACK_MCP_XOXB_TOKEN=xoxb-...your-bot-token
SLACK_SIGNING_SECRET=abc123...your-signing-secret

| Variable | Required | Description |
|---|---|---|
| SLACK_MCP_XOXB_TOKEN | Yes | Bot User OAuth Token (starts with xoxb-) |
| SLACK_SIGNING_SECRET | Yes | App signing secret for verifying Slack requests |
| SLACK_BOT_USER_ID | No | Bot user ID, used to detect self-mentions |
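
SLACK_SIGNING_SECRET is used for Slack's standard v0 request signing. A minimal verifier using only the standard library (the helper name is illustrative; the signing scheme itself is Slack's documented one):

```python
import hashlib
import hmac
import time

def verify_slack_request(signing_secret: str, timestamp: str, body: str, signature: str) -> bool:
    """Verify Slack's v0 request signature (illustrative helper).

    Slack signs "v0:{timestamp}:{body}" with HMAC-SHA256 and sends the hex
    digest, prefixed with "v0=", in the X-Slack-Signature header.
    """
    if abs(time.time() - int(timestamp)) > 60 * 5:  # reject stale requests (replay protection)
        return False
    base = f"v0:{timestamp}:{body}".encode()
    expected = "v0=" + hmac.new(signing_secret.encode(), base, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```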

Email Integration

Otto can send emails via SMTP and monitor an inbox via IMAP.

Sending Email (SMTP)

SMTP_HOST=smtp.gmail.com
SMTP_PORT=465
SMTP_USER=otto@yourcompany.com
SMTP_PASS=your-app-password

For Gmail, use an App Password rather than your account password.
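
A minimal sending sketch using Python's standard library, assuming the SMTP_* values above are set in the environment (the helper functions are illustrative, not Otto's internals):

```python
import os
import smtplib
from email.message import EmailMessage

def build_message(sender: str, to: str, subject: str, body: str) -> EmailMessage:
    """Assemble a plain-text email message."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, to, subject
    msg.set_content(body)
    return msg

def send(msg: EmailMessage) -> None:
    """Send via the configured SMTP server.

    Port 465 implies implicit TLS, hence SMTP_SSL rather than SMTP + starttls().
    """
    with smtplib.SMTP_SSL(os.environ["SMTP_HOST"], int(os.environ.get("SMTP_PORT", "465"))) as s:
        s.login(os.environ["SMTP_USER"], os.environ["SMTP_PASS"])
        s.send_message(msg)
```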

Receiving Email (IMAP)

When IMAP is configured, Otto polls the inbox at a regular interval and can process incoming messages.

IMAP_HOST=imap.gmail.com
IMAP_PORT=993
IMAP_CHECK=60

| Variable | Default | Description |
|---|---|---|
| SMTP_HOST | -- | SMTP server hostname |
| SMTP_PORT | 465 | SMTP server port |
| SMTP_USER | -- | Email address / username |
| SMTP_PASS | -- | Email password or app password |
| IMAP_HOST | -- | IMAP server hostname |
| IMAP_PORT | 993 | IMAP server port |
| IMAP_CHECK | 60 | Poll interval in seconds |

Storage

By default, Otto stores files on the local filesystem. For cloud deployments, you can use Google Cloud Storage or Google Drive.

STORAGE_BACKEND=local
LOCAL_STORAGE_PATH=/tmp/otto-files

Google Cloud Storage

STORAGE_BACKEND=gcs
GCS_BUCKET_NAME=your-bucket-name
GOOGLE_SERVICE_ACCOUNT_JSON='{"type":"service_account",...}'

Google Drive

STORAGE_BACKEND=google_drive
GOOGLE_DRIVE_FOLDER_ID=1a2b3c...your-folder-id
GOOGLE_SERVICE_ACCOUNT_FILE=/path/to/credentials.json

| Variable | Default | Description |
|---|---|---|
| STORAGE_BACKEND | local | local, gcs, or google_drive |
| LOCAL_STORAGE_PATH | /tmp/otto-files | Path for local file storage |
| GCS_BUCKET_NAME | -- | GCS bucket name |
| GOOGLE_DRIVE_FOLDER_ID | -- | Google Drive root folder ID |
| GOOGLE_SERVICE_ACCOUNT_JSON | -- | Service account credentials as JSON string |
| GOOGLE_SERVICE_ACCOUNT_FILE | -- | Path to service account JSON key file |
| STORAGE_QUOTA_GB | 0 (unlimited) | Maximum storage in GB |
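
Quota enforcement can be sketched as a check before each write, with 0 meaning unlimited as noted above (the helper name and check are illustrative):

```python
def within_quota(used_bytes: int, incoming_bytes: int, quota_gb: float) -> bool:
    """Check an upload against STORAGE_QUOTA_GB (illustrative sketch).

    A quota of 0 (the default) disables the check entirely.
    """
    if quota_gb <= 0:
        return True
    return used_bytes + incoming_bytes <= quota_gb * 1024 ** 3
```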

Agent Customization

Adjust Otto's personality and behavior without modifying code.

SPECIALIZATION=marketing assistant
ADDITIONAL_INSTRUCTIONS=Always respond in a professional tone. Prioritize data-driven recommendations.

| Variable | Default | Description |
|---|---|---|
| SPECIALIZATION | General Assistance | Agent persona / focus area (e.g., "security analyst", "marketing assistant") |
| ADDITIONAL_INSTRUCTIONS | -- | Extra text appended to the agent's system prompt |
| OTTO_PERSONA_PATH | -- | Path to a custom persona file (SOUL.md) |
| OTTO_PROJECTS_DIR | -- | Directory containing project-specific persona files |
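
How these values might be folded into the system prompt (the template wording and ordering are assumptions, not Otto's exact prompt):

```python
def build_system_prompt(base: str, specialization: str = "General Assistance",
                        additional: str = "") -> str:
    """Compose the agent prompt from SPECIALIZATION and ADDITIONAL_INSTRUCTIONS
    (illustrative sketch: composition order and wording are assumptions)."""
    parts = [base, f"Specialization: {specialization}"]
    if additional:
        parts.append(additional)  # ADDITIONAL_INSTRUCTIONS is appended last
    return "\n\n".join(parts)
```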

Skills System

Otto loads skills from the skills/ directory. These settings control how skills are discovered and matched to incoming tasks.

| Variable | Default | Description |
|---|---|---|
| SKILLS_REFRESH_INTERVAL | 60 | Seconds between skill directory rescans |
| OTTO_SKILL_THRESHOLD | 0.35 | Minimum similarity score for a skill to match (0.0 - 1.0) |
| OTTO_MAX_SKILLS | 3 | Maximum number of skills loaded per task |
| OTTO_SCRIPT_TIMEOUT | 120 | Timeout in seconds for skill script execution |
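
The matching step can be sketched as a similarity filter over skill embeddings (the scoring and selection logic here are assumptions, not Otto's implementation):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def match_skills(task_vec, skills, threshold=0.35, max_skills=3):
    """Keep skills scoring at or above OTTO_SKILL_THRESHOLD, best first,
    capped at OTTO_MAX_SKILLS (illustrative sketch)."""
    scored = [(cosine(task_vec, vec), name) for name, vec in skills.items()]
    scored = [(s, n) for s, n in scored if s >= threshold]
    return [n for _, n in sorted(scored, reverse=True)[:max_skills]]
```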

Performance Tuning

These settings control how the agent processes tool calls and manages large outputs.

| Variable | Default | Description |
|---|---|---|
| MAX_PARALLEL_TOOL_CALLS | 5 | Maximum concurrent tool calls per agent step |
| MEDIUM_RESULT_THRESHOLD | 4000 | Character count that triggers medium-length summarization |
| LARGE_RESULT_THRESHOLD | 12000 | Character count that triggers aggressive summarization |
| ENABLE_INTENT_PREPROCESSING | false | Experimental: semantic intent matching for tool selection |
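
The two thresholds can be read as a length-based routing rule (the level names and boundary behavior are assumptions):

```python
def summarization_level(result: str, medium: int = 4000, large: int = 12000) -> str:
    """Route a tool result by character count using MEDIUM_RESULT_THRESHOLD
    and LARGE_RESULT_THRESHOLD (illustrative sketch)."""
    n = len(result)
    if n >= large:
        return "aggressive"
    if n >= medium:
        return "medium"
    return "none"
```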

Memory

Otto maintains a vector memory store for cross-session knowledge. These settings control conversation compaction, which keeps context windows manageable during long-running tasks.

| Variable | Default | Description |
|---|---|---|
| OTTO_MEMORY_DB | <LOCAL_STORAGE_PATH>/otto_memory.db | Path to the vector memory SQLite database |
| COMPACTION_THRESHOLD | 12 | Message count before conversation compaction triggers |
| MAX_RECENT_MESSAGES | 8 | Number of recent messages to keep uncompacted |
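
Compaction can be sketched as folding older messages into a summary once the threshold is crossed (the summarizer here is a stub; in Otto it would be an LLM call):

```python
def compact(messages, threshold=12, keep_recent=8, summarize=lambda msgs: "[summary]"):
    """Once the history exceeds COMPACTION_THRESHOLD, replace everything but
    the last MAX_RECENT_MESSAGES with a single summary (illustrative sketch)."""
    if len(messages) <= threshold:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(older)] + recent
```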

LangSmith Tracing

LangSmith provides visibility into Otto's LLM calls, tool usage, and agent execution traces. Useful for debugging and optimizing agent behavior.

LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=lsv2_...your-key
LANGCHAIN_PROJECT=otto-production

| Variable | Default | Description |
|---|---|---|
| LANGCHAIN_API_KEY | -- | LangSmith API key |
| LANGCHAIN_TRACING_V2 | false | Enable tracing |
| LANGCHAIN_PROJECT | default | Project name for grouping traces |
| LANGCHAIN_ENDPOINT | https://api.smith.langchain.com | LangSmith API endpoint |

Otto syncs the LANGCHAIN_* and LANGSMITH_* variants on startup, so you only need to set one of the two.
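
The syncing behavior can be sketched as mirroring each variant in both directions (the exact key list is an assumption):

```python
def sync_langsmith_env(env: dict) -> dict:
    """Mirror LANGCHAIN_* and LANGSMITH_* variants so either prefix works
    (illustrative sketch; the suffix list is an assumption)."""
    for suffix in ("API_KEY", "TRACING_V2", "PROJECT", "ENDPOINT"):
        a, b = f"LANGCHAIN_{suffix}", f"LANGSMITH_{suffix}"
        if a in env and b not in env:
            env[b] = env[a]
        elif b in env and a not in env:
            env[a] = env[b]
    return env
```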

Export traces for offline analysis:

make export-traces DAYS=7

Database and Redis

Otto uses SQLite by default with no configuration needed. For production deployments with higher concurrency, you can point at PostgreSQL.

| Variable | Default | Description |
|---|---|---|
| DATABASE_URL | sqlite:///./otto.db | Database connection string |
| REDIS_URL | redis://localhost:6379/0 | Redis connection URL |
| REDIS_PASSWORD | -- | Redis authentication password (recommended for production) |
| REDIS_TYPE | auto-detected | standard or upstash (auto-detected from URL) |
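
The auto-detection can be sketched as a URL heuristic (the exact rule is an assumption): Upstash is reached over its HTTPS REST endpoint, while a standard Redis uses a redis:// or rediss:// URL.

```python
def detect_redis_type(url: str) -> str:
    """Guess REDIS_TYPE from the connection URL (illustrative heuristic)."""
    if url.startswith("https://"):
        return "upstash"
    return "standard"
```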

Upstash Redis

For serverless or cloud deployments, Otto supports Upstash Redis as an alternative to a self-managed Redis instance:

UPSTASH_REDIS_REST_URL=https://your-instance.upstash.io
UPSTASH_REDIS_REST_TOKEN=AX...your-token

Deployment and Networking

These settings control how Otto is exposed and how cross-origin requests are handled.

| Variable | Default | Description |
|---|---|---|
| DEPLOYMENT_MODE | self-hosted | self-hosted or hosted (controls auth middleware) |
| DEPLOYMENT_TYPE | cloud_frontend | cloud_frontend or onprem_frontend (controls CORS) |
| FRONTEND_URL | https://otto.yourcompany.com | Frontend URL, used for CORS and Slack links |
| BACKEND_API_URL | -- | Backend URL (logged on startup for debugging) |
| ALLOWED_ORIGINS | -- | Comma-separated CORS origins (fallback) |
| CLOUDFLARE_TUNNEL_TOKEN | -- | Enable remote access via Cloudflare Tunnel |
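
Parsing the comma-separated ALLOWED_ORIGINS fallback can be sketched as (the trimming and empty-entry handling are assumptions):

```python
def parse_allowed_origins(raw: str) -> list[str]:
    """Split ALLOWED_ORIGINS into a clean list of CORS origins
    (illustrative sketch)."""
    return [o.strip() for o in raw.split(",") if o.strip()]
```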

Docker Image Tags

Pin specific versions when you need reproducible deployments:

BACKEND_IMAGE_TAG=v1.2.0
FRONTEND_IMAGE_TAG=v1.2.0

Both default to latest.

MCP (Model Context Protocol)

Otto can use any MCP-compatible tool server. Tavily web search is pre-configured; additional servers can be added via JSON configuration.

| Variable | Default | Description |
|---|---|---|
| MCP_CONFIG_JSON | -- | Full MCP server configuration as a JSON string |
| MCP_SERVERS | -- | File path to mcp-config.json (alternative to JSON string) |

Example MCP configuration for Tavily:

{
  "mcpServers": {
    "tavily-mcp": {
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "tavily-mcp@latest"],
      "env": {
        "TAVILY_API_KEY": "${TAVILY_API_KEY}"
      }
    }
  }
}

Additional MCP servers can also be configured dynamically at runtime through the Otto API and are persisted in Redis.
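
The ${TAVILY_API_KEY} placeholder in the example above implies environment-variable substitution when the config is loaded, which can be sketched as (the expansion rule is an assumption):

```python
import re

def expand_env(config_text: str, env: dict) -> str:
    """Expand ${VAR} placeholders in an MCP config string, leaving unknown
    variables untouched (illustrative sketch)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), m.group(0)), config_text)
```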

All Environment Variables

For the complete list of every environment variable Otto supports, see the Environment Variables Reference or backend/.env.example in the repository. The settings on this page cover all variables that affect day-to-day operation. Variables not listed here are either Docker/infrastructure internals or multi-tenant hosting options that do not apply to self-hosted deployments.