
Production Deployment

This guide covers deploying Otto on your own infrastructure for production use.

Prerequisites

Hardware Requirements

Resource  Minimum                               Recommended
CPU       2 cores                               4+ cores
RAM       4 GB                                  8 GB
Storage   20 GB                                 50 GB+
OS        Linux, macOS, or Windows with Docker  Linux (Ubuntu 22.04+ or Debian 12+)

Otto's resource usage scales primarily with the number of concurrent agent tasks. The ARQ worker is configured for up to 4 simultaneous jobs by default.

Software Requirements

  • Docker and Docker Compose (v2+)
  • curl (used by the CLI for health checks and updates)

No other runtime dependencies are required. All services run in containers.
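Before installing, it can be worth confirming that both tools are actually on the PATH. A minimal preflight sketch (the check_tools helper is hypothetical, not part of the Otto CLI):

```shell
#!/bin/sh
# Hypothetical preflight check -- not part of the Otto CLI.
# Reports each named tool that is missing from PATH; returns non-zero if any are.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: check_tools docker curl && docker compose version
```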

Installation

Step 1: Clone the Repository

git clone https://github.com/agent-otto/otto.git
cd otto

Step 2: Configure

Run the interactive setup wizard:

./otto configure

The wizard walks through each configuration section:

  1. LLM Provider (required) -- Choose Gemini, OpenAI, or Ollama and provide your API key
  2. Web Search -- Tavily API key for web search capability
  3. Specialization -- Focus area for the agent (e.g., "marketing assistant")
  4. Slack Integration -- Bot token and signing secret
  5. Email Integration -- SMTP and IMAP settings
  6. Redis Password -- Recommended for production
  7. LangSmith Tracing -- Optional observability

You can re-run the wizard at any time to change settings. Pressing Enter at any prompt keeps the current value. See Configuration for a detailed explanation of every setting.
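For reference, the wizard persists its answers in the project's .env file. An illustrative fragment, using only variable names that appear elsewhere in this guide (all values are placeholders):

```shell
# Illustrative .env fragment -- variable names taken from this guide,
# values are placeholders. See Configuration for the full list of settings.
FRONTEND_URL=https://otto.yourcompany.com
REDIS_PASSWORD=change-me
CLOUDFLARE_TUNNEL_TOKEN=your-token-here
BACKEND_IMAGE_TAG=v1.2.0
FRONTEND_IMAGE_TAG=v1.2.0
```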

Step 3: Start Otto

./otto start

This pulls the latest container images from the GitHub Container Registry, creates the necessary data directories, and starts all services. On first run, expect the image pull to take a few minutes depending on your connection speed.

Otto will be running at http://localhost:3000.
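Scripted deployments may want to block until the stack is actually serving rather than assume it. A minimal sketch, assuming the health endpoint documented below (the wait_until helper is hypothetical):

```shell
#!/bin/sh
# Hypothetical readiness helper -- retries a command once per second until it
# succeeds or the attempt budget is exhausted.
wait_until() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: block for up to 60 seconds until the backend health check passes.
# wait_until 60 curl -fsS http://localhost:8000/api/health
```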

Architecture Overview

A production Otto deployment runs four services managed by Docker Compose:

                  +------------------+
                  |    Frontend      |
                  |  (Next.js/nginx) |
                  |    port 3000     |
                  +--------+---------+
                           |
                  +--------v---------+
                  |    Backend API   |
                  |    (FastAPI)     |
                  |    port 8000     |
                  +----+--------+----+
                       |        |
              +--------v--+  +--v-----------+
              |   Redis   |  |  ARQ Worker  |
              | (queues + |  | (async task  |
              |  state)   |  |  processor)  |
              +-----------+  +--------------+

Frontend -- A statically exported Next.js application served by nginx. Handles the web UI and reverse-proxies API requests to the backend.

Backend API -- FastAPI server handling REST API requests, user management, file uploads, Slack webhooks, and MCP server management.

ARQ Worker -- Async task processor that runs agent tasks using LangGraph. Handles task execution, subtask delegation, project summarization, and scheduled tasks. Configured for up to 4 concurrent jobs with a 30-minute timeout.

Redis -- Provides task queues (interactive and background priority), LangGraph checkpoint storage, conversation state caching, and distributed locks for subagent coordination.

All services share a Docker network. The frontend nginx proxies API paths to the backend using Docker's internal DNS. Redis is not exposed to the host in production mode.

Reverse Proxy Setup

In production, Otto should sit behind a reverse proxy that handles TLS termination. The frontend listens on port 3000 and already proxies API requests to the backend internally, so you only need to expose a single port.

Option A: nginx

server {
    listen 443 ssl http2;
    server_name otto.yourcompany.com;
 
    ssl_certificate     /etc/ssl/certs/otto.pem;
    ssl_certificate_key /etc/ssl/private/otto.key;
 
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
 
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
 
        # WebSocket support (if needed for future features)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
 
    # Increase timeouts for long-running API calls
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
}
 
server {
    listen 80;
    server_name otto.yourcompany.com;
    return 301 https://$host$request_uri;
}

After configuring nginx, update your .env to reflect the public URL:

FRONTEND_URL=https://otto.yourcompany.com

Option B: Cloudflare Tunnel

Cloudflare Tunnels provide HTTPS access without managing certificates or opening inbound ports. Otto includes built-in support for this.

  1. Create a tunnel in your Cloudflare dashboard and obtain a tunnel token.

  2. Add the token to your .env:

    CLOUDFLARE_TUNNEL_TOKEN=your-token-here
  3. Start Otto with the tunnel profile:

    docker compose -f docker-compose.yml --profile tunnel up -d

    This starts an additional cloudflared container that establishes an outbound connection to Cloudflare's network and routes traffic to your Otto frontend.

  4. In Cloudflare, configure the tunnel's public hostname to route to http://frontend:3000.

SSL/TLS Considerations

  • Always run Otto behind HTTPS in production. The backend sets session cookies and handles API keys that must be encrypted in transit.
  • If using a self-signed certificate for internal deployments, configure your clients to trust the CA.
  • Let's Encrypt with certbot is a solid free option for publicly accessible deployments.
  • The built-in nginx in the frontend container already sets security headers (X-Frame-Options, X-Content-Type-Options, X-XSS-Protection) and enables gzip compression.

Data Persistence

All persistent data lives under the data/ directory in the project root, bind-mounted into containers. Nothing is stored inside the containers themselves, so you can freely recreate them without data loss.

Storage Layout

otto/
  data/
    mcp/                    # MCP server configuration
      mcp-config.json       # Server definitions (mounted into containers)
    sqlite/                 # Primary database
      otto.db               # Users, tasks, logs, notifications, projects, schedules
      otto.db-shm           # WAL shared memory
      otto.db-wal           # Write-ahead log
    redis/                  # Redis persistence
      appendonly.aof        # Append-only file (primary)
      dump.rdb              # RDB snapshots (secondary)
    sandbox/                # Agent working directory for file operations
    tunnel/                 # Cloudflare tunnel credentials (optional)
    uploads/                # Files uploaded by users
    team_memory/            # Cross-session agent memory
      otto_memory.db        # SQLite with vector embeddings
  logs/                     # Application log files
  skills/                   # Skill definitions (user-customizable)

What Each Store Contains

SQLite (data/sqlite/otto.db) -- All structured application data: user accounts, tasks and their conversation logs, output artifacts, notifications, Slack channel projects and summaries, scheduled tasks, and channel memberships. Runs in WAL mode for concurrent access and crash recovery.

Redis (data/redis/) -- Task queues (interactive and background priority), LangGraph agent execution checkpoints, conversation state cache (24-hour TTL), MCP configuration, pending message buffers for project summarization, and distributed locks. Persisted via AOF (fsync every second) with RDB snapshots as a secondary mechanism.

Team Memory (data/team_memory/otto_memory.db) -- A separate SQLite database storing vector embeddings (768-dimensional) that enable the agent to recall information across sessions. Uses cosine similarity search.

Sandbox (data/sandbox/) -- Working directory where the agent reads and writes files during task execution.

Uploads (data/uploads/) -- Files uploaded by users through the web UI.

Backup and Restore

Creating a Backup

Otto must be running when you create a backup (the script needs access to the Redis container).

./otto backup

This creates a timestamped directory under backups/:

backups/2026-02-19_14-30-00/
  sqlite_data.tar.gz    # SQLite database files
  redis_dump.rdb        # Redis snapshot
  .env                  # Environment configuration

The backup script triggers a Redis BGSAVE before copying the dump file to ensure consistency.

Restoring from a Backup

./otto restore backups/2026-02-19_14-30-00

The restore process:

  1. Validates the backup directory exists
  2. Stops all running Otto containers
  3. Restores SQLite data from the archive
  4. Restores the Redis dump file
  5. Restores the .env file

After a restore completes, you must manually restart Otto:

./otto start

Backup Strategy Recommendations

  • Run backups daily via cron: 0 2 * * * cd /path/to/otto && ./otto backup
  • Copy backup directories to off-host storage (S3, NFS, or another server)
  • Test restores periodically to verify backup integrity
  • Keep at least 7 days of backups
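The 7-day retention above can be automated. A sketch, assuming the timestamped backups/ layout shown earlier (prune_backups is a hypothetical helper, not an otto subcommand):

```shell
#!/bin/sh
# Hypothetical retention helper -- deletes timestamped backup directories
# older than N days. Not an otto subcommand; run from the otto/ project
# root after copying backups off-host.
prune_backups() {
  dir=$1
  days=$2
  find "$dir" -mindepth 1 -maxdepth 1 -type d -mtime +"$days" -exec rm -rf {} +
}

# Example: prune_backups backups 7
```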

Monitoring and Health Checks

Health Endpoint

The backend exposes a health check at /api/health:

curl -s http://localhost:8000/api/health | python3 -m json.tool

The response includes service status, deployment type, and version information.
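For unattended monitoring, the endpoint can be polled from cron. An illustrative crontab stanza (the logger call is a placeholder; substitute your own alerting mechanism):

```shell
# Illustrative crontab entry: poll the health endpoint every 5 minutes and
# write a syslog warning on failure. Replace logger with your own alerting.
*/5 * * * * curl -fsS http://localhost:8000/api/health > /dev/null || logger -t otto "backend health check failed"
```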

Container Health Checks

Docker Compose configures a built-in health check for the backend container:

  • Endpoint: http://localhost:8000/api/health
  • Interval: 30 seconds
  • Timeout: 10 seconds
  • Retries: 3

You can see the health status of all containers with:

./otto status

Log Monitoring

View live logs from all services:

./otto logs

Or target a specific service:

./otto logs otto-backend    # API server
./otto logs arq-worker      # Task worker
./otto logs redis           # Redis

LangSmith Tracing (Optional)

For detailed observability into agent execution, enable LangSmith tracing through the configuration wizard (./otto configure, section 7). This sends trace data for every LLM call and tool invocation to the LangSmith platform, which is useful for debugging agent behavior and monitoring token usage.

Updating Otto

To update to the latest version:

./otto update

This command performs the following steps:

  1. Downloads the latest CLI script, Compose files, and configuration templates from GitHub
  2. Sets file permissions
  3. Updates default skills (without overwriting user-modified skills)
  4. Ensures the otto symlink in /usr/local/bin is current
  5. Merges any new configuration options into your .env while preserving your values
  6. Pulls the latest Docker images from the container registry
  7. Recreates all containers with the new images

Your data is preserved across updates. The settings sync step adds new configuration options with their defaults but never removes or changes your existing values.

Pinning Image Versions

By default, Otto pulls the latest tag. To pin to a specific version, set these in your .env:

BACKEND_IMAGE_TAG=v1.2.0
FRONTEND_IMAGE_TAG=v1.2.0

When pinned, ./otto update will still download the latest CLI and configuration files, but the Docker images will remain at the pinned version.

Security Considerations

Redis Authentication

By default, Redis runs without a password. For production, set a Redis password through the configuration wizard or directly in .env:

REDIS_PASSWORD=your-strong-password-here

The CLI can auto-generate a strong random password during configuration. The password is applied to both the Redis server and all client connections automatically.

Network Exposure

In production mode, only two ports are exposed to the host:

Port  Service            Purpose
3000  Frontend (nginx)   Web UI and API proxy
8000  Backend (FastAPI)  Direct API access

Redis (port 6379) is not exposed at all. If a reverse proxy fronts Otto, you can restrict exposure further by binding the published ports to localhost:

In your docker-compose.yml, change the port mappings to:

ports:
  - "127.0.0.1:3000:3000"  # Only accessible via localhost
  - "127.0.0.1:8000:8000"

API Keys and Secrets

  • The .env file contains all secrets and is excluded from version control via .gitignore.
  • Never commit .env to your repository.
  • The configuration wizard masks secret values when displaying them (showing only the first and last 4 characters).
  • If you need to upload secrets to GitHub Actions for CI/CD, use the included helper: ./scripts/gh-secrets-upload.sh.

Container Restart Policy

All production containers are configured with restart: unless-stopped, meaning they will automatically restart after a crash or host reboot (assuming Docker is configured to start on boot).
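The "Docker starts on boot" assumption can be verified on systemd-based hosts (e.g. Ubuntu or Debian); this is standard systemd tooling, not part of Otto:

```shell
# Confirm the Docker daemon is enabled at boot, and enable it if not.
systemctl is-enabled docker
sudo systemctl enable docker
```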

File Permissions

The data/ directory and its subdirectories are created with default permissions. On shared servers, consider restricting access:

chmod 700 data/
chmod 600 .env

Troubleshooting

Services Will Not Start

Check that Docker is running and that no other services are using ports 3000 or 8000:

docker ps
lsof -i :3000
lsof -i :8000

View container logs for error details:

./otto logs

If containers start but immediately exit, check for configuration errors:

docker compose -f docker-compose.yml logs --tail=50

Backend Reports Unhealthy

curl -v http://localhost:8000/api/health

Common causes:

  • Missing or invalid LLM API key -- re-run ./otto configure
  • Redis not reachable -- check that the Redis container is running
  • Port conflict on 8000 -- stop any other service using that port

Tasks Not Processing

The ARQ worker handles task execution. Check its logs:

./otto logs arq-worker

Common causes:

  • Worker crashed and is restarting -- check for Python errors in the logs
  • Redis connection lost -- verify the Redis container is healthy
  • Job timeout -- the default is 30 minutes; long-running tasks may need investigation

Frontend Cannot Reach the Backend

The frontend's built-in nginx proxies API paths to the backend. If the UI loads but shows connection errors:

  1. Verify the backend is running: curl http://localhost:8000/api/health
  2. Check that both containers are attached to the same Docker network: list networks with docker network ls, then confirm both containers appear in docker network inspect <network-name>
  3. Look at the frontend nginx logs: docker logs otto-frontend

Redis Connection Issues

If you set a Redis password after initial setup, make sure the password is applied consistently:

# Verify Redis is accepting connections
docker compose -f docker-compose.yml exec redis redis-cli ping
# Should return: PONG
 
# If using a password:
docker compose -f docker-compose.yml exec redis redis-cli -a your-password ping

Resetting Everything

To completely reset Otto to a clean state:

./otto stop
make clean
./otto start

This wipes all databases and state. You will need to go through the onboarding flow again.

Using a Local LLM (Ollama)

For deployments where data must not leave your network:

  1. Install Ollama on the host machine:

    curl -fsSL https://ollama.com/install.sh | sh
    ollama pull llama3.2
  2. Configure Otto to use it:

    ./otto configure

    Select "Ollama (local)" as the LLM provider. The default URL http://host.docker.internal:11434 allows the Docker containers to reach Ollama running on the host.

Note that local models vary in quality. For best results with Ollama, use the largest model your hardware can support.
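Before pointing Otto at Ollama, you can confirm it is serving on the host by querying its model-listing endpoint (GET /api/tags is part of Ollama's standard REST API); if that works, the containers should reach the same service via host.docker.internal:

```shell
# Verify Ollama is serving on the host and that the pulled model is listed.
# The response is JSON; expect llama3.2 among the model names.
curl -fsS http://localhost:11434/api/tags
```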