# Environment Variables Reference
> Complete reference for all environment variables consumed by the Otto backend, ARQ workers, and Docker infrastructure.
## Quick Start: Minimum Required

| Variable | Purpose |
|---|---|
| REDIS_URL | Redis connection (queues, checkpoints, cache) |
| OPENAI_API_KEY or GOOGLE_API_KEY or USE_OLLAMA=true | At least one LLM provider |
Everything else has sensible defaults and is optional unless you enable specific integrations.
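A minimal .env for local development might therefore look like this (the API key value is a placeholder):

```env
# Minimum viable configuration: Redis plus one LLM provider
REDIS_URL=redis://localhost:6379/0
OPENAI_API_KEY=sk-...placeholder...
```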
## LLM Provider

One LLM provider is required. If multiple are configured, precedence is: Gemini > Ollama > OpenAI.

| Variable | Default | Required | Description |
|---|---|---|---|
| OPENAI_API_KEY | -- | Yes (if using OpenAI) | OpenAI API key for LLM and embeddings |
| OPENAI_MODEL | gpt-5-mini | No | OpenAI model name for the main agent |
| OPEN_AI_BASE | https://api.openai.com/v1 | No | Custom OpenAI-compatible API base URL |
| USE_GEMINI | false | No | Set to true to use Google Gemini as LLM provider |
| GEMINI_MODEL | gemini-3.1-pro-preview-customtools | No | Gemini model name |
| GOOGLE_API_KEY | -- | Yes (if USE_GEMINI=true) | Google API key for Gemini LLM and embeddings |
| USE_OLLAMA | false | No | Set to true to use Ollama as local LLM provider |
| OLLAMA_MODEL | gpt-oss:latest | No | Ollama model name for the main agent |
| OLLAMA_BASE_URL | http://localhost:11434 | No | Ollama server base URL |
| MISTRAL_API_KEY | -- | No | Mistral API key (passed through docker-compose only) |
Notes:
- OPENAI_API_KEY is also used for embeddings when OpenAI is the active provider.
- GOOGLE_API_KEY is only needed when USE_GEMINI=true.
- OPEN_AI_BASE allows using OpenAI-compatible APIs (e.g., Azure OpenAI, local proxies).
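For a fully local setup, the same variables can point at Ollama instead. The values below are simply the defaults documented above, spelled out explicitly:

```env
USE_OLLAMA=true
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=gpt-oss:latest
OLLAMA_EMBED_MODEL=nomic-embed-text:latest
```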
## Embeddings

| Variable | Default | Required | Description |
|---|---|---|---|
| EMBEDDING_MODEL | models/gemini-embedding-001 (Gemini) or text-embedding-3-small (OpenAI) | No | Override the embedding model name; default depends on active LLM provider |
| OLLAMA_EMBED_MODEL | nomic-embed-text:latest | No | Embedding model for Ollama provider |
The embedding provider is automatically selected based on the active LLM provider. These variables override the default model name only.
## Database and Redis

| Variable | Default | Required | Description |
|---|---|---|---|
| DATABASE_URL | sqlite:///./otto.db | No | Database connection string (SQLite or PostgreSQL) |
| REDIS_URL | redis://localhost:6379/0 | Yes | Redis connection URL for caching, queues, and checkpoints |
| REDIS_TYPE | Auto-detected from URL | No | Redis implementation type: standard or upstash |
| REDIS_PASSWORD | -- | No | Redis authentication password. Used in docker-compose to configure both the Redis server and connection URLs |
| UPSTASH_REDIS_REST_URL | -- | No | Upstash Redis REST URL (alternative to standard Redis) |
| UPSTASH_REDIS_REST_TOKEN | -- | No | Upstash Redis REST token |
Shared between services: REDIS_URL, DATABASE_URL, and REDIS_PASSWORD must match between the backend API and ARQ worker processes. Docker-compose handles this automatically.
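For example, pointing both services at PostgreSQL and a password-protected Redis might look like this (hostnames and credentials are placeholders, and the exact DSN scheme can vary with your database driver):

```env
DATABASE_URL=postgresql://otto:changeme@db:5432/otto
REDIS_URL=redis://:changeme@redis:6379/0
REDIS_PASSWORD=changeme
```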
## Deployment and Networking

| Variable | Default | Required | Description |
|---|---|---|---|
| DEPLOYMENT_MODE | self-hosted | No | self-hosted or hosted. Controls auth middleware, user limits, and Slack behavior |
| DEPLOYMENT_TYPE | cloud_frontend | No | cloud_frontend or onprem_frontend. Controls CORS origin configuration |
| FRONTEND_URL | https://otto.yourcompany.com | No | Frontend URL, used for CORS and links in Slack messages |
| BACKEND_API_URL | -- | No | Backend API URL (logged on startup for debugging) |
| ALLOWED_ORIGINS | -- | No | Comma-separated CORS origins. Fallback when DEPLOYMENT_TYPE is neither cloud_frontend nor onprem_frontend |
| DEMO_MODE | -- | No | Enables demo mode (docker-compose only) |
| CLOUDFLARE_TUNNEL_TOKEN | -- | No | Cloudflare tunnel token for the tunnel service in docker-compose |
Authentication behavior by mode:

| DEPLOYMENT_MODE | Authentication | Swagger UI |
|---|---|---|
| self-hosted | None (all requests pass through) | Available at /docs |
| hosted | Firebase ID token required via Authorization: Bearer <token> | Disabled |
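A hosted deployment therefore needs at least the following (project ID and URL are placeholders):

```env
DEPLOYMENT_MODE=hosted
FIREBASE_PROJECT_ID=my-firebase-project
FRONTEND_URL=https://otto.example.com
```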
## Storage

| Variable | Default | Required | Description |
|---|---|---|---|
| STORAGE_BACKEND | local | No | Storage backend: local, gcs, or google_drive |
| LOCAL_STORAGE_PATH | /tmp/otto-files | No | File system path for local storage and sandbox |
| GCS_BUCKET_NAME | -- | Yes (if STORAGE_BACKEND=gcs) | Google Cloud Storage bucket name |
| GOOGLE_DRIVE_FOLDER_ID | -- | Yes (if STORAGE_BACKEND=google_drive) | Google Drive root folder ID |
| GOOGLE_SERVICE_ACCOUNT_JSON | -- | Conditional | Google service account credentials as JSON or base64-encoded JSON |
| GOOGLE_SERVICE_ACCOUNT_FILE | -- | Conditional | Path to Google service account JSON key file (alternative to GOOGLE_SERVICE_ACCOUNT_JSON) |
| STORAGE_QUOTA_GB | 0 (unlimited) | No | Maximum storage quota in GB. 0 disables the limit |
Notes:
- GOOGLE_SERVICE_ACCOUNT_JSON and GOOGLE_SERVICE_ACCOUNT_FILE are mutually exclusive. Provide one or the other for GCS/Drive backends.
- LOCAL_STORAGE_PATH is also used as the sandbox root for file tools and skill scripts.
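A GCS-backed deployment might look like this (bucket name and key-file path are placeholders):

```env
STORAGE_BACKEND=gcs
GCS_BUCKET_NAME=my-otto-files
GOOGLE_SERVICE_ACCOUNT_FILE=/run/secrets/otto-sa.json
```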
## Memory and Vector Store

| Variable | Default | Required | Description |
|---|---|---|---|
| OTTO_MEMORY_DB | <LOCAL_STORAGE_PATH>/otto_memory.db | No | Path to the SQLite vector memory database |
| COMPACTION_THRESHOLD | 12 | No | Message count before conversation compaction triggers |
| MAX_RECENT_MESSAGES | 8 | No | Number of recent messages to keep uncompacted |
## Slack Integration

| Variable | Default | Required | Description |
|---|---|---|---|
| SLACK_MCP_XOXB_TOKEN | -- | Yes (if using Slack) | Slack bot user OAuth token (xoxb-...). Primary token used across all Slack functionality |
| SLACK_SIGNING_SECRET | -- | Yes (if using Slack) | Slack app signing secret for request HMAC verification |
| SLACK_BOT_USER_ID | -- | No | Slack bot user ID for detecting self-mentions in channels |
| SLACK_BOT_TOKEN | -- | No | Legacy; docker-compose only. Python code reads SLACK_MCP_XOXB_TOKEN instead |
| SLACK_MCP_XOXC_TOKEN | -- | No | Slack user cookie token (MCP Slack tool, legacy) |
| SLACK_MCP_XOXD_TOKEN | -- | No | Slack user session token (MCP Slack tool, legacy) |
Notes:
- SLACK_MCP_XOXB_TOKEN must be set on both the backend and worker. Docker-compose shares it automatically.
- Slack endpoints use HMAC signature verification via SLACK_SIGNING_SECRET, independent of Firebase auth.
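A typical Slack configuration looks like this (token, secret, and user ID are placeholders):

```env
SLACK_MCP_XOXB_TOKEN=xoxb-...placeholder...
SLACK_SIGNING_SECRET=...placeholder...
SLACK_BOT_USER_ID=U0123456789
```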
## Email Integration

| Variable | Default | Required | Description |
|---|---|---|---|
| SMTP_HOST | -- | Yes (if sending email) | SMTP server hostname (e.g., smtp.gmail.com) |
| SMTP_PORT | 465 | Yes (if sending email) | SMTP server port |
| SMTP_USER | -- | Yes (if sending/receiving email) | SMTP/IMAP username (usually the email address) |
| SMTP_PASS | -- | Yes (if sending/receiving email) | SMTP/IMAP password or app password |
| IMAP_HOST | -- | Yes (if receiving email) | IMAP server hostname (e.g., imap.gmail.com) |
| IMAP_PORT | 993 | No | IMAP server port |
| IMAP_CHECK | 60 | No | Email check interval in seconds |
Note: SMTP_USER and SMTP_PASS are shared between SMTP (outbound) and IMAP (inbound) connections.
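A Gmail-style configuration, using the example hosts from the table above, might look like this (the address and app password are placeholders):

```env
SMTP_HOST=smtp.gmail.com
SMTP_PORT=465
IMAP_HOST=imap.gmail.com
IMAP_PORT=993
SMTP_USER=otto-bot@example.com
SMTP_PASS=app-password-placeholder
```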
## MCP (Model Context Protocol)

| Variable | Default | Required | Description |
|---|---|---|---|
| MCP_CONFIG_JSON | -- | No | Full MCP server configuration as a JSON string (preferred in production) |
| MCP_SERVERS | -- | No | File path to mcp-config.json (legacy alternative to MCP_CONFIG_JSON) |
| TAVILY_API_KEY | -- | No | Tavily API key, passed through to MCP server environment |
MCP_CONFIG_JSON format example:
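The example was missing here; the sketch below follows the common mcpServers config convention used by MCP clients. The server name, command, and env keys are illustrative, not Otto-specific, so check your actual mcp-config.json for the exact schema:

```json
{
  "mcpServers": {
    "tavily": {
      "command": "npx",
      "args": ["-y", "tavily-mcp"],
      "env": { "TAVILY_API_KEY": "tvly-...placeholder..." }
    }
  }
}
```

When passing this via MCP_CONFIG_JSON, collapse it to a single line and quote it appropriately for your shell or .env file.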
## Agent Customization

| Variable | Default | Required | Description |
|---|---|---|---|
| SPECIALIZATION | General Assistance | No | Agent specialization/persona (e.g., Marketing Assistant) |
| ADDITIONAL_INSTRUCTIONS | -- | No | Extra instructions appended to the agent system prompt |
| OTTO_PERSONA_PATH | -- | No | Path to a persona/SOUL.md file for custom agent persona |
| OTTO_PROJECTS_DIR | -- | No | Directory containing project-specific persona files |
Note: When a SOUL.md file is loaded from OTTO_PERSONA_PATH, its specialization and instructions fields override SPECIALIZATION and ADDITIONAL_INSTRUCTIONS respectively.
## Skills System

| Variable | Default | Required | Description |
|---|---|---|---|
| SKILLS_REFRESH_INTERVAL | 60 | No | Seconds between skill directory rescans |
| OTTO_SKILL_THRESHOLD | 0.35 | No | Minimum similarity score for a skill to match user input |
| OTTO_MAX_SKILLS | 3 | No | Maximum number of skills to load per task |
| OTTO_SCRIPT_TIMEOUT | 120 | No | Timeout in seconds for skill script execution |
## Performance Tuning

| Variable | Default | Required | Description |
|---|---|---|---|
| MAX_PARALLEL_TOOL_CALLS | 5 | No | Maximum tool calls the agent executes in parallel per turn |
| MEDIUM_RESULT_THRESHOLD | 4000 | No | Character count triggering medium-length result truncation |
| LARGE_RESULT_THRESHOLD | 12000 | No | Character count triggering large-result truncation (first 20 lines only) |
| ENABLE_INTENT_PREPROCESSING | false | No | Enable semantic intent preprocessing for tool matching (experimental) |
## Hosted / Multi-Tenant

These variables only apply when DEPLOYMENT_MODE=hosted.

| Variable | Default | Required | Description |
|---|---|---|---|
| MAX_USERS | 0 (unlimited) | No | Maximum number of users allowed. 0 means unlimited |
| TIER | self-hosted | No | Deployment tier label returned in the team info endpoint |
| FIREBASE_PROJECT_ID | -- | Yes (if DEPLOYMENT_MODE=hosted) | Firebase project ID for authentication middleware |
| OTTO_VERSION | dev | No | Application version string reported in the health endpoint |
## LangSmith / LangChain Tracing

Both LANGCHAIN_* and LANGSMITH_* prefixes are supported. They are synced automatically on startup: setting one sets both.

| Variable | Default | Required | Description |
|---|---|---|---|
| LANGCHAIN_API_KEY | -- | No | LangSmith API key. Enables tracing when set |
| LANGCHAIN_TRACING_V2 | false | No | Enable LangChain tracing v2 |
| LANGCHAIN_ENDPOINT | https://api.smith.langchain.com | No | LangSmith API endpoint URL |
| LANGCHAIN_PROJECT | default | No | LangSmith project name for trace grouping |
| LANGSMITH_API_KEY | Synced from LANGCHAIN_API_KEY | No | Alias for LANGCHAIN_API_KEY |
| LANGSMITH_TRACING | false | No | Alias for LANGCHAIN_TRACING_V2 |
| LANGSMITH_ENDPOINT | Synced from LANGCHAIN_ENDPOINT | No | Alias for LANGCHAIN_ENDPOINT |
| LANGSMITH_PROJECT | Synced from LANGCHAIN_PROJECT | No | Alias for LANGCHAIN_PROJECT |
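To enable tracing, set either prefix; the API key value and project name below are placeholders:

```env
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=lsv2-...placeholder...
LANGCHAIN_PROJECT=otto-dev
```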
## Docker / Infrastructure

These variables are consumed by docker-compose or the otto CLI, not by Python application code directly.

| Variable | Default | Required | Description |
|---|---|---|---|
| PYTHONPATH | /app | No | Python module search path inside the container |
| FASTAPI_ENV | development | No | FastAPI environment hint (dev compose only) |
| BACKEND_IMAGE_TAG | latest | No | Docker image tag for the backend container |
| FRONTEND_IMAGE_TAG | latest | No | Docker image tag for the frontend container |
| GITHUB_TOKEN | -- | No | GitHub PAT for authenticated image pulls during otto update |
| OTTO_MEMORY_DB_PATH | /data/team_memory/otto_memory.db | No | Docker-level path for the team memory DB volume mount. Python reads OTTO_MEMORY_DB instead |
## Variables That Must Match Across Services

When running the backend API and ARQ worker as separate processes (the default in Docker), these variables must have identical values in both:

| Variable | Why |
|---|---|
| REDIS_URL | Both use the same Redis for queues and checkpoints |
| DATABASE_URL | Both read/write the same task database |
| OPENAI_API_KEY / GOOGLE_API_KEY | Worker makes LLM calls |
| OPENAI_MODEL / GEMINI_MODEL / OLLAMA_MODEL | Model must match between API and worker |
| USE_GEMINI / USE_OLLAMA | Provider selection must match |
| SLACK_MCP_XOXB_TOKEN | Worker sends Slack notifications on task completion |
| STORAGE_BACKEND / LOCAL_STORAGE_PATH | Worker reads/writes files |
| MCP_CONFIG_JSON / MCP_SERVERS | Worker initializes MCP tool servers |
Docker-compose passes these from a single .env file to both services automatically.