homeai/homeai-llm/docker/docker-compose.yml
Aodhan Collins 7978eaea14 Add self-deploying setup scripts for all sub-projects (P1-P8)
- Root setup.sh orchestrator with per-phase dispatch (./setup.sh p1..p8 | all | status)
- Makefile convenience targets (make infra, make llm, make status, etc.)
- scripts/common.sh: shared bash library for OS detection, Docker helpers,
  service management (launchd/systemd), package install, env management
- .env.example + .gitignore: shared config template and secret exclusions
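The per-phase dispatch above can be sketched as a small bash `case` statement. This is a hypothetical illustration, not the repo's actual script: `run_phase` and `dispatch` are stand-in names, and the real setup.sh would source scripts/common.sh and invoke each sub-project's own installer.

```shell
#!/usr/bin/env bash
# Sketch of the ./setup.sh p1..p8 | all | status dispatch (names are illustrative).
set -euo pipefail

run_phase() {
  # Placeholder: the real orchestrator would run the phase's own setup.sh here.
  echo "running phase $1"
}

dispatch() {
  case "${1:-}" in
    p[1-8]) run_phase "$1" ;;
    all)    local p; for p in p1 p2 p3 p4 p5 p6 p7 p8; do run_phase "$p"; done ;;
    status) echo "status: ok" ;;
    *)      echo "usage: ./setup.sh p1..p8 | all | status" >&2; return 1 ;;
  esac
}

dispatch "${@:-status}"
```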

P1 (homeai-infra): full implementation
- docker-compose.yml: Uptime Kuma, code-server, n8n
- Note: Home Assistant, Portainer, Gitea are pre-existing instances
- setup.sh: Docker install, homeai network, container health checks

P2 (homeai-llm): full implementation
- Ollama native install with CUDA/ROCm/Metal auto-detection
- launchd plist (macOS) + systemd service (Linux) for auto-start
- scripts/pull-models.sh: idempotent model puller from manifest
- scripts/benchmark.sh: tokens/sec measurement per model
- Open WebUI on port 3030 (avoids Gitea :3000 conflict)
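An idempotent manifest-driven puller like the one described can be sketched as below. This is an assumption-laden illustration, not the repo's pull-models.sh: the one-model-tag-per-line manifest format and the `pull_missing` name are hypothetical; only the `ollama list` / `ollama pull` CLI commands are real.

```shell
#!/usr/bin/env bash
# Sketch: pull each model named in a manifest, skipping ones already present.
# Manifest format assumed: one model tag per line, '#' comments allowed.
set -euo pipefail

MANIFEST="${1:-models.txt}"

pull_missing() {
  local installed model
  # `ollama list` prints installed model names in its first column.
  installed="$(ollama list 2>/dev/null || true)"
  while IFS= read -r model; do
    [ -z "$model" ] && continue          # skip blank lines
    case "$model" in "#"*) continue ;; esac  # skip comments
    if grep -q "^${model}" <<<"$installed"; then
      echo "skip: $model (already pulled)"
    else
      echo "pull: $model"
      ollama pull "$model"
    fi
  done < "$MANIFEST"
}

if [ -f "$MANIFEST" ]; then
  pull_missing
else
  echo "manifest not found: $MANIFEST" >&2
fi
```

Re-running the script is safe: already-pulled models are skipped, which is what makes it idempotent.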

P3-P8: working stubs with prerequisite checks and TODO sections

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 21:10:53 +00:00


---
# homeai-llm/docker/docker-compose.yml
# P2 — Open WebUI
#
# Ollama runs NATIVELY (not in Docker) for GPU acceleration.
# This compose file only starts the Open WebUI frontend.
#
# Prerequisites:
# - Ollama installed and running on the host at port 11434
# - `homeai` Docker network exists (created by P1 setup)
#
# Usage:
# docker compose -f docker/docker-compose.yml up -d
name: homeai-llm

services:
  # ─── Open WebUI ──────────────────────────────────────────────────────────
  open-webui:
    container_name: homeai-open-webui
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped
    ports:
      - "3030:8080" # Exposed on 3030 to avoid conflict with Gitea (3000)
    volumes:
      - ${DATA_DIR:-~/homeai-data}/open-webui:/app/backend/data
    environment:
      # Connect to Ollama on the host
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY:-changeme_random_32_char}
      - ENABLE_SIGNUP=true
      - DEFAULT_MODELS=llama3.3:70b
    extra_hosts:
      - "host.docker.internal:host-gateway" # Linux compat
    networks:
      - homeai
    labels:
      - homeai.service=open-webui
      - homeai.url=http://localhost:3030

networks:
  homeai:
    external: true
    name: homeai
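The compose file falls back to a placeholder `WEBUI_SECRET_KEY` when none is set. A sketch of seeding a real 32-character key into the `.env` file Docker Compose reads — `ensure_secret_key` is a hypothetical helper, not part of the repo:

```shell
#!/usr/bin/env bash
# Sketch: write WEBUI_SECRET_KEY into an env file if it is not already set.
set -euo pipefail

ensure_secret_key() {
  local env_file="$1"
  touch "$env_file"
  if ! grep -q '^WEBUI_SECRET_KEY=' "$env_file"; then
    # 16 random bytes -> 32 hex characters
    printf 'WEBUI_SECRET_KEY=%s\n' "$(openssl rand -hex 16)" >> "$env_file"
  fi
}

ensure_secret_key "${1:-.env}"
```

Because the key is only appended when missing, re-running the script never rotates an existing secret (which would invalidate Open WebUI sessions).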