Add self-deploying setup scripts for all sub-projects (P1-P8)

- Root setup.sh orchestrator with per-phase dispatch (./setup.sh p1..p8 | all | status); a sketch of the dispatch follows this list
- Makefile convenience targets (make infra, make llm, make status, etc.)
- scripts/common.sh: shared bash library for OS detection, Docker helpers,
  service management (launchd/systemd), package install, env management
- .env.example + .gitignore: shared config template and secret exclusions
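
A rough sketch of the root dispatch logic. The per-phase directory naming
(e.g. p2-homeai-llm/) is an assumption, not taken from the actual repo:

    #!/usr/bin/env bash
    # setup.sh -- hypothetical sketch of the p1..p8 | all | status dispatch
    set -euo pipefail
    cd "$(dirname "$0")"

    phase() {
      # Assumes one directory per phase, e.g. p2-homeai-llm/setup.sh
      ./p"$1"-*/setup.sh
    }

    case "${1:-}" in
      p[1-8]) phase "${1#p}" ;;
      all)    for n in $(seq 1 8); do phase "$n"; done ;;
      status) docker ps --format '{{.Names}}\t{{.Status}}' ;;
      *)      echo "usage: $0 {p1..p8|all|status}" >&2; exit 1 ;;
    esac

The Makefile targets would then be thin wrappers that invoke the same script.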

P1 (homeai-infra): full implementation
- docker-compose.yml: Uptime Kuma, code-server, n8n
- Note: Home Assistant, Portainer, Gitea are pre-existing instances
- setup.sh: Docker install, homeai network creation, container health checks (see the sketch below)
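
The network bootstrap and health checks are naturally idempotent; a sketch,
assuming the compose services define healthchecks and use these container
names (both assumptions, not taken from the real compose file):

    # Create the shared network once; safe to re-run.
    docker network inspect homeai >/dev/null 2>&1 || docker network create homeai

    docker compose up -d

    # Wait for each container to report healthy.
    for c in uptime-kuma code-server n8n; do
      until [ "$(docker inspect -f '{{.State.Health.Status}}' "$c" 2>/dev/null)" = healthy ]; do
        sleep 2
      done
      echo "$c: healthy"
    done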

P2 (homeai-llm): full implementation
- Ollama native install with CUDA/ROCm/Metal auto-detection (sketched after this list)
- launchd plist (macOS) + systemd service (Linux) for auto-start
- scripts/pull-models.sh: idempotent model puller from manifest (see sketch below)
- scripts/benchmark.sh: tokens/sec measurement per model
- Open WebUI on port 3030 (avoids Gitea :3000 conflict)
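
The accelerator detection presumably probes for vendor tooling; a
hypothetical helper (the actual common.sh function names aren't shown here):

    detect_accelerator() {
      if [ "$(uname -s)" = Darwin ]; then
        echo metal                                   # Apple GPUs via Metal
      elif command -v nvidia-smi >/dev/null 2>&1; then
        echo cuda                                    # NVIDIA driver present
      elif command -v rocminfo >/dev/null 2>&1; then
        echo rocm                                    # AMD ROCm stack present
      else
        echo cpu
      fi
    }

And the idempotent puller can skip anything `ollama list` already reports;
the models.txt manifest name is an assumption:

    while IFS= read -r model; do
      case "$model" in ''|\#*) continue ;; esac      # skip blanks and comments
      if ollama list | awk 'NR>1 {print $1}' | grep -qx "$model"; then
        echo "present: $model"
      else
        ollama pull "$model"
      fi
    done < models.txt

For tokens/sec, `ollama run <model> --verbose` prints eval timing stats
(an "eval rate" line in tokens/s) that a benchmark script can parse.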

P3-P8: working stubs with prerequisite checks and TODO sections (stub shape sketched below)
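
A stub of that shape mostly just gates on prerequisites; a sketch, where
require_cmd stands in for whatever check helper common.sh actually exposes:

    #!/usr/bin/env bash
    set -euo pipefail
    source "$(dirname "$0")/../scripts/common.sh"   # relative path is assumed

    require_cmd docker    # hypothetical common.sh helper: fail if missing
    require_cmd curl

    echo "prerequisites OK"
    # TODO: implement this phase's deployment steps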

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: Aodhan Collins
Date:   2026-03-04 21:10:53 +00:00
Parent: 38247d7cc4
Commit: 7978eaea14

23 changed files with 2525 additions and 0 deletions

@@ -0,0 +1,26 @@
[Unit]
Description=Ollama AI inference server (HomeAI)
Documentation=https://ollama.com
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=%i
ExecStart=/usr/local/bin/ollama serve
Restart=always
RestartSec=5

# Environment
Environment=OLLAMA_HOST=0.0.0.0:11434
Environment=OLLAMA_MODELS=/usr/share/ollama/.ollama/models

# Limits
LimitNOFILE=65536

# CUDA GPU support
# Uncomment and set if you have multiple GPUs:
# Environment=CUDA_VISIBLE_DEVICES=0

[Install]
WantedBy=default.target
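
Because User=%i makes this a templated unit, the file would be installed
under a name like ollama@.service (the filename isn't visible in this view)
and enabled per instance, roughly:

    sudo cp ollama@.service /etc/systemd/system/    # assumed filename
    sudo systemctl daemon-reload
    sudo systemctl enable --now "ollama@$USER"      # %i expands to the username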