Add self-deploying setup scripts for all sub-projects (P1-P8)

- Root setup.sh orchestrator with per-phase dispatch (./setup.sh p1..p8 | all | status); the dispatch pattern is sketched after this list
- Makefile convenience targets (make infra, make llm, make status, etc.)
- scripts/common.sh: shared bash library for OS detection, Docker helpers,
  service management (launchd/systemd), package install, env management
- .env.example + .gitignore: shared config template and secret exclusions
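
The orchestrator's dispatch reduces to a small case statement. A minimal sketch, assuming each sub-project lives in a directory named <phase>-<name> with its own setup.sh; the directory naming and helper names here are illustrative, not the exact code in this commit:

```bash
#!/usr/bin/env bash
# Sketch of the root setup.sh dispatch. Directory naming (p1-*, p2-*, ...)
# and the die helper are assumptions; the real scripts/common.sh presumably
# layers OS detection, Docker helpers, and service management on top.
set -euo pipefail

die() { echo "error: $*" >&2; exit 1; }

run_phase() {
  local dir
  dir=$(find . -maxdepth 1 -type d -name "${1}-*" | head -n 1)
  [[ -n "$dir" ]] || die "no sub-project directory matching ${1}-*"
  bash "$dir/setup.sh"
}

case "${1:-}" in
  p[1-8]) run_phase "$1" ;;
  all)    for p in p{1..8}; do run_phase "$p"; done ;;
  status) docker ps --format 'table {{.Names}}\t{{.Status}}' ;;
  *)      echo "usage: $0 {p1..p8|all|status}" >&2; exit 1 ;;
esac
```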

P1 (homeai-infra): full implementation
- docker-compose.yml: Uptime Kuma, code-server, n8n
- Note: Home Assistant, Portainer, and Gitea are pre-existing instances and are not managed by this compose file
- setup.sh: Docker install, homeai network creation, container health checks (see the sketch after this list)
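
A minimal sketch of the idempotent network setup and a bounded health wait, assuming the compose services set container_name and define healthchecks; the container names below are illustrative:

```bash
# Create the shared network only if it does not already exist.
docker network inspect homeai >/dev/null 2>&1 \
  || docker network create homeai

docker compose up -d

# Wait (bounded) for each container to report healthy; assumes the compose
# file sets container_name and a healthcheck for each service.
for c in uptime-kuma code-server n8n; do
  for _ in $(seq 1 30); do
    status=$(docker inspect -f '{{.State.Health.Status}}' "$c" 2>/dev/null || echo starting)
    [ "$status" = "healthy" ] && break
    echo "waiting for $c ($status) ..."
    sleep 2
  done
done
```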

P2 (homeai-llm): full implementation
- Ollama native install with CUDA/ROCm/Metal auto-detection (detection sketch below)
- launchd plist (macOS) + systemd service (Linux) for auto-start
- scripts/pull-models.sh: idempotent model puller from manifest (sketched below)
- scripts/benchmark.sh: tokens/sec measurement per model (sketched after the manifest below)
- Open WebUI on port 3030 (avoids Gitea :3000 conflict)
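
The accelerator auto-detection likely reduces to probing the vendor tooling; a sketch under that assumption (the function name is made up here):

```bash
# Pick an accelerator by probing what the host exposes; nvidia-smi and
# rocminfo are the standard vendor CLIs, and Darwin implies Metal.
detect_accelerator() {
  case "$(uname -s)" in
    Darwin) echo metal ;;
    Linux)
      if   command -v nvidia-smi >/dev/null 2>&1; then echo cuda
      elif command -v rocminfo   >/dev/null 2>&1; then echo rocm
      else echo cpu
      fi ;;
    *) echo cpu ;;
  esac
}
```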
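
Given the manifest format shown in the diff below (one model per line, # comments ignored), the puller can be made idempotent by consulting ollama list first. A sketch, with models.txt as an assumed manifest filename:

```bash
#!/usr/bin/env bash
# Sketch of an idempotent manifest puller; models.txt is an assumed name
# for the manifest file shown in the diff below.
set -euo pipefail

manifest="${1:-models.txt}"

while IFS= read -r line; do
  # Drop inline comments and whitespace; skip blank/comment-only lines.
  model="${line%%#*}"
  model="$(echo "$model" | xargs)"
  [ -z "$model" ] && continue

  # Idempotency: skip models ollama already has (bare names resolve to :latest).
  if ollama list | awk 'NR > 1 {print $1}' | grep -qx -e "$model" -e "${model}:latest"; then
    echo "skip  $model (already pulled)"
  else
    echo "pull  $model"
    ollama pull "$model"
  fi
done < "$manifest"
```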

P3-P8: working stubs with prerequisite checks and TODO sections; a minimal stub skeleton is sketched below
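
The stub shape is an assumption, but a prerequisite-checking skeleton might look like this (P5 and the specific checks are examples only):

```bash
#!/usr/bin/env bash
# Hypothetical stub for a later phase: verify prerequisites, then stop.
set -euo pipefail

echo "== P5 (example) prerequisite checks =="
command -v docker >/dev/null 2>&1 || { echo "docker is required"; exit 1; }

# Later phases presumably depend on the P2 LLM stack being up; Ollama's
# default API port is 11434.
curl -fsS http://localhost:11434/api/tags >/dev/null \
  || { echo "Ollama (P2) must be running first"; exit 1; }

# TODO: implement this phase
echo "stub: prerequisites OK, implementation pending"
```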

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
commit 7978eaea14 (parent 38247d7cc4)
Author: Aodhan Collins
Date: 2026-03-04 21:10:53 +00:00
23 changed files with 2525 additions and 0 deletions


@@ -0,0 +1,21 @@
# Ollama model manifest
# One model per line. Lines starting with # are ignored.
# Format: <model>:<tag> or just <model> for latest
#
# Pull all models: bash scripts/pull-models.sh
# Pull specific: ollama pull <model>
# ─── Primary (main conversation) ───────────────────────────────────────────────
llama3.3:70b
# ─── Alternative primary ───────────────────────────────────────────────────────
qwen2.5:72b
# ─── Fast / low-latency (voice pipeline, quick tasks) ─────────────────────────
qwen2.5:7b
# ─── Code generation ───────────────────────────────────────────────────────────
qwen2.5-coder:32b
# ─── Embeddings (mem0 memory store) ────────────────────────────────────────────
nomic-embed-text
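
For the P2 benchmark referenced above, Ollama's /api/generate response includes eval_count and eval_duration (nanoseconds) fields, which give tokens/sec directly. A sketch that walks the manifest above; models.txt is again an assumed filename, and jq is assumed to be installed:

```bash
#!/usr/bin/env bash
# Sketch of a per-model tokens/sec benchmark against the local Ollama API.
set -euo pipefail

while IFS= read -r line; do
  model="${line%%#*}"; model="$(echo "$model" | xargs)"
  [ -z "$model" ] && continue

  resp=$(curl -fsS http://localhost:11434/api/generate \
    -d "{\"model\": \"$model\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}")

  # eval_duration is reported in nanoseconds; convert to tokens per second.
  echo "$resp" | jq -r --arg m "$model" \
    '"\($m): \(.eval_count / .eval_duration * 1e9 | floor) tok/s"'
done < models.txt   # assumed manifest filename
```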