Complete P2 (LLM) and P3 (voice pipeline) implementation
P2 — homeai-llm: - Fix ollama launchd plist path for Apple Silicon (/opt/homebrew/bin/ollama) - Add Modelfiles for local GGUF models: llama3.3:70b, qwen3:32b, codestral:22b (registered via `ollama create` — no re-download needed) P3 — homeai-voice: - Wyoming STT: wyoming-faster-whisper, large-v3 model, port 10300 - Wyoming TTS: custom Kokoro ONNX server (wyoming_kokoro_server.py), port 10301 Voice af_heart; models at ~/models/kokoro/ - Wake word: openWakeWord daemon (hey_jarvis), notifies OpenClaw at /wake - launchd plists for all three services + load-all-launchd.sh helper - Smoke test: wyoming/test-pipeline.sh — 3/3 passing HA Wyoming integration pending manual UI config (STT 10.0.0.200:10300, TTS 10.0.0.200:10301). Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
47
TODO.md
47
TODO.md
@@ -9,30 +9,26 @@
|
||||
|
||||
### P1 · homeai-infra
|
||||
|
||||
- [ ] Install Docker Desktop for Mac, enable launch at login
|
||||
- [ ] Create shared `homeai` Docker network
|
||||
- [ ] Create `~/server/docker/` directory structure
|
||||
- [ ] Write compose files: Home Assistant, Portainer, Uptime Kuma, Gitea, code-server, n8n
|
||||
- [ ] Write `.env.secrets.example` and `Makefile`
|
||||
- [ ] `make up-all` — bring all services up
|
||||
- [ ] Home Assistant onboarding — generate long-lived access token
|
||||
- [ ] Write `~/server/.env.services` with all service URLs
|
||||
- [x] Install Docker Desktop for Mac, enable launch at login
|
||||
- [x] Create shared `homeai` Docker network
|
||||
- [x] Create `~/server/docker/` directory structure
|
||||
- [x] Write compose files: Uptime Kuma, code-server, n8n (HA, Portainer, Gitea are pre-existing on 10.0.0.199)
|
||||
- [x] `docker compose up -d` — bring all services up
|
||||
- [x] Home Assistant onboarding — long-lived access token generated, stored in `.env`
|
||||
- [ ] Install Tailscale, verify all services reachable on Tailnet
|
||||
- [ ] Gitea: create admin account, initialise all 8 sub-project repos, configure SSH
|
||||
- [ ] Gitea: initialise all 8 sub-project repos, configure SSH
|
||||
- [ ] Uptime Kuma: add monitors for all services, configure mobile alerts
|
||||
- [ ] Verify all containers survive a cold reboot
|
||||
|
||||
### P2 · homeai-llm
|
||||
|
||||
- [ ] Install Ollama natively via brew
|
||||
- [ ] Write and load launchd plist (`com.ollama.ollama.plist`)
|
||||
- [ ] Write `ollama-models.txt` with model manifest
|
||||
- [ ] Run `scripts/pull-models.sh` — pull all models
|
||||
- [x] Install Ollama natively via brew
|
||||
- [x] Write and load launchd plist (`com.homeai.ollama.plist`) — `/opt/homebrew/bin/ollama`
|
||||
- [x] Register local GGUF models via Modelfiles (no download): llama3.3:70b, qwen3:32b, codestral:22b
|
||||
- [x] Deploy Open WebUI via Docker compose (port 3030)
|
||||
- [x] Verify Open WebUI connected to Ollama, all models available
|
||||
- [ ] Run `scripts/benchmark.sh` — record results in `benchmark-results.md`
|
||||
- [ ] Deploy Open WebUI via Docker compose (port 3030)
|
||||
- [ ] Verify Open WebUI connected to Ollama, all models available
|
||||
- [ ] Add Ollama + Open WebUI to Uptime Kuma monitors
|
||||
- [ ] Add `OLLAMA_URL` and `OPEN_WEBUI_URL` to `.env.services`
|
||||
|
||||
---
|
||||
|
||||
@@ -40,20 +36,19 @@
|
||||
|
||||
### P3 · homeai-voice
|
||||
|
||||
- [ ] Compile Whisper.cpp with Metal support
|
||||
- [ ] Download Whisper models (`large-v3`, `medium.en`) to `~/models/whisper/`
|
||||
- [ ] Install `wyoming-faster-whisper`, test STT from audio file
|
||||
- [ ] Install Kokoro TTS, test output to audio file
|
||||
- [ ] Install Wyoming-Kokoro adapter, verify Wyoming protocol
|
||||
- [ ] Write + load launchd plists for Wyoming STT (10300) and TTS (10301)
|
||||
- [ ] Connect Home Assistant Wyoming integration (STT + TTS)
|
||||
- [x] Install `wyoming-faster-whisper` — model: faster-whisper-large-v3 (auto-downloaded)
|
||||
- [x] Install Kokoro ONNX TTS — models at `~/models/kokoro/`
|
||||
- [x] Write Wyoming-Kokoro adapter server (`homeai-voice/tts/wyoming_kokoro_server.py`)
|
||||
- [x] Write + load launchd plists for Wyoming STT (10300) and TTS (10301)
|
||||
- [x] Install openWakeWord + pyaudio — model: hey_jarvis
|
||||
- [x] Write + load openWakeWord launchd plist (`com.homeai.wakeword`)
|
||||
- [x] Write `wyoming/test-pipeline.sh` — smoke test (3/3 passing)
|
||||
- [~] Connect Home Assistant Wyoming integration (STT + TTS) — awaiting HA UI config
|
||||
- [ ] Create HA Voice Assistant pipeline
|
||||
- [ ] Test HA Assist via browser: type query → hear spoken response
|
||||
- [ ] Install openWakeWord, test wake detection with USB mic
|
||||
- [ ] Write + load openWakeWord launchd plist
|
||||
- [ ] Install Chatterbox TTS (MPS build), test with sample `.wav`
|
||||
- [ ] Install Qwen3-TTS via MLX (fallback)
|
||||
- [ ] Write `wyoming/test-pipeline.sh` — end-to-end smoke test
|
||||
- [ ] Train custom wake word using character name
|
||||
- [ ] Add Wyoming STT/TTS to Uptime Kuma monitors
|
||||
|
||||
---
|
||||
|
||||
Reference in New Issue
Block a user