feat: Music Assistant, Claude primary LLM, model tag in chat, setup.sh rewrite

- Deploy Music Assistant on Pi (10.0.0.199:8095) with host networking for
  Chromecast mDNS discovery, Spotify + SMB library support
- Switch primary LLM from Ollama to Claude Sonnet 4 (Anthropic API),
  local models remain as fallback
- Add model info tag under each assistant message in dashboard chat,
  persisted in conversation JSON
- Rewrite homeai-agent/setup.sh: loads .env, injects API keys into plists,
  symlinks plists to ~/Library/LaunchAgents/, smoke tests services
- Update install_service() in common.sh to use symlinks instead of copies
- Open UFW ports on Pi for Music Assistant (8095, 8097, 8927)
- Add ANTHROPIC_API_KEY to openclaw + bridge launchd plists

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Aodhan Collins
Date: 2026-03-18 22:21:28 +00:00
Commit: 117254d560 (parent 60eb89ea42)
17 changed files with 1399 additions and 361 deletions


@@ -1,37 +1,38 @@
# P4: homeai-agent — AI Agent, Skills & Automation
> Phase 3 | Depends on: P1 (HA), P2 (Ollama), P3 (Wyoming/TTS), P5 (character JSON)
---
## Goal
OpenClaw running as the primary AI agent: receives voice/text input, loads character persona, calls tools (skills), manages memory (mem0), dispatches responses (TTS, HA actions, VTube expressions). n8n handles scheduled/automated workflows.
> Phase 4 | Depends on: P1 (HA), P2 (Ollama), P3 (Wyoming/TTS), P5 (character JSON)
> Status: **COMPLETE** (all skills implemented)
---
## Architecture
```
Voice input (text from P3 Wyoming STT)
Voice input (text from Wyoming STT via HA pipeline)
OpenClaw API (port 8080)
loads character JSON from P5
System prompt construction
Ollama LLM (P2) — llama3.3:70b
↓ response + tool calls
Skill dispatcher
├── home_assistant.py → HA REST API (P1)
├── memory.py → mem0 (local)
├── vtube_studio.py → VTube WS (P7)
├── comfyui.py → ComfyUI API (P8)
├── music.py → Music Assistant (Phase 7)
└── weather.py → HA sensor data
OpenClaw HTTP Bridge (port 8081)
resolves character, loads memories, checks mode
System prompt construction (profile + memories)
checks active-mode.json for model routing
OpenClaw CLI → LLM (Ollama local or cloud API)
↓ response + tool calls via exec
Skill dispatcher (CLIs on PATH)
├── ha-ctl → Home Assistant REST API
├── memory-ctl → JSON memory files
├── monitor-ctl → service health checks
├── character-ctl → character switching
├── routine-ctl → scenes, scripts, multi-step routines
├── music-ctl → media player control
├── workflow-ctl → n8n workflow triggering
├── gitea-ctl → Gitea repo/issue queries
├── calendar-ctl → HA calendar + voice reminders
├── mode-ctl → public/private LLM routing
├── gaze-ctl → image generation
└── vtube-ctl → VTube Studio expressions
↓ final response text
TTS dispatch:
├── Chatterbox (voice clone, if active)
└── Kokoro (via Wyoming, fallback)
TTS dispatch (via active-tts-voice.json):
├── Kokoro (local, Wyoming)
└── ElevenLabs (cloud API)
Audio playback to appropriate room
```
@@ -40,296 +41,148 @@ OpenClaw API (port 8080)
## OpenClaw Setup
### Installation
```bash
# Confirm OpenClaw supports Ollama — check repo for latest install method
pip install openclaw
# or
git clone https://github.com/<openclaw-repo>/openclaw
pip install -e .
```
**Key question:** Verify OpenClaw's Ollama/OpenAI-compatible backend support before installation. If OpenClaw doesn't support local Ollama natively, use a thin adapter layer pointing its OpenAI endpoint at `http://localhost:11434/v1`.
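If an adapter is needed, the request shape is just the standard OpenAI chat-completions format pointed at Ollama's `/v1` endpoint. A minimal stdlib-only sketch (model name and URL taken from the config below; the helper names are illustrative):

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3.3:70b") -> dict:
    """OpenAI-style chat-completions request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, base_url: str = "http://localhost:11434/v1",
         model: str = "llama3.3:70b") -> str:
    """One chat turn against Ollama's OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```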
### Config — `~/.openclaw/config.yaml`
```yaml
version: 1
llm:
  provider: ollama            # or openai-compatible
  base_url: http://localhost:11434/v1
  model: llama3.3:70b
  fast_model: qwen2.5:7b      # used for quick intent classification
character:
  active: aria
  config_dir: ~/.openclaw/characters/
memory:
  provider: mem0
  store_path: ~/.openclaw/memory/
  embedding_model: nomic-embed-text
  embedding_url: http://localhost:11434/v1
api:
  host: 0.0.0.0
  port: 8080
tts:
  primary: chatterbox         # when voice clone active
  fallback: kokoro-wyoming    # Wyoming TTS endpoint
  wyoming_tts_url: tcp://localhost:10301
wake:
  endpoint: /wake             # openWakeWord POSTs here to trigger listening
```
- **Runtime:** Node.js global install at `/opt/homebrew/bin/openclaw` (v2026.3.2)
- **Config:** `~/.openclaw/openclaw.json`
- **Gateway:** port 8080, mode local, launchd: `com.homeai.openclaw`
- **Default model:** `ollama/qwen3.5:35b-a3b` (MoE, 35B total, 3B active, 26.7 tok/s)
- **Cloud models (public mode):** `anthropic/claude-sonnet-4-20250514`, `openai/gpt-4o`
- **Critical:** `commands.native: true` in config (enables exec tool for CLI skills)
- **Critical:** `contextWindow: 32768` for large models (prevents GPU OOM)
---
## Skills
## Skills (13 total)
All skills live in `~/.openclaw/skills/` (symlinked from `homeai-agent/skills/`).
All skills follow the same pattern:
- `~/.openclaw/skills/<name>/SKILL.md` — metadata + agent instructions
- `~/.openclaw/skills/<name>/<tool>` — executable Python CLI (stdlib only)
- Symlinked to `/opt/homebrew/bin/` for PATH access
- Agent invokes via `exec` tool
- Documented in `~/.openclaw/workspace/TOOLS.md`
### `home_assistant.py`
### Existing Skills (4)
Wraps the HA REST API for common smart home actions.
| Skill | CLI | Description |
|-------|-----|-------------|
| home-assistant | `ha-ctl` | Smart home device control |
| image-generation | `gaze-ctl` | Image generation via ComfyUI/GAZE |
| voice-assistant | (none) | Voice pipeline handling |
| vtube-studio | `vtube-ctl` | VTube Studio expression control |
**Functions:**
- `turn_on(entity_id, **kwargs)` — lights, switches, media players
- `turn_off(entity_id)`
- `toggle(entity_id)`
- `set_light(entity_id, brightness=None, color_temp=None, rgb_color=None)`
- `run_scene(scene_id)`
- `get_state(entity_id)` → returns state + attributes
- `list_entities(domain=None)` → returns entity list
### New Skills (9) — Added 2026-03-17
Uses `HA_URL` and `HA_TOKEN` from `.env.services`.
| Skill | CLI | Description |
|-------|-----|-------------|
| memory | `memory-ctl` | Store/search/recall memories |
| service-monitor | `monitor-ctl` | Service health checks |
| character | `character-ctl` | Character switching |
| routine | `routine-ctl` | Scenes and multi-step routines |
| music | `music-ctl` | Media player control |
| workflow | `workflow-ctl` | n8n workflow management |
| gitea | `gitea-ctl` | Gitea repo/issue/PR queries |
| calendar | `calendar-ctl` | Calendar events and voice reminders |
| mode | `mode-ctl` | Public/private LLM routing |
### `memory.py`
Wraps mem0 for persistent long-term memory.
**Functions:**
- `remember(text, category=None)` — store a memory
- `recall(query, limit=5)` — semantic search over memories
- `forget(memory_id)` — delete a specific memory
- `list_recent(n=10)` — list most recent memories
mem0 uses `nomic-embed-text` via Ollama for embeddings.
### `weather.py`
Pulls weather data from Home Assistant sensors (local weather station or HA weather integration).
**Functions:**
- `get_current()` → temp, humidity, conditions
- `get_forecast(days=3)` → forecast array
### `timer.py`
Simple timer/reminder management.
**Functions:**
- `set_timer(duration_seconds, label=None)` → fires HA notification/TTS on expiry
- `set_reminder(datetime_str, message)` → schedules future TTS playback
- `list_timers()`
- `cancel_timer(timer_id)`
### `music.py` (stub — completed in Phase 7)
```python
def play(query: str): ... # "play jazz" → Music Assistant
def pause(): ...
def skip(): ...
def set_volume(level: int): ... # 0-100
```
### `vtube_studio.py` (implemented in P7)
Stub in P4, full implementation in P7:
```python
def trigger_expression(event: str): ... # "thinking", "happy", etc.
def set_parameter(name: str, value: float): ...
```
### `comfyui.py` (implemented in P8)
Stub in P4, full implementation in P8:
```python
def generate(workflow: str, params: dict) -> str: ... # returns image path
```
See `SKILLS_GUIDE.md` for full user documentation.
---
## mem0 — Long-Term Memory
## HTTP Bridge
### Setup
**File:** `openclaw-http-bridge.py` (runs in homeai-voice-env)
**Port:** 8081, launchd: `com.homeai.openclaw-bridge`
```bash
pip install mem0ai
```
### Config
```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.3:70b",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "homeai_memory",
            "path": "~/.openclaw/memory/chroma",
        },
    },
}

memory = Memory.from_config(config)
```
> **Decision point:** Start with Chroma (local file-based). If semantic recall quality is poor, migrate to Qdrant (Docker container).
### Backup
Daily cron (via launchd) commits mem0 data to Gitea:
```bash
#!/usr/bin/env bash
cd ~/.openclaw/memory
git add .
git commit -m "mem0 backup $(date +%Y-%m-%d)"
git push origin main
```
---
## n8n Workflows
n8n runs in Docker (deployed in P1). Workflows exported as JSON and stored in `homeai-agent/workflows/`.
### Starter Workflows
**`morning-briefing.json`**
- Trigger: time-based (e.g., 7:30 AM on weekdays)
- Steps: fetch weather → fetch calendar events → compose briefing → POST to OpenClaw TTS → speak aloud
**`notification-router.json`**
- Trigger: HA webhook (new notification)
- Steps: classify urgency → if high: TTS immediately; if low: queue for next interaction
**`memory-backup.json`**
- Trigger: daily schedule
- Steps: commit mem0 data to Gitea
### n8n ↔ OpenClaw Integration
OpenClaw exposes a webhook endpoint that n8n can call to trigger TTS or run a skill:
```
POST http://localhost:8080/speak
{
"text": "Good morning. It is 7:30 and the weather is...",
"room": "all"
}
```
---
## API Surface (OpenClaw)
Key endpoints consumed by other projects:
### Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/chat` | POST | Send text, get response (+ fires skills) |
| `/wake` | POST | Wake word trigger from openWakeWord |
| `/speak` | POST | TTS only — no LLM, just speak text |
| `/skill/<name>` | POST | Call a specific skill directly |
| `/memory` | GET/POST | Read/write memories |
| `/api/agent/message` | POST | Send message → LLM → response |
| `/api/tts` | POST | Text-to-speech (Kokoro or ElevenLabs) |
| `/api/stt` | POST | Speech-to-text (Wyoming/Whisper) |
| `/wake` | POST | Wake word notification |
| `/status` | GET | Health check |
---
### Request Flow
## Directory Layout
1. Resolve character: explicit `character_id` > `satellite_id` mapping > default
2. Build system prompt: profile fields + metadata + personal/general memories
3. Write TTS config to `active-tts-voice.json`
4. Load mode from `active-mode.json`, resolve model (private → local, public → cloud)
5. Call OpenClaw CLI with `--model` flag if public mode
6. Detect/re-prompt if model promises action but doesn't call exec tool
7. Return response
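Step 1 of the flow can be sketched as a small resolver. The `satellite-map.json` keys (`satellites`, `default`) are assumptions based on the data files listed elsewhere in this doc:

```python
import json
from pathlib import Path

SATELLITE_MAP = Path("~/homeai-data/satellite-map.json").expanduser()

def resolve_character(explicit_id=None, satellite_id=None, sat_map=None):
    """Precedence: explicit character_id > satellite_id mapping > default."""
    if sat_map is None:
        sat_map = json.loads(SATELLITE_MAP.read_text())
    if explicit_id:
        return explicit_id
    by_satellite = sat_map.get("satellites", {})
    if satellite_id in by_satellite:
        return by_satellite[satellite_id]
    return sat_map.get("default")
```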
```
homeai-agent/
├── skills/
│ ├── home_assistant.py
│ ├── memory.py
│ ├── weather.py
│ ├── timer.py
│ ├── music.py # stub
│ ├── vtube_studio.py # stub
│ └── comfyui.py # stub
├── workflows/
│ ├── morning-briefing.json
│ ├── notification-router.json
│ └── memory-backup.json
└── config/
├── config.yaml.example
└── mem0-config.py
```
### Timeout Strategy
| State | Timeout |
|-------|---------|
| Model warm (loaded in VRAM) | 120s |
| Model cold (loading) | 180s |
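A sketch of how the bridge could pick between these timeouts, using Ollama's `/api/ps` endpoint (which reports currently loaded models); the helper names are illustrative:

```python
import json
import urllib.request

OLLAMA_PS_URL = "http://localhost:11434/api/ps"
TIMEOUT_WARM, TIMEOUT_COLD = 120, 180

def pick_timeout(ps_response: dict) -> int:
    """Warm if /api/ps reports at least one loaded model."""
    return TIMEOUT_WARM if ps_response.get("models") else TIMEOUT_COLD

def current_timeout() -> int:
    """Query Ollama; fall back to the cold timeout if it is unreachable."""
    try:
        with urllib.request.urlopen(OLLAMA_PS_URL, timeout=2) as resp:
            return pick_timeout(json.load(resp))
    except OSError:
        return TIMEOUT_COLD
```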
---
## Interface Contracts
## Daemons
**Consumes:**
- Ollama API: `http://localhost:11434/v1`
- HA API: `$HA_URL` with `$HA_TOKEN`
- Wyoming TTS: `tcp://localhost:10301`
- Character JSON: `~/.openclaw/characters/<active>.json` (from P5)
**Exposes:**
- OpenClaw HTTP API: `http://localhost:8080` — consumed by P3 (voice), P7 (visual triggers), P8 (image skill)
**Add to `.env.services`:**
```dotenv
OPENCLAW_URL=http://localhost:8080
```
| Daemon | Plist | Purpose |
|--------|-------|---------|
| `com.homeai.openclaw` | `launchd/com.homeai.openclaw.plist` | OpenClaw gateway (port 8080) |
| `com.homeai.openclaw-bridge` | `launchd/com.homeai.openclaw-bridge.plist` | HTTP bridge (port 8081) |
| `com.homeai.reminder-daemon` | `launchd/com.homeai.reminder-daemon.plist` | Voice reminder checker (60s interval) |
---
## Implementation Steps
## Data Files
- [ ] Confirm OpenClaw installation method and Ollama compatibility
- [ ] Install OpenClaw, write `config.yaml` pointing at Ollama and HA
- [ ] Verify OpenClaw responds to a basic text query via `/chat`
- [ ] Write `home_assistant.py` skill — test lights on/off via voice
- [ ] Write `memory.py` skill — test store and recall
- [ ] Write `weather.py` skill — verify HA weather sensor data
- [ ] Write `timer.py` skill — test set/fire a timer
- [ ] Write skill stubs: `music.py`, `vtube_studio.py`, `comfyui.py`
- [ ] Set up mem0 with Chroma backend, test semantic recall
- [ ] Write and test memory backup launchd job
- [ ] Deploy n8n via Docker (P1 task if not done)
- [ ] Build morning briefing n8n workflow
- [ ] Symlink `homeai-agent/skills/` → `~/.openclaw/skills/`
- [ ] Verify full voice → agent → HA action flow (with P3 pipeline)
| File | Purpose |
|------|---------|
| `~/homeai-data/memories/personal/*.json` | Per-character memories |
| `~/homeai-data/memories/general.json` | Shared general memories |
| `~/homeai-data/characters/*.json` | Character profiles (schema v2) |
| `~/homeai-data/satellite-map.json` | Satellite → character mapping |
| `~/homeai-data/active-tts-voice.json` | Current TTS engine/voice |
| `~/homeai-data/active-mode.json` | Public/private mode state |
| `~/homeai-data/routines/*.json` | Local routine definitions |
| `~/homeai-data/reminders.json` | Pending voice reminders |
| `~/homeai-data/conversations/*.json` | Chat conversation history |
---
## Success Criteria
## Environment Variables (OpenClaw Plist)
- [ ] "Turn on the living room lights" → lights turn on via HA
- [ ] "Remember that I prefer jazz in the mornings" → mem0 stores it; "What do I like in the mornings?" → recalls it
- [ ] Morning briefing n8n workflow fires on schedule and speaks via TTS
- [ ] OpenClaw `/status` returns healthy
- [ ] OpenClaw survives Mac Mini reboot (launchd or Docker — TBD based on OpenClaw's preferred run method)
| Variable | Purpose |
|----------|---------|
| `HASS_TOKEN` / `HA_TOKEN` | Home Assistant API token |
| `HA_URL` | Home Assistant URL |
| `GAZE_API_KEY` | Image generation API key |
| `N8N_API_KEY` | n8n automation API key |
| `GITEA_TOKEN` | Gitea API token |
| `ANTHROPIC_API_KEY` | Claude API key (public mode) |
| `OPENAI_API_KEY` | OpenAI API key (public mode) |
---
## Implementation Status
- [x] OpenClaw installed and configured
- [x] HTTP bridge with character resolution and memory injection
- [x] ha-ctl — smart home control
- [x] gaze-ctl — image generation
- [x] vtube-ctl — VTube Studio expressions
- [x] memory-ctl — memory store/search/recall
- [x] monitor-ctl — service health checks
- [x] character-ctl — character switching
- [x] routine-ctl — scenes and multi-step routines
- [x] music-ctl — media player control
- [x] workflow-ctl — n8n workflow triggering
- [x] gitea-ctl — Gitea integration
- [x] calendar-ctl — calendar + voice reminders
- [x] mode-ctl — public/private LLM routing
- [x] Bridge mode routing (active-mode.json → --model flag)
- [x] Cloud providers in openclaw.json (Anthropic, OpenAI)
- [x] Dashboard /api/mode endpoint
- [x] Reminder daemon (com.homeai.reminder-daemon)
- [x] TOOLS.md updated with all skills
- [ ] Set N8N_API_KEY (requires generating in n8n UI)
- [ ] Set GITEA_TOKEN (requires generating in Gitea UI)
- [ ] Set ANTHROPIC_API_KEY / OPENAI_API_KEY for public mode
- [ ] End-to-end voice test of each skill


@@ -0,0 +1,386 @@
# OpenClaw Skills — User Guide
> All skills are invoked by voice or chat. Say a natural command and the AI agent will route it to the right tool automatically.
---
## Quick Reference
| Skill | CLI | What it does |
|-------|-----|-------------|
| Home Assistant | `ha-ctl` | Control lights, switches, sensors, climate |
| Image Generation | `gaze-ctl` | Generate images via ComfyUI/GAZE |
| Memory | `memory-ctl` | Store and recall things about you |
| Service Monitor | `monitor-ctl` | Check if services are running |
| Character Switcher | `character-ctl` | Switch AI personalities |
| Routines & Scenes | `routine-ctl` | Create and trigger multi-step automations |
| Music | `music-ctl` | Play, pause, skip, volume control |
| n8n Workflows | `workflow-ctl` | Trigger automation workflows |
| Gitea | `gitea-ctl` | Query repos, commits, issues |
| Calendar & Reminders | `calendar-ctl` | View calendar, set voice reminders |
| Public/Private Mode | `mode-ctl` | Route to local or cloud LLMs |
---
## Phase A — Core Skills
### Memory (`memory-ctl`)
The agent can remember things about you and recall them later. Memories persist across conversations and are visible in the dashboard.
**Voice examples:**
- "Remember that my favorite color is blue"
- "I take my coffee black"
- "What do you know about me?"
- "Forget that I said I like jazz"
**CLI usage:**
```bash
memory-ctl add personal "User's favorite color is blue" --category preference
memory-ctl add general "Living room speaker is a Sonos" --category fact
memory-ctl search "coffee"
memory-ctl list --type personal
memory-ctl delete <memory_id>
```
**Categories:** `preference`, `fact`, `routine`
**How it works:** Memories are stored as JSON in `~/homeai-data/memories/`. Personal memories are per-character (each character has their own relationship with you). General memories are shared across all characters.
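A minimal sketch of the search path, assuming each memory file holds a JSON list of records with a `text` field:

```python
import json
from pathlib import Path

def search_memories(query: str, mem_dir, character: str = None) -> list:
    """Substring search over the memory JSON files described above.
    Record shape ({"text": ..., "category": ...}) is an assumption."""
    mem_dir = Path(mem_dir)
    files = [mem_dir / "general.json"]
    personal = mem_dir / "personal"
    if character:
        files.append(personal / f"{character}.json")
    elif personal.is_dir():
        files.extend(sorted(personal.glob("*.json")))
    hits = []
    for path in files:
        if not path.exists():
            continue
        for rec in json.loads(path.read_text()):
            if query.lower() in rec.get("text", "").lower():
                hits.append(rec)
    return hits
```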
---
### Service Monitor (`monitor-ctl`)
Ask the assistant if everything is healthy, check specific services, or see what models are loaded.
**Voice examples:**
- "Is everything running?"
- "What models are loaded?"
- "Is Home Assistant up?"
- "Show me the Docker containers"
**CLI usage:**
```bash
monitor-ctl status # Full health check (all services)
monitor-ctl check ollama # Single service
monitor-ctl ollama # Models loaded, VRAM usage
monitor-ctl docker # Docker container status
```
**Services checked:** Ollama, OpenClaw Bridge, OpenClaw Gateway, Wyoming STT, Wyoming TTS, Dashboard, n8n, Uptime Kuma, Home Assistant, Gitea
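Each check reduces to an HTTP probe. A sketch (ports come from this document, but the `/status` paths and the exact URL map are assumptions):

```python
import urllib.request

def check_http(url: str, timeout: float = 3.0) -> bool:
    """True if the service answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:
        return False

# Hypothetical URL map; the real list lives in monitor-ctl.
SERVICES = {
    "ollama": "http://localhost:11434/api/version",
    "openclaw": "http://localhost:8080/status",
    "bridge": "http://localhost:8081/status",
}

def status_report() -> dict:
    return {name: check_http(url) for name, url in SERVICES.items()}
```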
---
### Character Switcher (`character-ctl`)
Switch between AI personalities on the fly. Each character has their own voice, personality, and memories.
**Voice examples:**
- "Talk to Aria"
- "Switch to Sucy"
- "Who can I talk to?"
- "Who am I talking to?"
- "Tell me about Aria"
**CLI usage:**
```bash
character-ctl list # See all characters
character-ctl active # Who is the current default
character-ctl switch "Aria" # Switch (fuzzy name matching)
character-ctl info "Sucy" # Character profile
character-ctl map homeai-kitchen.local aria_123 # Map a satellite to a character
```
**How it works:** Switching updates the default character in `satellite-map.json` and writes the TTS voice config. The new character takes effect on the next request.
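The switch can be sketched as one function; the character record fields (`id`, `name`, `voice`) and the map schema are assumptions:

```python
import json
from pathlib import Path

def switch_character(name: str, characters: list, map_path, tts_path) -> str:
    """Fuzzy-match a character, make it the default in satellite-map.json,
    and write its voice to active-tts-voice.json."""
    match = next((c for c in characters if name.lower() in c["name"].lower()), None)
    if match is None:
        raise ValueError(f"no character matching {name!r}")
    map_path, tts_path = Path(map_path), Path(tts_path)
    sat_map = json.loads(map_path.read_text()) if map_path.exists() else {}
    sat_map["default"] = match["id"]
    map_path.write_text(json.dumps(sat_map, indent=2))
    tts_path.write_text(json.dumps(
        {"engine": match.get("tts_engine", "kokoro"), "voice": match.get("voice")},
        indent=2))
    return match["id"]
```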
---
## Phase B — Home Assistant Extensions
### Routines & Scenes (`routine-ctl`)
Create and trigger Home Assistant scenes and multi-step routines by voice.
**Voice examples:**
- "Activate movie mode"
- "Run the bedtime routine"
- "What scenes do I have?"
- "Create a morning routine"
**CLI usage:**
```bash
routine-ctl list-scenes # HA scenes
routine-ctl list-scripts # HA scripts
routine-ctl trigger "movie_mode" # Activate scene/script
routine-ctl create-scene "cozy" --entities '[{"entity_id":"light.lamp","state":"on","brightness":80}]'
routine-ctl create-routine "bedtime" --steps '[
{"type":"ha","cmd":"off \"All Lights\""},
{"type":"delay","seconds":2},
{"type":"tts","text":"Good night!"}
]'
routine-ctl run "bedtime" # Execute routine
routine-ctl list-routines # List local routines
routine-ctl delete-routine "bedtime" # Remove routine
```
**Step types:**
| Type | Description | Fields |
|------|-------------|--------|
| `scene` | Trigger an HA scene | `target` (scene name) |
| `ha` | Run an ha-ctl command | `cmd` (e.g. `off "Lamp"`) |
| `delay` | Wait between steps | `seconds` |
| `tts` | Speak text aloud | `text` |
**Storage:** Routines are saved as JSON in `~/homeai-data/routines/`.
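The step table above maps naturally onto a small dispatcher. A sketch with injected handlers (the handler names are illustrative):

```python
import time

def run_routine(steps: list, *, trigger_scene, run_ha, speak) -> int:
    """Execute routine steps in order. Handlers are injected so the
    dispatcher stays testable; step shapes match the table above."""
    for step in steps:
        kind = step["type"]
        if kind == "scene":
            trigger_scene(step["target"])
        elif kind == "ha":
            run_ha(step["cmd"])
        elif kind == "delay":
            time.sleep(step["seconds"])
        elif kind == "tts":
            speak(step["text"])
        else:
            raise ValueError(f"unknown step type: {kind!r}")
    return len(steps)
```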
---
### Music Control (`music-ctl`)
Control music playback through Home Assistant media players — works with Spotify, Music Assistant, Chromecast, and any HA media player.
**Voice examples:**
- "Play some jazz"
- "Pause the music"
- "Next song"
- "What's playing?"
- "Turn the volume to 50"
- "Play Bohemian Rhapsody on the kitchen speaker"
- "Shuffle on"
**CLI usage:**
```bash
music-ctl players # List available players
music-ctl play "jazz" # Search and play
music-ctl play # Resume paused playback
music-ctl pause # Pause
music-ctl next # Skip to next
music-ctl prev # Go to previous
music-ctl volume 50 # Set volume (0-100)
music-ctl now-playing # Current track info
music-ctl shuffle on # Enable shuffle
music-ctl play "rock" --player media_player.kitchen # Target specific player
```
**How it works:** All commands go through HA's `media_player` services. The `--player` flag defaults to the first active (playing/paused) player. Multi-room audio works through Snapcast zones, which appear as separate `media_player` entities.
**Prerequisites:** At least one media player configured in Home Assistant (Spotify integration, Music Assistant, or Chromecast).
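Under the hood each command is one POST to HA's `/api/services/media_player/<service>` endpoint. A stdlib-only sketch (`HA_URL`/`HA_TOKEN` as in the environment table later in this guide):

```python
import json
import os
import urllib.request

def volume_payload(percent: int) -> dict:
    """music-ctl takes 0-100; HA's volume_set wants 0.0-1.0."""
    return {"volume_level": max(0, min(100, percent)) / 100}

def call_media_service(service: str, player: str, data: dict = None) -> None:
    """POST to HA's media_player service API, e.g. service='media_pause'."""
    ha_url = os.environ.get("HA_URL", "http://localhost:8123")
    token = os.environ.get("HA_TOKEN", "")
    payload = {"entity_id": player, **(data or {})}
    req = urllib.request.Request(
        f"{ha_url}/api/services/media_player/{service}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10).close()

# e.g. music-ctl volume 50 →
# call_media_service("volume_set", "media_player.kitchen", volume_payload(50))
```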
---
## Phase C — External Service Skills
### n8n Workflows (`workflow-ctl`)
List and trigger n8n automation workflows by voice.
**Voice examples:**
- "Run the backup workflow"
- "What workflows do I have?"
- "Did the last workflow succeed?"
**CLI usage:**
```bash
workflow-ctl list # All workflows
workflow-ctl trigger "backup" # Trigger by name (fuzzy match)
workflow-ctl trigger "abc123" --data '{"key":"val"}' # Trigger with data
workflow-ctl status <execution_id> # Check execution result
workflow-ctl history --limit 5 # Recent executions
```
**Setup required:**
1. Generate an API key in n8n: Settings → API → Create API Key
2. Set `N8N_API_KEY` in the OpenClaw launchd plist
3. Restart OpenClaw: `launchctl kickstart -k gui/501/com.homeai.openclaw`
---
### Gitea (`gitea-ctl`)
Query your self-hosted Gitea repositories, commits, issues, and pull requests.
**Voice examples:**
- "What repos do I have?"
- "Show recent commits for homeai"
- "Any open issues?"
- "Create an issue for the TTS bug"
**CLI usage:**
```bash
gitea-ctl repos # List all repos
gitea-ctl commits aodhan/homeai --limit 5 # Recent commits
gitea-ctl issues aodhan/homeai --state open # Open issues
gitea-ctl prs aodhan/homeai # Pull requests
gitea-ctl create-issue aodhan/homeai "Bug title" --body "Description here"
```
**Setup required:**
1. Generate a token in Gitea: Settings → Applications → Generate Token
2. Set `GITEA_TOKEN` in the OpenClaw launchd plist
3. Restart OpenClaw
---
### Calendar & Reminders (`calendar-ctl`)
Read calendar events from Home Assistant and set voice reminders that speak via TTS when due.
**Voice examples:**
- "What's on my calendar today?"
- "What's coming up this week?"
- "Remind me in 30 minutes to check the oven"
- "Remind me at 5pm to call mum"
- "What reminders do I have?"
- "Cancel that reminder"
**CLI usage:**
```bash
calendar-ctl today # Today's events
calendar-ctl upcoming --days 3 # Next 3 days
calendar-ctl add "Dentist" --start 2026-03-18T14:00:00 --end 2026-03-18T15:00:00
calendar-ctl remind "Check the oven" --at "in 30 minutes"
calendar-ctl remind "Call mum" --at "at 5pm"
calendar-ctl remind "Team standup" --at "tomorrow 9am"
calendar-ctl reminders # List pending
calendar-ctl cancel-reminder <id> # Cancel
```
**Supported time formats:**
| Format | Example |
|--------|---------|
| Relative | `in 30 minutes`, `in 2 hours` |
| Absolute | `at 5pm`, `at 17:00`, `at 5:30pm` |
| Tomorrow | `tomorrow 9am`, `tomorrow at 14:00` |
| Combined | `in 1 hour 30 minutes` |
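Parsing these formats needs only `re` and `datetime`. A sketch covering the table above (edge-case handling is intentionally minimal):

```python
import re
from datetime import datetime, timedelta

def parse_when(text: str, now: datetime = None) -> datetime:
    """Parse 'in N minutes/hours', 'at 5pm'/'at 17:00', 'tomorrow 9am'."""
    now = now or datetime.now()
    text = text.strip().lower()
    # Relative: "in 30 minutes", "in 2 hours", "in 1 hour 30 minutes"
    m = re.fullmatch(r"in\s+(?:(\d+)\s*hours?)?\s*(?:(\d+)\s*minutes?)?", text)
    if m and (m.group(1) or m.group(2)):
        return now + timedelta(hours=int(m.group(1) or 0),
                               minutes=int(m.group(2) or 0))
    day_offset = 0
    if text.startswith("tomorrow"):
        day_offset = 1
        text = text[len("tomorrow"):].strip()
    if text.startswith("at "):
        text = text[3:]
    m = re.fullmatch(r"(\d{1,2})(?::(\d{2}))?\s*(am|pm)?", text)
    if not m:
        raise ValueError(f"unrecognised time: {text!r}")
    hour, minute = int(m.group(1)), int(m.group(2) or 0)
    if m.group(3) == "pm" and hour < 12:
        hour += 12
    elif m.group(3) == "am" and hour == 12:
        hour = 0
    when = (now.replace(hour=hour, minute=minute, second=0, microsecond=0)
            + timedelta(days=day_offset))
    if day_offset == 0 and when <= now:
        when += timedelta(days=1)  # past clock times roll to tomorrow
    return when
```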
**How reminders work:** A background daemon (`com.homeai.reminder-daemon`) checks `~/homeai-data/reminders.json` every 60 seconds. When a reminder is due, it POSTs to the TTS bridge and speaks the reminder aloud. Fired reminders are automatically cleaned up after 24 hours.
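Each 60-second pass of the daemon can be sketched as a pure split function plus a tick; the reminder record shape (`message`, `due`, `fired`) is an assumption:

```python
import json
import urllib.request
from datetime import datetime, timedelta
from pathlib import Path

REMINDERS = Path("~/homeai-data/reminders.json").expanduser()
TTS_URL = "http://localhost:8081/api/tts"  # bridge endpoint from this doc

def split_reminders(reminders: list, now: datetime):
    """Return (due, keep). Fired reminders are kept for 24h, then dropped."""
    due, keep = [], []
    for rec in reminders:
        fired = rec.get("fired")
        if fired:
            if now - datetime.fromisoformat(fired) < timedelta(hours=24):
                keep.append(rec)
        elif datetime.fromisoformat(rec["due"]) <= now:
            due.append(rec)
        else:
            keep.append(rec)
    return due, keep

def tick(now: datetime = None) -> None:
    """One pass: fire due reminders via TTS, rewrite the file."""
    now = now or datetime.now()
    reminders = json.loads(REMINDERS.read_text()) if REMINDERS.exists() else []
    due, keep = split_reminders(reminders, now)
    for rec in due:
        req = urllib.request.Request(
            TTS_URL, data=json.dumps({"text": rec["message"]}).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=30).close()
        rec["fired"] = now.isoformat()
        keep.append(rec)
    REMINDERS.write_text(json.dumps(keep, indent=2))
```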
**Prerequisites:** Calendar entity configured in Home Assistant (Google Calendar, CalDAV, or local calendar integration).
---
## Phase D — Public/Private Mode
### Mode Controller (`mode-ctl`)
Route AI requests to local LLMs (private, no data leaves the machine) or cloud LLMs (public, faster/more capable) with per-category overrides.
**Voice examples:**
- "Switch to public mode"
- "Go private"
- "What mode am I in?"
- "Use Claude for coding"
- "Keep health queries private"
**CLI usage:**
```bash
mode-ctl status # Current mode and overrides
mode-ctl private # All requests → local Ollama
mode-ctl public # All requests → cloud LLM
mode-ctl set-provider anthropic # Use Claude (default)
mode-ctl set-provider openai # Use GPT-4o
mode-ctl override coding public # Always use cloud for coding
mode-ctl override health private # Always keep health local
mode-ctl list-overrides # Show all category rules
```
**Default category rules:**
| Always Private | Always Public | Follows Global Mode |
|---------------|--------------|-------------------|
| Personal finance | Web search | General chat |
| Health | Coding help | Smart home |
| Passwords | Complex reasoning | Music |
| Private conversations | Translation | Calendar |
**How it works:** The HTTP bridge reads `~/homeai-data/active-mode.json` before each request. Based on the mode and any category overrides, it passes `--model` to the OpenClaw CLI to route to either `ollama/qwen3.5:35b-a3b` (private) or `anthropic/claude-sonnet-4-20250514` / `openai/gpt-4o` (public).
**Setup required for public mode:**
1. Set `ANTHROPIC_API_KEY` and/or `OPENAI_API_KEY` in the OpenClaw launchd plist
2. Restart OpenClaw: `launchctl kickstart -k gui/501/com.homeai.openclaw`
**Dashboard:** The mode can also be toggled via the dashboard API at `GET/POST /api/mode`.
---
## Administration
### Adding API Keys
All API keys are stored in the OpenClaw launchd plist at:
```
~/gitea/homeai/homeai-agent/launchd/com.homeai.openclaw.plist
```
After editing, deploy and restart:
```bash
cp ~/gitea/homeai/homeai-agent/launchd/com.homeai.openclaw.plist ~/Library/LaunchAgents/
launchctl kickstart -k gui/501/com.homeai.openclaw
```
### Environment Variables
| Variable | Purpose | Required for |
|----------|---------|-------------|
| `HASS_TOKEN` | Home Assistant API token | ha-ctl, routine-ctl, music-ctl, calendar-ctl |
| `HA_URL` | Home Assistant URL | Same as above |
| `GAZE_API_KEY` | Image generation API key | gaze-ctl |
| `N8N_API_KEY` | n8n automation API key | workflow-ctl |
| `GITEA_TOKEN` | Gitea API token | gitea-ctl |
| `ANTHROPIC_API_KEY` | Claude API key | mode-ctl (public mode) |
| `OPENAI_API_KEY` | OpenAI API key | mode-ctl (public mode) |
### Skill File Locations
```
~/.openclaw/skills/
├── home-assistant/ ha-ctl → /opt/homebrew/bin/ha-ctl
├── image-generation/ gaze-ctl → /opt/homebrew/bin/gaze-ctl
├── memory/ memory-ctl → /opt/homebrew/bin/memory-ctl
├── service-monitor/ monitor-ctl → /opt/homebrew/bin/monitor-ctl
├── character/ character-ctl → /opt/homebrew/bin/character-ctl
├── routine/ routine-ctl → /opt/homebrew/bin/routine-ctl
├── music/ music-ctl → /opt/homebrew/bin/music-ctl
├── workflow/ workflow-ctl → /opt/homebrew/bin/workflow-ctl
├── gitea/ gitea-ctl → /opt/homebrew/bin/gitea-ctl
├── calendar/ calendar-ctl → /opt/homebrew/bin/calendar-ctl
├── mode/ mode-ctl → /opt/homebrew/bin/mode-ctl
├── voice-assistant/ (no CLI)
└── vtube-studio/ vtube-ctl → /opt/homebrew/bin/vtube-ctl
```
### Data File Locations
| File | Purpose |
|------|---------|
| `~/homeai-data/memories/personal/*.json` | Per-character memories |
| `~/homeai-data/memories/general.json` | Shared general memories |
| `~/homeai-data/characters/*.json` | Character profiles |
| `~/homeai-data/satellite-map.json` | Satellite → character mapping |
| `~/homeai-data/active-tts-voice.json` | Current TTS voice config |
| `~/homeai-data/active-mode.json` | Public/private mode state |
| `~/homeai-data/routines/*.json` | Local routine definitions |
| `~/homeai-data/reminders.json` | Pending voice reminders |
| `~/homeai-data/conversations/*.json` | Chat conversation history |
### Creating a New Skill
Every skill follows the same pattern:
1. Create directory: `~/.openclaw/skills/<name>/`
2. Write `SKILL.md` with YAML frontmatter (`name`, `description`) + usage docs
3. Create Python CLI (stdlib only: `urllib.request`, `json`, `os`, `sys`, `re`, `datetime`)
4. `chmod +x` the CLI and symlink to `/opt/homebrew/bin/`
5. Add env vars to the OpenClaw launchd plist if needed
6. Add a section to `~/.openclaw/workspace/TOOLS.md`
7. Restart OpenClaw: `launchctl kickstart -k gui/501/com.homeai.openclaw`
8. Test: `openclaw agent --message "test prompt" --agent main`
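A skeleton for step 3, following the stdlib-only pattern (the `example-ctl` name and its single subcommand are hypothetical):

```python
#!/usr/bin/env python3
"""example-ctl: minimal skill CLI skeleton."""
import json
import sys

def cmd_status(args):
    # Real skills would call HA/n8n/Gitea here; print JSON for the agent.
    print(json.dumps({"ok": True, "args": args}))
    return 0

COMMANDS = {"status": cmd_status}

def main(argv):
    if not argv or argv[0] not in COMMANDS:
        print(f"usage: example-ctl [{'|'.join(COMMANDS)}]", file=sys.stderr)
        return 2
    return COMMANDS[argv[0]](argv[1:])

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```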
### Daemons
| Daemon | Plist | Purpose |
|--------|-------|---------|
| `com.homeai.reminder-daemon` | `homeai-agent/launchd/com.homeai.reminder-daemon.plist` | Fires TTS reminders when due |
| `com.homeai.openclaw` | `homeai-agent/launchd/com.homeai.openclaw.plist` | OpenClaw gateway |
| `com.homeai.openclaw-bridge` | `homeai-agent/launchd/com.homeai.openclaw-bridge.plist` | HTTP bridge (voice pipeline) |
| `com.homeai.preload-models` | `homeai-llm/scripts/preload-models.sh` | Keeps models warm in VRAM |


@@ -37,6 +37,8 @@
<string>/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
<key>ELEVENLABS_API_KEY</key>
<string>sk_ec10e261c6190307a37aa161a9583504dcf25a0cabe5dbd5</string>
<key>ANTHROPIC_API_KEY</key>
<string>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</string>
</dict>
</dict>
</plist>


@@ -30,6 +30,18 @@
<string>eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJmZGQ1NzZlYWNkMTU0ZTY2ODY1OTkzYTlhNTIxM2FmNyIsImlhdCI6MTc3MjU4ODYyOCwiZXhwIjoyMDg3OTQ4NjI4fQ.CTAU1EZgpVLp_aRnk4vg6cQqwS5N-p8jQkAAXTxFmLY</string>
<key>GAZE_API_KEY</key>
<string>e63401f17e4845e1059f830267f839fe7fc7b6083b1cb1730863318754d799f4</string>
<key>N8N_URL</key>
<string>http://localhost:5678</string>
<key>N8N_API_KEY</key>
<string></string>
<key>GITEA_URL</key>
<string>http://10.0.0.199:3000</string>
<key>GITEA_TOKEN</key>
<string></string>
<key>ANTHROPIC_API_KEY</key>
<string>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</string>
<key>OPENAI_API_KEY</key>
<string></string>
</dict>
<key>RunAtLoad</key>


@@ -0,0 +1,30 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.homeai.reminder-daemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/aodhan/homeai-voice-env/bin/python3</string>
        <string>/Users/aodhan/gitea/homeai/homeai-agent/reminder-daemon.py</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/homeai-reminder-daemon.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/homeai-reminder-daemon-error.log</string>
    <key>ThrottleInterval</key>
    <integer>10</integer>
</dict>
</plist>


@@ -48,6 +48,7 @@ TIMEOUT_WARM = 120 # Model already loaded in VRAM
TIMEOUT_COLD = 180 # Model needs loading first (~10-20s load + inference)
OLLAMA_PS_URL = "http://localhost:11434/api/ps"
VTUBE_BRIDGE_URL = "http://localhost:8002"
DEFAULT_MODEL = "anthropic/claude-sonnet-4-20250514"
def _vtube_fire_and_forget(path: str, data: dict):
@@ -83,6 +84,31 @@ CHARACTERS_DIR = Path("/Users/aodhan/homeai-data/characters")
SATELLITE_MAP_PATH = Path("/Users/aodhan/homeai-data/satellite-map.json")
MEMORIES_DIR = Path("/Users/aodhan/homeai-data/memories")
ACTIVE_TTS_VOICE_PATH = Path("/Users/aodhan/homeai-data/active-tts-voice.json")
ACTIVE_MODE_PATH = Path("/Users/aodhan/homeai-data/active-mode.json")
# Cloud provider model mappings for mode routing
CLOUD_MODELS = {
"anthropic": "anthropic/claude-sonnet-4-20250514",
"openai": "openai/gpt-4o",
}
def load_mode() -> dict:
"""Load the public/private mode configuration."""
try:
with open(ACTIVE_MODE_PATH) as f:
return json.load(f)
except Exception:
return {"mode": "private", "cloud_provider": "anthropic", "overrides": {}}
def resolve_model(mode_data: dict) -> str | None:
"""Resolve which model to use based on mode. Returns None for default (private/local)."""
mode = mode_data.get("mode", "private")
if mode == "private":
return None  # No override — OpenClaw falls back to its configured default model
provider = mode_data.get("cloud_provider", "anthropic")
return CLOUD_MODELS.get(provider, CLOUD_MODELS["anthropic"])
def clean_text_for_tts(text: str) -> str:
@@ -505,10 +531,13 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
self._send_json_response(200, {"status": "ok", "message": "Wake word received"})
@staticmethod
def _call_openclaw(message: str, agent: str, timeout: int) -> str:
def _call_openclaw(message: str, agent: str, timeout: int, model: str | None = None) -> str:
"""Call OpenClaw CLI and return stdout."""
cmd = ["/opt/homebrew/bin/openclaw", "agent", "--message", message, "--agent", agent]
if model:
cmd.extend(["--model", model])
result = subprocess.run(
["/opt/homebrew/bin/openclaw", "agent", "--message", message, "--agent", agent],
cmd,
capture_output=True,
text=True,
timeout=timeout,
@@ -587,6 +616,15 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
if system_prompt:
message = f"System Context: {system_prompt}\n\nUser Request: {message}"
# Load mode and resolve model routing
mode_data = load_mode()
model_override = resolve_model(mode_data)
active_model = model_override or DEFAULT_MODEL
if model_override:
print(f"[OpenClaw Bridge] Mode: PUBLIC → {model_override}")
else:
print(f"[OpenClaw Bridge] Mode: PRIVATE ({active_model})")
# Check if model is warm to set appropriate timeout
warm = is_model_warm()
timeout = TIMEOUT_WARM if warm else TIMEOUT_COLD
@@ -597,7 +635,7 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
# Call OpenClaw CLI (use full path for launchd compatibility)
try:
response_text = self._call_openclaw(message, agent, timeout)
response_text = self._call_openclaw(message, agent, timeout, model=model_override)
# Re-prompt if the model promised to act but didn't call a tool.
# Detect "I'll do X" / "Let me X" responses that lack any result.
@@ -607,11 +645,11 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
"You just said you would do something but didn't actually call the exec tool. "
"Do NOT explain what you will do — call the tool NOW using exec and return the result."
)
response_text = self._call_openclaw(followup, agent, timeout)
response_text = self._call_openclaw(followup, agent, timeout, model=model_override)
# Signal avatar: idle (TTS handler will override to 'speaking' if voice is used)
_vtube_fire_and_forget("/expression", {"event": "idle"})
self._send_json_response(200, {"response": response_text})
self._send_json_response(200, {"response": response_text, "model": active_model})
except subprocess.TimeoutExpired:
self._send_json_response(504, {"error": f"OpenClaw command timed out after {timeout}s (model was {'warm' if warm else 'cold'})"})
except subprocess.CalledProcessError as e:

homeai-agent/reminder-daemon.py Executable file

@@ -0,0 +1,90 @@
#!/usr/bin/env python3
"""
HomeAI Reminder Daemon — checks ~/homeai-data/reminders.json every 60s
and fires TTS via POST http://localhost:8081/api/tts when reminders are due.
"""
import json
import os
import time
import urllib.request
from datetime import datetime
REMINDERS_FILE = os.path.expanduser("~/homeai-data/reminders.json")
TTS_URL = "http://localhost:8081/api/tts"
CHECK_INTERVAL = 60 # seconds
def load_reminders():
try:
with open(REMINDERS_FILE) as f:
return json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
return {"reminders": []}
def save_reminders(data):
with open(REMINDERS_FILE, "w") as f:
json.dump(data, f, indent=2)
def fire_tts(message):
"""Speak reminder via the OpenClaw bridge TTS endpoint."""
try:
payload = json.dumps({"text": f"Reminder: {message}"}).encode()
req = urllib.request.Request(
TTS_URL,
data=payload,
headers={"Content-Type": "application/json"},
method="POST"
)
urllib.request.urlopen(req, timeout=30)
print(f"[{datetime.now().isoformat()}] TTS fired: {message}")
return True
except Exception as e:
print(f"[{datetime.now().isoformat()}] TTS error: {e}")
return False
def check_reminders():
data = load_reminders()
now = datetime.now()
changed = False
for r in data.get("reminders", []):
if r.get("fired"):
continue
try:
due = datetime.fromisoformat(r["due_at"])
except (KeyError, ValueError):
continue
if now >= due:
print(f"[{now.isoformat()}] Reminder due: {r.get('message', '?')}")
fire_tts(r["message"])
r["fired"] = True
changed = True
if changed:
    # Drop fired reminders older than 24h; ids are millisecond-epoch
    # strings assigned at creation, so they double as timestamps
    cutoff_ms = (now.timestamp() - 86400) * 1000
    data["reminders"] = [
        r for r in data["reminders"]
        if not r.get("fired") or int(r.get("id", "0")) > cutoff_ms
    ]
    save_reminders(data)
def main():
print(f"[{datetime.now().isoformat()}] Reminder daemon started (check every {CHECK_INTERVAL}s)")
while True:
try:
check_reminders()
except Exception as e:
print(f"[{datetime.now().isoformat()}] Error: {e}")
time.sleep(CHECK_INTERVAL)
if __name__ == "__main__":
main()
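The daemon only consumes `reminders.json`; whatever writes it must match the schema the daemon reads (`id`, `message`, `due_at`, `fired`). A sketch of a compatible entry builder; `make_reminder` is a hypothetical helper, and the millisecond-epoch `id` matches the assumption the cleanup filter makes:

```python
import time
from datetime import datetime, timedelta

def make_reminder(message: str, due_at: datetime) -> dict:
    return {
        "id": str(int(time.time() * 1000)),  # ms-epoch string, as the cleanup filter expects
        "message": message,
        "due_at": due_at.isoformat(),        # parsed by the daemon with datetime.fromisoformat()
        "fired": False,
    }

r = make_reminder("stand up", datetime.now() + timedelta(minutes=30))
```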

homeai-agent/setup.sh Normal file → Executable file

@@ -1,17 +1,20 @@
#!/usr/bin/env bash
# homeai-agent/setup.sh — P4: OpenClaw agent + skills + mem0
# homeai-agent/setup.sh — OpenClaw agent, HTTP bridge, skills, reminder daemon
#
# Components:
# - OpenClaw — AI agent runtime (port 8080)
# - skills/ — home_assistant, memory, weather, timer, music stubs
# - mem0 — long-term memory (Chroma backend)
# - n8n workflows — morning briefing, notification router, memory backup
# - OpenClaw gateway — AI agent runtime (port 8080)
# - OpenClaw HTTP bridge — HA ↔ OpenClaw translator (port 8081)
# - 13 skills — home-assistant, image-generation, voice-assistant,
# vtube-studio, memory, service-monitor, character,
# routine, music, workflow, gitea, calendar, mode
# - Reminder daemon — fires TTS when reminders are due
#
# Prerequisites:
# - P1 (homeai-infra) — Home Assistant running, HA_TOKEN set
# - P2 (homeai-llm) — Ollama running with llama3.3:70b + nomic-embed-text
# - P3 (homeai-voice) — Wyoming TTS running (for voice output)
# - P5 (homeai-character) — aria.json character config exists
# - Ollama running (port 11434)
# - Home Assistant reachable (HA_TOKEN set in .env)
# - Wyoming TTS running (port 10301)
# - homeai-voice-env venv exists (for bridge + reminder daemon)
# - At least one character JSON in ~/homeai-data/characters/
set -euo pipefail
@@ -19,47 +22,196 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
source "${REPO_DIR}/scripts/common.sh"
log_section "P4: Agent (OpenClaw + skills + mem0)"
log_section "P4: Agent (OpenClaw + HTTP Bridge + Skills)"
detect_platform
# ─── Prerequisite check ────────────────────────────────────────────────────────
# ─── Load environment ────────────────────────────────────────────────────────
ENV_FILE="${REPO_DIR}/.env"
if [[ -f "$ENV_FILE" ]]; then
log_info "Loading .env..."
load_env "$ENV_FILE"
else
log_warn "No .env found at ${ENV_FILE} — API keys may be missing"
fi
# ─── Prerequisite checks ────────────────────────────────────────────────────
log_info "Checking prerequisites..."
for service in "http://localhost:11434:Ollama(P2)" "http://localhost:8123:HomeAssistant(P1)"; do
url="${service%%:*}"; name="${service##*:}"
if ! curl -sf "$url" -o /dev/null 2>/dev/null; then
require_command node "brew install node"
require_command openclaw "npm install -g openclaw"
VOICE_ENV="${HOME}/homeai-voice-env"
if [[ ! -d "$VOICE_ENV" ]]; then
die "homeai-voice-env not found at $VOICE_ENV — run homeai-voice/setup.sh first"
fi
# Check key services (non-fatal)
for check in "http://localhost:11434:Ollama" "http://localhost:10301:Wyoming-TTS"; do
url="${check%:*}"; name="${check##*:}"  # strip only the trailing :Name label, keep the port
if curl -sf "$url" -o /dev/null 2>/dev/null; then
log_success "$name reachable"
else
log_warn "$name not reachable at $url"
fi
done
load_env_services
if [[ -z "${HA_TOKEN:-}" ]]; then
log_warn "HA_TOKEN not set in ~/.env.services — needed for home_assistant skill"
# Check required env vars
MISSING_KEYS=()
[[ -z "${HA_TOKEN:-}" ]] && MISSING_KEYS+=("HA_TOKEN")
[[ -z "${ANTHROPIC_API_KEY:-}" ]] && MISSING_KEYS+=("ANTHROPIC_API_KEY")
if [[ ${#MISSING_KEYS[@]} -gt 0 ]]; then
log_warn "Missing env vars: ${MISSING_KEYS[*]} — set these in ${ENV_FILE}"
fi
# ─── TODO: Implementation ──────────────────────────────────────────────────────
# ─── Ensure data directories ─────────────────────────────────────────────────
DATA_DIR="${HOME}/homeai-data"
for dir in characters memories memories/personal conversations routines; do
mkdir -p "${DATA_DIR}/${dir}"
done
log_success "Data directories verified"
# ─── OpenClaw config ─────────────────────────────────────────────────────────
OPENCLAW_DIR="${HOME}/.openclaw"
OPENCLAW_CONFIG="${OPENCLAW_DIR}/openclaw.json"
if [[ ! -f "$OPENCLAW_CONFIG" ]]; then
die "OpenClaw config not found at $OPENCLAW_CONFIG — run: openclaw doctor --fix"
fi
log_success "OpenClaw config exists at $OPENCLAW_CONFIG"
# Verify Anthropic provider is configured
if ! grep -q '"anthropic"' "$OPENCLAW_CONFIG" 2>/dev/null; then
log_warn "Anthropic provider not found in openclaw.json — add it for Claude support"
fi
# ─── Install skills ──────────────────────────────────────────────────────────
SKILLS_SRC="${SCRIPT_DIR}/skills"
SKILLS_DEST="${OPENCLAW_DIR}/skills"
if [[ -d "$SKILLS_SRC" ]]; then
log_info "Syncing skills..."
mkdir -p "$SKILLS_DEST"
for skill_dir in "$SKILLS_SRC"/*/; do
skill_name="$(basename "$skill_dir")"
dest="${SKILLS_DEST}/${skill_name}"
if [[ -L "$dest" ]]; then
log_info " ${skill_name} (symlinked)"
elif [[ -d "$dest" ]]; then
# Replace copy with symlink
rm -rf "$dest"
ln -s "$skill_dir" "$dest"
log_step "${skill_name} → symlinked"
else
ln -s "$skill_dir" "$dest"
log_step "${skill_name} → installed"
fi
done
log_success "Skills synced ($(ls -d "$SKILLS_DEST"/*/ 2>/dev/null | wc -l | tr -d ' ') total)"
else
log_warn "No skills directory at $SKILLS_SRC"
fi
# ─── Install launchd services (macOS) ────────────────────────────────────────
if [[ "$OS_TYPE" == "macos" ]]; then
log_info "Installing launchd agents..."
LAUNCHD_DIR="${SCRIPT_DIR}/launchd"
AGENTS_DIR="${HOME}/Library/LaunchAgents"
mkdir -p "$AGENTS_DIR"
# Inject API keys into plists that need them
_inject_plist_key() {
    local plist="$1" key="$2" value="$3"
    if [[ -n "$value" ]] && grep -q "<key>${key}</key>" "$plist" 2>/dev/null; then
        # Pass path/key/value via argv so quotes, backslashes and & survive intact
        python3 - "$plist" "$key" "$value" <<'PY'
import re, sys
path, key, value = sys.argv[1:4]
with open(path) as f:
    content = f.read()
pattern = r'(<key>%s</key>\s*<string>)[^<]*(</string>)' % re.escape(key)
content = re.sub(pattern, lambda m: m.group(1) + value + m.group(2), content)
with open(path, 'w') as f:
    f.write(content)
PY
    fi
}
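The regex replacement works for simple string values but is fragile against anything a plist parser would handle natively. A regex-free alternative using the stdlib `plistlib` (a sketch: it assumes, as the plists in this repo do, that the target key lives under `EnvironmentVariables`):

```python
import plistlib

def inject_plist_key(path: str, key: str, value: str) -> bool:
    """Set EnvironmentVariables[key] = value, but only if the key already exists."""
    with open(path, "rb") as f:
        plist = plistlib.load(f)
    env = plist.get("EnvironmentVariables")
    if not value or not isinstance(env, dict) or key not in env:
        return False
    env[key] = value
    with open(path, "wb") as f:
        plistlib.dump(plist, f)
    return True
```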
# Update API keys in plist source files before linking
OPENCLAW_PLIST="${LAUNCHD_DIR}/com.homeai.openclaw.plist"
BRIDGE_PLIST="${LAUNCHD_DIR}/com.homeai.openclaw-bridge.plist"
if [[ -f "$OPENCLAW_PLIST" ]]; then
_inject_plist_key "$OPENCLAW_PLIST" "ANTHROPIC_API_KEY" "${ANTHROPIC_API_KEY:-}"
_inject_plist_key "$OPENCLAW_PLIST" "OPENAI_API_KEY" "${OPENAI_API_KEY:-}"
_inject_plist_key "$OPENCLAW_PLIST" "HA_TOKEN" "${HA_TOKEN:-}"
_inject_plist_key "$OPENCLAW_PLIST" "HASS_TOKEN" "${HA_TOKEN:-}"
_inject_plist_key "$OPENCLAW_PLIST" "GITEA_TOKEN" "${GITEA_TOKEN:-}"
_inject_plist_key "$OPENCLAW_PLIST" "N8N_API_KEY" "${N8N_API_KEY:-}"
fi
if [[ -f "$BRIDGE_PLIST" ]]; then
_inject_plist_key "$BRIDGE_PLIST" "ANTHROPIC_API_KEY" "${ANTHROPIC_API_KEY:-}"
_inject_plist_key "$BRIDGE_PLIST" "ELEVENLABS_API_KEY" "${ELEVENLABS_API_KEY:-}"
fi
# Symlink and load each plist
for plist in "$LAUNCHD_DIR"/*.plist; do
[[ ! -f "$plist" ]] && continue
plist_name="$(basename "$plist")"
plist_label="${plist_name%.plist}"
dest="${AGENTS_DIR}/${plist_name}"
# Unload if already running
launchctl bootout "gui/$(id -u)/${plist_label}" 2>/dev/null || true
# Symlink source → LaunchAgents
ln -sf "$(cd "$(dirname "$plist")" && pwd)/${plist_name}" "$dest"
# Load
launchctl bootstrap "gui/$(id -u)" "$dest" 2>/dev/null && \
log_success " ${plist_label} → loaded" || \
log_warn " ${plist_label} → failed to load (check: launchctl print gui/$(id -u)/${plist_label})"
done
fi
# ─── Smoke test ──────────────────────────────────────────────────────────────
log_info "Running smoke tests..."
sleep 2 # Give services a moment to start
# Check gateway
if curl -sf "http://localhost:8080" -o /dev/null 2>/dev/null; then
log_success "OpenClaw gateway responding on :8080"
else
log_warn "OpenClaw gateway not responding on :8080 — check: tail /tmp/homeai-openclaw.log"
fi
# Check bridge
if curl -sf "http://localhost:8081/status" -o /dev/null 2>/dev/null; then
log_success "HTTP bridge responding on :8081"
else
log_warn "HTTP bridge not responding on :8081 — check: tail /tmp/homeai-openclaw-bridge.log"
fi
# ─── Summary ─────────────────────────────────────────────────────────────────
print_summary "Agent Setup Complete" \
"OpenClaw gateway" "http://localhost:8080" \
"HTTP bridge" "http://localhost:8081" \
"OpenClaw config" "$OPENCLAW_CONFIG" \
"Skills directory" "$SKILLS_DEST" \
"Character data" "${DATA_DIR}/characters/" \
"Memory data" "${DATA_DIR}/memories/" \
"Reminder data" "${DATA_DIR}/reminders.json" \
"Gateway log" "/tmp/homeai-openclaw.log" \
"Bridge log" "/tmp/homeai-openclaw-bridge.log"
cat <<'EOF'
┌─────────────────────────────────────────────────────────────────┐
│ P4: homeai-agent — NOT YET IMPLEMENTED │
│ │
│ OPEN QUESTION: Which OpenClaw version/fork to use? │
│ Decide before implementing. See homeai-agent/PLAN.md. │
│ │
│ Implementation steps: │
│ 1. Install OpenClaw (pip install or git clone) │
│ 2. Create ~/.openclaw/config.yaml from config/config.yaml.example │
│ 3. Create skills: home_assistant, memory, weather, timer, music │
│ 4. Install mem0 + Chroma backend │
│ 5. Create systemd/launchd service for OpenClaw (port 8080) │
│ 6. Import n8n workflows from workflows/ │
│ 7. Smoke test: POST /chat "turn on living room lights" │
│ │
│ Interface contracts: │
│ OPENCLAW_URL=http://localhost:8080 │
└─────────────────────────────────────────────────────────────────┘
To reload a service after editing its plist:
launchctl bootout gui/$(id -u)/com.homeai.<service>
launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.homeai.<service>.plist
To test the agent:
curl -X POST http://localhost:8081/api/agent/message \
-H 'Content-Type: application/json' \
-d '{"message":"say hello","agent":"main"}'
EOF
log_info "P4 is not yet implemented. See homeai-agent/PLAN.md for details."
exit 0