## Voice Pipeline (P3)
- Replace openWakeWord daemon with Wyoming Satellite approach
- Add Wyoming Satellite service on port 10700 for HA voice pipeline
- Update setup.sh with cross-platform sed compatibility (macOS/Linux)
- Add version field to Kokoro TTS voice info
- Update launchd service loader to use Wyoming Satellite

## Home Assistant Integration (P4)
- Add custom conversation agent component (openclaw_conversation)
- Fix: Use IntentResponse instead of plain strings (HA API requirement)
- Support both HTTP API and CLI fallback modes
- Config flow for easy HA UI setup
- Add OpenClaw bridge scripts (Python + Bash)
- Add ha-ctl utility for HA entity control
- Fix: Use context manager for token file reading
- Add HA configuration examples and documentation

## Infrastructure
- Add mem0 backup automation (launchd + script)
- Add n8n workflow templates (morning briefing, notification router)
- Add VS Code workspace configuration
- Reorganize model files into categorized folders:
  - lmstudio-community/
  - mlx-community/
  - bartowski/
  - mradermacher/

## Documentation
- Update PROJECT_PLAN.md with Wyoming Satellite architecture
- Update TODO.md with completed Wyoming integration tasks
- Add OPENCLAW_INTEGRATION.md for HA setup guide

## Testing
- Verified Wyoming services running (STT:10300, TTS:10301, Satellite:10700)
- Verified OpenClaw CLI accessibility
- Confirmed cross-platform compatibility fixes
# OpenClaw Integration for Home Assistant Voice Pipeline

This document describes how to integrate OpenClaw with Home Assistant's voice pipeline using the Wyoming protocol.

## Architecture Overview
```
┌────────────────────────────────────────────────────────────────┐
│                      Voice Pipeline Flow                       │
├────────────────────────────────────────────────────────────────┤
│                                                                │
│  [Wyoming Satellite]    [Home Assistant]         [OpenClaw]    │
│          │                      │                     │        │
│          │ 1. Wake word         │                     │        │
│          │ 2. Stream audio ────>│                     │        │
│          │                      │ 3. Send to STT      │        │
│          │                      │ ───────────────>    │        │
│          │                      │                     │        │
│          │                      │ 4. Transcript       │        │
│          │                      │ <───────────────    │        │
│          │                      │                     │        │
│          │                      │ 5. Conversation     │        │
│          │                      │ ───────────────>    │        │
│          │                      │    (via bridge)     │        │
│          │                      │                     │        │
│          │                      │ 6. Response         │        │
│          │                      │ <───────────────    │        │
│          │                      │                     │        │
│          │ 7. TTS audio <───────│                     │        │
│          │                      │                     │        │
│      [Speaker]                                                 │
│                                                                │
└────────────────────────────────────────────────────────────────┘
```
## Components

### 1. Wyoming Satellite (`com.homeai.wyoming-satellite.plist`)
- **Port:** 10700 (exposes the satellite for HA to connect to)
- **Function:** Handles audio I/O, wake word detection, and streaming to HA
- **Audio:** Uses SoX (`rec`/`play`) for macOS audio capture/playback
- **Note:** Replaces the old `wakeword_daemon.py`; wake word detection is now handled by HA's voice pipeline

### 2. Wyoming STT (`com.homeai.wyoming-stt.plist`)
- **Port:** 10300 (Whisper large-v3)
- **Function:** Speech-to-text transcription

### 3. Wyoming TTS (`com.homeai.wyoming-tts.plist`)
- **Port:** 10301 (Kokoro ONNX)
- **Function:** Text-to-speech synthesis

### 4. OpenClaw Bridge (`openclaw_bridge.py`)
- **Function:** Connects the HA conversation agent to the OpenClaw CLI
- **Usage:** Called via HA's `shell_command` or `command_line` integration
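The bridge's core job can be sketched in a few lines. This is a hedged sketch, not the actual `openclaw_bridge.py`: it assumes the `openclaw agent --message ... --agent main` invocation shown in the Troubleshooting section, and the `ask_openclaw` function name is illustrative.

```python
import subprocess


def ask_openclaw(message: str, agent: str = "main", timeout: int = 60) -> str:
    """Forward a message to the OpenClaw CLI and return its reply.

    Sketch only: assumes the `openclaw agent --message ... --agent main`
    invocation used elsewhere in this document.
    """
    result = subprocess.run(
        ["openclaw", "agent", "--message", message, "--agent", agent],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    result.check_returncode()  # raise if the CLI exited non-zero
    return result.stdout.strip()
```

HA's `shell_command` integration captures whatever this prints to stdout, which is why the real bridge supports a `--raw` flag for unformatted output.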
## Deprecated: Wake Word Daemon

The old `com.homeai.wakeword.plist` service has been disabled. It tried to notify `http://localhost:8080/wake`, an endpoint that does not exist in OpenClaw. Wake word detection is now handled by the Wyoming satellite through Home Assistant's voice pipeline.
## Home Assistant Configuration

### Step 1: Add Wyoming Protocol Integration

- Go to Settings → Integrations → Add Integration
- Search for **Wyoming Protocol**
- Add the following services:

| Service | Host | Port |
|---|---|---|
| Speech-to-Text | 10.0.0.199 | 10300 |
| Text-to-Speech | 10.0.0.199 | 10301 |
| Satellite | 10.0.0.199 | 10700 |
### Step 2: Configure Voice Assistant Pipeline

- Go to Settings → Voice Assistants
- Create a new pipeline:
  - **Name:** "HomeAI with OpenClaw"
  - **Speech-to-Text:** Wyoming (localhost:10300)
  - **Conversation Agent:** Home Assistant (or the custom agent below)
  - **Text-to-Speech:** Wyoming (localhost:10301)
### Step 3: Add OpenClaw Bridge to HA

Add to your `configuration.yaml`:

```yaml
shell_command:
  openclaw_chat: 'python3 /Users/aodhan/gitea/homeai/homeai-agent/skills/home-assistant/openclaw_bridge.py "{{ message }}" --raw'
```
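Before wiring the command into an automation, you can sanity-check it from Developer Tools → Services in the HA UI. The payload below is illustrative:

```yaml
service: shell_command.openclaw_chat
data:
  message: "What lights are on?"
```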
### Step 4: Create Automation for OpenClaw

Create an automation that routes voice commands to OpenClaw. Sentence triggers capture free text via `{slot}` wildcards, and `shell_command` responses are dictionaries with `stdout`, `stderr`, and `returncode` keys, so the template reads `openclaw_response.stdout`. The TTS and media player entity IDs below are examples; substitute your own:

```yaml
automation:
  - alias: "Voice Command via OpenClaw"
    trigger:
      - platform: conversation
        command:
          - "ask jarvis {query}"
    action:
      - service: shell_command.openclaw_chat
        data:
          message: "{{ trigger.slots.query }}"
        response_variable: openclaw_response
      - service: tts.speak
        target:
          entity_id: tts.kokoro  # replace with your Wyoming TTS entity
        data:
          media_player_entity_id: media_player.living_room_speaker
          message: "{{ openclaw_response.stdout }}"
```
## Manual Testing

### Test STT

```bash
# Check if STT is running
nc -z localhost 10300 && echo "STT OK"
```

### Test TTS

```bash
# Check if TTS is running
nc -z localhost 10301 && echo "TTS OK"
```

### Test Satellite

```bash
# Check if satellite is running
nc -z localhost 10700 && echo "Satellite OK"
```
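The three checks above can be collapsed into one helper. A minimal sketch (the `check_wyoming` name is mine; the ports are from this document):

```shell
#!/bin/sh
# Probe a Wyoming service port on localhost and report its status.
check_wyoming() {
  name="$1"
  port="$2"
  if nc -z localhost "$port" 2>/dev/null; then
    echo "$name OK ($port)"
  else
    echo "$name DOWN ($port)"
  fi
}

check_wyoming STT 10300
check_wyoming TTS 10301
check_wyoming Satellite 10700
```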
### Test OpenClaw Bridge

```bash
# Test the bridge directly
python3 homeai-agent/skills/home-assistant/openclaw_bridge.py "Turn on the living room lights"
```
### Test Full Pipeline

1. Load all services: `./homeai-voice/scripts/load-all-launchd.sh`
2. Open the HA Assist panel (Settings → Voice Assistants → Assist)
3. Type or speak: "Turn on the study shelves light"
4. You should hear the TTS response
## Troubleshooting

### Satellite not connecting to HA

- Check that the satellite is running: `launchctl list com.homeai.wyoming-satellite`
- Check logs: `tail -f /tmp/homeai-wyoming-satellite.log`
- Verify HA can reach the satellite: test from the HA container/host

### No audio output

- Check the SoX installation: `which play`
- Test audio output directly: `echo "test" | say` (macOS) or `play /System/Library/Sounds/Glass.aiff` (SoX)
- Check audio device permissions

### OpenClaw not responding

- Verify OpenClaw is running: `pgrep -f openclaw`
- Test the CLI directly: `openclaw agent --message "Hello" --agent main`
- Check the OpenClaw config: `cat ~/.openclaw/openclaw.json`
### Wyoming version conflicts

- The satellite requires wyoming 1.4.1, but faster-whisper requires 1.8+
- We've patched this locally; both should work with wyoming 1.8.0
- If issues occur, reinstall: `pip install 'wyoming>=1.8' wyoming-satellite`
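To confirm which releases a given environment actually resolved to, a small check (assumes Python 3.8+ for `importlib.metadata`; run it inside the voice venv):

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package: str) -> str:
    """Return the installed version of a package, or 'not installed'."""
    try:
        return version(package)
    except PackageNotFoundError:
        return "not installed"


# After the reinstall, wyoming should report >= 1.8.
for pkg in ("wyoming", "wyoming-satellite", "faster-whisper"):
    print(f"{pkg}: {installed_version(pkg)}")
```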
## File Locations

| File | Purpose |
|---|---|
| `~/.openclaw/openclaw.json` | OpenClaw configuration |
| `~/homeai-voice-env/` | Python virtual environment |
| `~/Library/LaunchAgents/com.homeai.*.plist` | Launchd services |
| `/tmp/homeai-*.log` | Service logs |
## Next Steps

- Test the voice pipeline end-to-end
- Fine-tune wake word sensitivity
- Add custom intents for OpenClaw
- Implement conversation history/memory
- Add ESP32 satellite support (P6)