
OpenClaw Skills — User Guide

All skills are invoked by voice or chat. Say a natural command and the AI agent will route it to the right tool automatically.


Quick Reference

| Skill | CLI | What it does |
|---|---|---|
| Home Assistant | ha-ctl | Control lights, switches, sensors, climate |
| Image Generation | gaze-ctl | Generate images via ComfyUI/GAZE |
| Memory | memory-ctl | Store and recall things about you |
| Service Monitor | monitor-ctl | Check if services are running |
| Character Switcher | character-ctl | Switch AI personalities |
| Routines & Scenes | routine-ctl | Create and trigger multi-step automations |
| Music | music-ctl | Play, pause, skip, volume control |
| n8n Workflows | workflow-ctl | Trigger automation workflows |
| Gitea | gitea-ctl | Query repos, commits, issues |
| Calendar & Reminders | calendar-ctl | View calendar, set voice reminders |
| Public/Private Mode | mode-ctl | Route to local or cloud LLMs |

Phase A — Core Skills

Memory (memory-ctl)

The agent can remember things about you and recall them later. Memories persist across conversations and are visible in the dashboard.

Voice examples:

  • "Remember that my favorite color is blue"
  • "I take my coffee black"
  • "What do you know about me?"
  • "Forget that I said I like jazz"

CLI usage:

memory-ctl add personal "User's favorite color is blue" --category preference
memory-ctl add general "Living room speaker is a Sonos" --category fact
memory-ctl search "coffee"
memory-ctl list --type personal
memory-ctl delete <memory_id>

Categories: preference, fact, routine

How it works: Memories are stored as JSON in ~/homeai-data/memories/. Personal memories are per-character (each character has their own relationship with you). General memories are shared across all characters.
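As a rough sketch of that storage layout, a memory store along these lines could persist and search the JSON files described above. The file layout matches the guide; the record fields and function names are illustrative assumptions, not the actual memory-ctl internals:

```python
import json
import pathlib

MEM_DIR = pathlib.Path.home() / "homeai-data" / "memories"

def add_memory(mem_type, text, category="fact", character="general"):
    """Append a memory record to the matching JSON file (illustrative schema)."""
    if mem_type == "personal":
        path = MEM_DIR / "personal" / f"{character}.json"   # per-character file
    else:
        path = MEM_DIR / "general.json"                     # shared across characters
    path.parent.mkdir(parents=True, exist_ok=True)
    records = json.loads(path.read_text()) if path.exists() else []
    records.append({"text": text, "category": category})
    path.write_text(json.dumps(records, indent=2))

def search_memories(query):
    """Case-insensitive substring match across all memory files."""
    hits = []
    for path in MEM_DIR.rglob("*.json"):
        for rec in json.loads(path.read_text()):
            if query.lower() in rec["text"].lower():
                hits.append(rec)
    return hits
```

Because general memories live in a single shared file while personal memories are split per character, a search naturally spans both scopes.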


Service Monitor (monitor-ctl)

Ask the assistant if everything is healthy, check specific services, or see what models are loaded.

Voice examples:

  • "Is everything running?"
  • "What models are loaded?"
  • "Is Home Assistant up?"
  • "Show me the Docker containers"

CLI usage:

monitor-ctl status              # Full health check (all services)
monitor-ctl check ollama        # Single service
monitor-ctl ollama              # Models loaded, VRAM usage
monitor-ctl docker              # Docker container status

Services checked: Ollama, OpenClaw Bridge, OpenClaw Gateway, Wyoming STT, Wyoming TTS, Dashboard, n8n, Uptime Kuma, Home Assistant, Gitea
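A health check of this kind usually reduces to probing each service's HTTP endpoint. The sketch below shows the idea with Python's stdlib; the endpoint URLs are assumptions for illustration (the real hosts and ports live in monitor-ctl's own configuration):

```python
import urllib.request

# Hypothetical endpoint map -- the real list comes from monitor-ctl's config.
SERVICES = {
    "ollama": "http://localhost:11434/api/tags",
    "home-assistant": "http://homeassistant.local:8123/",
}

def check(name, url, timeout=3):
    """Return (name, ok) after attempting a plain HTTP GET with a short timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return name, resp.status < 500
    except OSError:
        # Connection refused, DNS failure, timeout -> service considered down.
        return name, False

def status():
    """Full health check across all configured services."""
    return dict(check(n, u) for n, u in SERVICES.items())
```

A short timeout matters here: one unreachable host should not stall the whole "Is everything running?" answer.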


Character Switcher (character-ctl)

Switch between AI personalities on the fly. Each character has their own voice, personality, and memories.

Voice examples:

  • "Talk to Aria"
  • "Switch to Sucy"
  • "Who can I talk to?"
  • "Who am I talking to?"
  • "Tell me about Aria"

CLI usage:

character-ctl list              # See all characters
character-ctl active            # Who is the current default
character-ctl switch "Aria"     # Switch (fuzzy name matching)
character-ctl info "Sucy"       # Character profile
character-ctl map homeai-kitchen.local aria_123  # Map a satellite to a character

How it works: Switching updates the default character in satellite-map.json and writes the TTS voice config. The new character takes effect on the next request.


Phase B — Home Assistant Extensions

Routines & Scenes (routine-ctl)

Create and trigger Home Assistant scenes and multi-step routines by voice.

Voice examples:

  • "Activate movie mode"
  • "Run the bedtime routine"
  • "What scenes do I have?"
  • "Create a morning routine"

CLI usage:

routine-ctl list-scenes                         # HA scenes
routine-ctl list-scripts                        # HA scripts
routine-ctl trigger "movie_mode"                # Activate scene/script
routine-ctl create-scene "cozy" --entities '[{"entity_id":"light.lamp","state":"on","brightness":80}]'
routine-ctl create-routine "bedtime" --steps '[
  {"type":"ha","cmd":"off \"All Lights\""},
  {"type":"delay","seconds":2},
  {"type":"tts","text":"Good night!"}
]'
routine-ctl run "bedtime"                       # Execute routine
routine-ctl list-routines                       # List local routines
routine-ctl delete-routine "bedtime"            # Remove routine

Step types:

| Type | Description | Fields |
|---|---|---|
| scene | Trigger an HA scene | target (scene name) |
| ha | Run an ha-ctl command | cmd (e.g. off "Lamp") |
| delay | Wait between steps | seconds |
| tts | Speak text aloud | text |

Storage: Routines are saved as JSON in ~/homeai-data/routines/.
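A routine runner for the step schema above amounts to a dispatch loop over the `type` field. This is a sketch under assumptions: the TTS bridge URL and the exact subcommands invoked are hypothetical, and the real runner presumably handles errors and logging:

```python
import json
import shlex
import subprocess
import time
import urllib.request

def run_routine(steps):
    """Dispatch each step by its 'type' field (step schema from the table above)."""
    for step in steps:
        kind = step["type"]
        if kind == "delay":
            time.sleep(step["seconds"])
        elif kind == "ha":
            # shlex preserves quoted arguments such as: off "All Lights"
            subprocess.run(["ha-ctl", *shlex.split(step["cmd"])], check=True)
        elif kind == "scene":
            subprocess.run(["routine-ctl", "trigger", step["target"]], check=True)
        elif kind == "tts":
            # Bridge endpoint is a hypothetical placeholder.
            req = urllib.request.Request(
                "http://localhost:8127/tts",
                data=json.dumps({"text": step["text"]}).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)
        else:
            raise ValueError(f"unknown step type: {kind}")
```

Failing loudly on an unknown type is deliberate: a typo in a hand-edited routine JSON should surface immediately rather than silently skip a step.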


Music Control (music-ctl)

Control music playback through Home Assistant media players — works with Spotify, Music Assistant, Chromecast, and any HA media player.

Voice examples:

  • "Play some jazz"
  • "Pause the music"
  • "Next song"
  • "What's playing?"
  • "Turn the volume to 50"
  • "Play Bohemian Rhapsody on the kitchen speaker"
  • "Shuffle on"

CLI usage:

music-ctl players                               # List available players
music-ctl play "jazz"                           # Search and play
music-ctl play                                  # Resume paused playback
music-ctl pause                                 # Pause
music-ctl next                                  # Skip to next
music-ctl prev                                  # Go to previous
music-ctl volume 50                             # Set volume (0-100)
music-ctl now-playing                           # Current track info
music-ctl shuffle on                            # Enable shuffle
music-ctl play "rock" --player media_player.kitchen  # Target specific player

How it works: All commands go through HA's media_player services. If --player is omitted, commands target the first active (playing or paused) player. Multi-room audio works through Snapcast zones, which appear as separate media_player entities.
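The "first active player" default could be implemented by scanning Home Assistant's state list. The fetch below uses HA's standard `/api/states` REST endpoint; the selection logic itself is separated so it is easy to test. Treat this as a sketch of the behaviour, not music-ctl's actual code:

```python
import json
import urllib.request

def fetch_states(ha_url, token):
    """Fetch all entity states via HA's REST API (/api/states)."""
    req = urllib.request.Request(
        f"{ha_url}/api/states",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def pick_active(states):
    """Return the first media_player entity that is playing or paused, else None."""
    for s in states:
        if s["entity_id"].startswith("media_player.") and s["state"] in ("playing", "paused"):
            return s["entity_id"]
    return None
```

With this split, `pick_active(fetch_states(url, token))` resolves the default target, and the --player flag simply bypasses the scan.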

Prerequisites: At least one media player configured in Home Assistant (Spotify integration, Music Assistant, or Chromecast).


Phase C — External Service Skills

n8n Workflows (workflow-ctl)

List and trigger n8n automation workflows by voice.

Voice examples:

  • "Run the backup workflow"
  • "What workflows do I have?"
  • "Did the last workflow succeed?"

CLI usage:

workflow-ctl list                                # All workflows
workflow-ctl trigger "backup"                    # Trigger by name (fuzzy match)
workflow-ctl trigger "abc123" --data '{"key":"val"}'  # Trigger with data
workflow-ctl status <execution_id>               # Check execution result
workflow-ctl history --limit 5                   # Recent executions

Setup required:

  1. Generate an API key in n8n: Settings → API → Create API Key
  2. Set N8N_API_KEY in the OpenClaw launchd plist
  3. Restart OpenClaw: launchctl kickstart -k gui/501/com.homeai.openclaw

Gitea (gitea-ctl)

Query your self-hosted Gitea repositories, commits, issues, and pull requests.

Voice examples:

  • "What repos do I have?"
  • "Show recent commits for homeai"
  • "Any open issues?"
  • "Create an issue for the TTS bug"

CLI usage:

gitea-ctl repos                                 # List all repos
gitea-ctl commits aodhan/homeai --limit 5       # Recent commits
gitea-ctl issues aodhan/homeai --state open     # Open issues
gitea-ctl prs aodhan/homeai                     # Pull requests
gitea-ctl create-issue aodhan/homeai "Bug title" --body "Description here"

Setup required:

  1. Generate a token in Gitea: Settings → Applications → Generate Token
  2. Set GITEA_TOKEN in the OpenClaw launchd plist
  3. Restart OpenClaw

Calendar & Reminders (calendar-ctl)

Read calendar events from Home Assistant and set voice reminders that speak via TTS when due.

Voice examples:

  • "What's on my calendar today?"
  • "What's coming up this week?"
  • "Remind me in 30 minutes to check the oven"
  • "Remind me at 5pm to call mum"
  • "What reminders do I have?"
  • "Cancel that reminder"

CLI usage:

calendar-ctl today                               # Today's events
calendar-ctl upcoming --days 3                   # Next 3 days
calendar-ctl add "Dentist" --start 2026-03-18T14:00:00 --end 2026-03-18T15:00:00
calendar-ctl remind "Check the oven" --at "in 30 minutes"
calendar-ctl remind "Call mum" --at "at 5pm"
calendar-ctl remind "Team standup" --at "tomorrow 9am"
calendar-ctl reminders                           # List pending
calendar-ctl cancel-reminder <id>                # Cancel

Supported time formats:

| Format | Example |
|---|---|
| Relative | in 30 minutes, in 2 hours |
| Absolute | at 5pm, at 17:00, at 5:30pm |
| Tomorrow | tomorrow 9am, tomorrow at 14:00 |
| Combined | in 1 hour 30 minutes |
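A parser covering the formats in the table can be written with `re` and `datetime` alone, in line with the project's stdlib-only rule. This is a simplified sketch of the behaviour, not the shipped parser; it also assumes that a past clock time ("at 5pm" said after 5pm) rolls over to tomorrow:

```python
import re
from datetime import datetime, timedelta

def parse_when(text, now=None):
    """Parse the reminder time formats from the table above (simplified)."""
    now = now or datetime.now()
    text = text.strip().lower()
    m = re.match(r"in\s+(.*)", text)
    if m:  # relative: "in 30 minutes", "in 1 hour 30 minutes"
        delta = timedelta()
        for qty, unit in re.findall(r"(\d+)\s*(hour|minute|second)", m.group(1)):
            delta += timedelta(**{unit + "s": int(qty)})
        return now + delta
    tomorrow = text.startswith("tomorrow")
    m = re.search(r"(\d{1,2})(?::(\d{2}))?\s*(am|pm)?", text)
    if m:  # absolute: "at 5pm", "at 17:00", "tomorrow 9am"
        hour, minute = int(m.group(1)), int(m.group(2) or 0)
        if m.group(3) == "pm" and hour < 12:
            hour += 12
        when = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
        if tomorrow:
            when += timedelta(days=1)
        elif when <= now:
            when += timedelta(days=1)  # assumed: past time means tomorrow
        return when
    raise ValueError(f"unrecognised time: {text}")
```

Passing `now` explicitly keeps the function deterministic, which makes the relative and rollover cases straightforward to verify.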

How reminders work: A background daemon (com.homeai.reminder-daemon) checks ~/homeai-data/reminders.json every 60 seconds. When a reminder is due, it POSTs to the TTS bridge and speaks the reminder aloud. Fired reminders are automatically cleaned up after 24 hours.

Prerequisites: Calendar entity configured in Home Assistant (Google Calendar, CalDAV, or local calendar integration).


Phase D — Public/Private Mode

Mode Controller (mode-ctl)

Route AI requests to local LLMs (private, no data leaves the machine) or cloud LLMs (public, faster/more capable) with per-category overrides.

Voice examples:

  • "Switch to public mode"
  • "Go private"
  • "What mode am I in?"
  • "Use Claude for coding"
  • "Keep health queries private"

CLI usage:

mode-ctl status                                  # Current mode and overrides
mode-ctl private                                 # All requests → local Ollama
mode-ctl public                                  # All requests → cloud LLM
mode-ctl set-provider anthropic                  # Use Claude (default)
mode-ctl set-provider openai                     # Use GPT-4o
mode-ctl override coding public                  # Always use cloud for coding
mode-ctl override health private                 # Always keep health local
mode-ctl list-overrides                          # Show all category rules

Default category rules:

| Always Private | Always Public | Follows Global Mode |
|---|---|---|
| Personal finance | Web search | General chat |
| Health | Coding help | Smart home |
| Passwords | Complex reasoning | Music |
| Private conversations | Translation | Calendar |

How it works: The HTTP bridge reads ~/homeai-data/active-mode.json before each request. Based on the mode and any category overrides, it passes --model to the OpenClaw CLI to route to either ollama/qwen3.5:35b-a3b (private) or anthropic/claude-sonnet-4-20250514 / openai/gpt-4o (public).
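The routing decision described above reduces to a small lookup: category override first, then global mode, then provider. A sketch, assuming a JSON shape with `mode`, `provider`, and `overrides` keys (the key names are guesses; the model IDs are quoted from this guide):

```python
import json
import pathlib

MODE_FILE = pathlib.Path.home() / "homeai-data" / "active-mode.json"

LOCAL_MODEL = "ollama/qwen3.5:35b-a3b"
CLOUD_MODELS = {
    "anthropic": "anthropic/claude-sonnet-4-20250514",
    "openai": "openai/gpt-4o",
}

def pick_model(category, state=None):
    """Resolve a request category to a model, honouring per-category overrides."""
    if state is None:
        state = json.loads(MODE_FILE.read_text())
    # A category override beats the global mode; default global mode is private.
    mode = state.get("overrides", {}).get(category, state.get("mode", "private"))
    if mode == "private":
        return LOCAL_MODEL
    return CLOUD_MODELS[state.get("provider", "anthropic")]
```

Defaulting to private when the state file is missing or incomplete is the safe failure mode: no request leaves the machine unless public routing was configured explicitly.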

Setup required for public mode:

  1. Set ANTHROPIC_API_KEY and/or OPENAI_API_KEY in the OpenClaw launchd plist
  2. Restart OpenClaw: launchctl kickstart -k gui/501/com.homeai.openclaw

Dashboard: The mode can also be toggled via the dashboard API at GET/POST /api/mode.


Administration

Adding API Keys

All API keys are stored in the OpenClaw launchd plist at:

~/gitea/homeai/homeai-agent/launchd/com.homeai.openclaw.plist

After editing, deploy and restart:

cp ~/gitea/homeai/homeai-agent/launchd/com.homeai.openclaw.plist ~/Library/LaunchAgents/
launchctl kickstart -k gui/501/com.homeai.openclaw
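For reference, launchd plists expose keys to the service through an EnvironmentVariables dict. A fragment along these lines would sit inside the plist's top-level dict (the key values shown are placeholders, not real credentials):

```xml
<key>EnvironmentVariables</key>
<dict>
    <key>ANTHROPIC_API_KEY</key>
    <string>REPLACE_ME</string>
    <key>N8N_API_KEY</key>
    <string>REPLACE_ME</string>
    <key>GITEA_TOKEN</key>
    <string>REPLACE_ME</string>
</dict>
```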

Environment Variables

| Variable | Purpose | Required for |
|---|---|---|
| HASS_TOKEN | Home Assistant API token | ha-ctl, routine-ctl, music-ctl, calendar-ctl |
| HA_URL | Home Assistant URL | Same as above |
| GAZE_API_KEY | Image generation API key | gaze-ctl |
| N8N_API_KEY | n8n automation API key | workflow-ctl |
| GITEA_TOKEN | Gitea API token | gitea-ctl |
| ANTHROPIC_API_KEY | Claude API key | mode-ctl (public mode) |
| OPENAI_API_KEY | OpenAI API key | mode-ctl (public mode) |

Skill File Locations

~/.openclaw/skills/
├── home-assistant/    ha-ctl         → /opt/homebrew/bin/ha-ctl
├── image-generation/  gaze-ctl       → /opt/homebrew/bin/gaze-ctl
├── memory/            memory-ctl     → /opt/homebrew/bin/memory-ctl
├── service-monitor/   monitor-ctl    → /opt/homebrew/bin/monitor-ctl
├── character/         character-ctl  → /opt/homebrew/bin/character-ctl
├── routine/           routine-ctl    → /opt/homebrew/bin/routine-ctl
├── music/             music-ctl      → /opt/homebrew/bin/music-ctl
├── workflow/          workflow-ctl   → /opt/homebrew/bin/workflow-ctl
├── gitea/             gitea-ctl      → /opt/homebrew/bin/gitea-ctl
├── calendar/          calendar-ctl   → /opt/homebrew/bin/calendar-ctl
├── mode/              mode-ctl       → /opt/homebrew/bin/mode-ctl
├── voice-assistant/   (no CLI)
└── vtube-studio/      vtube-ctl      → /opt/homebrew/bin/vtube-ctl

Data File Locations

| File | Purpose |
|---|---|
| ~/homeai-data/memories/personal/*.json | Per-character memories |
| ~/homeai-data/memories/general.json | Shared general memories |
| ~/homeai-data/characters/*.json | Character profiles |
| ~/homeai-data/satellite-map.json | Satellite → character mapping |
| ~/homeai-data/active-tts-voice.json | Current TTS voice config |
| ~/homeai-data/active-mode.json | Public/private mode state |
| ~/homeai-data/routines/*.json | Local routine definitions |
| ~/homeai-data/reminders.json | Pending voice reminders |
| ~/homeai-data/conversations/*.json | Chat conversation history |

Creating a New Skill

Every skill follows the same pattern:

  1. Create directory: ~/.openclaw/skills/<name>/
  2. Write SKILL.md with YAML frontmatter (name, description) + usage docs
  3. Create Python CLI (stdlib only: urllib.request, json, os, sys, re, datetime)
  4. chmod +x the CLI and symlink to /opt/homebrew/bin/
  5. Add env vars to the OpenClaw launchd plist if needed
  6. Add a section to ~/.openclaw/workspace/TOOLS.md
  7. Restart OpenClaw: launchctl kickstart -k gui/501/com.homeai.openclaw
  8. Test: openclaw agent --message "test prompt" --agent main
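As a starting point for step 3, a minimal skill CLI following the stdlib-only pattern could look like this. The skill name, subcommands, and `EXAMPLE_TOKEN` variable are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""example-ctl -- skeleton for a new OpenClaw skill CLI (hypothetical skill)."""
import json
import os
import sys

def main(argv):
    if not argv:
        print("usage: example-ctl <status|do> [args]", file=sys.stderr)
        return 1
    cmd, *args = argv
    if cmd == "status":
        # Secrets arrive via env vars set in the OpenClaw launchd plist (step 5).
        print(json.dumps({"ok": True, "token_set": bool(os.environ.get("EXAMPLE_TOKEN"))}))
        return 0
    print(f"unknown command: {cmd}", file=sys.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Emitting JSON on stdout keeps the output easy for the agent to parse when it routes a voice command to the skill.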

Daemons

| Daemon | Plist | Purpose |
|---|---|---|
| com.homeai.reminder-daemon | homeai-agent/launchd/com.homeai.reminder-daemon.plist | Fires TTS reminders when due |
| com.homeai.openclaw | homeai-agent/launchd/com.homeai.openclaw.plist | OpenClaw gateway |
| com.homeai.openclaw-bridge | homeai-agent/launchd/com.homeai.openclaw-bridge.plist | HTTP bridge (voice pipeline) |
| com.homeai.preload-models | homeai-llm/scripts/preload-models.sh | Keeps models warm in VRAM |