homeai/homeai-agent/skills/voice-assistant/SKILL.md
Aodhan Collins c3dda280ea Add OpenClaw skills: home-assistant and voice-assistant
- home-assistant: controls lights, switches, media players, climate etc
  via HA REST API at 10.0.0.199:8123; includes service/domain reference
- voice-assistant: voice-specific response style guide for TTS output
  (concise, no markdown, natural speech)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-06 00:29:36 +00:00


---
name: voice-assistant
description: Handle voice assistant requests received via the wake-word pipeline. Use when a request arrives tagged as a voice command or comes through the wake-word webhook (/wake endpoint). Respond concisely — responses will be spoken aloud via TTS (Kokoro). Avoid markdown, lists, or formatting that doesn't work in speech. Keep replies to 1–2 sentences unless detail is requested.
---

# Voice Assistant Skill

## Context

This assistant runs on a Mac Mini (LINDBLUM, 10.0.0.200). Requests may arrive:

- Via the /wake HTTP webhook (wake word detected by openWakeWord)
- Via the Home Assistant Wyoming voice pipeline
- Via direct text input

## Response style for voice

- Speak naturally, as if in conversation
- Keep it short — 1–2 sentences by default
- No bullet points, headers, or markdown
- Say numbers as words when appropriate ("twenty-two degrees" not "22°C")
- Use the character's personality (defined in the system prompt)
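As an illustration, the style rules above can be applied mechanically before text reaches TTS. This is a minimal sketch; the function name and regex rules are illustrative, not part of the skill, and full number-to-word conversion would need a dedicated library:

```python
import re

def prepare_for_speech(text: str) -> str:
    """Flatten model output into plain speakable text."""
    # Drop markdown emphasis, headers, and inline-code markers
    text = re.sub(r"[*_`#]+", "", text)
    # Turn bullet lines into plain text
    text = re.sub(r"^\s*[-•]\s*", "", text, flags=re.MULTILINE)
    # Spell out the degree-Celsius unit ("22°C" becomes "22 degrees")
    text = re.sub(r"(\d+)\s*°C", r"\1 degrees", text)
    # Collapse newlines and runs of whitespace into single spaces
    return re.sub(r"\s+", " ", text).strip()
```

A pass like this only guards the formatting rules; the brevity and personality rules still have to come from the prompt itself.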

## TTS pipeline

Responses are rendered by Kokoro ONNX (port 10301, voice: af_heart) and played back through the requesting room's speaker.
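The skill does not pin down Kokoro's HTTP interface. Assuming the instance on port 10301 exposes an OpenAI-compatible `/v1/audio/speech` endpoint (as the common Kokoro-FastAPI server does; if this instance speaks the Wyoming protocol instead, a Wyoming client would be needed), a synthesis call might look like:

```python
import json
import urllib.request

# Assumption: OpenAI-style speech endpoint on the Kokoro port from this doc
KOKORO_URL = "http://localhost:10301/v1/audio/speech"

def build_tts_request(text: str, voice: str = "af_heart") -> dict:
    """Build the JSON payload for a Kokoro synthesis request."""
    return {"model": "kokoro", "input": text, "voice": voice,
            "response_format": "wav"}

def synthesize(text: str) -> bytes:
    """POST the payload to Kokoro and return raw WAV bytes."""
    req = urllib.request.Request(
        KOKORO_URL,
        data=json.dumps(build_tts_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The returned audio would then be routed to the requesting room's speaker by the playback layer.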

## Smart home integration

For device control requests, use the home-assistant skill. HA is at 10.0.0.199:8123.
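For reference, Home Assistant's standard REST API takes service calls at `/api/services/<domain>/<service>` with a long-lived access token. A minimal sketch (the `HA_TOKEN` environment variable and the entity name are assumptions for illustration):

```python
import json
import os
import urllib.request

HA_URL = "http://10.0.0.199:8123"

def service_url(domain: str, service: str) -> str:
    """Build the HA REST endpoint for a service call, e.g. light/turn_on."""
    return f"{HA_URL}/api/services/{domain}/{service}"

def call_service(domain: str, service: str, entity_id: str) -> bytes:
    """POST a service call; expects a long-lived token in HA_TOKEN."""
    req = urllib.request.Request(
        service_url(domain, service),
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['HA_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# e.g. call_service("light", "turn_on", "light.living_room")
```

In practice the home-assistant skill handles this; the sketch just shows what a call reduces to on the wire.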

## Wake word webhook

A POST to http://localhost:8080/wake triggers this context with a payload such as:

```json
{"wake_word": "hey_jarvis", "score": 0.87}
```

After wake, wait for the transcribed utterance from the STT pipeline (Whisper large-v3, port 10300).
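A minimal sketch of the webhook's gating logic. The 0.5 score threshold and the function name are assumptions (openWakeWord scores are confidences in [0, 1]; the cutoff would be tuned per wake word):

```python
def accept_wake(payload: dict, threshold: float = 0.5) -> bool:
    """Return True when an openWakeWord detection clears the threshold.

    `payload` is the webhook body, e.g. {"wake_word": "hey_jarvis",
    "score": 0.87}. A missing score is treated as a rejection.
    """
    return payload.get("score", 0.0) >= threshold
```

Only after a detection is accepted would the pipeline block on the Whisper transcription and hand the utterance to this skill.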