- Root setup.sh orchestrator with per-phase dispatch (./setup.sh p1..p8 | all | status)
- Makefile convenience targets (make infra, make llm, make status, etc.)
- scripts/common.sh: shared bash library for OS detection, Docker helpers, service management (launchd/systemd), package install, env management
- .env.example + .gitignore: shared config template and secret exclusions

P1 (homeai-infra): full implementation
- docker-compose.yml: Uptime Kuma, code-server, n8n
- Note: Home Assistant, Portainer, Gitea are pre-existing instances
- setup.sh: Docker install, homeai network, container health checks

P2 (homeai-llm): full implementation
- Ollama native install with CUDA/ROCm/Metal auto-detection
- launchd plist (macOS) + systemd service (Linux) for auto-start
- scripts/pull-models.sh: idempotent model puller from manifest
- scripts/benchmark.sh: tokens/sec measurement per model
- Open WebUI on port 3030 (avoids Gitea :3000 conflict)

P3-P8: working stubs with prerequisite checks and TODO sections

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
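The tokens/sec measurement mentioned for scripts/benchmark.sh can be sketched from Ollama's `/api/generate` response, which reports `eval_count` (generated tokens) and `eval_duration` (nanoseconds). A minimal sketch of the arithmetic in Python — the actual script is bash, and the sample numbers below are made up:

```python
import json

def tokens_per_sec(response_json: str) -> float:
    """Compute generation speed from an Ollama /api/generate response.

    eval_count is the number of generated tokens; eval_duration is in
    nanoseconds (both are standard Ollama response fields).
    """
    r = json.loads(response_json)
    return r["eval_count"] / (r["eval_duration"] / 1e9)

# Made-up example: 256 tokens generated in 4 seconds -> 64.0 tok/s
sample = json.dumps({"eval_count": 256, "eval_duration": 4_000_000_000})
print(tokens_per_sec(sample))  # → 64.0
```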
#!/usr/bin/env bash
# homeai-visual/setup.sh — P7: VTube Studio bridge + Live2D expressions
#
# Components:
# - vtube_studio.py — WebSocket client skill for OpenClaw
# - lipsync.py — amplitude-based lip sync
# - auth.py — VTube Studio token management
#
# Prerequisites:
# - P4 (homeai-agent) — OpenClaw running
# - P5 (homeai-character) — aria.json with live2d_expressions set
# - macOS: VTube Studio installed (Mac App Store)
# - Linux: N/A — VTube Studio is macOS/Windows/iOS only;
#   Linux dev can test the skill code but not the VTube Studio side

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
source "${REPO_DIR}/scripts/common.sh"

log_section "P7: VTube Studio Bridge"
detect_platform

if [[ "$OS_TYPE" == "linux" ]]; then
  log_warn "VTube Studio is not available on Linux."
  log_warn "This sub-project requires macOS (Mac Mini)."
fi

# ─── TODO: Implementation ──────────────────────────────────────────────────────
cat <<'EOF'

┌─────────────────────────────────────────────────────────────────┐
│ P7: homeai-visual — NOT YET IMPLEMENTED                         │
│                                                                 │
│ macOS only (VTube Studio is macOS/iOS/Windows)                  │
│                                                                 │
│ Implementation steps:                                           │
│  1. Install VTube Studio from Mac App Store                     │
│  2. Enable WebSocket API in VTube Studio (Settings → port 8001) │
│  3. Source/purchase Live2D model                                │
│  4. Create expression hotkeys for 8 states                      │
│  5. Implement skills/vtube_studio.py (WebSocket client)         │
│  6. Implement skills/lipsync.py (amplitude → MouthOpen param)   │
│  7. Implement skills/auth.py (token request + persistence)      │
│  8. Register vtube_studio skill with OpenClaw                   │
│  9. Update aria.json live2d_expressions with hotkey IDs         │
│ 10. Test all 8 expression states                                │
│                                                                 │
│ On Linux: implement Python skills, test WebSocket protocol      │
│ with a mock server before connecting to real VTube Studio.      │
│                                                                 │
│ Interface contracts:                                            │
│   VTUBE_WS_URL=ws://localhost:8001                              │
└─────────────────────────────────────────────────────────────────┘

EOF

log_info "P7 is not yet implemented. See homeai-visual/PLAN.md for details."
exit 0
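Step 6 (skills/lipsync.py) maps audio amplitude onto VTube Studio's MouthOpen parameter. A minimal stdlib-only sketch of that mapping, assuming 16-bit mono PCM input; the `gain` knob is a hypothetical tuning constant, and the actual parameter injection (via the API's parameter-injection request) is omitted:

```python
import math
import struct

def rms_amplitude(pcm16: bytes) -> float:
    """RMS of signed 16-bit mono PCM, normalized to 0..1."""
    n = len(pcm16) // 2
    if n == 0:
        return 0.0
    samples = struct.unpack(f"<{n}h", pcm16[: n * 2])
    rms = math.sqrt(sum(s * s for s in samples) / n)
    return rms / 32768.0

def mouth_open(pcm16: bytes, gain: float = 4.0) -> float:
    """Map frame amplitude to a 0..1 MouthOpen value.

    gain is a hypothetical boost: speech RMS sits well below full
    scale, so it is amplified before clamping to the 0..1 range.
    """
    return min(1.0, rms_amplitude(pcm16) * gain)

# Silence maps to a closed mouth; a loud frame saturates at 1.0.
silence = struct.pack("<4h", 0, 0, 0, 0)
loud = struct.pack("<4h", 30000, -30000, 30000, -30000)
print(mouth_open(silence), mouth_open(loud))  # → 0.0 1.0
```

Per-frame values like this would then be sent to VTube Studio at the audio frame rate, typically with a small attack/decay smoothing so the mouth does not flicker.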