Merge branch 'comms': memory v2, prompt styles, Dream/GAZE, Wyoming TTS fix
@@ -10,6 +10,8 @@ DEEPSEEK_API_KEY=
GEMINI_API_KEY=
ELEVENLABS_API_KEY=
GAZE_API_KEY=
DREAM_API_KEY=
ANTHROPIC_API_KEY=

# ─── Data & Paths ──────────────────────────────────────────────────────────────
DATA_DIR=${HOME}/homeai-data
@@ -59,3 +61,5 @@ VTUBE_WS_URL=ws://localhost:8001
# ─── P8: Images ────────────────────────────────────────────────────────────────
COMFYUI_URL=http://localhost:8188

# ─── P9: Character Management ─────────────────────────────────────────────────
DREAM_HOST=http://localhost:3000
3 .gitignore vendored
@@ -45,3 +45,6 @@ homeai-esp32/esphome/secrets.yaml
homeai-llm/benchmark-results.md
homeai-character/characters/*.json
!homeai-character/characters/.gitkeep

# MCP Files
*.mcp*
64 CLAUDE.md
@@ -18,14 +18,14 @@ A self-hosted, always-on personal AI assistant running on a **Mac Mini M4 Pro (6
| Storage | 1TB SSD |
| Network | Gigabit Ethernet |

Primary LLM is Claude Sonnet 4 via Anthropic API. Local Ollama models available as fallback. All other inference (STT, TTS, image gen) runs locally.
Primary LLMs are Claude 4.5/4.6 family via Anthropic API (Haiku for quick, Sonnet for standard, Opus for creative/RP). Local Ollama models available as fallback. All other inference (STT, TTS, image gen) runs locally.

---

## Core Stack

### AI & LLM
- **Claude Sonnet 4** — primary LLM via Anthropic API (`anthropic/claude-sonnet-4-20250514`), used for all agent interactions
- **Claude 4.5/4.6 family** — primary LLMs via Anthropic API, tiered per prompt style: Haiku 4.5 (quick commands), Sonnet 4.6 (standard/creative), Opus 4.6 (roleplay/storytelling)
- **Ollama** — local LLM runtime (fallback models: Llama 3.3 70B, Qwen 3.5 35B-A3B, Qwen 2.5 7B)
- **Model keep-warm daemon** — `preload-models.sh` runs as a loop, checks every 5 min, re-pins evicted models with `keep_alive=-1`. Keeps `qwen2.5:7b` (small/fast) and `$HOMEAI_MEDIUM_MODEL` (default: `qwen3.5:35b-a3b`) always loaded in VRAM. Medium model is configurable via env var for per-persona model assignment.
- **Open WebUI** — browser-based chat interface, runs as Docker container
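The keep-warm re-pin can be sketched against Ollama's documented `/api/generate` endpoint: a generate request with no prompt and `keep_alive: -1` loads the model and disables eviction. This is an illustration of the mechanism, not the daemon itself; the pinned model list mirrors the defaults named above.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"
PINNED = ["qwen2.5:7b", "qwen3.5:35b-a3b"]  # small/fast + $HOMEAI_MEDIUM_MODEL default


def pin_payload(model: str) -> bytes:
    """Build an empty generate request with keep_alive=-1 (load, never evict)."""
    return json.dumps({"model": model, "keep_alive": -1}).encode()


def repin_all() -> None:
    """One pass of the keep-warm loop (the daemon repeats this every 5 min)."""
    for model in PINNED:
        req = urllib.request.Request(
            f"{OLLAMA_URL}/api/generate",
            data=pin_payload(model),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=60)
```

Running `repin_all()` in a loop with a sleep reproduces the observed behavior: evicted models reappear in `ollama ps` within one cycle.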
@@ -53,13 +53,16 @@ Primary LLM is Claude Sonnet 4 via Anthropic API. Local Ollama models available
- **OpenClaw** — primary AI agent layer; receives voice commands, calls tools, manages personality
- **OpenClaw Skills** — 13 skills total: home-assistant, image-generation, voice-assistant, vtube-studio, memory, service-monitor, character, routine, music, workflow, gitea, calendar, mode
- **n8n** — visual workflow automation (Docker), chains AI actions
- **Character Memory System** — two-tier JSON-based memories (personal per-character + general shared), injected into LLM system prompt with budget truncation
- **Public/Private Mode** — routes requests to local Ollama (private) or cloud LLMs (public) with per-category overrides via `active-mode.json`. Default primary model is Claude Sonnet 4.
- **Character Memory System** — SQLite + sqlite-vec semantic search (personal per-character + general shared + follow-ups), injected into LLM system prompt with context-aware retrieval
- **Prompt Styles** — 6 styles (quick, standard, creative, roleplayer, game-master, storyteller) with per-style model routing, temperature, and section stripping. JSON templates in `homeai-agent/prompt-styles/`
- **Public/Private Mode** — routes requests to local Ollama (private) or cloud LLMs (public) with per-category overrides via `active-mode.json`. Default primary model is Claude Sonnet 4.6, with per-style model tiering (Haiku/Sonnet/Opus).

### Character & Personality
- **Character Schema v2** — JSON spec with background, dialogue_style, appearance, skills, gaze_presets (v1 auto-migrated)
- **Character Schema v2** — JSON spec with background, dialogue_style, appearance, skills, gaze_presets, dream_id, gaze_character, prompt style overrides (v1 auto-migrated)
- **HomeAI Dashboard** — unified web app: character editor, chat, memory manager, service dashboard
- **Dream** — character management service (http://10.0.0.101:3000), REST API for character CRUD with GAZE integration for cover images
- **Character MCP Server** — LLM-assisted character creation via Fandom wiki/Wikipedia lookup (Docker)
- **GAZE** — image generation service (http://10.0.0.101:5782), REST API for presets, characters, and job-based image generation
- Character config stored as JSON files in `~/homeai-data/characters/`, consumed by bridge for system prompt construction

### Visual Representation
@@ -97,7 +100,7 @@ ESP32-S3-BOX-3 (room)
→ Bridge resolves character (satellite_id → character mapping)
→ Bridge builds system prompt (profile + memories) and writes TTS config to state file
→ Bridge checks active-mode.json for model routing (private=local, public=cloud)
→ OpenClaw CLI → LLM generates response (Claude Sonnet 4 default, Ollama fallback)
→ OpenClaw CLI → LLM generates response (Claude Haiku/Sonnet/Opus per style, Ollama fallback)
→ Response dispatched:
  → Wyoming TTS reads state file → routes to Kokoro (local) or ElevenLabs (cloud)
  → Audio sent back to ESP32-S3-BOX-3 (spoken response)
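The mode-routing check in the flow above amounts to a small resolver over `active-mode.json`. A minimal sketch, assuming the file carries a global `mode` plus a `overrides` map per category (field names are assumptions, not the real schema):

```python
def resolve_backend(active_mode: dict, category: str = "default") -> str:
    """Per-category override wins; otherwise fall back to the global mode.
    "public" routes to cloud LLMs, anything else stays on local Ollama."""
    mode = active_mode.get("overrides", {}).get(category) or active_mode.get("mode", "private")
    return "cloud" if mode == "public" else "local"
```

With this shape, a private default can still send one category (say, music lookups) to the cloud without flipping the whole assistant.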
@@ -130,15 +133,32 @@ Each character is a JSON file in `~/homeai-data/characters/` with:
- **Profile fields** — background, appearance, dialogue_style, skills array
- **TTS config** — engine (kokoro/elevenlabs), kokoro_voice, elevenlabs_voice_id, elevenlabs_model, speed
- **GAZE presets** — array of `{preset, trigger}` for image generation styles
- **Dream link** — `dream_id` for syncing character data from Dream service
- **GAZE link** — `gaze_character` for auto-assigned cover image and presets
- **Prompt style config** — `default_prompt_style`, `prompt_style_overrides` for per-style tuning
- **Custom prompt rules** — trigger/response overrides for specific contexts

### Memory System

Two-tier memory stored as JSON in `~/homeai-data/memories/`:
- **Personal memories** (`personal/{character_id}.json`) — per-character, about user interactions
- **General memories** (`general.json`) — shared operational knowledge (tool usage, device info, routines)
SQLite + sqlite-vec database at `~/homeai-data/memories/memories.db`:
- **Personal memories** — per-character, semantic/episodic/relational/opinion types
- **General memories** — shared operational knowledge (character_id = "general")
- **Follow-ups** — LLM-driven questions injected into system prompt, auto-resolve after 2 surfacings or 48h
- **Privacy levels** — public, sensitive, local_only (local_only excluded from cloud model requests)
- **Semantic search** — sentence-transformers all-MiniLM-L6-v2 embeddings (384 dims) for context-aware retrieval
- Core module: `homeai-agent/memory_store.py` (imported by bridge + memory-ctl skill)

Memories are injected into the system prompt by the bridge with budget truncation (personal: 4000 chars, general: 3000 chars, newest first).
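The budget-truncation step can be sketched as follows; the memory dict shape (`content`, `created_at`) follows the `memory_store.py` schema in this diff, but the function itself is an illustration, not the bridge's code:

```python
def inject_with_budget(memories: list[dict], budget_chars: int) -> list[str]:
    """Take memories newest-first until the character budget is exhausted
    (the bridge uses 4000 chars for personal, 3000 for general)."""
    out: list[str] = []
    used = 0
    for mem in sorted(memories, key=lambda m: m["created_at"], reverse=True):
        line = mem["content"]
        if used + len(line) > budget_chars:
            break  # budget exhausted; older memories are dropped
        out.append(line)
        used += len(line)
    return out
```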

### Prompt Styles

Six response styles in `homeai-agent/prompt-styles/`, each a JSON template with model, temperature, and instructions:
- **quick** — Claude Haiku 4.5, low temp, brief responses, strips profile sections
- **standard** — Claude Sonnet 4.6, balanced
- **creative** — Claude Sonnet 4.6, higher temp, elaborative
- **roleplayer** — Claude Opus 4.6, full personality injection
- **game-master** — Claude Opus 4.6, narrative-focused
- **storyteller** — Claude Opus 4.6, story-centric

Style selection: dashboard chat has a style picker; characters can set `default_prompt_style`; satellites use the global active style. Bridge resolves model per style → group → mode → default.
### TTS Voice Routing
@@ -160,11 +180,12 @@ This works for both ESP32/HA pipeline and dashboard chat.
6. **Character system** — schema v2, dashboard editor, memory system, per-character TTS routing ✅
7. **OpenClaw skills expansion** — 9 new skills (memory, monitor, character, routine, music, workflow, gitea, calendar, mode) + public/private mode routing ✅
8. **Music Assistant** — deployed on Pi (10.0.0.199:8095), Spotify + SMB + Chromecast players ✅
9. **Animated visual** — PNG/GIF character visual for the web assistant (initial visual layer)
10. **Android app** — companion app for mobile access to the assistant
11. **ComfyUI** — image generation online, character-consistent model workflows
12. **Extended integrations** — Snapcast, code-server
13. **Polish** — Authelia, Tailscale hardening, iOS widgets
9. **Memory v2 + Prompt Styles + Dream/GAZE** — SQLite memory with semantic search, 6 prompt styles with model tiering, Dream character import, GAZE character linking ✅
10. **Animated visual** — PNG/GIF character visual for the web assistant (initial visual layer)
11. **Android app** — companion app for mobile access to the assistant
12. **ComfyUI** — image generation online, character-consistent model workflows
13. **Extended integrations** — Snapcast, code-server
14. **Polish** — Authelia, Tailscale hardening, iOS widgets

### Stretch Goals
- **Live2D / VTube Studio** — full Live2D model with WebSocket API bridge (requires learning Live2D tooling)
@@ -180,7 +201,10 @@ This works for both ESP32/HA pipeline and dashboard chat.
- OpenClaw workspace tools: `~/.openclaw/workspace/TOOLS.md`
- OpenClaw config: `~/.openclaw/openclaw.json`
- Character configs: `~/homeai-data/characters/`
- Character memories: `~/homeai-data/memories/`
- Character memories DB: `~/homeai-data/memories/memories.db`
- Memory store module: `homeai-agent/memory_store.py`
- Prompt style templates: `homeai-agent/prompt-styles/`
- Active prompt style: `~/homeai-data/active-prompt-style.json`
- Conversation history: `~/homeai-data/conversations/`
- Active TTS state: `~/homeai-data/active-tts-voice.json`
- Active mode state: `~/homeai-data/active-mode.json`
@@ -194,6 +218,8 @@ This works for both ESP32/HA pipeline and dashboard chat.
- Gitea repos root: `~/gitea/`
- Music Assistant (Pi): `~/docker/selbina/music-assistant/` on 10.0.0.199
- Skills user guide: `homeai-agent/SKILLS_GUIDE.md`
- Dream service: `http://10.0.0.101:3000` (character management, REST API)
- GAZE service: `http://10.0.0.101:5782` (image generation, REST API)

---
@@ -203,8 +229,10 @@ This works for both ESP32/HA pipeline and dashboard chat.
- ESP32-S3-BOX-3 units are dumb satellites — all intelligence stays on Mac Mini
- The character JSON schema (from Character Manager) should be treated as a versioned spec; pipeline components read from it, never hardcode personality values
- OpenClaw skills are the primary extension mechanism — new capabilities = new skills
- Primary LLM is Claude Sonnet 4 (Anthropic API); local Ollama models are available as fallback
- Primary LLMs are Claude 4.5/4.6 family (Anthropic API) with per-style tiering; local Ollama models are available as fallback
- Launchd plists are symlinked from repo source to ~/Library/LaunchAgents/ — edit source, then bootout/bootstrap to reload
- Music Assistant runs on Pi (10.0.0.199), not Mac Mini — needs host networking for Chromecast mDNS discovery
- VTube Studio API bridge should be a standalone OpenClaw skill with clear event interface
- mem0 memory store should be backed up as part of regular Gitea commits
- Memory DB (`memories.db`) should be backed up as part of regular Gitea commits
- Dream characters can be linked to GAZE characters for cover image fallback and cross-referencing
- Prompt style selection hierarchy: explicit user pick → character default → global active style
89 PORT_MAP.md Normal file
@@ -0,0 +1,89 @@
# HomeAI Port Map

All ports used across the HomeAI stack. Updated 2026-03-20.

**Host: LINDBLUM (10.0.0.101)** — Mac Mini M4 Pro

## Voice Pipeline

| Port | Service | Protocol | Managed By | Binds |
|------|---------|----------|------------|-------|
| 10300 | Wyoming STT (Whisper MLX) | TCP (Wyoming) | launchd `com.homeai.wyoming-stt` | 0.0.0.0 |
| 10301 | Wyoming TTS (Kokoro) | TCP (Wyoming) | launchd `com.homeai.wyoming-tts` | 0.0.0.0 |
| 10302 | Wyoming TTS (ElevenLabs) | TCP (Wyoming) | launchd `com.homeai.wyoming-elevenlabs` | 0.0.0.0 |
| 10700 | Wyoming Satellite | TCP (Wyoming) | launchd `com.homeai.wyoming-satellite` | 0.0.0.0 |

## Agent / Orchestration

| Port | Service | Protocol | Managed By | Binds |
|------|---------|----------|------------|-------|
| 8080 | OpenClaw Gateway | HTTP | launchd `com.homeai.openclaw` | localhost |
| 8081 | OpenClaw HTTP Bridge | HTTP | launchd `com.homeai.openclaw-bridge` | 0.0.0.0 |
| 8002 | VTube Studio Bridge | HTTP | launchd `com.homeai.vtube-bridge` | localhost |

## LLM

| Port | Service | Protocol | Managed By | Binds |
|------|---------|----------|------------|-------|
| 11434 | Ollama | HTTP | launchd `com.homeai.ollama` | 0.0.0.0 |
| 3030 | Open WebUI | HTTP | Docker `homeai-open-webui` | 0.0.0.0 |

## Dashboards / UIs

| Port | Service | Protocol | Managed By | Binds |
|------|---------|----------|------------|-------|
| 5173 | HomeAI Dashboard | HTTP | launchd `com.homeai.dashboard` | localhost |
| 5174 | Desktop Assistant | HTTP | launchd `com.homeai.desktop-assistant` | localhost |

## Image Generation

| Port | Service | Protocol | Managed By | Binds |
|------|---------|----------|------------|-------|
| 5782 | GAZE API | HTTP | — | 10.0.0.101 |
| 8188 | ComfyUI | HTTP | — | localhost |

## Visual

| Port | Service | Protocol | Managed By | Binds |
|------|---------|----------|------------|-------|
| 8001 | VTube Studio (WebSocket) | WS | External app | localhost |

## Infrastructure (Docker)

| Port | Service | Protocol | Managed By | Binds |
|------|---------|----------|------------|-------|
| 3001 | Uptime Kuma | HTTP | Docker `homeai-uptime-kuma` | 0.0.0.0 |
| 5678 | n8n | HTTP | Docker `homeai-n8n` | 0.0.0.0 |
| 8090 | code-server | HTTP | Docker `homeai-code-server` | 0.0.0.0 |

---

**Host: SELBINA (10.0.0.199)** — Raspberry Pi 5

| Port | Service | Protocol | Managed By |
|------|---------|----------|------------|
| 3000 | Gitea | HTTP | Docker |
| 8095 | Music Assistant | HTTP | Docker (host networking) |
| 8123 | Home Assistant | HTTPS | Docker |
| 9443 | Portainer | HTTPS | Docker |

---

## Port Ranges Summary

```
3000–3030    Web UIs (Gitea, Uptime Kuma, Open WebUI)
5173–5174    Vite dev servers (dashboards)
5678         n8n
5782         GAZE API
8001–8002    VTube Studio (app + bridge)
8080–8081    OpenClaw (gateway + bridge)
8090         code-server
8095         Music Assistant
8123         Home Assistant
8188         ComfyUI
9443         Portainer
10300–10302  Wyoming voice (STT + TTS)
10700        Wyoming satellite
11434        Ollama
```
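A cheap way to catch regressions in this map is a duplicate-port check; the list below is transcribed from the LINDBLUM tables above:

```python
# LINDBLUM (10.0.0.101) ports, transcribed from the tables above.
LINDBLUM_PORTS = [
    10300, 10301, 10302, 10700,  # Wyoming voice pipeline
    8080, 8081, 8002,            # OpenClaw + VTube Studio bridge
    11434, 3030,                 # Ollama, Open WebUI
    5173, 5174,                  # dashboards
    5782, 8188,                  # GAZE, ComfyUI
    8001,                        # VTube Studio app
    3001, 5678, 8090,            # Docker infrastructure
]


def duplicate_ports(ports: list[int]) -> list[int]:
    """Return every port that appears more than once, sorted ascending."""
    seen: set[int] = set()
    dupes: set[int] = set()
    for p in ports:
        (dupes if p in seen else seen).add(p)
    return sorted(dupes)
```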
865 homeai-agent/memory_store.py Normal file
@@ -0,0 +1,865 @@
#!/usr/bin/env python3
"""
HomeAI Memory Store — SQLite + Vector Search

Replaces flat JSON memory files with a structured SQLite database
using sqlite-vec for semantic similarity search.

Used by:
- openclaw-http-bridge.py (memory retrieval + follow-up injection)
- memory-ctl skill (CLI memory management)
- Dashboard API (REST endpoints via bridge)
"""

import json
import os
import sqlite3
import struct
import time
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Optional

import sqlite_vec

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------

DATA_DIR = Path(os.environ.get("DATA_DIR", os.path.expanduser("~/homeai-data")))
MEMORIES_DIR = DATA_DIR / "memories"
DB_PATH = MEMORIES_DIR / "memories.db"
EMBEDDING_DIM = 384  # all-MiniLM-L6-v2

# Privacy keywords for rule-based classification
PRIVACY_KEYWORDS = {
    "local_only": [
        "health", "illness", "sick", "doctor", "medical", "medication", "surgery",
        "salary", "bank", "financial", "debt", "mortgage", "tax",
        "depression", "anxiety", "therapy", "divorce", "breakup",
    ],
    "sensitive": [
        "address", "phone", "email", "password", "birthday",
    ],
}

# ---------------------------------------------------------------------------
# Embedding model (lazy-loaded singleton)
# ---------------------------------------------------------------------------

_embedder = None


def _get_embedder():
    """Lazy-load the sentence-transformers model."""
    global _embedder
    if _embedder is None:
        from sentence_transformers import SentenceTransformer
        _embedder = SentenceTransformer("all-MiniLM-L6-v2")
    return _embedder


def get_embedding(text: str) -> list[float]:
    """Compute a 384-dim embedding for the given text."""
    model = _get_embedder()
    vec = model.encode(text, normalize_embeddings=True)
    return vec.tolist()


def _serialize_f32(vec: list[float]) -> bytes:
    """Serialize a float list to little-endian bytes for sqlite-vec."""
    return struct.pack(f"<{len(vec)}f", *vec)


def _deserialize_f32(blob: bytes) -> list[float]:
    """Deserialize sqlite-vec float bytes back to a list."""
    n = len(blob) // 4
    return list(struct.unpack(f"<{n}f", blob))
# ---------------------------------------------------------------------------
# Database initialization
# ---------------------------------------------------------------------------

_db: Optional[sqlite3.Connection] = None


def init_db() -> sqlite3.Connection:
    """Initialize the SQLite database with schema and sqlite-vec extension."""
    global _db
    if _db is not None:
        return _db

    MEMORIES_DIR.mkdir(parents=True, exist_ok=True)
    db = sqlite3.connect(str(DB_PATH), check_same_thread=False)
    db.enable_load_extension(True)
    sqlite_vec.load(db)
    db.enable_load_extension(False)
    db.row_factory = sqlite3.Row

    db.executescript("""
        CREATE TABLE IF NOT EXISTS memories (
            id TEXT PRIMARY KEY,
            character_id TEXT NOT NULL,
            content TEXT NOT NULL,
            memory_type TEXT NOT NULL DEFAULT 'semantic',
            category TEXT NOT NULL DEFAULT 'other',
            privacy_level TEXT NOT NULL DEFAULT 'standard',
            importance REAL NOT NULL DEFAULT 0.5,
            lifecycle_state TEXT NOT NULL DEFAULT 'active',
            follow_up_due TEXT,
            follow_up_context TEXT,
            source TEXT DEFAULT 'user_explicit',
            created_at TEXT NOT NULL,
            last_accessed TEXT,
            expires_at TEXT,
            previous_value TEXT,
            tags TEXT,
            surfaced_count INTEGER DEFAULT 0
        );

        CREATE INDEX IF NOT EXISTS idx_memories_character
            ON memories(character_id);
        CREATE INDEX IF NOT EXISTS idx_memories_lifecycle
            ON memories(lifecycle_state);
        CREATE INDEX IF NOT EXISTS idx_memories_type
            ON memories(memory_type);
    """)

    # Create the vec0 virtual table for vector search
    # sqlite-vec requires this specific syntax
    db.execute(f"""
        CREATE VIRTUAL TABLE IF NOT EXISTS memory_embeddings USING vec0(
            id TEXT PRIMARY KEY,
            embedding float[{EMBEDDING_DIM}]
        )
    """)

    # Partial index for follow-ups (created manually since executescript can't
    # handle IF NOT EXISTS for partial indexes cleanly on all versions)
    try:
        db.execute("""
            CREATE INDEX idx_memories_followup
                ON memories(lifecycle_state, follow_up_due)
                WHERE lifecycle_state = 'pending_followup'
        """)
    except sqlite3.OperationalError:
        pass  # index already exists

    db.commit()
    _db = db
    return db


def _get_db() -> sqlite3.Connection:
    """Get or initialize the database connection."""
    if _db is None:
        return init_db()
    return _db


def _row_to_dict(row: sqlite3.Row) -> dict:
    """Convert a sqlite3.Row to a plain dict."""
    return dict(row)


def _generate_id() -> str:
    """Generate a unique memory ID."""
    return f"m_{int(time.time() * 1000)}"


def _now_iso() -> str:
    """Current UTC time as ISO string."""
    return datetime.now(timezone.utc).isoformat()


# ---------------------------------------------------------------------------
# Write-time classification (rule-based, Phase 1)
# ---------------------------------------------------------------------------

def classify_memory(content: str) -> dict:
    """Rule-based classification for memory properties.

    Returns defaults that can be overridden by explicit parameters."""
    content_lower = content.lower()

    # Privacy detection
    privacy = "standard"
    for level, keywords in PRIVACY_KEYWORDS.items():
        if any(kw in content_lower for kw in keywords):
            privacy = level
            break

    # Memory type detection
    memory_type = "semantic"
    temporal_markers = [
        "today", "yesterday", "tonight", "this morning", "just now",
        "feeling", "right now", "this week", "earlier",
    ]
    if any(kw in content_lower for kw in temporal_markers):
        memory_type = "episodic"

    # Importance heuristic
    importance = 0.5
    if privacy == "local_only":
        importance = 0.7
    elif privacy == "sensitive":
        importance = 0.6

    return {
        "memory_type": memory_type,
        "privacy_level": privacy,
        "importance": importance,
    }
# ---------------------------------------------------------------------------
# CRUD operations
# ---------------------------------------------------------------------------

def add_memory(
    character_id: str,
    content: str,
    memory_type: str | None = None,
    category: str = "other",
    importance: float | None = None,
    privacy_level: str | None = None,
    tags: list[str] | None = None,
    follow_up_due: str | None = None,
    follow_up_context: str | None = None,
    source: str = "user_explicit",
    expires_at: str | None = None,
) -> dict:
    """Add a new memory record. Auto-classifies fields not explicitly set."""
    db = _get_db()
    classified = classify_memory(content)

    memory_type = memory_type or classified["memory_type"]
    privacy_level = privacy_level or classified["privacy_level"]
    importance = importance if importance is not None else classified["importance"]

    lifecycle_state = "active"
    if follow_up_due or follow_up_context:
        lifecycle_state = "pending_followup"
        if not follow_up_due:
            follow_up_due = "next_interaction"

    mem_id = _generate_id()
    now = _now_iso()

    # Generate embedding
    embedding = get_embedding(content)

    db.execute("""
        INSERT INTO memories (
            id, character_id, content, memory_type, category,
            privacy_level, importance, lifecycle_state,
            follow_up_due, follow_up_context, source,
            created_at, expires_at, tags, surfaced_count
        ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 0)
    """, (
        mem_id, character_id, content, memory_type, category,
        privacy_level, importance, lifecycle_state,
        follow_up_due, follow_up_context, source,
        now, expires_at, json.dumps(tags) if tags else None,
    ))

    # Insert embedding into vec0 table
    db.execute(
        "INSERT INTO memory_embeddings (id, embedding) VALUES (?, ?)",
        (mem_id, _serialize_f32(embedding)),
    )

    db.commit()

    return {
        "id": mem_id,
        "character_id": character_id,
        "content": content,
        "memory_type": memory_type,
        "category": category,
        "privacy_level": privacy_level,
        "importance": importance,
        "lifecycle_state": lifecycle_state,
        "follow_up_due": follow_up_due,
        "follow_up_context": follow_up_context,
        "source": source,
        "created_at": now,
        "expires_at": expires_at,
        "tags": tags,
    }


def update_memory(memory_id: str, **fields) -> dict | None:
    """Update specific fields on a memory record."""
    db = _get_db()

    # Validate that memory exists
    row = db.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
    if not row:
        return None

    allowed = {
        "content", "memory_type", "category", "privacy_level", "importance",
        "lifecycle_state", "follow_up_due", "follow_up_context", "source",
        "last_accessed", "expires_at", "previous_value", "tags", "surfaced_count",
    }
    updates = {k: v for k, v in fields.items() if k in allowed}
    if not updates:
        return _row_to_dict(row)

    # If content changed, update embedding and store previous value
    if "content" in updates:
        updates["previous_value"] = row["content"]
        embedding = get_embedding(updates["content"])
        # Update vec0 table: delete old, insert new
        db.execute("DELETE FROM memory_embeddings WHERE id = ?", (memory_id,))
        db.execute(
            "INSERT INTO memory_embeddings (id, embedding) VALUES (?, ?)",
            (memory_id, _serialize_f32(embedding)),
        )

    if "tags" in updates and isinstance(updates["tags"], list):
        updates["tags"] = json.dumps(updates["tags"])

    set_clause = ", ".join(f"{k} = ?" for k in updates)
    values = list(updates.values()) + [memory_id]
    db.execute(f"UPDATE memories SET {set_clause} WHERE id = ?", values)
    db.commit()

    row = db.execute("SELECT * FROM memories WHERE id = ?", (memory_id,)).fetchone()
    return _row_to_dict(row) if row else None


def delete_memory(memory_id: str) -> bool:
    """Delete a memory record and its embedding."""
    db = _get_db()
    row = db.execute("SELECT id FROM memories WHERE id = ?", (memory_id,)).fetchone()
    if not row:
        return False
    db.execute("DELETE FROM memories WHERE id = ?", (memory_id,))
    db.execute("DELETE FROM memory_embeddings WHERE id = ?", (memory_id,))
    db.commit()
    return True
# ---------------------------------------------------------------------------
|
||||
# Retrieval
|
||||
# ---------------------------------------------------------------------------
|
||||
|
||||
def retrieve_memories(
|
||||
character_id: str,
|
||||
context_text: str = "",
|
||||
limit: int = 20,
|
||||
exclude_private_for_cloud: bool = False,
|
||||
) -> list[dict]:
|
||||
"""Dual retrieval: semantic similarity + recency, merged and ranked.
|
||||
|
||||
If context_text is empty, falls back to recency-only retrieval.
|
||||
"""
|
||||
db = _get_db()
|
||||
|
||||
privacy_filter = ""
|
||||
if exclude_private_for_cloud:
|
||||
privacy_filter = "AND m.privacy_level != 'local_only'"
|
||||
|
||||
# Always include high-importance memories
|
||||
high_importance = db.execute(f"""
|
||||
SELECT * FROM memories m
|
||||
WHERE m.character_id = ?
|
||||
AND m.lifecycle_state IN ('active', 'pending_followup')
|
||||
AND m.importance > 0.8
|
||||
{privacy_filter}
|
||||
ORDER BY m.created_at DESC
|
||||
LIMIT 5
|
||||
""", (character_id,)).fetchall()
|
||||
|
||||
seen_ids = {r["id"] for r in high_importance}
|
||||
results = {r["id"]: {**_row_to_dict(r), "_score": 1.0} for r in high_importance}
|
||||
|
||||
# Semantic search (if context provided and embeddings exist)
|
||||
if context_text:
|
||||
try:
|
||||
query_emb = get_embedding(context_text)
|
||||
vec_rows = db.execute("""
|
||||
SELECT id, distance
|
||||
FROM memory_embeddings
|
||||
WHERE embedding MATCH ?
|
||||
AND k = 30
|
||||
""", (_serialize_f32(query_emb),)).fetchall()
|
||||
|
||||
vec_ids = [r["id"] for r in vec_rows if r["id"] not in seen_ids]
|
||||
vec_distances = {r["id"]: r["distance"] for r in vec_rows}
|
||||
|
||||
if vec_ids:
|
||||
placeholders = ",".join("?" * len(vec_ids))
|
||||
sem_rows = db.execute(f"""
|
||||
SELECT * FROM memories m
|
||||
WHERE m.id IN ({placeholders})
|
||||
AND m.character_id = ?
|
||||
AND m.lifecycle_state IN ('active', 'pending_followup')
|
||||
{privacy_filter}
|
||||
""", (*vec_ids, character_id)).fetchall()
|
||||
|
||||
for r in sem_rows:
|
||||
d = _row_to_dict(r)
|
||||
# Convert cosine distance to similarity (sqlite-vec returns L2 distance for vec0)
|
||||
dist = vec_distances.get(r["id"], 1.0)
|
||||
semantic_score = max(0.0, 1.0 - dist)
|
||||
d["_score"] = 0.6 * semantic_score + 0.1 * d["importance"]
|
||||
results[r["id"]] = d
|
||||
seen_ids.add(r["id"])
|
||||
except Exception as e:
|
||||
print(f"[MemoryStore] Vector search error: {e}")
|
||||
|
||||
    # Recency search: last 7 days, ordered by importance + recency
    recency_rows = db.execute(f"""
        SELECT * FROM memories m
        WHERE m.character_id = ?
          AND m.lifecycle_state IN ('active', 'pending_followup')
          AND m.created_at > datetime('now', '-7 days')
          {privacy_filter}
        ORDER BY m.importance DESC, m.created_at DESC
        LIMIT 10
    """, (character_id,)).fetchall()

    for r in recency_rows:
        if r["id"] not in seen_ids:
            d = _row_to_dict(r)
            # Recency score based on age in days (newer = higher)
            try:
                created = datetime.fromisoformat(d["created_at"])
                age_days = (datetime.now(timezone.utc) - created).total_seconds() / 86400
                recency_score = max(0.0, 1.0 - (age_days / 7.0))
            except (ValueError, TypeError):
                recency_score = 0.5
            d["_score"] = 0.3 * recency_score + 0.1 * d["importance"]
            results[r["id"]] = d
            seen_ids.add(r["id"])

    # Sort by score descending, return top N
    ranked = sorted(results.values(), key=lambda x: x.get("_score", 0), reverse=True)

    # Update last_accessed for returned memories
    returned = ranked[:limit]
    now = _now_iso()
    for mem in returned:
        mem.pop("_score", None)
        db.execute(
            "UPDATE memories SET last_accessed = ? WHERE id = ?",
            (now, mem["id"]),
        )
    db.commit()

    return returned
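The retrieval scorer above uses two fixed weightings: vector hits score `0.6 * similarity + 0.1 * importance`, recent rows score `0.3 * recency + 0.1 * importance`. The formulas can be sanity-checked in isolation; this is a minimal sketch with illustrative inputs, not taken from the codebase:

```python
def semantic_score(distance: float, importance: float) -> float:
    # Vector hits: similarity dominates, importance is a tiebreaker
    return 0.6 * max(0.0, 1.0 - distance) + 0.1 * importance

def recency_score(age_days: float, importance: float) -> float:
    # Recent rows: linear decay over the 7-day window
    recency = max(0.0, 1.0 - (age_days / 7.0))
    return 0.3 * recency + 0.1 * importance

# A close vector match outranks even a brand-new memory of equal importance,
# so semantically relevant facts win over merely recent ones.
assert semantic_score(0.1, 0.8) > recency_score(0.0, 0.8)
assert semantic_score(1.0, 0.0) == 0.0   # no similarity, no importance
assert recency_score(14.0, 0.0) == 0.0   # recency floors at zero past the window
```

The asymmetry is deliberate: a stale but on-topic memory should usually beat an off-topic one from this morning.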
def get_pending_followups(character_id: str) -> list[dict]:
    """Get follow-up memories that are due for surfacing."""
    db = _get_db()
    now = _now_iso()

    rows = db.execute("""
        SELECT * FROM memories
        WHERE character_id = ?
          AND lifecycle_state = 'pending_followup'
          AND (follow_up_due <= ? OR follow_up_due = 'next_interaction')
        ORDER BY importance DESC, created_at DESC
        LIMIT 5
    """, (character_id, now)).fetchall()

    return [_row_to_dict(r) for r in rows]
def search_memories(
    character_id: str,
    query: str,
    memory_type: str | None = None,
    limit: int = 10,
) -> list[dict]:
    """Semantic search for memories matching a query."""
    db = _get_db()

    query_emb = get_embedding(query)
    vec_rows = db.execute("""
        SELECT id, distance
        FROM memory_embeddings
        WHERE embedding MATCH ?
          AND k = ?
    """, (_serialize_f32(query_emb), limit * 3)).fetchall()

    if not vec_rows:
        return []

    vec_ids = [r["id"] for r in vec_rows]
    vec_distances = {r["id"]: r["distance"] for r in vec_rows}
    placeholders = ",".join("?" * len(vec_ids))

    type_filter = "AND m.memory_type = ?" if memory_type else ""
    params = [*vec_ids, character_id]
    if memory_type:
        params.append(memory_type)

    rows = db.execute(f"""
        SELECT * FROM memories m
        WHERE m.id IN ({placeholders})
          AND m.character_id = ?
          {type_filter}
        ORDER BY m.created_at DESC
    """, params).fetchall()

    # Sort by similarity (ascending vector distance)
    results = []
    for r in rows:
        d = _row_to_dict(r)
        d["_distance"] = vec_distances.get(r["id"], 1.0)
        results.append(d)
    results.sort(key=lambda x: x["_distance"])

    for r in results:
        r.pop("_distance", None)

    return results[:limit]
def list_memories(
    character_id: str,
    memory_type: str | None = None,
    lifecycle_state: str | None = None,
    category: str | None = None,
    limit: int = 20,
    offset: int = 0,
) -> list[dict]:
    """List memories with optional filters."""
    db = _get_db()

    conditions = ["character_id = ?"]
    params: list = [character_id]

    if memory_type:
        conditions.append("memory_type = ?")
        params.append(memory_type)
    if lifecycle_state:
        conditions.append("lifecycle_state = ?")
        params.append(lifecycle_state)
    if category:
        conditions.append("category = ?")
        params.append(category)

    where = " AND ".join(conditions)
    params.extend([limit, offset])

    rows = db.execute(f"""
        SELECT * FROM memories
        WHERE {where}
        ORDER BY created_at DESC
        LIMIT ? OFFSET ?
    """, params).fetchall()

    return [_row_to_dict(r) for r in rows]
def count_memories(character_id: str) -> int:
    """Count memories for a character."""
    db = _get_db()
    row = db.execute(
        "SELECT COUNT(*) as cnt FROM memories WHERE character_id = ?",
        (character_id,),
    ).fetchone()
    return row["cnt"] if row else 0
# ---------------------------------------------------------------------------
# Lifecycle management
# ---------------------------------------------------------------------------
def resolve_followup(memory_id: str) -> bool:
    """Mark a follow-up as resolved."""
    db = _get_db()
    result = db.execute("""
        UPDATE memories
        SET lifecycle_state = 'resolved',
            follow_up_due = NULL
        WHERE id = ? AND lifecycle_state = 'pending_followup'
    """, (memory_id,))
    db.commit()
    return result.rowcount > 0
def archive_memory(memory_id: str) -> bool:
    """Archive a memory (keeps it for relational inference, not surfaced)."""
    db = _get_db()
    result = db.execute("""
        UPDATE memories
        SET lifecycle_state = 'archived'
        WHERE id = ?
    """, (memory_id,))
    db.commit()
    return result.rowcount > 0
def auto_resolve_expired_followups() -> int:
    """Auto-resolve follow-ups that are more than 48h past due."""
    db = _get_db()
    cutoff = (datetime.now(timezone.utc) - timedelta(hours=48)).isoformat()
    result = db.execute("""
        UPDATE memories
        SET lifecycle_state = 'resolved',
            follow_up_due = NULL
        WHERE lifecycle_state = 'pending_followup'
          AND follow_up_due != 'next_interaction'
          AND follow_up_due < ?
    """, (cutoff,))
    db.commit()
    return result.rowcount
def auto_archive_old_resolved() -> int:
    """Archive resolved memories older than 7 days."""
    db = _get_db()
    cutoff = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()
    result = db.execute("""
        UPDATE memories
        SET lifecycle_state = 'archived'
        WHERE lifecycle_state = 'resolved'
          AND created_at < ?
    """, (cutoff,))
    db.commit()
    return result.rowcount
def increment_surfaced_count(memory_id: str) -> int:
    """Increment surfaced_count and return the new value. Auto-resolves once the count reaches 2."""
    db = _get_db()
    row = db.execute(
        "SELECT surfaced_count FROM memories WHERE id = ?", (memory_id,)
    ).fetchone()
    if not row:
        return 0

    new_count = (row["surfaced_count"] or 0) + 1
    if new_count >= 2:
        # Auto-resolve: surfaced twice without user engagement
        db.execute("""
            UPDATE memories
            SET surfaced_count = ?, lifecycle_state = 'resolved', follow_up_due = NULL
            WHERE id = ?
        """, (new_count, memory_id))
    else:
        # Convert 'next_interaction' to an actual timestamp so the 48h timer starts
        db.execute("""
            UPDATE memories
            SET surfaced_count = ?,
                follow_up_due = CASE
                    WHEN follow_up_due = 'next_interaction' THEN ?
                    ELSE follow_up_due
                END
            WHERE id = ?
        """, (new_count, _now_iso(), memory_id))
    db.commit()
    return new_count
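The surfacing rule above is a small state machine: the first surfacing keeps the memory pending (and converts `'next_interaction'` into a concrete timestamp so `auto_resolve_expired_followups` can time it out), the second surfacing auto-resolves it. A standalone sketch of that rule, pure Python with no database (names are illustrative):

```python
def next_state(surfaced_count: int) -> dict:
    """Mirror of the increment logic: the outcome of the Nth surfacing."""
    new_count = surfaced_count + 1
    if new_count >= 2:
        # Surfaced twice without engagement: stop nagging the user
        return {"surfaced_count": new_count, "lifecycle_state": "resolved"}
    # First surfacing: stay pending, but the 48h auto-resolve timer starts
    return {"surfaced_count": new_count, "lifecycle_state": "pending_followup"}

assert next_state(0)["lifecycle_state"] == "pending_followup"
assert next_state(1)["lifecycle_state"] == "resolved"
```

The cap means a follow-up can interrupt at most twice before it is silently retired.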
# ---------------------------------------------------------------------------
# Deduplication
# ---------------------------------------------------------------------------
def find_similar(
    character_id: str,
    content: str,
    memory_type: str = "semantic",
    threshold: float = 0.85,
) -> dict | None:
    """Find an existing memory that is semantically similar (>threshold).
    Returns the matching memory dict or None."""
    db = _get_db()
    query_emb = get_embedding(content)

    vec_rows = db.execute("""
        SELECT id, distance
        FROM memory_embeddings
        WHERE embedding MATCH ?
          AND k = 5
    """, (_serialize_f32(query_emb),)).fetchall()

    for vr in vec_rows:
        similarity = max(0.0, 1.0 - vr["distance"])
        if similarity >= threshold:
            row = db.execute("""
                SELECT * FROM memories
                WHERE id = ? AND character_id = ? AND memory_type = ?
                  AND lifecycle_state = 'active'
            """, (vr["id"], character_id, memory_type)).fetchone()
            if row:
                return _row_to_dict(row)

    return None
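Because `find_similar` maps distance to similarity with `1.0 - distance`, the default 0.85 similarity threshold is equivalent to a maximum distance of 0.15. A quick check of that relationship (illustrative values only):

```python
def is_duplicate(distance: float, threshold: float = 0.85) -> bool:
    # Same conversion as find_similar: similarity = 1 - distance, floored at 0
    similarity = max(0.0, 1.0 - distance)
    return similarity >= threshold

assert is_duplicate(0.10)       # inside the 0.15 distance budget: merge
assert not is_duplicate(0.20)   # outside it: store as a new memory
assert not is_duplicate(1.5)    # large distances clamp to similarity 0
```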
def add_or_merge_memory(
    character_id: str,
    content: str,
    memory_type: str | None = None,
    category: str = "other",
    importance: float | None = None,
    privacy_level: str | None = None,
    tags: list[str] | None = None,
    follow_up_due: str | None = None,
    follow_up_context: str | None = None,
    source: str = "user_explicit",
    expires_at: str | None = None,
    dedup_threshold: float = 0.85,
) -> dict:
    """Add a memory, or merge with an existing similar one (semantic dedup).
    For semantic memories, if a similar one exists (>threshold), update it
    instead of creating a new record."""
    resolved_type = memory_type or classify_memory(content)["memory_type"]

    if resolved_type == "semantic":
        existing = find_similar(character_id, content, "semantic", dedup_threshold)
        if existing:
            updated = update_memory(existing["id"], content=content)
            if updated:
                return updated

    return add_memory(
        character_id=character_id,
        content=content,
        memory_type=memory_type,
        category=category,
        importance=importance,
        privacy_level=privacy_level,
        tags=tags,
        follow_up_due=follow_up_due,
        follow_up_context=follow_up_context,
        source=source,
        expires_at=expires_at,
    )
# ---------------------------------------------------------------------------
# Migration from JSON
# ---------------------------------------------------------------------------

# Mapping from old JSON categories to new memory types
_CATEGORY_TO_TYPE = {
    "preference": "semantic",
    "personal_info": "semantic",
    "interaction": "episodic",
    "emotional": "episodic",
    "system": "semantic",
    "tool_usage": "semantic",
    "home_layout": "semantic",
    "device": "semantic",
    "routine": "semantic",
    "other": "semantic",
}

_CATEGORY_TO_IMPORTANCE = {
    "personal_info": 0.7,
    "preference": 0.6,
    "emotional": 0.5,
    "interaction": 0.4,
    "system": 0.4,
    "tool_usage": 0.3,
    "home_layout": 0.5,
    "device": 0.4,
    "routine": 0.5,
    "other": 0.4,
}

_CATEGORY_TO_PRIVACY = {
    "emotional": "sensitive",
    "personal_info": "sensitive",
}
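During migration, each legacy JSON category resolves to a (memory_type, importance, privacy_level) triple via these three tables, with defaults for anything unmapped. A condensed sketch of the lookup, using abbreviated copies of the tables for illustration:

```python
# Abbreviated copies of the migration tables above (full tables live in memory_store)
_CATEGORY_TO_TYPE = {"preference": "semantic", "interaction": "episodic", "emotional": "episodic"}
_CATEGORY_TO_IMPORTANCE = {"personal_info": 0.7, "preference": 0.6}
_CATEGORY_TO_PRIVACY = {"emotional": "sensitive", "personal_info": "sensitive"}

def resolve_legacy(category: str) -> tuple[str, float, str]:
    """Map a legacy JSON category to (memory_type, importance, privacy_level)."""
    return (
        _CATEGORY_TO_TYPE.get(category, "semantic"),
        _CATEGORY_TO_IMPORTANCE.get(category, 0.5),
        _CATEGORY_TO_PRIVACY.get(category, "standard"),
    )

assert resolve_legacy("emotional") == ("episodic", 0.5, "sensitive")
assert resolve_legacy("unknown") == ("semantic", 0.5, "standard")
```

Unknown categories degrade gracefully to a standard-privacy semantic memory of middling importance, so migration never fails on unexpected input.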
def migrate_from_json(memories_dir: str | None = None) -> dict:
    """Migrate all JSON memory files to SQLite.
    Returns {migrated: int, skipped: int, errors: [str]}."""
    db = _get_db()
    mem_dir = Path(memories_dir) if memories_dir else MEMORIES_DIR

    migrated = 0
    skipped = 0
    errors = []

    # Migrate personal memories
    personal_dir = mem_dir / "personal"
    if personal_dir.exists():
        for json_file in personal_dir.glob("*.json"):
            try:
                with open(json_file) as f:
                    data = json.load(f)
                character_id = data.get("characterId", json_file.stem)
                for mem in data.get("memories", []):
                    content = mem.get("content", "").strip()
                    if not content:
                        skipped += 1
                        continue
                    category = mem.get("category", "other")
                    created_at = mem.get("createdAt", _now_iso())

                    try:
                        new_mem = add_memory(
                            character_id=character_id,
                            content=content,
                            memory_type=_CATEGORY_TO_TYPE.get(category, "semantic"),
                            category=category,
                            importance=_CATEGORY_TO_IMPORTANCE.get(category, 0.5),
                            privacy_level=_CATEGORY_TO_PRIVACY.get(category, "standard"),
                            source="migrated_json",
                        )
                        # Preserve the original creation timestamp on the new row
                        db.execute(
                            "UPDATE memories SET created_at = ? WHERE id = ?",
                            (created_at, new_mem["id"]),
                        )
                        db.commit()
                        migrated += 1
                    except Exception as e:
                        errors.append(f"personal/{json_file.name}: {e}")

                # Rename to backup
                backup = json_file.with_suffix(".json.bak")
                json_file.rename(backup)
            except Exception as e:
                errors.append(f"personal/{json_file.name}: {e}")

    # Migrate general memories
    general_file = mem_dir / "general.json"
    if general_file.exists():
        try:
            with open(general_file) as f:
                data = json.load(f)
            for mem in data.get("memories", []):
                content = mem.get("content", "").strip()
                if not content:
                    skipped += 1
                    continue
                category = mem.get("category", "other")
                created_at = mem.get("createdAt", _now_iso())

                try:
                    new_mem = add_memory(
                        character_id="shared",
                        content=content,
                        memory_type=_CATEGORY_TO_TYPE.get(category, "semantic"),
                        category=category,
                        importance=_CATEGORY_TO_IMPORTANCE.get(category, 0.5),
                        privacy_level="standard",
                        source="migrated_json",
                    )
                    # Preserve the original creation timestamp on the new row
                    db.execute(
                        "UPDATE memories SET created_at = ? WHERE id = ?",
                        (created_at, new_mem["id"]),
                    )
                    db.commit()
                    migrated += 1
                except Exception as e:
                    errors.append(f"general.json: {e}")

            backup = general_file.with_suffix(".json.bak")
            general_file.rename(backup)
        except Exception as e:
            errors.append(f"general.json: {e}")

    return {"migrated": migrated, "skipped": skipped, "errors": errors}
@@ -37,6 +37,26 @@ from pathlib import Path
import wave
import io
import re
from datetime import datetime, timezone
from urllib.parse import parse_qs

from memory_store import (
    init_db as init_memory_db,
    retrieve_memories as _retrieve_memories,
    get_pending_followups,
    auto_resolve_expired_followups,
    auto_archive_old_resolved,
    increment_surfaced_count,
    add_memory as _add_memory,
    add_or_merge_memory,
    update_memory as _update_memory,
    delete_memory as _delete_memory,
    list_memories as _list_memories,
    search_memories as _search_memories,
    resolve_followup,
    count_memories,
    migrate_from_json,
)
from wyoming.client import AsyncTcpClient
from wyoming.tts import Synthesize, SynthesizeVoice
from wyoming.asr import Transcribe, Transcript
@@ -48,7 +68,7 @@ TIMEOUT_WARM = 120  # Model already loaded in VRAM
TIMEOUT_COLD = 180  # Model needs loading first (~10-20s load + inference)
OLLAMA_PS_URL = "http://localhost:11434/api/ps"
VTUBE_BRIDGE_URL = "http://localhost:8002"
DEFAULT_MODEL = "anthropic/claude-sonnet-4-6"


def _vtube_fire_and_forget(path: str, data: dict):
@@ -85,12 +105,21 @@ SATELLITE_MAP_PATH = Path("/Users/aodhan/homeai-data/satellite-map.json")
MEMORIES_DIR = Path("/Users/aodhan/homeai-data/memories")
ACTIVE_TTS_VOICE_PATH = Path("/Users/aodhan/homeai-data/active-tts-voice.json")
ACTIVE_MODE_PATH = Path("/Users/aodhan/homeai-data/active-mode.json")
ACTIVE_STYLE_PATH = Path("/Users/aodhan/homeai-data/active-prompt-style.json")
PROMPT_STYLES_DIR = Path(__file__).parent / "prompt-styles"

# Cloud provider model mappings for mode routing (fallback when style has no model)
CLOUD_MODELS = {
    "anthropic": "anthropic/claude-sonnet-4-6",
    "openai": "openai/gpt-4o",
}
LOCAL_MODEL = "ollama/qwen3.5:35b-a3b"

# Lock to serialise model-switch + agent-call (openclaw config is global)
_model_lock = threading.Lock()

# Initialize memory database at module load
init_memory_db()


def load_mode() -> dict:
@@ -102,11 +131,56 @@ def load_mode() -> dict:
    return {"mode": "private", "cloud_provider": "anthropic", "overrides": {}}


def resolve_model(mode_data: dict) -> str:
    """Resolve which model to use based on mode."""
    mode = mode_data.get("mode", "private")
    if mode == "private":
        return mode_data.get("local_model", LOCAL_MODEL)
    provider = mode_data.get("cloud_provider", "anthropic")
    return CLOUD_MODELS.get(provider, CLOUD_MODELS["anthropic"])


def load_prompt_style(style_id: str) -> dict:
    """Load a prompt style template by ID. Returns the style dict or a default."""
    if not style_id:
        style_id = "standard"
    safe_id = style_id.replace("/", "_").replace("..", "")
    style_path = PROMPT_STYLES_DIR / f"{safe_id}.json"
    try:
        with open(style_path) as f:
            return json.load(f)
    except Exception:
        return {"id": "standard", "name": "Standard", "group": "cloud", "instruction": "", "strip_sections": []}
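A style file in `prompt-styles/` is plain JSON matching the default dict returned above. A hypothetical `roleplay.json` might look like the following; the field values (including the model ID) are illustrative, and only the keys `id`, `name`, `group`, `instruction`, `strip_sections`, and the optional `model` are read by this code:

```json
{
  "id": "roleplay",
  "name": "Roleplay",
  "group": "cloud",
  "model": "anthropic/claude-opus-4-6",
  "instruction": "Stay fully in character. Write immersive, novel-style prose.",
  "strip_sections": []
}
```

Dropping a new JSON file into the directory is enough to add a style; no code change is needed.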
def load_active_style() -> str:
    """Load the active prompt style ID from state file. Defaults to 'standard'."""
    try:
        with open(ACTIVE_STYLE_PATH) as f:
            data = json.load(f)
        return data.get("style", "standard")
    except Exception:
        return "standard"
def resolve_model_for_style(style: dict, mode_data: dict) -> str:
    """Resolve model based on prompt style, falling back to mode config.
    Priority: style 'model' field > group-based routing > mode default."""
    mode = mode_data.get("mode", "private")
    group = style.get("group", "cloud")

    # Private mode with a local-group style: always route to the local model
    if mode == "private" and group == "local":
        return mode_data.get("local_model", LOCAL_MODEL)

    # Per-style model override (e.g. haiku for quick, opus for roleplay)
    style_model = style.get("model")
    if style_model:
        return style_model

    # Fallback: cloud model from mode config
    if group == "local":
        return mode_data.get("local_model", LOCAL_MODEL)
    provider = mode_data.get("cloud_provider", "anthropic")
    return CLOUD_MODELS.get(provider, CLOUD_MODELS["anthropic"])
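The resolution order above (private+local pin, then style override, then group fallback) can be exercised without any file I/O. A self-contained sketch with the same priority rules and the module constants inlined for illustration:

```python
CLOUD_MODELS = {"anthropic": "anthropic/claude-sonnet-4-6", "openai": "openai/gpt-4o"}
LOCAL_MODEL = "ollama/qwen3.5:35b-a3b"

def resolve(style: dict, mode_data: dict) -> str:
    mode = mode_data.get("mode", "private")
    group = style.get("group", "cloud")
    if mode == "private" and group == "local":
        return mode_data.get("local_model", LOCAL_MODEL)
    if style.get("model"):
        return style["model"]
    if group == "local":
        return mode_data.get("local_model", LOCAL_MODEL)
    provider = mode_data.get("cloud_provider", "anthropic")
    return CLOUD_MODELS.get(provider, CLOUD_MODELS["anthropic"])

# A style-level model override beats the mode's cloud provider
assert resolve({"group": "cloud", "model": "anthropic/claude-haiku-4-5"},
               {"mode": "public", "cloud_provider": "openai"}) == "anthropic/claude-haiku-4-5"
# Local-group styles stay local even without an explicit model
assert resolve({"group": "local"}, {"mode": "private"}) == LOCAL_MODEL
# No override, cloud group: fall back to the mode's provider
assert resolve({"group": "cloud"}, {"mode": "public", "cloud_provider": "openai"}) == "openai/gpt-4o"
```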
@@ -192,31 +266,44 @@ def load_character(character_id: str = None) -> dict:
    return {}


def load_character_prompt(satellite_id: str = None, character_id: str = None,
                          prompt_style: str = None, user_message: str = "",
                          is_cloud: bool = False) -> str:
    """Load the full system prompt for a character, resolved by satellite or explicit ID.
    Builds a rich prompt from style instruction + system_prompt + profile fields + memories.
    The prompt_style controls HOW the character responds (brief, conversational, roleplay, etc.)."""
    if not character_id:
        character_id = resolve_character_id(satellite_id)
    char = load_character(character_id)
    if not char:
        return ""

    # Load prompt style template
    style_id = prompt_style or load_active_style()
    style = load_prompt_style(style_id)
    strip_sections = set(style.get("strip_sections", []))

    sections = []

    # 1. Response style instruction (framing directive — goes first)
    instruction = style.get("instruction", "")
    if instruction:
        sections.append(f"[Response Style: {style.get('name', style_id)}]\n{instruction}")

    # 2. Core character identity (system_prompt)
    prompt = char.get("system_prompt", "")
    if prompt:
        sections.append(prompt)

    # 3. Character profile fields (filtered by style's strip_sections)
    profile_parts = []
    if "background" not in strip_sections and char.get("background"):
        profile_parts.append(f"## Background\n{char['background']}")
    if "appearance" not in strip_sections and char.get("appearance"):
        profile_parts.append(f"## Appearance\n{char['appearance']}")
    if "dialogue_style" not in strip_sections and char.get("dialogue_style"):
        profile_parts.append(f"## Dialogue Style\n{char['dialogue_style']}")
    if "skills" not in strip_sections and char.get("skills"):
        skills = char["skills"]
        if isinstance(skills, list):
            skills_text = ", ".join(skills[:15])
@@ -226,7 +313,18 @@ def load_character_prompt(satellite_id: str = None, character_id: str = None) ->
    if profile_parts:
        sections.append("[Character Profile]\n" + "\n\n".join(profile_parts))

    # 4. Per-character style overrides (optional customization per style)
    style_overrides = char.get("prompt_style_overrides", {}).get(style_id, {})
    if style_overrides:
        override_parts = []
        if style_overrides.get("dialogue_style"):
            override_parts.append(f"## Dialogue Style Override\n{style_overrides['dialogue_style']}")
        if style_overrides.get("system_prompt_suffix"):
            override_parts.append(style_overrides["system_prompt_suffix"])
        if override_parts:
            sections.append("[Style-Specific Notes]\n" + "\n\n".join(override_parts))

    # 5. Character metadata
    meta_lines = []
    if char.get("display_name"):
        meta_lines.append(f"Your name is: {char['display_name']}")
@@ -243,47 +341,86 @@ def load_character_prompt(satellite_id: str = None, character_id: str = None) ->
    if meta_lines:
        sections.append("[Character Metadata]\n" + "\n".join(meta_lines))

    # 6. Memories (personal + general, context-aware retrieval)
    personal, general, followups = load_memories(character_id, context=user_message, is_cloud=is_cloud)
    if personal:
        sections.append("[Personal Memories]\n" + "\n".join(f"- {m}" for m in personal))
    if general:
        sections.append("[General Knowledge]\n" + "\n".join(f"- {m}" for m in general))

    # 7. Pending follow-ups (things the character should naturally bring up)
    if followups:
        followup_lines = [
            f"- {fu['follow_up_context']} (from {fu['created_at'][:10]})"
            for fu in followups[:3]
        ]
        sections.append(
            "[Pending Follow-ups — Bring these up naturally if relevant]\n"
            "You have unresolved topics to check on with the user. "
            "Weave them into conversation naturally — don't list them. "
            "If the user addresses one, use memory-ctl resolve <id> to mark it resolved.\n"
            + "\n".join(followup_lines)
        )

    return "\n\n".join(sections)


def _truncate_to_budget(contents: list[str], budget: int) -> list[str]:
    """Truncate a list of strings to fit within a character budget."""
    result = []
    used = 0
    for content in contents:
        if used + len(content) > budget:
            break
        result.append(content)
        used += len(content)
    return result


def load_memories(character_id: str, context: str = "", is_cloud: bool = False) -> tuple[list[str], list[str], list[dict]]:
    """Load personal and general memories using semantic + recency retrieval.
    Returns (personal_contents, general_contents, pending_followups)."""
    PERSONAL_BUDGET = 4000
    GENERAL_BUDGET = 3000

    # Check if SQLite has any memories; fall back to JSON if empty (pre-migration)
    if count_memories(character_id) == 0 and count_memories("shared") == 0:
        return _load_memories_json_fallback(character_id), [], []

    personal_mems = _retrieve_memories(character_id, context, limit=15,
                                       exclude_private_for_cloud=is_cloud)
    general_mems = _retrieve_memories("shared", context, limit=10,
                                      exclude_private_for_cloud=is_cloud)
    followups = get_pending_followups(character_id)

    personal = _truncate_to_budget([m["content"] for m in personal_mems], PERSONAL_BUDGET)
    general = _truncate_to_budget([m["content"] for m in general_mems], GENERAL_BUDGET)

    return personal, general, followups


def _load_memories_json_fallback(character_id: str) -> list[str]:
    """Legacy JSON fallback for pre-migration state."""
    def _read(path: Path, budget: int) -> list[str]:
        try:
            with open(path) as f:
                data = json.load(f)
        except Exception:
            return []
        memories = data.get("memories", [])
        # Sort newest first
        memories.sort(key=lambda m: m.get("createdAt", ""), reverse=True)
        result, used = [], 0
        for m in memories:
            content = m.get("content", "").strip()
            if not content:
                continue
            if used + len(content) > budget:
                break
            result.append(content)
            used += len(content)
        return result

    safe_id = character_id.replace("/", "_")
    return _read(MEMORIES_DIR / "personal" / f"{safe_id}.json", 4000)
class OpenClawBridgeHandler(BaseHTTPRequestHandler):
@@ -297,6 +434,7 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
        """Send a JSON response."""
        self.send_response(status_code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())
@@ -319,11 +457,17 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
            self._handle_stt_request()
            return

        # Agent message endpoint
        if parsed_path.path == "/api/agent/message":
            self._handle_agent_request()
            return

        # Memory API: POST /api/memories/...
        if parsed_path.path.startswith("/api/memories/"):
            parts = parsed_path.path[len("/api/memories/"):].strip("/").split("/")
            self._handle_memory_post(parts)
            return

        self._send_json_response(404, {"error": "Not found"})
    def _handle_tts_request(self):
@@ -399,11 +543,29 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
            audio_bytes = resp.read()
            return audio_bytes, "audio/mpeg"
    def do_PUT(self):
        """Handle PUT requests (memory updates)."""
        parsed_path = urlparse(self.path)
        if parsed_path.path.startswith("/api/memories/"):
            parts = parsed_path.path[len("/api/memories/"):].strip("/").split("/")
            self._handle_memory_put(parts)
            return
        self._send_json_response(404, {"error": "Not found"})

    def do_DELETE(self):
        """Handle DELETE requests (memory deletion)."""
        parsed_path = urlparse(self.path)
        if parsed_path.path.startswith("/api/memories/"):
            parts = parsed_path.path[len("/api/memories/"):].strip("/").split("/")
            self._handle_memory_delete(parts)
            return
        self._send_json_response(404, {"error": "Not found"})

    def do_OPTIONS(self):
        """Handle CORS preflight requests."""
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "POST, GET, PUT, DELETE, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.end_headers()
@@ -531,19 +693,55 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
        self._send_json_response(200, {"status": "ok", "message": "Wake word received"})

    @staticmethod
    def _config_set(path: str, value: str):
        """Set an OpenClaw config value."""
        subprocess.run(
            ["/opt/homebrew/bin/openclaw", "config", "set", path, value],
            capture_output=True, text=True, timeout=5,
        )

    @staticmethod
    def _call_openclaw(message: str, agent: str, timeout: int,
                       model: str = None, session_id: str = None,
                       params: dict = None, thinking: str = None) -> str:
        """Call OpenClaw CLI and return stdout.
        Temporarily switches the gateway's primary model and inference params
        via `openclaw config set`, protected by _model_lock to prevent races."""
        cmd = ["/opt/homebrew/bin/openclaw", "agent", "--message", message, "--agent", agent]
        if session_id:
            cmd.extend(["--session-id", session_id])
        if thinking:
            cmd.extend(["--thinking", thinking])

        with _model_lock:
            if model:
                OpenClawBridgeHandler._config_set(
                    "agents.defaults.model.primary", model)

            # Set per-style temperature if provided
            temp_path = None
            if model and params and params.get("temperature") is not None:
                temp_path = f'agents.defaults.models["{model}"].params.temperature'
                OpenClawBridgeHandler._config_set(
                    temp_path, str(params["temperature"]))

            try:
                result = subprocess.run(
                    cmd,
                    capture_output=True,
                    text=True,
                    timeout=timeout,
                    check=True,
                )
                return result.stdout.strip()
            finally:
                # Restore defaults
                if model and model != DEFAULT_MODEL:
                    OpenClawBridgeHandler._config_set(
                        "agents.defaults.model.primary", DEFAULT_MODEL)
                if temp_path:
                    # Restore to neutral default
                    OpenClawBridgeHandler._config_set(temp_path, "0.5")
    @staticmethod
    def _needs_followup(response: str) -> bool:
@@ -588,6 +786,8 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
|
||||
agent = data.get("agent", "main")
|
||||
satellite_id = data.get("satellite_id")
|
||||
explicit_character_id = data.get("character_id")
|
||||
requested_style = data.get("prompt_style")
|
||||
conversation_id = data.get("conversation_id")
|
||||
|
||||
if not message:
|
||||
self._send_json_response(400, {"error": "Message is required"})
|
||||
@@ -598,10 +798,28 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
            character_id = explicit_character_id
        else:
            character_id = resolve_character_id(satellite_id)
        system_prompt = load_character_prompt(character_id=character_id)

        # Resolve prompt style: explicit > character default > global active
        char = load_character(character_id)
        style_id = requested_style or char.get("default_prompt_style") or load_active_style()
        style = load_prompt_style(style_id)
        print(f"[OpenClaw Bridge] Prompt style: {style.get('name', style_id)} ({style.get('group', 'cloud')})")

        # Determine if routing to cloud (for privacy filtering)
        mode_data = load_mode()
        active_model = resolve_model_for_style(style, mode_data)
        is_cloud = style.get("group", "cloud") == "cloud" and mode_data.get("mode") != "private"

        system_prompt = load_character_prompt(
            character_id=character_id, prompt_style=style_id,
            user_message=message, is_cloud=is_cloud,
        )

        # Run lifecycle maintenance (cheap SQL updates)
        auto_resolve_expired_followups()
        auto_archive_old_resolved()

        # Set the active TTS config for the Wyoming server to pick up
        char = load_character(character_id)
        tts_config = char.get("tts", {})
        if tts_config:
            set_active_tts_voice(character_id, tts_config)
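The precedence chain in the comment above (explicit request > character default > global active) is just a two-`or` fallback, which also treats the schema's empty-string enum value as "unset". A minimal sketch with hypothetical inputs:

```python
def resolve_style_id(requested, character, active_style):
    """Pick the prompt style: an explicit request wins, then the character's
    default_prompt_style, then the globally active style.
    Empty strings and None both fall through to the next tier."""
    return requested or character.get("default_prompt_style") or active_style
```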
@@ -616,14 +834,30 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
        if system_prompt:
            message = f"System Context: {system_prompt}\n\nUser Request: {message}"

        # Load mode and resolve model routing
        mode_data = load_mode()
        model_override = resolve_model(mode_data)
        active_model = model_override or DEFAULT_MODEL
        if model_override:
            print(f"[OpenClaw Bridge] Mode: PUBLIC → {model_override}")
        group = style.get("group", "cloud")
        print(f"[OpenClaw Bridge] Routing: {group.upper()} → {active_model}")

        # Resolve session ID for OpenClaw thread isolation
        # Dashboard chats: use conversation_id (each "New Chat" = fresh thread)
        # Satellites: use rotating 12-hour bucket so old context expires naturally
        if conversation_id:
            session_id = conversation_id
        elif satellite_id:
            now = datetime.now(timezone.utc)
            half = "am" if now.hour < 12 else "pm"
            session_id = f"sat_{satellite_id}_{now.strftime('%Y%m%d')}_{half}"
        else:
            print(f"[OpenClaw Bridge] Mode: PRIVATE ({active_model})")
            # API call with no conversation or satellite — use a transient session
            session_id = f"api_{int(datetime.now(timezone.utc).timestamp())}"
        print(f"[OpenClaw Bridge] Session: {session_id}")

        # Extract style inference params (temperature, etc.) and thinking level
        style_params = style.get("params", {})
        style_thinking = style.get("thinking")
        if style_params:
            print(f"[OpenClaw Bridge] Style params: {style_params}")
        if style_thinking:
            print(f"[OpenClaw Bridge] Thinking: {style_thinking}")

        # Check if model is warm to set appropriate timeout
        warm = is_model_warm()
@@ -635,7 +869,7 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):

        # Call OpenClaw CLI (use full path for launchd compatibility)
        try:
            response_text = self._call_openclaw(message, agent, timeout, model=model_override)
            response_text = self._call_openclaw(message, agent, timeout, model=active_model, session_id=session_id, params=style_params, thinking=style_thinking)

            # Re-prompt if the model promised to act but didn't call a tool.
            # Detect "I'll do X" / "Let me X" responses that lack any result.
@@ -645,11 +879,19 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
                    "You just said you would do something but didn't actually call the exec tool. "
                    "Do NOT explain what you will do — call the tool NOW using exec and return the result."
                )
                response_text = self._call_openclaw(followup, agent, timeout, model=model_override)
                response_text = self._call_openclaw(followup, agent, timeout, model=active_model, session_id=session_id, params=style_params, thinking=style_thinking)

            # Increment surfaced_count on follow-ups that were injected into prompt
            try:
                followups = get_pending_followups(character_id)
                for fu in followups[:3]:
                    increment_surfaced_count(fu["id"])
            except Exception as e:
                print(f"[OpenClaw Bridge] Follow-up tracking error: {e}")

            # Signal avatar: idle (TTS handler will override to 'speaking' if voice is used)
            _vtube_fire_and_forget("/expression", {"event": "idle"})
            self._send_json_response(200, {"response": response_text, "model": active_model})
            self._send_json_response(200, {"response": response_text, "model": active_model, "prompt_style": style_id})
        except subprocess.TimeoutExpired:
            self._send_json_response(504, {"error": f"OpenClaw command timed out after {timeout}s (model was {'warm' if warm else 'cold'})"})
        except subprocess.CalledProcessError as e:
@@ -660,18 +902,174 @@ class OpenClawBridgeHandler(BaseHTTPRequestHandler):
        except Exception as e:
            self._send_json_response(500, {"error": str(e)})

    def do_GET(self):
        """Handle GET requests (health check)."""
        parsed_path = urlparse(self.path)
    # ------------------------------------------------------------------
    # Memory REST API
    # ------------------------------------------------------------------

        if parsed_path.path == "/status" or parsed_path.path == "/":
    def _read_json_body(self) -> dict | None:
        """Read and parse JSON body from request. Returns None on error (response already sent)."""
        content_length = int(self.headers.get("Content-Length", 0))
        if content_length == 0:
            self._send_json_response(400, {"error": "Empty body"})
            return None
        try:
            return json.loads(self.rfile.read(content_length).decode())
        except json.JSONDecodeError:
            self._send_json_response(400, {"error": "Invalid JSON"})
            return None

    def _handle_memory_get(self, path_parts: list[str], query_params: dict):
        """Handle GET /api/memories/..."""
        # GET /api/memories/general
        if len(path_parts) == 1 and path_parts[0] == "general":
            limit = int(query_params.get("limit", ["50"])[0])
            offset = int(query_params.get("offset", ["0"])[0])
            memory_type = query_params.get("type", [None])[0]
            lifecycle = query_params.get("lifecycle", [None])[0]
            category = query_params.get("category", [None])[0]
            memories = _list_memories("shared", memory_type=memory_type,
                                      lifecycle_state=lifecycle, category=category,
                                      limit=limit, offset=offset)
            self._send_json_response(200, {"memories": memories})
            return

        if len(path_parts) < 1:
            self._send_json_response(400, {"error": "Character ID required"})
            return

        char_id = path_parts[0]

        # GET /api/memories/:characterId/followups
        if len(path_parts) == 2 and path_parts[1] == "followups":
            followups = get_pending_followups(char_id)
            self._send_json_response(200, {"followups": followups})
            return

        # GET /api/memories/:characterId
        limit = int(query_params.get("limit", ["50"])[0])
        offset = int(query_params.get("offset", ["0"])[0])
        memory_type = query_params.get("type", [None])[0]
        lifecycle = query_params.get("lifecycle", [None])[0]
        category = query_params.get("category", [None])[0]
        query = query_params.get("q", [None])[0]

        if query:
            memories = _search_memories(char_id, query, memory_type=memory_type, limit=limit)
        else:
            memories = _list_memories(char_id, memory_type=memory_type,
                                      lifecycle_state=lifecycle, category=category,
                                      limit=limit, offset=offset)
        self._send_json_response(200, {"memories": memories, "characterId": char_id})

    def _handle_memory_post(self, path_parts: list[str]):
        """Handle POST /api/memories/..."""
        data = self._read_json_body()
        if data is None:
            return

        # POST /api/memories/migrate
        if len(path_parts) == 1 and path_parts[0] == "migrate":
            result = migrate_from_json()
            self._send_json_response(200, result)
            return

        # POST /api/memories/:memoryId/resolve
        if len(path_parts) == 2 and path_parts[1] == "resolve":
            ok = resolve_followup(path_parts[0])
            self._send_json_response(200 if ok else 404,
                                     {"ok": ok, "id": path_parts[0]})
            return

        # POST /api/memories/general — add general memory
        if len(path_parts) == 1 and path_parts[0] == "general":
            content = data.get("content", "").strip()
            if not content:
                self._send_json_response(400, {"error": "content is required"})
                return
            mem = add_or_merge_memory(
                character_id="shared",
                content=content,
                memory_type=data.get("memory_type"),
                category=data.get("category", "other"),
                importance=data.get("importance"),
                privacy_level=data.get("privacy_level"),
                tags=data.get("tags"),
                source=data.get("source", "dashboard"),
            )
            self._send_json_response(200, {"ok": True, "memory": mem})
            return

        # POST /api/memories/:characterId — add personal memory
        if len(path_parts) == 1:
            char_id = path_parts[0]
            content = data.get("content", "").strip()
            if not content:
                self._send_json_response(400, {"error": "content is required"})
                return
            mem = add_or_merge_memory(
                character_id=char_id,
                content=content,
                memory_type=data.get("memory_type"),
                category=data.get("category", "other"),
                importance=data.get("importance"),
                privacy_level=data.get("privacy_level"),
                tags=data.get("tags"),
                follow_up_due=data.get("follow_up_due"),
                follow_up_context=data.get("follow_up_context"),
                source=data.get("source", "dashboard"),
            )
            self._send_json_response(200, {"ok": True, "memory": mem})
            return

        self._send_json_response(404, {"error": "Not found"})

    def _handle_memory_put(self, path_parts: list[str]):
        """Handle PUT /api/memories/:memoryId — update a memory."""
        if len(path_parts) != 1:
            self._send_json_response(400, {"error": "Memory ID required"})
            return
        data = self._read_json_body()
        if data is None:
            return
        mem = _update_memory(path_parts[0], **data)
        if mem:
            self._send_json_response(200, {"ok": True, "memory": mem})
        else:
            self._send_json_response(404, {"error": "Memory not found"})

    def _handle_memory_delete(self, path_parts: list[str]):
        """Handle DELETE /api/memories/:memoryId."""
        if len(path_parts) != 1:
            self._send_json_response(400, {"error": "Memory ID required"})
            return
        ok = _delete_memory(path_parts[0])
        self._send_json_response(200 if ok else 404, {"ok": ok, "id": path_parts[0]})

    # ------------------------------------------------------------------
    # HTTP method dispatchers
    # ------------------------------------------------------------------

    def do_GET(self):
        """Handle GET requests."""
        parsed_path = urlparse(self.path)
        path = parsed_path.path

        if path == "/status" or path == "/":
            self._send_json_response(200, {
                "status": "ok",
                "service": "OpenClaw HTTP Bridge",
                "version": "1.0.0"
                "version": "2.0.0"
            })
        else:
            self._send_json_response(404, {"error": "Not found"})
            return

        # Memory API: GET /api/memories/...
        if path.startswith("/api/memories/"):
            parts = path[len("/api/memories/"):].strip("/").split("/")
            query_params = parse_qs(parsed_path.query)
            self._handle_memory_get(parts, query_params)
            return

        self._send_json_response(404, {"error": "Not found"})


class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
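For reference, the GET surface registered above can be exercised by building URLs like the following. The bridge's host and port are assumptions (this diff does not show where the server listens — adjust to your deployment); only the path shapes and query parameter names come from the handler code.

```python
from urllib.parse import urlencode

BRIDGE = "http://localhost:8765"  # assumed bridge address — not specified in this diff

def memories_url(character_id, q=None, memory_type=None, limit=50, offset=0):
    """Build a GET /api/memories/:characterId URL.
    Passing 'q' switches the handler from listing to search."""
    params = {"limit": limit, "offset": offset}
    if q:
        params["q"] = q
    if memory_type:
        params["type"] = memory_type
    return f"{BRIDGE}/api/memories/{character_id}?{urlencode(params)}"
```

Fetching `memories_url("shared")` hits the general-memory branch; appending `/followups` to the path (no query) returns that character's pending follow-ups.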
homeai-agent/prompt-styles/creative.json — new file (13 lines)
@@ -0,0 +1,13 @@
{
  "id": "creative",
  "name": "Creative",
  "group": "cloud",
  "model": "anthropic/claude-sonnet-4-6",
  "description": "In-depth answers, longer conversational responses",
  "thinking": "low",
  "params": {
    "temperature": 0.7
  },
  "instruction": "Give thorough, in-depth answers. Respond at whatever length the topic requires — short for simple things, long for complex ones. Be conversational and engaging, like a knowledgeable friend. Vary your sentence structure and word choice to keep things interesting. Do not use roleplay actions or narration. If a topic has interesting depth worth exploring, offer to continue. This mode is for rich conversation, not commands.",
  "strip_sections": []
}
homeai-agent/prompt-styles/game-master.json — new file (13 lines)
@@ -0,0 +1,13 @@
{
  "id": "game-master",
  "name": "Game Master",
  "group": "cloud",
  "model": "anthropic/claude-opus-4-6",
  "description": "Second-person interactive narration with user as participant",
  "thinking": "off",
  "params": {
    "temperature": 0.9
  },
  "instruction": "Narrate in second person — the user is the subject experiencing the scene. Describe what they see, hear, and feel with vivid, varied language. Write your character's dialogue in quotes and their actions in prose. After describing the scene or an interaction, prompt the user for their next action. Keep the user engaged as an active participant. Balance rich description with opportunities for user agency. Avoid repeating descriptive patterns — each scene should feel fresh and unpredictable. This is a 2nd-person interactive experience.",
  "strip_sections": []
}
homeai-agent/prompt-styles/quick.json — new file (13 lines)
@@ -0,0 +1,13 @@
{
  "id": "quick",
  "name": "Quick",
  "group": "cloud",
  "model": "anthropic/claude-haiku-4-5-20251001",
  "description": "Brief responses for commands and quick questions",
  "thinking": "off",
  "params": {
    "temperature": 0.15
  },
  "instruction": "RESPONSE RULES — STRICT:\n- Respond as briefly as possible. For smart home commands, confirm with 1-3 words (\"Done.\", \"Lights on.\", \"Playing jazz.\").\n- For factual questions, give the shortest correct answer. One sentence max.\n- No small talk, no elaboration, no follow-up questions unless the request is genuinely ambiguous.\n- Never describe your actions, emotions, or thought process.\n- Never add flair, personality, or creative embellishments — be a reliable, predictable tool.\n- If a tool call is needed, execute it and report the result. Nothing else.",
  "strip_sections": ["background", "appearance", "dialogue_style"]
}
homeai-agent/prompt-styles/roleplayer.json — new file (13 lines)
@@ -0,0 +1,13 @@
{
  "id": "roleplayer",
  "name": "Roleplayer",
  "group": "cloud",
  "model": "anthropic/claude-opus-4-6",
  "description": "First-person roleplay with character actions and expressions",
  "thinking": "off",
  "params": {
    "temperature": 0.85
  },
  "instruction": "Respond entirely in first person as your character. Use action descriptions enclosed in asterisks (*adjusts glasses*, *leans forward thoughtfully*) to convey body language, emotions, and physical actions. Stay fully in character at all times — your personality, speech patterns, and mannerisms should be consistent with your character profile. React emotionally and physically to what the user says. Vary your expressions, gestures, and phrasings — never repeat the same actions or sentence structures. Surprise the user with unexpected but in-character reactions. This is an immersive 1st-person interaction.",
  "strip_sections": []
}
homeai-agent/prompt-styles/standard.json — new file (13 lines)
@@ -0,0 +1,13 @@
{
  "id": "standard",
  "name": "Standard",
  "group": "cloud",
  "model": "anthropic/claude-sonnet-4-6",
  "description": "Conversational responses, concise but informative",
  "thinking": "off",
  "params": {
    "temperature": 0.4
  },
  "instruction": "Respond naturally and conversationally. Be concise but informative — a few sentences is ideal. Do not use roleplay actions, narration, or describe your expressions/body language. Treat the interaction as a chat, not a performance. Stay helpful, on-topic, and consistent. Prioritise clarity and accuracy over flair.",
  "strip_sections": []
}
homeai-agent/prompt-styles/storyteller.json — new file (13 lines)
@@ -0,0 +1,13 @@
{
  "id": "storyteller",
  "name": "Storyteller",
  "group": "cloud",
  "model": "anthropic/claude-opus-4-6",
  "description": "Third-person narrative with periodic reader check-ins",
  "thinking": "off",
  "params": {
    "temperature": 0.95
  },
  "instruction": "Narrate in third person as a storyteller. Describe scenes, character actions, dialogue, and atmosphere as a novelist would. Your character should be written about, not speaking as themselves directly to the user. Write rich, evocative prose with varied vocabulary, rhythm, and imagery. Avoid formulaic descriptions — each passage should have its own texture and mood. Periodically check in with the reader about story direction. The user drives the direction but you drive the narrative between check-ins. This is a 3rd-person storytelling experience.",
  "strip_sections": []
}
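All six style files share one shape, so consuming a style reduces to reading `model`, `thinking`, and `params`. A sketch against an inline copy of the fields from standard.json — the variable names here are illustrative, not the bridge's `load_prompt_style` implementation:

```python
import json

# Inline copy of the relevant fields from standard.json
style_json = '''
{
  "id": "standard",
  "group": "cloud",
  "model": "anthropic/claude-sonnet-4-6",
  "thinking": "off",
  "params": { "temperature": 0.4 }
}
'''

style = json.loads(style_json)
model = style["model"]
thinking = style.get("thinking")
# .get chains keep missing "params" from raising
temperature = style.get("params", {}).get("temperature")
```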
@@ -26,6 +26,8 @@
		<string>/Users/aodhan</string>
		<key>GAZE_API_KEY</key>
		<string>e63401f17e4845e1059f830267f839fe7fc7b6083b1cb1730863318754d799f4</string>
		<key>HA_TOKEN</key>
		<string></string>
	</dict>

	<key>RunAtLoad</key>
@@ -46,6 +46,16 @@
      }
    },

    "dream_id": {
      "type": "string",
      "description": "Linked Dream character ID for syncing character data and images"
    },

    "gaze_character": {
      "type": "string",
      "description": "Linked GAZE character_id for auto-assigned cover image and default image generation preset"
    },

    "gaze_presets": {
      "type": "array",
      "description": "GAZE image generation presets with trigger conditions",
@@ -72,7 +82,25 @@
      }
    },

    "notes": { "type": "string" }
    "notes": { "type": "string" },

    "default_prompt_style": {
      "type": "string",
      "description": "Default prompt style for this character (quick, standard, creative, roleplayer, game-master, storyteller). Overrides global active style when this character is active.",
      "enum": ["", "quick", "standard", "creative", "roleplayer", "game-master", "storyteller"]
    },

    "prompt_style_overrides": {
      "type": "object",
      "description": "Per-style customizations for this character. Keys are style IDs, values contain override fields.",
      "additionalProperties": {
        "type": "object",
        "properties": {
          "dialogue_style": { "type": "string", "description": "Override dialogue style for this prompt style" },
          "system_prompt_suffix": { "type": "string", "description": "Additional instructions appended for this prompt style" }
        }
      }
    }
  },
  "additionalProperties": true
}
@@ -1,14 +1,16 @@
import { BrowserRouter, Routes, Route, NavLink } from 'react-router-dom';
import { useState, useCallback, useEffect } from 'react';
import { BrowserRouter, Routes, Route, NavLink, useLocation } from 'react-router-dom';
import Dashboard from './pages/Dashboard';
import Chat from './pages/Chat';
import Characters from './pages/Characters';
import Editor from './pages/Editor';
import Memories from './pages/Memories';

function NavItem({ to, children, icon }) {
function NavItem({ to, children, icon, onClick }) {
  return (
    <NavLink
      to={to}
      onClick={onClick}
      className={({ isActive }) =>
        `flex items-center gap-3 px-4 py-2.5 rounded-lg text-sm font-medium transition-colors ${
          isActive
@@ -24,22 +26,78 @@ function NavItem({ to, children, icon }) {
}

function Layout({ children }) {
  const [sidebarOpen, setSidebarOpen] = useState(false)
  const location = useLocation()

  // Close sidebar on route change (mobile)
  useEffect(() => {
    setSidebarOpen(false)
  }, [location.pathname])

  const closeSidebar = useCallback(() => setSidebarOpen(false), [])

  return (
    <div className="h-screen bg-gray-950 flex overflow-hidden">
      {/* Mobile header bar */}
      <div className="fixed top-0 left-0 right-0 z-30 flex items-center gap-3 px-4 py-3 bg-gray-900/95 backdrop-blur border-b border-gray-800 md:hidden">
        <button
          onClick={() => setSidebarOpen(true)}
          className="p-1.5 text-gray-400 hover:text-white transition-colors"
          aria-label="Open menu"
        >
          <svg className="w-6 h-6" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
            <path strokeLinecap="round" strokeLinejoin="round" d="M3.75 6.75h16.5M3.75 12h16.5m-16.5 5.25h16.5" />
          </svg>
        </button>
        <div className="flex items-center gap-2">
          <div className="w-7 h-7 rounded-md bg-gradient-to-br from-indigo-500 to-purple-600 flex items-center justify-center">
            <svg className="w-4 h-4 text-white" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
              <path strokeLinecap="round" strokeLinejoin="round" d="M2.25 12l8.954-8.955c.44-.439 1.152-.439 1.591 0L21.75 12M4.5 9.75v10.125c0 .621.504 1.125 1.125 1.125H9.75v-4.875c0-.621.504-1.125 1.125-1.125h2.25c.621 0 1.125.504 1.125 1.125V21h4.125c.621 0 1.125-.504 1.125-1.125V9.75M8.25 21h8.25" />
            </svg>
          </div>
          <span className="text-sm font-bold text-white">HomeAI</span>
        </div>
      </div>

      {/* Mobile backdrop */}
      {sidebarOpen && (
        <div
          className="fixed inset-0 bg-black/60 z-40 md:hidden"
          onClick={closeSidebar}
        />
      )}

      {/* Sidebar */}
      <aside className="w-64 bg-gray-900 border-r border-gray-800 flex flex-col shrink-0">
      <aside className={`
        fixed inset-y-0 left-0 z-50 w-64 bg-gray-900 border-r border-gray-800 flex flex-col shrink-0
        transform transition-transform duration-200 ease-out
        ${sidebarOpen ? 'translate-x-0' : '-translate-x-full'}
        md:static md:translate-x-0
      `}>
        {/* Logo */}
        <div className="px-6 py-5 border-b border-gray-800">
          <div className="flex items-center gap-3">
            <div className="w-9 h-9 rounded-lg bg-gradient-to-br from-indigo-500 to-purple-600 flex items-center justify-center">
              <svg className="w-5 h-5 text-white" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
                <path strokeLinecap="round" strokeLinejoin="round" d="M2.25 12l8.954-8.955c.44-.439 1.152-.439 1.591 0L21.75 12M4.5 9.75v10.125c0 .621.504 1.125 1.125 1.125H9.75v-4.875c0-.621.504-1.125 1.125-1.125h2.25c.621 0 1.125.504 1.125 1.125V21h4.125c.621 0 1.125-.504 1.125-1.125V9.75M8.25 21h8.25" />
          <div className="flex items-center justify-between">
            <div className="flex items-center gap-3">
              <div className="w-9 h-9 rounded-lg bg-gradient-to-br from-indigo-500 to-purple-600 flex items-center justify-center">
                <svg className="w-5 h-5 text-white" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
                  <path strokeLinecap="round" strokeLinejoin="round" d="M2.25 12l8.954-8.955c.44-.439 1.152-.439 1.591 0L21.75 12M4.5 9.75v10.125c0 .621.504 1.125 1.125 1.125H9.75v-4.875c0-.621.504-1.125 1.125-1.125h2.25c.621 0 1.125.504 1.125 1.125V21h4.125c.621 0 1.125-.504 1.125-1.125V9.75M8.25 21h8.25" />
                </svg>
              </div>
              <div>
                <h1 className="text-lg font-bold text-white tracking-tight">HomeAI</h1>
                <p className="text-xs text-gray-500">LINDBLUM</p>
              </div>
            </div>
            {/* Close button on mobile */}
            <button
              onClick={closeSidebar}
              className="p-1 text-gray-500 hover:text-white md:hidden"
              aria-label="Close menu"
            >
              <svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
                <path strokeLinecap="round" strokeLinejoin="round" d="M6 18L18 6M6 6l12 12" />
              </svg>
            </div>
            <div>
              <h1 className="text-lg font-bold text-white tracking-tight">HomeAI</h1>
              <p className="text-xs text-gray-500">LINDBLUM</p>
            </div>
            </button>
          </div>
        </div>

@@ -47,6 +105,7 @@ function Layout({ children }) {
        <nav className="flex-1 px-3 py-4 space-y-1">
          <NavItem
            to="/"
            onClick={closeSidebar}
            icon={
              <svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
                <path strokeLinecap="round" strokeLinejoin="round" d="M3.75 6A2.25 2.25 0 016 3.75h2.25A2.25 2.25 0 0110.5 6v2.25a2.25 2.25 0 01-2.25 2.25H6a2.25 2.25 0 01-2.25-2.25V6zM3.75 15.75A2.25 2.25 0 016 13.5h2.25a2.25 2.25 0 012.25 2.25V18a2.25 2.25 0 01-2.25 2.25H6A2.25 2.25 0 013.75 18v-2.25zM13.5 6a2.25 2.25 0 012.25-2.25H18A2.25 2.25 0 0120.25 6v2.25A2.25 2.25 0 0118 10.5h-2.25a2.25 2.25 0 01-2.25-2.25V6zM13.5 15.75a2.25 2.25 0 012.25-2.25H18a2.25 2.25 0 012.25 2.25V18A2.25 2.25 0 0118 20.25h-2.25A2.25 2.25 0 0113.5 18v-2.25z" />
@@ -58,6 +117,7 @@ function Layout({ children }) {

          <NavItem
            to="/chat"
            onClick={closeSidebar}
            icon={
              <svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
                <path strokeLinecap="round" strokeLinejoin="round" d="M8.625 12a.375.375 0 11-.75 0 .375.375 0 01.75 0zm0 0H8.25m4.125 0a.375.375 0 11-.75 0 .375.375 0 01.75 0zm0 0H12m4.125 0a.375.375 0 11-.75 0 .375.375 0 01.75 0zm0 0h-.375M21 12c0 4.556-4.03 8.25-9 8.25a9.764 9.764 0 01-2.555-.337A5.972 5.972 0 015.41 20.97a5.969 5.969 0 01-.474-.065 4.48 4.48 0 00.978-2.025c.09-.457-.133-.901-.467-1.226C3.93 16.178 3 14.189 3 12c0-4.556 4.03-8.25 9-8.25s9 3.694 9 8.25z" />
@@ -69,6 +129,7 @@ function Layout({ children }) {

          <NavItem
            to="/characters"
            onClick={closeSidebar}
            icon={
              <svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
                <path strokeLinecap="round" strokeLinejoin="round" d="M15.75 6a3.75 3.75 0 11-7.5 0 3.75 3.75 0 017.5 0zM4.501 20.118a7.5 7.5 0 0114.998 0A17.933 17.933 0 0112 21.75c-2.676 0-5.216-.584-7.499-1.632z" />
@@ -80,6 +141,7 @@ function Layout({ children }) {

          <NavItem
            to="/memories"
            onClick={closeSidebar}
            icon={
              <svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
                <path strokeLinecap="round" strokeLinejoin="round" d="M12 18v-5.25m0 0a6.01 6.01 0 001.5-.189m-1.5.189a6.01 6.01 0 01-1.5-.189m3.75 7.478a12.06 12.06 0 01-4.5 0m3.75 2.383a14.406 14.406 0 01-3 0M14.25 18v-.192c0-.983.658-1.823 1.508-2.316a7.5 7.5 0 10-7.517 0c.85.493 1.509 1.333 1.509 2.316V18" />
@@ -91,6 +153,7 @@ function Layout({ children }) {

          <NavItem
            to="/editor"
            onClick={closeSidebar}
            icon={
              <svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
                <path strokeLinecap="round" strokeLinejoin="round" d="M9.594 3.94c.09-.542.56-.94 1.11-.94h2.593c.55 0 1.02.398 1.11.94l.213 1.281c.063.374.313.686.645.87.074.04.147.083.22.127.324.196.72.257 1.075.124l1.217-.456a1.125 1.125 0 011.37.49l1.296 2.247a1.125 1.125 0 01-.26 1.431l-1.003.827c-.293.24-.438.613-.431.992a6.759 6.759 0 010 .255c-.007.378.138.75.43.99l1.005.828c.424.35.534.954.26 1.43l-1.298 2.247a1.125 1.125 0 01-1.369.491l-1.217-.456c-.355-.133-.75-.072-1.076.124a6.57 6.57 0 01-.22.128c-.331.183-.581.495-.644.869l-.213 1.28c-.09.543-.56.941-1.11.941h-2.594c-.55 0-1.02-.398-1.11-.94l-.213-1.281c-.062-.374-.312-.686-.644-.87a6.52 6.52 0 01-.22-.127c-.325-.196-.72-.257-1.076-.124l-1.217.456a1.125 1.125 0 01-1.369-.49l-1.297-2.247a1.125 1.125 0 01.26-1.431l1.004-.827c.292-.24.437-.613.43-.992a6.932 6.932 0 010-.255c.007-.378-.138-.75-.43-.99l-1.004-.828a1.125 1.125 0 01-.26-1.43l1.297-2.247a1.125 1.125 0 011.37-.491l1.216.456c.356.133.751.072 1.076-.124.072-.044.146-.087.22-.128.332-.183.582-.495.644-.869l.214-1.281z" />
@@ -109,8 +172,8 @@ function Layout({ children }) {
        </div>
      </aside>

      {/* Main content */}
      <main className="flex-1 overflow-hidden flex flex-col">
      {/* Main content — add top padding on mobile for the header bar */}
      <main className="flex-1 overflow-hidden flex flex-col pt-14 md:pt-0">
        {children}
      </main>
    </div>
@@ -122,11 +185,11 @@ function App() {
    <BrowserRouter>
      <Layout>
        <Routes>
          <Route path="/" element={<div className="flex-1 overflow-y-auto p-8"><div className="max-w-6xl mx-auto"><Dashboard /></div></div>} />
          <Route path="/" element={<div className="flex-1 overflow-y-auto p-4 md:p-8"><div className="max-w-6xl mx-auto"><Dashboard /></div></div>} />
          <Route path="/chat" element={<Chat />} />
          <Route path="/characters" element={<div className="flex-1 overflow-y-auto p-8"><div className="max-w-6xl mx-auto"><Characters /></div></div>} />
          <Route path="/memories" element={<div className="flex-1 overflow-y-auto p-8"><div className="max-w-6xl mx-auto"><Memories /></div></div>} />
          <Route path="/editor" element={<div className="flex-1 overflow-y-auto p-8"><div className="max-w-6xl mx-auto"><Editor /></div></div>} />
          <Route path="/characters" element={<div className="flex-1 overflow-y-auto p-4 md:p-8"><div className="max-w-6xl mx-auto"><Characters /></div></div>} />
          <Route path="/memories" element={<div className="flex-1 overflow-y-auto p-4 md:p-8"><div className="max-w-6xl mx-auto"><Memories /></div></div>} />
          <Route path="/editor" element={<div className="flex-1 overflow-y-auto p-4 md:p-8"><div className="max-w-6xl mx-auto"><Editor /></div></div>} />
        </Routes>
      </Layout>
    </BrowserRouter>
@@ -2,7 +2,7 @@ import { useEffect, useRef } from 'react'
import MessageBubble from './MessageBubble'
import ThinkingIndicator from './ThinkingIndicator'

export default function ChatPanel({ messages, isLoading, onReplay, character }) {
export default function ChatPanel({ messages, isLoading, onReplay, onRetry, character }) {
const bottomRef = useRef(null)
const name = character?.name || 'AI'
const image = character?.image || null
@@ -32,7 +32,7 @@ export default function ChatPanel({ messages, isLoading, onReplay, character })
return (
<div className="flex-1 overflow-y-auto py-4">
{messages.map((msg) => (
<MessageBubble key={msg.id} message={msg} onReplay={onReplay} character={character} />
<MessageBubble key={msg.id} message={msg} onReplay={onReplay} onRetry={onRetry} character={character} />
))}
{isLoading && <ThinkingIndicator character={character} />}
<div ref={bottomRef} />

@@ -10,61 +10,95 @@ function timeAgo(dateStr) {
return `${days}d ago`
}

export default function ConversationList({ conversations, activeId, onCreate, onSelect, onDelete }) {
export default function ConversationList({ conversations, activeId, onCreate, onSelect, onDelete, isOpen, onToggle }) {
return (
<div className="w-72 border-r border-gray-800 flex flex-col bg-gray-950 shrink-0">
{/* New chat button */}
<div className="p-3 border-b border-gray-800">
<button
onClick={onCreate}
className="w-full flex items-center justify-center gap-2 px-3 py-2 bg-indigo-600 hover:bg-indigo-500 text-white text-sm rounded-lg transition-colors"
>
<svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M12 4.5v15m7.5-7.5h-15" />
</svg>
New chat
</button>
</div>
<>
{/* Mobile toggle button */}
<button
onClick={onToggle}
className="md:hidden absolute left-2 top-2 z-10 p-2 text-gray-400 hover:text-white bg-gray-900/80 rounded-lg border border-gray-800"
aria-label="Toggle conversations"
title="Conversations"
>
<svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
<path strokeLinecap="round" strokeLinejoin="round" d="M20.25 8.511c.884.284 1.5 1.128 1.5 2.097v4.286c0 1.136-.847 2.1-1.98 2.193-.34.027-.68.052-1.02.072v3.091l-3-3c-1.354 0-2.694-.055-4.02-.163a2.115 2.115 0 01-.825-.242m9.345-8.334a2.126 2.126 0 00-.476-.095 48.64 48.64 0 00-8.048 0c-1.131.094-1.976 1.057-1.976 2.192v4.286c0 .837.46 1.58 1.155 1.951m9.345-8.334V6.637c0-1.621-1.152-3.026-2.76-3.235A48.455 48.455 0 0011.25 3c-2.115 0-4.198.137-6.24.402-1.608.209-2.76 1.614-2.76 3.235v6.226c0 1.621 1.152 3.026 2.76 3.235.577.075 1.157.14 1.74.194V21l4.155-4.155" />
</svg>
</button>

{/* Conversation list */}
<div className="flex-1 overflow-y-auto">
{conversations.length === 0 ? (
<p className="text-xs text-gray-600 text-center py-6">No conversations yet</p>
) : (
conversations.map(conv => (
<div
key={conv.id}
onClick={() => onSelect(conv.id)}
className={`group flex items-start gap-2 px-3 py-2.5 cursor-pointer border-b border-gray-800/50 transition-colors ${
conv.id === activeId
? 'bg-gray-800 text-white'
: 'text-gray-400 hover:bg-gray-800/50 hover:text-gray-200'
}`}
>
<div className="flex-1 min-w-0">
<p className="text-sm truncate">
{conv.title || 'New conversation'}
</p>
<div className="flex items-center gap-2 mt-0.5">
{conv.characterName && (
<span className="text-xs text-indigo-400/70">{conv.characterName}</span>
)}
<span className="text-xs text-gray-600">{timeAgo(conv.updatedAt)}</span>
</div>
</div>
<button
onClick={(e) => { e.stopPropagation(); onDelete(conv.id) }}
className="opacity-0 group-hover:opacity-100 p-1 text-gray-500 hover:text-red-400 transition-all shrink-0 mt-0.5"
title="Delete"
{/* Mobile backdrop */}
{isOpen && (
<div className="fixed inset-0 bg-black/50 z-20 md:hidden" onClick={onToggle} />
)}

{/* Conversation panel */}
<div className={`
fixed inset-y-0 left-0 z-30 w-72 bg-gray-950 border-r border-gray-800 flex flex-col
transform transition-transform duration-200 ease-out
${isOpen ? 'translate-x-0' : '-translate-x-full'}
md:static md:translate-x-0 md:shrink-0
`}>
{/* Header with close on mobile */}
<div className="p-3 border-b border-gray-800 flex items-center gap-2">
<button
onClick={onCreate}
className="flex-1 flex items-center justify-center gap-2 px-3 py-2.5 bg-indigo-600 hover:bg-indigo-500 text-white text-sm rounded-lg transition-colors"
>
<svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M12 4.5v15m7.5-7.5h-15" />
</svg>
New chat
</button>
<button
onClick={onToggle}
className="p-2 text-gray-500 hover:text-white md:hidden"
aria-label="Close"
>
<svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</button>
</div>

{/* Conversation list */}
<div className="flex-1 overflow-y-auto">
{conversations.length === 0 ? (
<p className="text-xs text-gray-600 text-center py-6">No conversations yet</p>
) : (
conversations.map(conv => (
<div
key={conv.id}
onClick={() => { onSelect(conv.id); if (onToggle) onToggle() }}
className={`group flex items-start gap-2 px-3 py-2.5 cursor-pointer border-b border-gray-800/50 transition-colors ${
conv.id === activeId
? 'bg-gray-800 text-white'
: 'text-gray-400 hover:bg-gray-800/50 hover:text-gray-200'
}`}
>
<svg className="w-3.5 h-3.5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M14.74 9l-.346 9m-4.788 0L9.26 9m9.968-3.21c.342.052.682.107 1.022.166m-1.022-.165L18.16 19.673a2.25 2.25 0 01-2.244 2.077H8.084a2.25 2.25 0 01-2.244-2.077L4.772 5.79m14.456 0a48.108 48.108 0 00-3.478-.397m-12 .562c.34-.059.68-.114 1.022-.165m0 0a48.11 48.11 0 013.478-.397m7.5 0v-.916c0-1.18-.91-2.164-2.09-2.201a51.964 51.964 0 00-3.32 0c-1.18.037-2.09 1.022-2.09 2.201v.916m7.5 0a48.667 48.667 0 00-7.5 0" />
</svg>
</button>
</div>
))
)}
<div className="flex-1 min-w-0">
<p className="text-sm truncate">
{conv.title || 'New conversation'}
</p>
<div className="flex items-center gap-2 mt-0.5">
{conv.characterName && (
<span className="text-xs text-indigo-400/70">{conv.characterName}</span>
)}
<span className="text-xs text-gray-600">{timeAgo(conv.updatedAt)}</span>
</div>
</div>
<button
onClick={(e) => { e.stopPropagation(); onDelete(conv.id) }}
className="opacity-0 group-hover:opacity-100 p-1.5 text-gray-500 hover:text-red-400 transition-all shrink-0 mt-0.5"
title="Delete"
>
<svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M14.74 9l-.346 9m-4.788 0L9.26 9m9.968-3.21c.342.052.682.107 1.022.166m-1.022-.165L18.16 19.673a2.25 2.25 0 01-2.244 2.077H8.084a2.25 2.25 0 01-2.244-2.077L4.772 5.79m14.456 0a48.108 48.108 0 00-3.478-.397m-12 .562c.34-.059.68-.114 1.022-.165m0 0a48.11 48.11 0 013.478-.397m7.5 0v-.916c0-1.18-.91-2.164-2.09-2.201a51.964 51.964 0 00-3.32 0c-1.18.037-2.09 1.022-2.09 2.201v.916m7.5 0a48.667 48.667 0 00-7.5 0" />
</svg>
</button>
</div>
))
)}
</div>
</div>
</div>
</>
)
}

@@ -20,7 +20,7 @@ export default function InputBar({ onSend, onVoiceToggle, isLoading, isRecording
}

return (
<form onSubmit={handleSubmit} className="border-t border-gray-800 bg-gray-950 px-4 py-3 shrink-0">
<form onSubmit={handleSubmit} className="border-t border-gray-800 bg-gray-950 px-3 sm:px-4 py-2 sm:py-3 shrink-0">
<div className="flex items-end gap-2 max-w-3xl mx-auto">
<VoiceButton
isRecording={isRecording}
@@ -41,7 +41,7 @@ export default function InputBar({ onSend, onVoiceToggle, isLoading, isRecording
<button
type="submit"
disabled={!text.trim() || isLoading}
className="w-10 h-10 rounded-full bg-indigo-600 text-white flex items-center justify-center shrink-0 hover:bg-indigo-500 disabled:opacity-40 disabled:hover:bg-indigo-600 transition-colors"
className="w-11 h-11 sm:w-10 sm:h-10 rounded-full bg-indigo-600 text-white flex items-center justify-center shrink-0 hover:bg-indigo-500 disabled:opacity-40 disabled:hover:bg-indigo-600 transition-colors"
>
<svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M6 12L3.269 3.126A59.768 59.768 0 0121.485 12 59.77 59.77 0 013.27 20.876L5.999 12zm0 0h7.5" />

@@ -88,12 +88,12 @@ function RichContent({ text }) {
)
}

export default function MessageBubble({ message, onReplay, character }) {
export default function MessageBubble({ message, onReplay, onRetry, character }) {
const isUser = message.role === 'user'

return (
<div className={`flex ${isUser ? 'justify-end' : 'justify-start'} px-4 py-1.5`}>
<div className={`flex items-start gap-3 max-w-[80%] ${isUser ? 'flex-row-reverse' : ''}`}>
<div className={`flex ${isUser ? 'justify-end' : 'justify-start'} px-3 sm:px-4 py-1.5`}>
<div className={`flex items-start gap-2 sm:gap-3 max-w-[92%] sm:max-w-[80%] ${isUser ? 'flex-row-reverse' : ''}`}>
{!isUser && <Avatar character={character} />}
<div>
<div
@@ -114,6 +114,18 @@ export default function MessageBubble({ message, onReplay, character }) {
{message.model}
</span>
)}
{message.isError && onRetry && (
<button
onClick={() => onRetry(message.id)}
className="text-red-400 hover:text-red-300 transition-colors flex items-center gap-1 text-xs"
title="Retry"
>
<svg className="w-3.5 h-3.5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M16.023 9.348h4.992v-.001M2.985 19.644v-4.992m0 0h4.992m-4.993 0l3.181 3.183a8.25 8.25 0 0013.803-3.7M4.031 9.865a8.25 8.25 0 0113.803-3.7l3.181 3.182" />
</svg>
Retry
</button>
)}
{!message.isError && onReplay && (
<button
onClick={() => onReplay(message.content)}

52
homeai-dashboard/src/components/PromptStyleSelector.jsx
Normal file
@@ -0,0 +1,52 @@
const GROUP_LABELS = { cloud: 'Cloud', local: 'Local' }

const GROUP_COLORS = {
cloud: {
active: 'bg-indigo-600 text-white',
inactive: 'text-indigo-400 hover:bg-indigo-900/30',
},
local: {
active: 'bg-emerald-600 text-white',
inactive: 'text-emerald-400 hover:bg-emerald-900/30',
},
}

export default function PromptStyleSelector({ styles, activeStyle, onSelect }) {
if (!styles || styles.length === 0) return null

const groups = { cloud: [], local: [] }
for (const s of styles) {
const g = s.group === 'local' ? 'local' : 'cloud'
groups[g].push(s)
}

return (
<div className="flex items-center gap-2 sm:gap-3 px-3 sm:px-4 py-1.5 border-b border-gray-800/50 shrink-0 overflow-x-auto scrollbar-none">
{Object.entries(groups).map(([group, groupStyles]) => (
groupStyles.length > 0 && (
<div key={group} className="flex items-center gap-1">
<span className="text-[10px] uppercase tracking-wider text-gray-600 mr-1">
{GROUP_LABELS[group]}
</span>
{groupStyles.map((s) => {
const isActive = s.id === activeStyle
const colors = GROUP_COLORS[group] || GROUP_COLORS.cloud
return (
<button
key={s.id}
onClick={() => onSelect(s.id)}
className={`text-xs px-2.5 py-1 sm:px-2 sm:py-0.5 rounded-full transition-colors whitespace-nowrap ${
isActive ? colors.active : colors.inactive
}`}
title={s.description || s.name}
>
{s.name}
</button>
)
})}
</div>
)
))}
</div>
)
}
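The grouping step in PromptStyleSelector above can be illustrated in isolation. This is a sketch only: `groupStyles` is a hypothetical helper name for logic the component performs inline, and any style whose `group` is not `'local'` falls back to the `cloud` bucket.

```javascript
// Sketch: the cloud/local grouping from PromptStyleSelector, pulled out as a
// standalone function for illustration (groupStyles is a hypothetical name;
// the component does this inline in its render body).
function groupStyles(styles) {
  const groups = { cloud: [], local: [] }
  for (const s of styles) {
    // Anything not explicitly 'local' is treated as a cloud style
    const g = s.group === 'local' ? 'local' : 'cloud'
    groups[g].push(s)
  }
  return groups
}

const grouped = groupStyles([
  { id: 'standard', group: 'cloud' },
  { id: 'quick' },                   // no group → lands in cloud
  { id: 'llama', group: 'local' },
])
console.log(grouped.cloud.length, grouped.local.length) // → 2 1
```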
@@ -8,7 +8,7 @@ export default function SettingsDrawer({ isOpen, onClose, settings, onUpdate })
return (
<>
<div className="fixed inset-0 bg-black/50 z-40" onClick={onClose} />
<div className="fixed right-0 top-0 bottom-0 w-80 bg-gray-900 border-l border-gray-800 z-50 flex flex-col">
<div className="fixed right-0 top-0 bottom-0 w-full sm:w-80 bg-gray-900 border-l border-gray-800 z-50 flex flex-col">
<div className="flex items-center justify-between px-4 py-3 border-b border-gray-800">
<h2 className="text-sm font-medium text-gray-200">Settings</h2>
<button onClick={onClose} className="text-gray-500 hover:text-gray-300">

@@ -8,7 +8,7 @@ export default function VoiceButton({ isRecording, isTranscribing, onToggle, dis
<button
onClick={handleClick}
disabled={disabled || isTranscribing}
className={`w-10 h-10 rounded-full flex items-center justify-center transition-all shrink-0 ${
className={`w-11 h-11 sm:w-10 sm:h-10 rounded-full flex items-center justify-center transition-all shrink-0 ${
isRecording
? 'bg-red-500 text-white shadow-[0_0_0_4px_rgba(239,68,68,0.3)] animate-pulse'
: isTranscribing

@@ -70,7 +70,8 @@ export function useChat(conversationId, conversationMeta, onConversationUpdate)
}, [conversationMeta, onConversationUpdate])

// send accepts an optional overrideId for when the conversation was just created
const send = useCallback(async (text, overrideId) => {
// and an optional promptStyle to control response style
const send = useCallback(async (text, overrideId, promptStyle) => {
if (!text.trim() || isLoading) return null

const userMsg = { id: Date.now(), role: 'user', content: text.trim(), timestamp: new Date().toISOString() }
@@ -80,13 +81,15 @@ export function useChat(conversationId, conversationMeta, onConversationUpdate)
setIsLoading(true)

try {
const { response, model } = await sendMessage(text.trim(), conversationMeta?.characterId || null)
const activeConvId = overrideId || idRef.current
const { response, model, prompt_style } = await sendMessage(text.trim(), conversationMeta?.characterId || null, promptStyle, activeConvId)
const assistantMsg = {
id: Date.now() + 1,
role: 'assistant',
content: response,
timestamp: new Date().toISOString(),
...(model && { model }),
...(prompt_style && { prompt_style }),
}
const allMessages = [...newMessages, assistantMsg]
setMessages(allMessages)
@@ -114,6 +117,52 @@ export function useChat(conversationId, conversationMeta, onConversationUpdate)
}
}, [isLoading, messages, persist])

// Retry: remove the error message, re-send the preceding user message
const retry = useCallback(async (errorMsgId, promptStyle) => {
const idx = messages.findIndex(m => m.id === errorMsgId)
if (idx < 1) return null
// Find the user message right before the error
const userMsg = messages[idx - 1]
if (!userMsg || userMsg.role !== 'user') return null
// Remove the error message
const cleaned = messages.filter(m => m.id !== errorMsgId)
setMessages(cleaned)
await persist(cleaned)
// Re-send (but we need to temporarily set messages back without the error so send picks up correctly)
// Instead, inline the send logic with the cleaned message list
setIsLoading(true)
try {
const activeConvId = idRef.current
const { response, model, prompt_style } = await sendMessage(userMsg.content, conversationMeta?.characterId || null, promptStyle, activeConvId)
const assistantMsg = {
id: Date.now() + 1,
role: 'assistant',
content: response,
timestamp: new Date().toISOString(),
...(model && { model }),
...(prompt_style && { prompt_style }),
}
const allMessages = [...cleaned, assistantMsg]
setMessages(allMessages)
await persist(allMessages)
return response
} catch (err) {
const newError = {
id: Date.now() + 1,
role: 'assistant',
content: `Error: ${err.message}`,
timestamp: new Date().toISOString(),
isError: true,
}
const allMessages = [...cleaned, newError]
setMessages(allMessages)
await persist(allMessages)
return null
} finally {
setIsLoading(false)
}
}, [messages, persist, conversationMeta])

const clearHistory = useCallback(async () => {
setMessages([])
if (idRef.current) {
@@ -121,5 +170,5 @@ export function useChat(conversationId, conversationMeta, onConversationUpdate)
}
}, [persist])

return { messages, isLoading, isLoadingConv, send, clearHistory }
return { messages, isLoading, isLoadingConv, send, retry, clearHistory }
}

27
homeai-dashboard/src/hooks/useFollowups.js
Normal file
@@ -0,0 +1,27 @@
import { useState, useEffect, useCallback } from 'react';
import { getFollowups } from '../lib/memoryApi';

export function useFollowups(characterId) {
const [followups, setFollowups] = useState([]);
const [loading, setLoading] = useState(false);

const refresh = useCallback(async () => {
if (!characterId) {
setFollowups([]);
return;
}
setLoading(true);
try {
const data = await getFollowups(characterId);
setFollowups(data.followups || []);
} catch {
setFollowups([]);
} finally {
setLoading(false);
}
}, [characterId]);

useEffect(() => { refresh(); }, [refresh]);

return { followups, loading, refresh };
}
34
homeai-dashboard/src/hooks/usePromptStyle.js
Normal file
@@ -0,0 +1,34 @@
import { useState, useEffect, useCallback } from 'react'
import { getPromptStyles, getActiveStyle, setActiveStyle } from '../lib/api'

export function usePromptStyle() {
const [styles, setStyles] = useState([])
const [activeStyle, setActive] = useState('standard')
const [isLoading, setIsLoading] = useState(true)

useEffect(() => {
let cancelled = false
Promise.all([getPromptStyles(), getActiveStyle()])
.then(([allStyles, active]) => {
if (cancelled) return
setStyles(allStyles)
setActive(active.style || 'standard')
setIsLoading(false)
})
.catch(() => {
if (!cancelled) setIsLoading(false)
})
return () => { cancelled = true }
}, [])

const selectStyle = useCallback(async (styleId) => {
setActive(styleId)
try {
await setActiveStyle(styleId)
} catch (err) {
console.error('Failed to set prompt style:', err)
}
}, [])

return { styles, activeStyle, selectStyle, isLoading }
}
@@ -33,3 +33,12 @@ body {
::selection {
background: rgba(99, 102, 241, 0.3);
}

/* Hide scrollbar for horizontal scroll containers */
.scrollbar-none {
-ms-overflow-style: none;
scrollbar-width: none;
}
.scrollbar-none::-webkit-scrollbar {
display: none;
}

@@ -18,9 +18,11 @@ async function fetchWithRetry(url, options, retries = MAX_RETRIES) {
}
}

export async function sendMessage(text, characterId = null) {
export async function sendMessage(text, characterId = null, promptStyle = null, conversationId = null) {
const payload = { message: text, agent: 'main' }
if (characterId) payload.character_id = characterId
if (promptStyle) payload.prompt_style = promptStyle
if (conversationId) payload.conversation_id = conversationId
const res = await fetchWithRetry('/api/agent/message', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
@@ -31,7 +33,29 @@ export async function sendMessage(text, characterId = null) {
throw new Error(err.error || `HTTP ${res.status}`)
}
const data = await res.json()
return { response: data.response, model: data.model || null }
return { response: data.response, model: data.model || null, prompt_style: data.prompt_style || null }
}

export async function getPromptStyles() {
const res = await fetch('/api/prompt-styles')
if (!res.ok) return []
return await res.json()
}

export async function getActiveStyle() {
const res = await fetch('/api/prompt-style')
if (!res.ok) return { style: 'standard' }
return await res.json()
}

export async function setActiveStyle(style) {
const res = await fetch('/api/prompt-style', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ style }),
})
if (!res.ok) throw new Error('Failed to set prompt style')
return await res.json()
}

export async function synthesize(text, voice, engine = 'kokoro', model = null) {

@@ -1,11 +1,20 @@
export async function getPersonalMemories(characterId) {
const res = await fetch(`/api/memories/personal/${encodeURIComponent(characterId)}`)
// Memory API — proxied through Vite middleware to bridge (port 8081)

export async function getPersonalMemories(characterId, { type, lifecycle, category, q, limit } = {}) {
const params = new URLSearchParams()
if (type) params.set('type', type)
if (lifecycle) params.set('lifecycle', lifecycle)
if (category) params.set('category', category)
if (q) params.set('q', q)
if (limit) params.set('limit', limit)
const qs = params.toString() ? `?${params}` : ''
const res = await fetch(`/api/memories/${encodeURIComponent(characterId)}${qs}`)
if (!res.ok) return { characterId, memories: [] }
return res.json()
}

export async function savePersonalMemory(characterId, memory) {
const res = await fetch(`/api/memories/personal/${encodeURIComponent(characterId)}`, {
const res = await fetch(`/api/memories/${encodeURIComponent(characterId)}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(memory),
@@ -15,14 +24,21 @@ export async function savePersonalMemory(characterId, memory) {
}

export async function deletePersonalMemory(characterId, memoryId) {
const res = await fetch(`/api/memories/personal/${encodeURIComponent(characterId)}/${encodeURIComponent(memoryId)}`, {
const res = await fetch(`/api/memories/${encodeURIComponent(memoryId)}`, {
method: 'DELETE',
})
if (!res.ok) throw new Error(`Failed to delete memory: ${res.status}`)
return res.json()
}

export async function getGeneralMemories() {
const res = await fetch('/api/memories/general')
export async function getGeneralMemories({ type, lifecycle, category, limit } = {}) {
const params = new URLSearchParams()
if (type) params.set('type', type)
if (lifecycle) params.set('lifecycle', lifecycle)
if (category) params.set('category', category)
if (limit) params.set('limit', limit)
const qs = params.toString() ? `?${params}` : ''
const res = await fetch(`/api/memories/general${qs}`)
if (!res.ok) return { memories: [] }
return res.json()
}
@@ -38,8 +54,45 @@ export async function saveGeneralMemory(memory) {
}

export async function deleteGeneralMemory(memoryId) {
const res = await fetch(`/api/memories/general/${encodeURIComponent(memoryId)}`, {
const res = await fetch(`/api/memories/${encodeURIComponent(memoryId)}`, {
method: 'DELETE',
})
if (!res.ok) throw new Error(`Failed to delete memory: ${res.status}`)
return res.json()
}

export async function getFollowups(characterId) {
const res = await fetch(`/api/memories/${encodeURIComponent(characterId)}/followups`)
if (!res.ok) return { followups: [] }
return res.json()
}

export async function resolveFollowup(memoryId) {
const res = await fetch(`/api/memories/${encodeURIComponent(memoryId)}/resolve`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({}),
})
if (!res.ok) throw new Error(`Failed to resolve follow-up: ${res.status}`)
return res.json()
}

export async function updateMemory(memoryId, fields) {
const res = await fetch(`/api/memories/${encodeURIComponent(memoryId)}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(fields),
})
if (!res.ok) throw new Error(`Failed to update memory: ${res.status}`)
return res.json()
}

export async function runMigration() {
const res = await fetch('/api/memories/migrate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({}),
})
if (!res.ok) throw new Error(`Migration failed: ${res.status}`)
return res.json()
}

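The reworked memory endpoints above serialize their optional filters into a query string before fetching. The construction can be sketched in isolation; `buildMemoryQuery` is a hypothetical helper name for logic that `getPersonalMemories` and `getGeneralMemories` perform inline.

```javascript
// Sketch: how the memory API functions build their filter query string,
// mirroring the inline URLSearchParams logic in memoryApi.js
// (buildMemoryQuery is a hypothetical name; only set params appear in the URL).
function buildMemoryQuery({ type, lifecycle, category, q, limit } = {}) {
  const params = new URLSearchParams()
  if (type) params.set('type', type)
  if (lifecycle) params.set('lifecycle', lifecycle)
  if (category) params.set('category', category)
  if (q) params.set('q', q)
  if (limit) params.set('limit', limit)
  // Empty string when no filters were given, so the base URL stays clean
  return params.toString() ? `?${params}` : ''
}

console.log(buildMemoryQuery({ type: 'followup', limit: 10 })) // → ?type=followup&limit=10
console.log(buildMemoryQuery() === '') // → true
```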
@@ -162,9 +162,9 @@ export default function Characters() {
return (
<div className="space-y-8">
{/* Header */}
<div className="flex items-center justify-between">
<div className="flex flex-col sm:flex-row sm:items-center justify-between gap-3">
<div>
<h1 className="text-3xl font-bold text-gray-100">Characters</h1>
<h1 className="text-2xl sm:text-3xl font-bold text-gray-100">Characters</h1>
<p className="text-sm text-gray-500 mt-1">
{profiles.length} profile{profiles.length !== 1 ? 's' : ''} stored
{activeProfile && (
@@ -174,19 +174,20 @@ export default function Characters() {
)}
</p>
</div>
<div className="flex gap-3">
<div className="flex gap-2 sm:gap-3">
<button
onClick={() => {
sessionStorage.removeItem('edit_character');
sessionStorage.removeItem('edit_character_profile_id');
navigate('/editor');
}}
className="flex items-center gap-2 px-4 py-2 bg-indigo-600 hover:bg-indigo-500 text-white rounded-lg transition-colors"
className="flex items-center gap-2 px-3 sm:px-4 py-2 bg-indigo-600 hover:bg-indigo-500 text-white text-sm rounded-lg transition-colors"
>
<svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
<path strokeLinecap="round" strokeLinejoin="round" d="M12 4.5v15m7.5-7.5h-15" />
</svg>
New Character
<span className="hidden sm:inline">New Character</span>
<span className="sm:hidden">New</span>
</button>
<label className="flex items-center gap-2 px-4 py-2 bg-gray-800 hover:bg-gray-700 text-gray-300 rounded-lg cursor-pointer border border-gray-700 transition-colors">
<svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
@@ -249,17 +250,30 @@ export default function Characters() {
>
{/* Image area */}
<div className="relative h-48 bg-gray-900 flex items-center justify-center overflow-hidden group">
{profile.image ? (
<img
src={profile.image}
alt={char.display_name || char.name}
className="w-full h-full object-cover"
/>
) : (
<div className="text-6xl font-bold text-gray-700 select-none">
{(char.display_name || char.name || '?')[0].toUpperCase()}
</div>
)}
{(() => {
const imgSrc = profile.image
|| (char.dream_id && `/api/dream/characters/${char.dream_id}/image`)
|| (char.gaze_character && `/api/gaze/character/${char.gaze_character}/cover`)
|| null;
return imgSrc ? (
<img
src={imgSrc}
alt={char.display_name || char.name}
className="w-full h-full object-cover"
onError={(e) => {
e.target.style.display = 'none';
const fallback = e.target.nextElementSibling;
if (fallback) fallback.style.display = '';
}}
/>
) : null;
})()}
<div
className="text-6xl font-bold text-gray-700 select-none absolute inset-0 flex items-center justify-center"
style={(profile.image || char.dream_id || char.gaze_character) ? { display: 'none' } : {}}
>
{(char.display_name || char.name || '?')[0].toUpperCase()}
</div>
<label className="absolute inset-0 flex items-center justify-center bg-black/50 opacity-0 group-hover:opacity-100 transition-opacity cursor-pointer">
<div className="text-center">
<svg className="w-8 h-8 mx-auto text-white/80 mb-1" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
@@ -313,13 +327,18 @@ export default function Characters() {
{char.tts.voice_ref_path.split('/').pop()}
</span>
)}
{char.gaze_character && (
<span className="px-2 py-0.5 bg-violet-500/20 text-violet-300 text-xs rounded-full border border-violet-500/30" title={`GAZE character: ${char.gaze_character}`}>
{char.gaze_character}
</span>
)}
{(() => {
const defaultPreset = char.gaze_presets?.find(gp => gp.trigger === 'self-portrait')?.preset
|| char.gaze_presets?.[0]?.preset
|| char.gaze_preset
|| null;
return defaultPreset ? (
<span className="px-2 py-0.5 bg-violet-500/20 text-violet-300 text-xs rounded-full border border-violet-500/30" title={`GAZE: ${defaultPreset}`}>
return defaultPreset && defaultPreset !== char.gaze_character ? (
<span className="px-2 py-0.5 bg-violet-500/20 text-violet-300 text-xs rounded-full border border-violet-500/30" title={`GAZE preset: ${defaultPreset}`}>
{defaultPreset}
</span>
) : null;
@@ -386,8 +405,8 @@ export default function Characters() {
</div>

{/* Default character */}
<div className="flex items-center gap-3">
<label className="text-sm text-gray-400 w-32 shrink-0">Default</label>
<div className="flex flex-col sm:flex-row sm:items-center gap-1.5 sm:gap-3">
<label className="text-sm text-gray-400 sm:w-32 shrink-0">Default</label>
<select
value={satMap.default || ''}
onChange={(e) => saveSatMap({ ...satMap, default: e.target.value })}
@@ -402,8 +421,8 @@ export default function Characters() {

{/* Per-satellite assignments */}
{Object.entries(satMap.satellites || {}).map(([satId, charId]) => (
<div key={satId} className="flex items-center gap-3">
<span className="text-sm text-gray-300 w-32 shrink-0 truncate font-mono" title={satId}>{satId}</span>
<div key={satId} className="flex flex-col sm:flex-row sm:items-center gap-1.5 sm:gap-3">
<span className="text-sm text-gray-300 sm:w-32 shrink-0 truncate font-mono" title={satId}>{satId}</span>
<select
value={charId}
onChange={(e) => {
@@ -432,13 +451,13 @@ export default function Characters() {
))}

{/* Add new satellite */}
<div className="flex items-center gap-3 pt-2 border-t border-gray-800">
<div className="flex flex-col sm:flex-row sm:items-center gap-1.5 sm:gap-3 pt-2 border-t border-gray-800">
<input
type="text"
value={newSatId}
onChange={(e) => setNewSatId(e.target.value)}
placeholder="Satellite ID (from bridge log)"
className="w-32 shrink-0 bg-gray-800 text-gray-200 text-sm rounded-lg px-3 py-2 border border-gray-700 focus:outline-none focus:border-indigo-500 font-mono"
className="sm:w-32 shrink-0 bg-gray-800 text-gray-200 text-sm rounded-lg px-3 py-2 border border-gray-700 focus:outline-none focus:border-indigo-500 font-mono"
/>
<select
value={newSatChar}

@@ -4,6 +4,7 @@ import InputBar from '../components/InputBar'
|
||||
import StatusIndicator from '../components/StatusIndicator'
|
||||
import SettingsDrawer from '../components/SettingsDrawer'
|
||||
import ConversationList from '../components/ConversationList'
|
||||
import PromptStyleSelector from '../components/PromptStyleSelector'
|
||||
import { useSettings } from '../hooks/useSettings'
|
||||
import { useBridgeHealth } from '../hooks/useBridgeHealth'
|
||||
import { useChat } from '../hooks/useChat'
|
||||
@@ -11,6 +12,8 @@ import { useTtsPlayback } from '../hooks/useTtsPlayback'
|
||||
import { useVoiceInput } from '../hooks/useVoiceInput'
|
||||
import { useActiveCharacter } from '../hooks/useActiveCharacter'
|
||||
import { useConversations } from '../hooks/useConversations'
|
||||
import { usePromptStyle } from '../hooks/usePromptStyle'
|
||||
import { useFollowups } from '../hooks/useFollowups'
|
||||
|
||||
export default function Chat() {
|
||||
const { settings, updateSetting } = useSettings()
|
||||
@@ -26,7 +29,9 @@ export default function Chat() {
|
||||
characterName: character?.name || '',
|
||||
}
|
||||
|
||||
const { messages, isLoading, isLoadingConv, send, clearHistory } = useChat(activeId, convMeta, updateMeta)
|
||||
const { messages, isLoading, isLoadingConv, send, retry, clearHistory } = useChat(activeId, convMeta, updateMeta)
|
||||
const { styles, activeStyle, selectStyle } = usePromptStyle()
|
||||
const { followups, refresh: refreshFollowups } = useFollowups(character?.id)
|
||||
|
||||
// Use character's TTS config if available, fall back to global settings
|
||||
const ttsEngine = character?.tts?.engine || settings.ttsEngine
|
||||
@@ -37,6 +42,7 @@ export default function Chat() {
|
||||
const { isPlaying, speak, stop } = useTtsPlayback(ttsVoice, ttsEngine, ttsModel)
|
||||
const { isRecording, isTranscribing, startRecording, stopRecording } = useVoiceInput(settings.sttMode)
|
||||
const [settingsOpen, setSettingsOpen] = useState(false)
|
||||
const [convListOpen, setConvListOpen] = useState(false)
|
||||
|
||||
const handleSend = useCallback(async (text) => {
|
||||
// Auto-create a conversation if none is active
|
||||
@@ -44,11 +50,19 @@ export default function Chat() {
|
||||
if (!activeId) {
|
||||
newId = await create(convMeta.characterId, convMeta.characterName)
|
||||
}
|
||||
const response = await send(text, newId)
|
||||
const response = await send(text, newId, activeStyle)
|
||||
if (response && settings.autoTts) {
|
||||
speak(response)
|
||||
}
|
||||
}, [activeId, create, convMeta, send, settings.autoTts, speak])
|
||||
refreshFollowups()
|
||||
}, [activeId, create, convMeta, send, settings.autoTts, speak, activeStyle, refreshFollowups])
|
||||
|
||||
const handleRetry = useCallback(async (errorMsgId) => {
|
||||
const response = await retry(errorMsgId, activeStyle)
|
||||
if (response && settings.autoTts) {
|
||||
speak(response)
|
||||
}
|
||||
}, [retry, activeStyle, settings.autoTts, speak])
|
||||
|
||||
const handleVoiceToggle = useCallback(async () => {
|
||||
if (isRecording) {
|
||||
@@ -63,8 +77,12 @@ export default function Chat() {
|
||||
create(convMeta.characterId, convMeta.characterName)
|
||||
}, [create, convMeta])
|
||||
|
||||
const toggleConvList = useCallback(() => {
|
||||
setConvListOpen(prev => !prev)
|
||||
}, [])
|
||||
|
||||
return (
|
||||
<div className="flex-1 flex min-h-0">
|
||||
<div className="flex-1 flex min-h-0 relative">
|
||||
{/* Conversation sidebar */}
|
||||
<ConversationList
|
||||
conversations={conversations}
|
||||
@@ -72,19 +90,21 @@ export default function Chat() {
|
||||
onCreate={handleNewChat}
|
||||
onSelect={select}
|
||||
onDelete={remove}
|
||||
isOpen={convListOpen}
|
||||
onToggle={toggleConvList}
|
||||
/>
|
||||
|
||||
{/* Chat area */}
|
||||
<div className="flex-1 flex flex-col min-h-0 min-w-0">
|
||||
{/* Status bar */}
|
||||
<header className="flex items-center justify-between px-4 py-2 border-b border-gray-800/50 shrink-0">
|
||||
<div className="flex items-center gap-2">
|
||||
<header className="flex items-center justify-between px-3 sm:px-4 py-2 border-b border-gray-800/50 shrink-0">
|
||||
<div className="flex items-center gap-2 ml-10 md:ml-0">
|
||||
<StatusIndicator isOnline={isOnline} />
|
||||
<span className="text-xs text-gray-500">
|
||||
{isOnline === null ? 'Connecting...' : isOnline ? 'Connected' : 'Offline'}
|
||||
</span>
|
||||
</div>
|
||||
<div className="flex items-center gap-2">
|
||||
<div className="flex items-center gap-1 sm:gap-2">
|
||||
{messages.length > 0 && (
|
||||
<button
|
||||
onClick={clearHistory}
|
||||
@@ -105,7 +125,7 @@ export default function Chat() {
|
||||
)}
|
||||
<button
|
||||
onClick={() => setSettingsOpen(true)}
|
||||
className="text-gray-500 hover:text-gray-300 transition-colors p-1"
|
||||
className="text-gray-500 hover:text-gray-300 transition-colors p-1.5"
|
||||
title="Settings"
|
||||
>
|
||||
<svg className="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1.5}>
|
||||
@@ -116,11 +136,34 @@ export default function Chat() {
|
||||
</div>
|
||||
</header>
|
||||
|
||||
{/* Prompt style selector */}
|
||||
<PromptStyleSelector
|
||||
styles={styles}
|
||||
activeStyle={activeStyle}
|
||||
onSelect={selectStyle}
|
||||
/>
|
||||
|
||||
{/* Follow-up banner */}
|
||||
{followups.length > 0 && (
|
||||
<div className="px-4 py-2 bg-amber-500/10 border-b border-amber-500/20 text-sm text-amber-300 flex items-center gap-2 shrink-0">
|
||||
<svg className="w-4 h-4 shrink-0" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
|
||||
<path strokeLinecap="round" strokeLinejoin="round" d="M12 6v6h4.5m4.5 0a9 9 0 11-18 0 9 9 0 0118 0z" />
|
||||
</svg>
|
||||
<span>
|
||||
{followups.length} pending follow-up{followups.length !== 1 ? 's' : ''}
|
||||
<span className="text-amber-400/60 ml-1">
|
||||
— {followups.map(f => f.follow_up_context).join('; ')}
|
||||
</span>
|
||||
</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Messages */}
|
||||
<ChatPanel
|
||||
messages={messages}
|
||||
isLoading={isLoading || isLoadingConv}
|
||||
onReplay={speak}
|
||||
onRetry={handleRetry}
|
||||
character={character}
|
||||
/>
|
||||
|
||||
|
||||
@@ -14,7 +14,7 @@ const SERVICES = [
|
||||
name: 'Open WebUI',
|
||||
url: 'http://localhost:3030',
|
||||
healthPath: '/',
|
||||
uiUrl: 'http://localhost:3030',
|
||||
uiUrl: 'http://10.0.0.101:3030',
|
||||
description: 'Chat interface',
|
||||
category: 'AI & LLM',
|
||||
restart: { type: 'docker', id: 'homeai-open-webui' },
|
||||
@@ -74,12 +74,13 @@ const SERVICES = [
|
||||
uiUrl: 'https://10.0.0.199:8123',
|
||||
description: 'Smart home platform',
|
||||
category: 'Smart Home',
|
||||
auth: true,
|
||||
},
|
||||
{
|
||||
name: 'Uptime Kuma',
|
||||
url: 'http://localhost:3001',
|
||||
healthPath: '/',
|
||||
uiUrl: 'http://localhost:3001',
|
||||
uiUrl: 'http://10.0.0.101:3001',
|
||||
description: 'Service health monitoring',
|
||||
category: 'Infrastructure',
|
||||
restart: { type: 'docker', id: 'homeai-uptime-kuma' },
|
||||
@@ -88,7 +89,7 @@ const SERVICES = [
|
||||
name: 'n8n',
|
||||
url: 'http://localhost:5678',
|
||||
healthPath: '/',
|
||||
uiUrl: 'http://localhost:5678',
|
||||
uiUrl: 'http://10.0.0.101:5678',
|
||||
description: 'Workflow automation',
|
||||
category: 'Infrastructure',
|
||||
restart: { type: 'docker', id: 'homeai-n8n' },
|
||||
@@ -97,7 +98,7 @@ const SERVICES = [
|
||||
name: 'code-server',
|
||||
url: 'http://localhost:8090',
|
||||
healthPath: '/',
|
||||
uiUrl: 'http://localhost:8090',
|
||||
uiUrl: 'http://10.0.0.101:8090',
|
||||
description: 'Browser-based VS Code',
|
||||
category: 'Infrastructure',
|
||||
restart: { type: 'docker', id: 'homeai-code-server' },
|
||||
@@ -171,10 +172,11 @@ export default function Dashboard() {
|
||||
try {
|
||||
const target = encodeURIComponent(service.url + service.healthPath);
|
||||
const modeParam = service.tcp ? '&mode=tcp' : '';
|
||||
const authParam = service.auth ? '&auth=1' : '';
|
||||
const controller = new AbortController();
|
||||
const timeout = setTimeout(() => controller.abort(), 8000);
|
||||
|
||||
const res = await fetch(`/api/health?url=${target}${modeParam}`, { signal: controller.signal });
|
||||
const res = await fetch(`/api/health?url=${target}${modeParam}${authParam}`, { signal: controller.signal });
|
||||
clearTimeout(timeout);
|
||||
|
||||
const data = await res.json();
|
||||
@@ -249,9 +251,9 @@ export default function Dashboard() {
|
||||
return (
|
||||
<div className="space-y-8">
|
||||
{/* Header */}
|
||||
<div className="flex items-center justify-between">
|
||||
<div className="flex flex-col sm:flex-row sm:items-center justify-between gap-3">
|
||||
<div>
|
||||
<h1 className="text-3xl font-bold text-gray-100">Service Status</h1>
|
||||
<h1 className="text-2xl sm:text-3xl font-bold text-gray-100">Service Status</h1>
|
||||
<p className="text-sm text-gray-500 mt-1">
|
||||
{onlineCount}/{totalCount} services online
|
||||
{lastRefresh && (
|
||||
|
||||
@@ -12,8 +12,8 @@ const DEFAULT_CHARACTER = {
|
||||
skills: [],
|
||||
system_prompt: "",
|
||||
model_overrides: {
|
||||
primary: "qwen3.5:35b-a3b",
|
||||
fast: "qwen2.5:7b"
|
||||
primary: "",
|
||||
fast: ""
|
||||
},
|
||||
tts: {
|
||||
engine: "kokoro",
|
||||
@@ -22,9 +22,20 @@ const DEFAULT_CHARACTER = {
|
||||
},
|
||||
gaze_presets: [],
|
||||
custom_rules: [],
|
||||
default_prompt_style: "",
|
||||
prompt_style_overrides: {},
|
||||
notes: ""
|
||||
};
|
||||
|
||||
const PROMPT_STYLES = [
|
||||
{ id: 'quick', name: 'Quick', group: 'cloud' },
|
||||
{ id: 'standard', name: 'Standard', group: 'cloud' },
|
||||
{ id: 'creative', name: 'Creative', group: 'cloud' },
|
||||
{ id: 'roleplayer', name: 'Roleplayer', group: 'local' },
|
||||
{ id: 'game-master', name: 'Game Master', group: 'local' },
|
||||
{ id: 'storyteller', name: 'Storyteller', group: 'local' },
|
||||
];
|
||||
|
||||
export default function Editor() {
|
||||
const [character, setCharacter] = useState(() => {
|
||||
const editData = sessionStorage.getItem('edit_character');
|
||||
@@ -59,10 +70,17 @@ export default function Editor() {
|
||||
const [elevenLabsModels, setElevenLabsModels] = useState([]);
|
||||
const [isLoadingElevenLabs, setIsLoadingElevenLabs] = useState(false);
|
||||
|
||||
// GAZE presets state (from API)
|
||||
// GAZE state (from API)
|
||||
const [availableGazePresets, setAvailableGazePresets] = useState([]);
|
||||
const [availableGazeCharacters, setAvailableGazeCharacters] = useState([]);
|
||||
const [isLoadingGaze, setIsLoadingGaze] = useState(false);
|
||||
|
||||
// Dream import state
|
||||
const [dreamCharacters, setDreamCharacters] = useState([]);
|
||||
const [isLoadingDream, setIsLoadingDream] = useState(false);
|
||||
const [dreamImportDone, setDreamImportDone] = useState(false);
|
||||
const [selectedDreamId, setSelectedDreamId] = useState('');
|
||||
|
||||
// Character lookup state
|
||||
const [lookupName, setLookupName] = useState('');
|
||||
const [lookupFranchise, setLookupFranchise] = useState('');
|
||||
@@ -102,16 +120,31 @@ export default function Editor() {
|
||||
}
|
||||
}, [character.tts.engine]);
|
||||
|
||||
// Fetch GAZE presets on mount
|
||||
// Fetch GAZE presets + characters on mount
|
||||
useEffect(() => {
|
||||
setIsLoadingGaze(true);
|
||||
fetch('/api/gaze/presets')
|
||||
.then(r => r.ok ? r.json() : { presets: [] })
|
||||
.then(data => setAvailableGazePresets(data.presets || []))
|
||||
Promise.all([
|
||||
fetch('/api/gaze/presets').then(r => r.ok ? r.json() : { presets: [] }),
|
||||
fetch('/api/gaze/characters').then(r => r.ok ? r.json() : { characters: [] }),
|
||||
])
|
||||
.then(([presetsData, charsData]) => {
|
||||
setAvailableGazePresets(presetsData.presets || []);
|
||||
setAvailableGazeCharacters(charsData.characters || []);
|
||||
})
|
||||
.catch(() => {})
|
||||
.finally(() => setIsLoadingGaze(false));
|
||||
}, []);
|
||||
|
||||
// Fetch Dream characters on mount
|
||||
useEffect(() => {
|
||||
setIsLoadingDream(true);
|
||||
fetch('/api/dream/characters')
|
||||
.then(r => r.ok ? r.json() : { characters: [] })
|
||||
.then(data => setDreamCharacters(data.characters || []))
|
||||
.catch(() => {})
|
||||
.finally(() => setIsLoadingDream(false));
|
||||
}, []);
|
||||
|
||||
useEffect(() => {
|
||||
return () => {
|
||||
if (audioRef.current) { audioRef.current.pause(); audioRef.current = null; }
|
||||
@@ -272,6 +305,66 @@ export default function Editor() {
|
||||
});
|
||||
};
|
||||
|
||||
// Dream character import
|
||||
const handleDreamImport = async (dreamId) => {
|
||||
if (!dreamId) return;
|
||||
setIsLoadingDream(true);
|
||||
setError(null);
|
||||
try {
|
||||
const res = await fetch(`/api/dream/characters/${dreamId}`);
|
||||
if (!res.ok) throw new Error(`Dream returned ${res.status}`);
|
||||
const data = await res.json();
|
||||
const dc = data.character;
|
||||
|
||||
setCharacter(prev => ({
|
||||
...prev,
|
||||
name: prev.name || dc.name?.toLowerCase().replace(/\s+/g, '_') || prev.name,
|
||||
display_name: prev.display_name || dc.name || prev.display_name,
|
||||
description: dc.backstory ? dc.backstory.split('.').slice(0, 2).join('.') + '.' : prev.description,
|
||||
background: dc.backstory || prev.background,
|
||||
appearance: dc.appearance || prev.appearance,
|
||||
dialogue_style: dc.personality || prev.dialogue_style,
|
||||
system_prompt: prev.system_prompt || dc.systemPrompt || '',
|
||||
gaze_character: dc.gazeCharacterId || prev.gaze_character,
|
||||
dream_id: dc.id,
|
||||
}));
|
||||
|
||||
// Auto-add GAZE presets if linked
|
||||
if (dc.gazeCharacterId && availableGazePresets.length > 0) {
|
||||
handleGazeCharacterLink(dc.gazeCharacterId);
|
||||
}
|
||||
|
||||
setDreamImportDone(true);
|
||||
} catch (err) {
|
||||
setError(`Dream import failed: ${err.message}`);
|
||||
} finally {
|
||||
setIsLoadingDream(false);
|
||||
}
|
||||
};
|
||||
|
||||
// GAZE character linking
|
||||
const handleGazeCharacterLink = (characterId) => {
|
||||
setCharacter(prev => {
|
||||
const updated = { ...prev, gaze_character: characterId || undefined };
|
||||
// Auto-add matching presets when linking a character
|
||||
if (characterId && availableGazePresets.length > 0) {
|
||||
const matching = availableGazePresets.filter(p =>
|
||||
p.slug.includes(characterId) || characterId.includes(p.slug)
|
||||
);
|
||||
if (matching.length > 0) {
|
||||
const existingSlugs = new Set((prev.gaze_presets || []).map(gp => gp.preset));
|
||||
const newPresets = matching
|
||||
.filter(p => !existingSlugs.has(p.slug))
|
||||
.map(p => ({ preset: p.slug, trigger: 'self-portrait' }));
|
||||
if (newPresets.length > 0) {
|
||||
updated.gaze_presets = [...(prev.gaze_presets || []), ...newPresets];
|
||||
}
|
||||
}
|
||||
}
|
||||
return updated;
|
||||
});
|
||||
};
|
||||
|
||||
// GAZE preset helpers
|
||||
const addGazePreset = () => {
|
||||
setCharacter(prev => ({
|
||||
@@ -390,9 +483,9 @@ export default function Editor() {
|
||||
|
||||
return (
|
||||
<div className="space-y-6">
|
||||
<div className="flex justify-between items-center">
|
||||
<div className="flex flex-col sm:flex-row justify-between sm:items-center gap-3">
|
||||
<div>
|
||||
<h1 className="text-3xl font-bold text-gray-100">Character Editor</h1>
|
||||
<h1 className="text-2xl sm:text-3xl font-bold text-gray-100">Character Editor</h1>
|
||||
<p className="text-sm text-gray-500 mt-1">
|
||||
{character.display_name || character.name
|
||||
? `Editing: ${character.display_name || character.name}`
|
||||
@@ -442,6 +535,70 @@ export default function Editor() {
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Import from Dream */}
|
||||
{!isEditing && (
|
||||
<div className={cardClass}>
|
||||
<div className="flex items-center gap-2">
|
||||
<svg className="w-5 h-5 text-violet-400" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
|
||||
<path strokeLinecap="round" strokeLinejoin="round" d="M9.813 15.904L9 18.75l-.813-2.846a4.5 4.5 0 00-3.09-3.09L2.25 12l2.846-.813a4.5 4.5 0 003.09-3.09L9 5.25l.813 2.846a4.5 4.5 0 003.09 3.09L15.75 12l-2.846.813a4.5 4.5 0 00-3.09 3.09zM18.259 8.715L18 9.75l-.259-1.035a3.375 3.375 0 00-2.455-2.456L14.25 6l1.036-.259a3.375 3.375 0 002.455-2.456L18 2.25l.259 1.035a3.375 3.375 0 002.455 2.456L21.75 6l-1.036.259a3.375 3.375 0 00-2.455 2.456z" />
|
||||
</svg>
|
||||
<h2 className="text-lg font-semibold text-gray-200">Import from Dream</h2>
|
||||
</div>
|
||||
<p className="text-xs text-gray-500">Import character data from Dream. Auto-links GAZE character if configured.</p>
|
||||
<div className="flex gap-3 items-end">
|
||||
<div className="flex-1">
|
||||
<label className={labelClass}>Dream Character</label>
|
||||
{isLoadingDream ? (
|
||||
<p className="text-sm text-gray-500">Loading Dream characters...</p>
|
||||
) : dreamCharacters.length > 0 ? (
|
||||
<select
|
||||
className={selectClass}
|
||||
value={selectedDreamId}
|
||||
onChange={(e) => setSelectedDreamId(e.target.value)}
|
||||
>
|
||||
<option value="">-- Select a character --</option>
|
||||
{dreamCharacters.map(c => (
|
||||
<option key={c.id} value={c.id}>{c.name}</option>
|
||||
))}
|
||||
</select>
|
||||
) : (
|
||||
<p className="text-sm text-gray-600 italic">No Dream characters available.</p>
|
||||
)}
|
||||
</div>
|
||||
{selectedDreamId && (
|
||||
<div className="w-12 h-12 rounded-lg overflow-hidden bg-gray-800 border border-gray-700 shrink-0">
|
||||
<img
|
||||
src={`/api/dream/characters/${selectedDreamId}/image`}
|
||||
alt="Preview"
|
||||
className="w-full h-full object-cover"
|
||||
onError={(e) => { e.target.style.display = 'none' }}
|
||||
/>
|
||||
</div>
|
||||
)}
|
||||
<button
|
||||
onClick={() => handleDreamImport(selectedDreamId)}
|
||||
disabled={!selectedDreamId || isLoadingDream}
|
||||
className={`flex items-center gap-2 px-5 py-2 rounded-lg text-white transition-colors whitespace-nowrap ${
|
||||
dreamImportDone
|
||||
? 'bg-emerald-600 hover:bg-emerald-500'
|
||||
: 'bg-violet-600 hover:bg-violet-500 disabled:bg-gray-700 disabled:text-gray-500'
|
||||
}`}
|
||||
>
|
||||
{isLoadingDream && (
|
||||
<svg className="w-4 h-4 animate-spin" viewBox="0 0 24 24" fill="none">
|
||||
<circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4" />
|
||||
<path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4z" />
|
||||
</svg>
|
||||
)}
|
||||
{isLoadingDream ? 'Importing...' : dreamImportDone ? 'Imported' : 'Import'}
|
||||
</button>
|
||||
</div>
|
||||
{dreamImportDone && (
|
||||
<p className="text-xs text-emerald-400">Fields populated from Dream character. Review and edit below.</p>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Character Lookup — auto-fill from fictional character wiki */}
|
||||
{!isEditing && (
|
||||
<div className={cardClass}>
|
||||
@@ -501,7 +658,7 @@ export default function Editor() {
|
||||
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-6">
|
||||
{/* Basic Info */}
|
||||
<div className={cardClass}>
|
||||
<div className={cardClass + " flex flex-col"}>
|
||||
<h2 className="text-lg font-semibold text-gray-200">Basic Info</h2>
|
||||
<div>
|
||||
<label className={labelClass}>Name (ID)</label>
|
||||
@@ -511,9 +668,9 @@ export default function Editor() {
|
||||
<label className={labelClass}>Display Name</label>
|
||||
<input type="text" className={inputClass} value={character.display_name || ''} onChange={(e) => handleChange('display_name', e.target.value)} />
|
||||
</div>
|
||||
<div>
|
||||
<div className="flex-1 flex flex-col">
|
||||
<label className={labelClass}>Description</label>
|
||||
<input type="text" className={inputClass} value={character.description || ''} onChange={(e) => handleChange('description', e.target.value)} />
|
||||
<textarea className={inputClass + " flex-1 min-h-20 resize-y"} value={character.description || ''} onChange={(e) => handleChange('description', e.target.value)} />
|
||||
</div>
|
||||
</div>
|
||||
|
||||
@@ -757,6 +914,48 @@ export default function Editor() {
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* GAZE Character Link */}
|
||||
<div className={cardClass}>
|
||||
<h2 className="text-lg font-semibold text-gray-200">GAZE Character Link</h2>
|
||||
<p className="text-xs text-gray-500">Link to a GAZE character for automatic cover image and image generation presets.</p>
|
||||
<div className="flex items-center gap-3">
|
||||
<div className="flex-1">
|
||||
{isLoadingGaze ? (
|
||||
<p className="text-sm text-gray-500">Loading GAZE characters...</p>
|
||||
) : availableGazeCharacters.length > 0 ? (
|
||||
<select
|
||||
className={selectClass}
|
||||
value={character.gaze_character || ''}
|
||||
onChange={(e) => handleGazeCharacterLink(e.target.value)}
|
||||
>
|
||||
<option value="">-- None --</option>
|
||||
{availableGazeCharacters.map(c => (
|
||||
<option key={c.character_id} value={c.character_id}>{c.name} ({c.character_id})</option>
|
||||
))}
|
||||
</select>
|
||||
) : (
|
||||
<input
|
||||
type="text"
|
||||
className={inputClass}
|
||||
value={character.gaze_character || ''}
|
||||
onChange={(e) => handleGazeCharacterLink(e.target.value)}
|
||||
placeholder="GAZE character_id (e.g. tifa_lockhart)"
|
||||
/>
|
||||
)}
|
||||
</div>
|
||||
{character.gaze_character && (
|
||||
<div className="w-16 h-16 rounded-lg overflow-hidden bg-gray-800 border border-gray-700 shrink-0">
|
||||
<img
|
||||
src={`/api/gaze/character/${character.gaze_character}/cover`}
|
||||
alt="GAZE cover"
|
||||
className="w-full h-full object-cover"
|
||||
onError={(e) => { e.target.style.display = 'none' }}
|
||||
/>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-6">
|
||||
{/* Image Generation — GAZE presets */}
|
||||
<div className={cardClass}>
|
||||
@@ -832,22 +1031,37 @@ export default function Editor() {
|
||||
<h2 className="text-lg font-semibold text-gray-200">Model Overrides</h2>
|
||||
<div>
|
||||
<label className={labelClass}>Primary Model</label>
|
||||
<select className={selectClass} value={character.model_overrides?.primary || 'qwen3.5:35b-a3b'} onChange={(e) => handleNestedChange('model_overrides', 'primary', e.target.value)}>
|
||||
<option value="llama3.3:70b">llama3.3:70b</option>
|
||||
<option value="qwen3.5:35b-a3b">qwen3.5:35b-a3b</option>
|
||||
<option value="qwen2.5:7b">qwen2.5:7b</option>
|
||||
<option value="qwen3:32b">qwen3:32b</option>
|
||||
<option value="codestral:22b">codestral:22b</option>
|
||||
<select className={selectClass} value={character.model_overrides?.primary || ''} onChange={(e) => handleNestedChange('model_overrides', 'primary', e.target.value)}>
|
||||
<option value="">Default (system assigned)</option>
|
||||
<optgroup label="Cloud">
|
||||
<option value="anthropic/claude-sonnet-4-6">Claude Sonnet 4.6</option>
|
||||
<option value="anthropic/claude-haiku-4-5">Claude Haiku 4.5</option>
|
||||
<option value="anthropic/claude-opus-4-6">Claude Opus 4.6</option>
|
||||
</optgroup>
|
||||
<optgroup label="Local">
|
||||
<option value="qwen3.5:35b-a3b">qwen3.5:35b-a3b</option>
|
||||
<option value="llama3.3:70b">llama3.3:70b</option>
|
||||
<option value="qwen3:32b">qwen3:32b</option>
|
||||
<option value="qwen2.5:7b">qwen2.5:7b</option>
|
||||
<option value="codestral:22b">codestral:22b</option>
|
||||
</optgroup>
|
||||
</select>
|
||||
</div>
|
||||
<div>
|
||||
<label className={labelClass}>Fast Model</label>
|
||||
<select className={selectClass} value={character.model_overrides?.fast || 'qwen2.5:7b'} onChange={(e) => handleNestedChange('model_overrides', 'fast', e.target.value)}>
|
||||
<option value="qwen2.5:7b">qwen2.5:7b</option>
|
||||
<option value="qwen3.5:35b-a3b">qwen3.5:35b-a3b</option>
|
||||
<option value="llama3.3:70b">llama3.3:70b</option>
|
||||
<option value="qwen3:32b">qwen3:32b</option>
|
||||
<option value="codestral:22b">codestral:22b</option>
|
||||
<select className={selectClass} value={character.model_overrides?.fast || ''} onChange={(e) => handleNestedChange('model_overrides', 'fast', e.target.value)}>
|
||||
<option value="">Default (system assigned)</option>
|
||||
<optgroup label="Cloud">
|
||||
<option value="anthropic/claude-haiku-4-5">Claude Haiku 4.5</option>
|
||||
<option value="anthropic/claude-sonnet-4-6">Claude Sonnet 4.6</option>
|
||||
</optgroup>
|
||||
<optgroup label="Local">
|
||||
<option value="qwen2.5:7b">qwen2.5:7b</option>
|
||||
<option value="qwen3.5:35b-a3b">qwen3.5:35b-a3b</option>
|
||||
<option value="llama3.3:70b">llama3.3:70b</option>
|
||||
<option value="qwen3:32b">qwen3:32b</option>
|
||||
<option value="codestral:22b">codestral:22b</option>
|
||||
</optgroup>
|
||||
</select>
|
||||
</div>
|
||||
</div>
|
||||
@@ -900,6 +1114,82 @@ export default function Editor() {
|
||||
)}
|
||||
</div>
|
||||
|
||||
{/* Prompt Style Overrides */}
|
||||
<div className={cardClass}>
|
||||
<h2 className="text-lg font-semibold text-gray-200">Prompt Style</h2>
|
||||
|
||||
<div>
|
||||
<label className={labelClass}>Default Style</label>
|
||||
<select
|
||||
className={inputClass}
|
||||
value={character.default_prompt_style || ''}
|
||||
onChange={(e) => handleChange('default_prompt_style', e.target.value || '')}
|
||||
>
|
||||
<option value="">Use global default</option>
|
||||
{PROMPT_STYLES.map((s) => (
|
||||
<option key={s.id} value={s.id}>{s.name} ({s.group})</option>
|
||||
))}
|
||||
</select>
|
||||
<p className="text-xs text-gray-500 mt-1">Auto-select this style when this character is active</p>
|
||||
</div>
|
||||
|
||||
<details className="group">
|
||||
<summary className="cursor-pointer text-sm text-gray-400 hover:text-gray-300">
|
||||
Per-style overrides
|
||||
</summary>
|
||||
<div className="mt-3 space-y-3">
|
||||
{PROMPT_STYLES.map((s) => {
|
||||
const overrides = character.prompt_style_overrides || {};
|
||||
const styleOverride = overrides[s.id] || {};
|
||||
const hasContent = styleOverride.dialogue_style || styleOverride.system_prompt_suffix;
|
||||
return (
|
||||
<details key={s.id} className="group/inner">
|
||||
<summary className={`cursor-pointer text-sm ${hasContent ? 'text-indigo-400' : 'text-gray-500'} hover:text-gray-300`}>
|
||||
{s.name} {hasContent && '(customized)'}
|
||||
</summary>
|
||||
<div className="mt-2 space-y-2 pl-3 border-l border-gray-800">
|
||||
<div>
|
||||
<label className={labelClass}>Dialogue Style Override</label>
|
||||
<textarea
|
||||
className={inputClass + " h-16 resize-y text-sm"}
|
||||
value={styleOverride.dialogue_style || ''}
|
||||
onChange={(e) => {
|
||||
const val = e.target.value;
|
||||
setCharacter(prev => {
|
||||
const newOverrides = { ...(prev.prompt_style_overrides || {}) };
|
||||
newOverrides[s.id] = { ...(newOverrides[s.id] || {}), dialogue_style: val };
|
||||
if (!val && !newOverrides[s.id].system_prompt_suffix) delete newOverrides[s.id];
|
||||
return { ...prev, prompt_style_overrides: newOverrides };
|
||||
});
|
||||
}}
|
||||
placeholder={`Custom dialogue style for ${s.name} mode...`}
|
||||
/>
|
||||
</div>
|
||||
<div>
|
||||
<label className={labelClass}>Additional Instructions</label>
|
||||
<textarea
|
||||
className={inputClass + " h-16 resize-y text-sm"}
|
||||
value={styleOverride.system_prompt_suffix || ''}
|
||||
onChange={(e) => {
|
||||
const val = e.target.value;
|
||||
setCharacter(prev => {
|
||||
const newOverrides = { ...(prev.prompt_style_overrides || {}) };
|
||||
newOverrides[s.id] = { ...(newOverrides[s.id] || {}), system_prompt_suffix: val };
|
||||
if (!val && !newOverrides[s.id].dialogue_style) delete newOverrides[s.id];
|
||||
return { ...prev, prompt_style_overrides: newOverrides };
|
||||
});
|
||||
}}
|
||||
placeholder={`Extra instructions for ${s.name} mode...`}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
</details>
|
||||
);
|
||||
})}
|
||||
</div>
|
||||
</details>
|
||||
</div>
|
||||
|
||||
{/* Notes */}
|
||||
<div className={cardClass}>
|
||||
<h2 className="text-lg font-semibold text-gray-200">Notes</h2>
|
||||
|
||||
@@ -2,6 +2,7 @@ import { useState, useEffect, useCallback } from 'react';
|
||||
import {
|
||||
getPersonalMemories, savePersonalMemory, deletePersonalMemory,
|
||||
getGeneralMemories, saveGeneralMemory, deleteGeneralMemory,
|
||||
getFollowups, resolveFollowup,
|
||||
} from '../lib/memoryApi';
|
||||
|
||||
const PERSONAL_CATEGORIES = [
|
||||
@@ -21,53 +22,131 @@ const GENERAL_CATEGORIES = [
|
||||
{ value: 'other', label: 'Other', color: 'bg-gray-500/20 text-gray-300 border-gray-500/30' },
|
||||
];
|
||||
|
||||
const LIFECYCLE_BADGES = {
|
||||
active: 'bg-emerald-500/20 text-emerald-300 border-emerald-500/30',
|
||||
pending_followup: 'bg-amber-500/20 text-amber-300 border-amber-500/30',
|
||||
resolved: 'bg-gray-500/20 text-gray-400 border-gray-500/30',
|
||||
archived: 'bg-gray-700/30 text-gray-500 border-gray-600/30',
|
||||
};
|
||||
|
||||
const MEMORY_TYPE_BADGES = {
|
||||
semantic: 'bg-indigo-500/20 text-indigo-300 border-indigo-500/30',
|
||||
episodic: 'bg-cyan-500/20 text-cyan-300 border-cyan-500/30',
|
||||
  relational: 'bg-purple-500/20 text-purple-300 border-purple-500/30',
  opinion: 'bg-rose-500/20 text-rose-300 border-rose-500/30',
};

const ACTIVE_KEY = 'homeai_active_character';

function CategoryBadge({ category, categories }) {
  const cat = categories.find(c => c.value === category) || categories[categories.length - 1];
function Badge({ label, colorClass }) {
  return (
    <span className={`px-2 py-0.5 text-xs rounded-full border ${cat.color}`}>
      {cat.label}
    <span className={`px-2 py-0.5 text-xs rounded-full border ${colorClass}`}>
      {label}
    </span>
  );
}

function CategoryBadge({ category, categories }) {
  const cat = categories.find(c => c.value === category) || categories[categories.length - 1];
  return <Badge label={cat.label} colorClass={cat.color} />;
}

function PrivacyIcon({ level }) {
  if (level === 'local_only') {
    return (
      <span title="Local only — never sent to cloud" className="text-rose-400">
        <svg className="w-3.5 h-3.5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
          <path strokeLinecap="round" strokeLinejoin="round" d="M16.5 10.5V6.75a4.5 4.5 0 10-9 0v3.75m-.75 11.25h10.5a2.25 2.25 0 002.25-2.25v-6.75a2.25 2.25 0 00-2.25-2.25H6.75a2.25 2.25 0 00-2.25 2.25v6.75a2.25 2.25 0 002.25 2.25z" />
        </svg>
      </span>
    );
  }
  if (level === 'sensitive') {
    return (
      <span title="Sensitive — cloud allowed but stripped when possible" className="text-amber-400">
        <svg className="w-3.5 h-3.5" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
          <path strokeLinecap="round" strokeLinejoin="round" d="M9 12.75L11.25 15 15 9.75m-3-7.036A11.959 11.959 0 013.598 6 11.99 11.99 0 003 9.749c0 5.592 3.824 10.29 9 11.623 5.176-1.332 9-6.03 9-11.622 0-1.31-.21-2.571-.598-3.751h-.152c-3.196 0-6.1-1.248-8.25-3.285z" />
        </svg>
      </span>
    );
  }
  return null;
}

function ImportanceBar({ value }) {
  const pct = Math.round((value || 0.5) * 100);
  const color = value >= 0.8 ? 'bg-rose-400' : value >= 0.6 ? 'bg-amber-400' : 'bg-gray-500';
  return (
    <div title={`Importance: ${pct}%`} className="flex items-center gap-1">
      <div className="w-12 h-1.5 bg-gray-700 rounded-full overflow-hidden">
        <div className={`h-full ${color} rounded-full`} style={{ width: `${pct}%` }} />
      </div>
    </div>
  );
}

function MemoryCard({ memory, categories, onEdit, onDelete }) {
  const date = memory.created_at || memory.createdAt;
  return (
    <div className="border border-gray-700 rounded-lg p-4 bg-gray-800/50 space-y-2">
      <div className="flex items-start justify-between gap-3">
        <p className="text-sm text-gray-200 flex-1 whitespace-pre-wrap">{memory.content}</p>
        <div className="flex gap-1 shrink-0">
          <button
            onClick={() => onEdit(memory)}
            className="p-1.5 text-gray-500 hover:text-gray-300 transition-colors"
            title="Edit"
          >
          <button onClick={() => onEdit(memory)} className="p-1.5 text-gray-500 hover:text-gray-300 transition-colors" title="Edit">
            <svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
              <path strokeLinecap="round" strokeLinejoin="round" d="M16.862 4.487l1.687-1.688a1.875 1.875 0 112.652 2.652L10.582 16.07a4.5 4.5 0 01-1.897 1.13L6 18l.8-2.685a4.5 4.5 0 011.13-1.897l8.932-8.931z" />
            </svg>
          </button>
          <button
            onClick={() => onDelete(memory.id)}
            className="p-1.5 text-gray-500 hover:text-red-400 transition-colors"
            title="Delete"
          >
          <button onClick={() => onDelete(memory.id)} className="p-1.5 text-gray-500 hover:text-red-400 transition-colors" title="Delete">
            <svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
              <path strokeLinecap="round" strokeLinejoin="round" d="M14.74 9l-.346 9m-4.788 0L9.26 9m9.968-3.21c.342.052.682.107 1.022.166m-1.022-.165L18.16 19.673a2.25 2.25 0 01-2.244 2.077H8.084a2.25 2.25 0 01-2.244-2.077L4.772 5.79m14.456 0a48.108 48.108 0 00-3.478-.397m-12 .562c.34-.059.68-.114 1.022-.165m0 0a48.11 48.11 0 013.478-.397m7.5 0v-.916c0-1.18-.91-2.164-2.09-2.201a51.964 51.964 0 00-3.32 0c-1.18.037-2.09 1.022-2.09 2.201v.916m7.5 0a48.667 48.667 0 00-7.5 0" />
            </svg>
          </button>
        </div>
      </div>
      <div className="flex items-center gap-2">
      <div className="flex items-center gap-2 flex-wrap">
        <CategoryBadge category={memory.category} categories={categories} />
        {memory.memory_type && (
          <Badge label={memory.memory_type} colorClass={MEMORY_TYPE_BADGES[memory.memory_type] || MEMORY_TYPE_BADGES.semantic} />
        )}
        {memory.lifecycle_state && memory.lifecycle_state !== 'active' && (
          <Badge label={memory.lifecycle_state.replace('_', ' ')} colorClass={LIFECYCLE_BADGES[memory.lifecycle_state] || LIFECYCLE_BADGES.active} />
        )}
        <PrivacyIcon level={memory.privacy_level} />
        <ImportanceBar value={memory.importance} />
        <span className="text-xs text-gray-600">
          {memory.createdAt ? new Date(memory.createdAt).toLocaleDateString() : ''}
          {date ? new Date(date).toLocaleDateString() : ''}
        </span>
      </div>
    </div>
  );
}

function FollowupCard({ followup, onResolve }) {
  const date = followup.created_at || followup.createdAt;
  return (
    <div className="border border-amber-500/30 rounded-lg p-4 bg-amber-500/5 space-y-2">
      <div className="flex items-start justify-between gap-3">
        <div className="flex-1">
          <p className="text-sm text-amber-200 font-medium">{followup.follow_up_context}</p>
          <p className="text-xs text-gray-400 mt-1">{followup.content}</p>
        </div>
        <button
          onClick={() => onResolve(followup.id)}
          className="px-3 py-1.5 bg-amber-600/20 hover:bg-amber-600/40 text-amber-300 text-xs rounded-lg border border-amber-500/30 transition-colors shrink-0"
        >
          Resolve
        </button>
      </div>
      <div className="flex items-center gap-3 text-xs text-gray-500">
        <span>Due: {followup.follow_up_due === 'next_interaction' ? 'Next interaction' : new Date(followup.follow_up_due).toLocaleString()}</span>
        <span>Surfaced: {followup.surfaced_count || 0}x</span>
        <span>{date ? new Date(date).toLocaleDateString() : ''}</span>
      </div>
    </div>
  );
}

function MemoryForm({ categories, editing, onSave, onCancel }) {
  const [content, setContent] = useState(editing?.content || '');
  const [category, setCategory] = useState(editing?.category || categories[0].value);
@@ -104,10 +183,7 @@ function MemoryForm({ categories, editing, onSave, onCancel }) {
        ))}
      </select>
      <div className="flex gap-2 ml-auto">
        <button
          onClick={onCancel}
          className="px-3 py-1.5 bg-gray-700 hover:bg-gray-600 text-gray-300 text-sm rounded-lg transition-colors"
        >
        <button onClick={onCancel} className="px-3 py-1.5 bg-gray-700 hover:bg-gray-600 text-gray-300 text-sm rounded-lg transition-colors">
          Cancel
        </button>
        <button
@@ -124,10 +200,11 @@ function MemoryForm({ categories, editing, onSave, onCancel }) {
}

export default function Memories() {
  const [tab, setTab] = useState('personal'); // 'personal' | 'general'
  const [tab, setTab] = useState('personal'); // 'personal' | 'general' | 'followups'
  const [characters, setCharacters] = useState([]);
  const [selectedCharId, setSelectedCharId] = useState('');
  const [memories, setMemories] = useState([]);
  const [followups, setFollowups] = useState([]);
  const [loading, setLoading] = useState(false);
  const [showForm, setShowForm] = useState(false);
  const [editing, setEditing] = useState(null);
@@ -155,14 +232,21 @@ export default function Memories() {
    setLoading(true);
    setError(null);
    try {
      if (tab === 'personal' && selectedCharId) {
      if (tab === 'followups' && selectedCharId) {
        const data = await getFollowups(selectedCharId);
        setFollowups(data.followups || []);
        setMemories([]);
      } else if (tab === 'personal' && selectedCharId) {
        const data = await getPersonalMemories(selectedCharId);
        setMemories(data.memories || []);
        setFollowups([]);
      } else if (tab === 'general') {
        const data = await getGeneralMemories();
        setMemories(data.memories || []);
        setFollowups([]);
      } else {
        setMemories([]);
        setFollowups([]);
      }
    } catch (err) {
      setError(err.message);
@@ -206,14 +290,27 @@ export default function Memories() {
    setShowForm(true);
  };

  const handleResolve = async (memoryId) => {
    try {
      await resolveFollowup(memoryId);
      await loadMemories();
    } catch (err) {
      setError(err.message);
    }
  };

  const categories = tab === 'personal' ? PERSONAL_CATEGORIES : GENERAL_CATEGORIES;
  const filteredMemories = filter
    ? memories.filter(m => m.content?.toLowerCase().includes(filter.toLowerCase()) || m.category === filter)
    ? memories.filter(m =>
        m.content?.toLowerCase().includes(filter.toLowerCase()) ||
        m.category === filter ||
        m.memory_type === filter
      )
    : memories;

  // Sort newest first
  const sortedMemories = [...filteredMemories].sort(
    (a, b) => (b.createdAt || '').localeCompare(a.createdAt || '')
    (a, b) => (b.created_at || b.createdAt || '').localeCompare(a.created_at || a.createdAt || '')
  );

  const selectedChar = characters.find(c => c.id === selectedCharId);
@@ -221,27 +318,31 @@ export default function Memories() {
  return (
    <div className="space-y-6">
      {/* Header */}
      <div className="flex items-center justify-between">
      <div className="flex flex-col sm:flex-row sm:items-center justify-between gap-3">
        <div>
          <h1 className="text-3xl font-bold text-gray-100">Memories</h1>
          <h1 className="text-2xl sm:text-3xl font-bold text-gray-100">Memories</h1>
          <p className="text-sm text-gray-500 mt-1">
            {sortedMemories.length} {tab} memor{sortedMemories.length !== 1 ? 'ies' : 'y'}
            {tab === 'personal' && selectedChar && (
            {tab === 'followups'
              ? `${followups.length} pending follow-up${followups.length !== 1 ? 's' : ''}`
              : `${sortedMemories.length} ${tab} memor${sortedMemories.length !== 1 ? 'ies' : 'y'}`}
            {(tab === 'personal' || tab === 'followups') && selectedChar && (
              <span className="ml-1 text-indigo-400">
                for {selectedChar.data?.display_name || selectedChar.data?.name || selectedCharId}
              </span>
            )}
          </p>
        </div>
        <button
          onClick={() => { setEditing(null); setShowForm(!showForm); }}
          className="flex items-center gap-2 px-4 py-2 bg-indigo-600 hover:bg-indigo-500 text-white rounded-lg transition-colors"
        >
          <svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
            <path strokeLinecap="round" strokeLinejoin="round" d="M12 4.5v15m7.5-7.5h-15" />
          </svg>
          Add Memory
        </button>
        {tab !== 'followups' && (
          <button
            onClick={() => { setEditing(null); setShowForm(!showForm); }}
            className="flex items-center gap-2 px-4 py-2 bg-indigo-600 hover:bg-indigo-500 text-white rounded-lg transition-colors"
          >
            <svg className="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={2}>
              <path strokeLinecap="round" strokeLinejoin="round" d="M12 4.5v15m7.5-7.5h-15" />
            </svg>
            Add Memory
          </button>
        )}
      </div>

      {error && (
@@ -253,30 +354,21 @@ export default function Memories() {

      {/* Tabs */}
      <div className="flex gap-1 bg-gray-900 p-1 rounded-lg border border-gray-800 w-fit">
        <button
          onClick={() => { setTab('personal'); setShowForm(false); setEditing(null); }}
          className={`px-4 py-2 text-sm font-medium rounded-md transition-colors ${
            tab === 'personal'
              ? 'bg-gray-800 text-white'
              : 'text-gray-400 hover:text-gray-200'
          }`}
        >
          Personal
        </button>
        <button
          onClick={() => { setTab('general'); setShowForm(false); setEditing(null); }}
          className={`px-4 py-2 text-sm font-medium rounded-md transition-colors ${
            tab === 'general'
              ? 'bg-gray-800 text-white'
              : 'text-gray-400 hover:text-gray-200'
          }`}
        >
          General
        </button>
        {['personal', 'general', 'followups'].map(t => (
          <button
            key={t}
            onClick={() => { setTab(t); setShowForm(false); setEditing(null); }}
            className={`px-4 py-2 text-sm font-medium rounded-md transition-colors ${
              tab === t ? 'bg-gray-800 text-white' : 'text-gray-400 hover:text-gray-200'
            }`}
          >
            {t === 'followups' ? 'Follow-ups' : t.charAt(0).toUpperCase() + t.slice(1)}
          </button>
        ))}
      </div>

      {/* Character selector (personal tab only) */}
      {tab === 'personal' && (
      {/* Character selector (personal + followups tabs) */}
      {(tab === 'personal' || tab === 'followups') && (
        <div className="flex items-center gap-3">
          <label className="text-sm text-gray-400">Character</label>
          <select
@@ -293,19 +385,21 @@ export default function Memories() {
        </div>
      )}

      {/* Search filter */}
      <div>
        <input
          type="text"
          className="w-full bg-gray-800 border border-gray-700 text-gray-200 p-2 rounded-lg text-sm focus:border-indigo-500 focus:ring-1 focus:ring-indigo-500 outline-none"
          value={filter}
          onChange={(e) => setFilter(e.target.value)}
          placeholder="Search memories..."
        />
      </div>
      {/* Search filter (not for followups) */}
      {tab !== 'followups' && (
        <div>
          <input
            type="text"
            className="w-full bg-gray-800 border border-gray-700 text-gray-200 p-2 rounded-lg text-sm focus:border-indigo-500 focus:ring-1 focus:ring-indigo-500 outline-none"
            value={filter}
            onChange={(e) => setFilter(e.target.value)}
            placeholder="Search memories..."
          />
        </div>
      )}

      {/* Add/Edit form */}
      {showForm && (
      {showForm && tab !== 'followups' && (
        <MemoryForm
          categories={categories}
          editing={editing}
@@ -314,11 +408,26 @@ export default function Memories() {
        />
      )}

      {/* Memory list */}
      {/* Content */}
      {loading ? (
        <div className="text-center py-12">
          <p className="text-gray-500">Loading memories...</p>
          <p className="text-gray-500">Loading...</p>
        </div>
      ) : tab === 'followups' ? (
        followups.length === 0 ? (
          <div className="text-center py-12">
            <svg className="w-12 h-12 mx-auto text-gray-700 mb-3" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1}>
              <path strokeLinecap="round" strokeLinejoin="round" d="M9 12.75L11.25 15 15 9.75M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
            </svg>
            <p className="text-gray-500 text-sm">No pending follow-ups.</p>
          </div>
        ) : (
          <div className="space-y-3">
            {followups.map(fu => (
              <FollowupCard key={fu.id} followup={fu} onResolve={handleResolve} />
            ))}
          </div>
        )
      ) : sortedMemories.length === 0 ? (
        <div className="text-center py-12">
          <svg className="w-12 h-12 mx-auto text-gray-700 mb-3" fill="none" viewBox="0 0 24 24" stroke="currentColor" strokeWidth={1}>

@@ -7,8 +7,13 @@ const SATELLITE_MAP_PATH = '/Users/aodhan/homeai-data/satellite-map.json'
const CONVERSATIONS_DIR = '/Users/aodhan/homeai-data/conversations'
const MEMORIES_DIR = '/Users/aodhan/homeai-data/memories'
const MODE_PATH = '/Users/aodhan/homeai-data/active-mode.json'
const ACTIVE_STYLE_PATH = '/Users/aodhan/homeai-data/active-prompt-style.json'
const PROMPT_STYLES_DIR = new URL('./homeai-agent/prompt-styles', import.meta.url).pathname
const HA_TOKEN = process.env.HA_TOKEN || ''
const GAZE_HOST = 'http://10.0.0.101:5782'
const GAZE_API_KEY = process.env.GAZE_API_KEY || ''
const DREAM_HOST = process.env.DREAM_HOST || 'http://10.0.0.101:3000'
const DREAM_API_KEY = process.env.DREAM_API_KEY || ''

function characterStoragePlugin() {
  return {
@@ -272,6 +277,7 @@ function healthCheckPlugin() {
      const params = new URL(req.url, 'http://localhost').searchParams;
      const url = params.get('url');
      const mode = params.get('mode'); // 'tcp' for raw TCP port check
      const needsAuth = params.get('auth') === '1'; // use server-side HA_TOKEN
      if (!url) {
        res.writeHead(400, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ error: 'Missing url param' }));
@@ -298,8 +304,12 @@ function healthCheckPlugin() {
      const { default: http } = await import('http');
      const client = parsedUrl.protocol === 'https:' ? https : http;

      const opts = { rejectUnauthorized: false, timeout: 5000 };
      if (needsAuth && HA_TOKEN) {
        opts.headers = { 'Authorization': `Bearer ${HA_TOKEN}` };
      }
      await new Promise((resolve, reject) => {
        const reqObj = client.get(url, { rejectUnauthorized: false, timeout: 5000 }, (resp) => {
        const reqObj = client.get(url, opts, (resp) => {
          resp.resume();
          resolve();
        });
@@ -387,15 +397,16 @@ function gazeProxyPlugin() {
  return {
    name: 'gaze-proxy',
    configureServer(server) {
      server.middlewares.use('/api/gaze/presets', async (req, res) => {
      // Helper to proxy a JSON GET to GAZE
      const proxyGazeJson = async (apiPath, res, fallback) => {
        if (!GAZE_API_KEY) {
          res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
          res.end(JSON.stringify({ presets: [] }))
          res.end(JSON.stringify(fallback))
          return
        }
        try {
          const http = await import('http')
          const url = new URL(`${GAZE_HOST}/api/v1/presets`)
          const url = new URL(`${GAZE_HOST}${apiPath}`)
          const proxyRes = await new Promise((resolve, reject) => {
            const r = http.default.get(url, { headers: { 'X-API-Key': GAZE_API_KEY }, timeout: 5000 }, resolve)
            r.on('error', reject)
@@ -407,8 +418,110 @@ function gazeProxyPlugin() {
          res.end(Buffer.concat(chunks))
        } catch {
          res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
          res.end(JSON.stringify({ presets: [] }))
          res.end(JSON.stringify(fallback))
        }
      }

      server.middlewares.use('/api/gaze/presets', async (req, res) => {
        await proxyGazeJson('/api/v1/presets', res, { presets: [] })
      })

      server.middlewares.use('/api/gaze/characters', async (req, res) => {
        await proxyGazeJson('/api/v1/characters', res, { characters: [] })
      })

      // Proxy cover image for a GAZE character (binary passthrough)
      server.middlewares.use(async (req, res, next) => {
        const match = req.url.match(/^\/api\/gaze\/character\/([a-zA-Z0-9_\-]+)\/cover/)
        if (!match) return next()
        const characterId = match[1]
        if (!GAZE_API_KEY) {
          res.writeHead(404)
          res.end()
          return
        }
        try {
          const { default: http } = await import('http')
          const url = new URL(`${GAZE_HOST}/api/v1/character/${characterId}/cover`)
          const r = http.get(url, { headers: { 'X-API-Key': GAZE_API_KEY }, timeout: 5000 }, (proxyRes) => {
            res.writeHead(proxyRes.statusCode, {
              'Content-Type': proxyRes.headers['content-type'] || 'image/png',
              'Access-Control-Allow-Origin': '*',
              'Cache-Control': 'public, max-age=3600',
            })
            proxyRes.pipe(res)
          })
          r.on('error', () => { if (!res.headersSent) { res.writeHead(502); res.end() } })
          r.on('timeout', () => { r.destroy(); if (!res.headersSent) { res.writeHead(504); res.end() } })
        } catch {
          if (!res.headersSent) { res.writeHead(500); res.end() }
        }
      })
    },
  }
}

function dreamProxyPlugin() {
  const dreamHeaders = DREAM_API_KEY ? { 'X-API-Key': DREAM_API_KEY } : {}
  return {
    name: 'dream-proxy',
    configureServer(server) {
      // Helper: proxy a JSON GET to Dream
      const proxyDreamJson = async (apiPath, res) => {
        try {
          const http = await import('http')
          const url = new URL(`${DREAM_HOST}${apiPath}`)
          const proxyRes = await new Promise((resolve, reject) => {
            const r = http.default.get(url, { headers: dreamHeaders, timeout: 5000 }, resolve)
            r.on('error', reject)
            r.on('timeout', () => { r.destroy(); reject(new Error('timeout')) })
          })
          const chunks = []
          for await (const chunk of proxyRes) chunks.push(chunk)
          res.writeHead(proxyRes.statusCode, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
          res.end(Buffer.concat(chunks))
        } catch {
          res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
          res.end(JSON.stringify({ characters: [], error: 'Dream unreachable' }))
        }
      }

      // List characters (compact)
      server.middlewares.use('/api/dream/characters', async (req, res, next) => {
        // Only handle exact path (not sub-paths like /api/dream/characters/abc/image)
        if (req.url !== '/' && req.url !== '' && !req.url.startsWith('?')) return next()
        const qs = req.url === '/' || req.url === '' ? '' : req.url
        await proxyDreamJson(`/api/characters${qs}`, res)
      })

      // Character image (binary passthrough)
      server.middlewares.use(async (req, res, next) => {
        const match = req.url.match(/^\/api\/dream\/characters\/([^/]+)\/image/)
        if (!match) return next()
        const charId = match[1]
        try {
          const { default: http } = await import('http')
          const url = new URL(`${DREAM_HOST}/api/characters/${charId}/image`)
          const r = http.get(url, { headers: dreamHeaders, timeout: 5000 }, (proxyRes) => {
            res.writeHead(proxyRes.statusCode, {
              'Content-Type': proxyRes.headers['content-type'] || 'image/png',
              'Access-Control-Allow-Origin': '*',
              'Cache-Control': 'public, max-age=3600',
            })
            proxyRes.pipe(res)
          })
          r.on('error', () => { if (!res.headersSent) { res.writeHead(502); res.end() } })
          r.on('timeout', () => { r.destroy(); if (!res.headersSent) { res.writeHead(504); res.end() } })
        } catch {
          if (!res.headersSent) { res.writeHead(500); res.end() }
        }
      })

      // Get single character (full details)
      server.middlewares.use(async (req, res, next) => {
        const match = req.url.match(/^\/api\/dream\/characters\/([^/]+)\/?$/)
        if (!match) return next()
        await proxyDreamJson(`/api/characters/${match[1]}`, res)
      })
    },
  }
@@ -418,163 +531,67 @@ function memoryStoragePlugin() {
  return {
    name: 'memory-storage',
    configureServer(server) {
      const ensureDirs = async () => {
        const { mkdir } = await import('fs/promises')
        await mkdir(`${MEMORIES_DIR}/personal`, { recursive: true })
      }
      // Proxy all /api/memories/* requests to the OpenClaw bridge (port 8081)
      // The bridge handles SQLite + vector search; dashboard is just a passthrough
      const proxyMemoryRequest = async (req, res) => {
        if (req.method === 'OPTIONS') {
          res.writeHead(204, {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS',
            'Access-Control-Allow-Headers': 'Content-Type',
          })
          res.end()
          return
        }

      const readJsonFile = async (path, fallback) => {
        const { readFile } = await import('fs/promises')
        try {
          return JSON.parse(await readFile(path, 'utf-8'))
        } catch {
          return fallback
          const { default: http } = await import('http')
          const chunks = []
          for await (const chunk of req) chunks.push(chunk)
          const body = Buffer.concat(chunks)

          // Reconstruct full path: /api/memories/... (req.url has the part after /api/memories)
          const targetPath = `/api/memories${req.url}`

          await new Promise((resolve, reject) => {
            const proxyReq = http.request(
              `http://localhost:8081${targetPath}`,
              {
                method: req.method,
                headers: {
                  'Content-Type': req.headers['content-type'] || 'application/json',
                  ...(body.length > 0 ? { 'Content-Length': body.length } : {}),
                },
                timeout: 30000,
              },
              (proxyRes) => {
                res.writeHead(proxyRes.statusCode, {
                  'Content-Type': proxyRes.headers['content-type'] || 'application/json',
                  'Access-Control-Allow-Origin': '*',
                })
                proxyRes.pipe(res)
                proxyRes.on('end', resolve)
                proxyRes.on('error', resolve)
              }
            )
            proxyReq.on('error', reject)
            proxyReq.on('timeout', () => {
              proxyReq.destroy()
              reject(new Error('timeout'))
            })
            if (body.length > 0) proxyReq.write(body)
            proxyReq.end()
          })
        } catch (err) {
          console.error(`[memory-proxy] failed:`, err?.message || err)
          if (!res.headersSent) {
            res.writeHead(502, { 'Content-Type': 'application/json' })
            res.end(JSON.stringify({ error: `Bridge unreachable: ${err?.message || 'unknown'}` }))
          }
        }
      }

      const writeJsonFile = async (path, data) => {
        const { writeFile } = await import('fs/promises')
        await writeFile(path, JSON.stringify(data, null, 2))
      }

      // Personal memories: /api/memories/personal/:characterId[/:memoryId]
      server.middlewares.use('/api/memories/personal', async (req, res, next) => {
        if (req.method === 'OPTIONS') {
          res.writeHead(204, { 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'GET,POST,DELETE', 'Access-Control-Allow-Headers': 'Content-Type' })
          res.end()
          return
        }

        await ensureDirs()
        const url = new URL(req.url, 'http://localhost')
        const parts = url.pathname.replace(/^\/+/, '').split('/')
        const characterId = parts[0] ? parts[0].replace(/[^a-zA-Z0-9_\-\.]/g, '_') : null
        const memoryId = parts[1] || null

        if (!characterId) {
          res.writeHead(400, { 'Content-Type': 'application/json' })
          res.end(JSON.stringify({ error: 'Missing character ID' }))
          return
        }

        const filePath = `${MEMORIES_DIR}/personal/${characterId}.json`

        if (req.method === 'GET') {
          const data = await readJsonFile(filePath, { characterId, memories: [] })
          res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
          res.end(JSON.stringify(data))
          return
        }

        if (req.method === 'POST') {
          try {
            const chunks = []
            for await (const chunk of req) chunks.push(chunk)
            const memory = JSON.parse(Buffer.concat(chunks).toString())
            const data = await readJsonFile(filePath, { characterId, memories: [] })
            if (memory.id) {
              const idx = data.memories.findIndex(m => m.id === memory.id)
              if (idx >= 0) {
                data.memories[idx] = { ...data.memories[idx], ...memory }
              } else {
                data.memories.push(memory)
              }
            } else {
              memory.id = 'm_' + Date.now()
              memory.createdAt = memory.createdAt || new Date().toISOString()
              data.memories.push(memory)
            }
            await writeJsonFile(filePath, data)
            res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
            res.end(JSON.stringify({ ok: true, memory }))
          } catch (err) {
            res.writeHead(500, { 'Content-Type': 'application/json' })
            res.end(JSON.stringify({ error: err.message }))
          }
          return
        }

        if (req.method === 'DELETE' && memoryId) {
          try {
            const data = await readJsonFile(filePath, { characterId, memories: [] })
            data.memories = data.memories.filter(m => m.id !== memoryId)
            await writeJsonFile(filePath, data)
            res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
            res.end(JSON.stringify({ ok: true }))
          } catch (err) {
            res.writeHead(500, { 'Content-Type': 'application/json' })
            res.end(JSON.stringify({ error: err.message }))
          }
          return
        }

        next()
      })

      // General memories: /api/memories/general[/:memoryId]
      server.middlewares.use('/api/memories/general', async (req, res, next) => {
        if (req.method === 'OPTIONS') {
          res.writeHead(204, { 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'GET,POST,DELETE', 'Access-Control-Allow-Headers': 'Content-Type' })
          res.end()
          return
        }

        await ensureDirs()
        const url = new URL(req.url, 'http://localhost')
        const memoryId = url.pathname.replace(/^\/+/, '') || null
        const filePath = `${MEMORIES_DIR}/general.json`

        if (req.method === 'GET') {
          const data = await readJsonFile(filePath, { memories: [] })
          res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
          res.end(JSON.stringify(data))
          return
        }

        if (req.method === 'POST') {
          try {
            const chunks = []
            for await (const chunk of req) chunks.push(chunk)
            const memory = JSON.parse(Buffer.concat(chunks).toString())
            const data = await readJsonFile(filePath, { memories: [] })
            if (memory.id) {
              const idx = data.memories.findIndex(m => m.id === memory.id)
              if (idx >= 0) {
                data.memories[idx] = { ...data.memories[idx], ...memory }
              } else {
                data.memories.push(memory)
              }
            } else {
              memory.id = 'm_' + Date.now()
              memory.createdAt = memory.createdAt || new Date().toISOString()
              data.memories.push(memory)
            }
            await writeJsonFile(filePath, data)
            res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
            res.end(JSON.stringify({ ok: true, memory }))
          } catch (err) {
            res.writeHead(500, { 'Content-Type': 'application/json' })
            res.end(JSON.stringify({ error: err.message }))
          }
          return
        }

        if (req.method === 'DELETE' && memoryId) {
          try {
            const data = await readJsonFile(filePath, { memories: [] })
            data.memories = data.memories.filter(m => m.id !== memoryId)
            await writeJsonFile(filePath, data)
            res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
            res.end(JSON.stringify({ ok: true }))
          } catch (err) {
            res.writeHead(500, { 'Content-Type': 'application/json' })
            res.end(JSON.stringify({ error: err.message }))
          }
          return
        }

        next()
      })
      server.middlewares.use('/api/memories', proxyMemoryRequest)
    },
  }
}
@@ -698,6 +715,96 @@ function modePlugin() {
|
||||
}
|
||||
}
|
||||
|

function promptStylePlugin() {
  return {
    name: 'prompt-style-api',
    configureServer(server) {
      // GET /api/prompt-styles — list all available styles
      server.middlewares.use('/api/prompt-styles', async (req, res, next) => {
        if (req.method === 'OPTIONS') {
          res.writeHead(204, { 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'GET', 'Access-Control-Allow-Headers': 'Content-Type' })
          res.end()
          return
        }
        if (req.method !== 'GET') { next(); return }

        const { readdir, readFile } = await import('fs/promises')
        try {
          const files = (await readdir(PROMPT_STYLES_DIR)).filter(f => f.endsWith('.json'))
          const styles = []
          for (const file of files) {
            try {
              const raw = await readFile(`${PROMPT_STYLES_DIR}/${file}`, 'utf-8')
              styles.push(JSON.parse(raw))
            } catch { /* skip corrupt files */ }
          }
          // Sort: cloud group first, then local
          styles.sort((a, b) => {
            if (a.group !== b.group) return a.group === 'cloud' ? -1 : 1
            return (a.id || '').localeCompare(b.id || '')
          })
          res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
          res.end(JSON.stringify(styles))
        } catch (err) {
          res.writeHead(500, { 'Content-Type': 'application/json' })
          res.end(JSON.stringify({ error: err.message }))
        }
      })

      // GET/POST /api/prompt-style — active style
      server.middlewares.use('/api/prompt-style', async (req, res, next) => {
        // Avoid matching /api/prompt-styles (plural)
        const url = new URL(req.url, 'http://localhost')
        if (url.pathname !== '/') { next(); return }

        if (req.method === 'OPTIONS') {
          res.writeHead(204, { 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'GET,POST', 'Access-Control-Allow-Headers': 'Content-Type' })
          res.end()
          return
        }

        const { readFile, writeFile } = await import('fs/promises')

        if (req.method === 'GET') {
          try {
            const raw = await readFile(ACTIVE_STYLE_PATH, 'utf-8')
            res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
            res.end(raw)
          } catch {
            res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
            res.end(JSON.stringify({ style: 'standard', updated_at: '' }))
          }
          return
        }

        if (req.method === 'POST') {
          try {
            const chunks = []
            for await (const chunk of req) chunks.push(chunk)
            const data = JSON.parse(Buffer.concat(chunks).toString())
            const VALID_STYLES = ['quick', 'standard', 'creative', 'roleplayer', 'game-master', 'storyteller']
            if (!data.style || !VALID_STYLES.includes(data.style)) {
              res.writeHead(400, { 'Content-Type': 'application/json' })
              res.end(JSON.stringify({ error: `Invalid style. Valid: ${VALID_STYLES.join(', ')}` }))
              return
            }
            const state = { style: data.style, updated_at: new Date().toISOString() }
            await writeFile(ACTIVE_STYLE_PATH, JSON.stringify(state, null, 2))
            res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' })
            res.end(JSON.stringify({ ok: true, ...state }))
          } catch (err) {
            res.writeHead(500, { 'Content-Type': 'application/json' })
            res.end(JSON.stringify({ error: err.message }))
          }
          return
        }

        next()
      })
    },
  }
}
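The ordering returned by `GET /api/prompt-styles` comes from the comparator in `promptStylePlugin`: styles in the `cloud` group sort first, then alphabetically by `id` within each group. A minimal standalone sketch of that ordering, using illustrative style objects rather than the repo's actual JSON files:

```javascript
// Hypothetical style entries, matching only the fields the comparator reads.
const styles = [
  { id: 'storyteller', group: 'local' },
  { id: 'standard', group: 'cloud' },
  { id: 'quick', group: 'cloud' },
]

// Same comparator as the plugin: cloud group first, then id order.
styles.sort((a, b) => {
  if (a.group !== b.group) return a.group === 'cloud' ? -1 : 1
  return (a.id || '').localeCompare(b.id || '')
})

console.log(styles.map(s => s.id).join(', ')) // → quick, standard, storyteller
```

Note the comparator is only a consistent total order because `group` takes exactly two values here; adding a third group would need an explicit group ranking.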

function bridgeProxyPlugin() {
  return {
    name: 'bridge-proxy',
@@ -771,10 +878,12 @@ export default defineConfig({
    satelliteMapPlugin(),
    conversationStoragePlugin(),
    memoryStoragePlugin(),
    dreamProxyPlugin(),
    gazeProxyPlugin(),
    characterLookupPlugin(),
    healthCheckPlugin(),
    modePlugin(),
    promptStylePlugin(),
    bridgeProxyPlugin(),
    tailwindcss(),
    react(),

@@ -119,6 +119,7 @@ class KokoroEventHandler(AsyncEventHandler):
                TtsProgram(
                    name="kokoro",
                    description="Kokoro ONNX TTS",
                    version="1.0",
                    attribution=Attribution(
                        name="thewh1teagle/kokoro-onnx",
                        url="https://github.com/thewh1teagle/kokoro-onnx",
@@ -128,6 +129,7 @@ class KokoroEventHandler(AsyncEventHandler):
                TtsVoice(
                    name=self._default_voice,
                    description="Kokoro voice",
                    version="1.0",
                    attribution=Attribution(name="kokoro", url=""),
                    installed=True,
                    languages=["en-us"],