Complete P2 (LLM) and P3 (voice pipeline) implementation
P2 — homeai-llm:
- Fix ollama launchd plist path for Apple Silicon (/opt/homebrew/bin/ollama)
- Add Modelfiles for local GGUF models: llama3.3:70b, qwen3:32b, codestral:22b
  (registered via `ollama create` — no re-download needed)

P3 — homeai-voice:
- Wyoming STT: wyoming-faster-whisper, large-v3 model, port 10300
- Wyoming TTS: custom Kokoro ONNX server (wyoming_kokoro_server.py), port 10301;
  voice af_heart; models at ~/models/kokoro/
- Wake word: openWakeWord daemon (hey_jarvis), notifies OpenClaw at /wake
- launchd plists for all three services + load-all-launchd.sh helper
- Smoke test: wyoming/test-pipeline.sh — 3/3 passing

HA Wyoming integration pending manual UI config (STT 10.0.0.200:10300, TTS 10.0.0.200:10301).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
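The "registered via `ollama create`" step above can be sketched as follows. This is a guess at the workflow, not the repo's actual script: `ollama create` records the Modelfile against the already-downloaded GGUF blob, so no model data is re-fetched. The `codestral:22b` tag is taken from the commit message; the guard around the `ollama` call is an assumption so the sketch degrades gracefully where the daemon is absent.

```shell
#!/bin/sh
# Sketch: register one of the new Modelfiles as a named Ollama model.
set -eu

MODELFILES=homeai-llm/modelfiles
mkdir -p "$MODELFILES"

# Mirror of the Codestral Modelfile added in this commit.
cat > "$MODELFILES/Codestral-22B" <<'EOF'
FROM /Users/aodhan/Models/LLM/Codestral-22B-v0.1-GGUF/Codestral-22B-v0.1-Q4_K_M.gguf

PARAMETER num_ctx 16384
PARAMETER temperature 0.2
PARAMETER top_p 0.95

SYSTEM "You are an expert coding assistant."
EOF

# Register under the tag used in the commit message; guarded so the
# script still succeeds on machines without ollama installed.
if command -v ollama >/dev/null 2>&1; then
  ollama create codestral:22b -f "$MODELFILES/Codestral-22B"
fi
echo "Modelfile written to $MODELFILES/Codestral-22B"
```

The same pattern applies to the llama3.3:70b and qwen3:32b Modelfiles; only the FROM path, parameters, and tag change.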
homeai-llm/modelfiles/Codestral-22B (new file, +7)
@@ -0,0 +1,7 @@
+FROM /Users/aodhan/Models/LLM/Codestral-22B-v0.1-GGUF/Codestral-22B-v0.1-Q4_K_M.gguf
+
+PARAMETER num_ctx 16384
+PARAMETER temperature 0.2
+PARAMETER top_p 0.95
+
+SYSTEM "You are an expert coding assistant."
homeai-llm/modelfiles/Llama-3.3-70B (new file, +7)
@@ -0,0 +1,7 @@
+FROM /Users/aodhan/Models/LLM/Llama-3.3-70B-Instruct-GGUF/Llama-3.3-70B-Instruct-Q4_K_M.gguf
+
+PARAMETER num_ctx 8192
+PARAMETER temperature 0.7
+PARAMETER top_p 0.9
+
+SYSTEM "You are a helpful AI assistant."
homeai-llm/modelfiles/Qwen3-32B (new file, +7)
@@ -0,0 +1,7 @@
+FROM /Users/aodhan/Models/LLM/Qwen3-32B-GGUF/Qwen3-32B-Q4_K_M.gguf
+
+PARAMETER num_ctx 8192
+PARAMETER temperature 0.7
+PARAMETER top_p 0.9
+
+SYSTEM "You are a helpful AI assistant."
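The commit message mentions launchd plists for all three voice services plus a load-all-launchd.sh helper. A minimal sketch of what such a helper might look like; the plist directory, the `DRY_RUN` escape hatch, and the stand-in plist name are illustrative assumptions, not taken from the repo.

```shell
#!/bin/sh
# Hypothetical load-all-launchd.sh-style helper (names assumed).
load_agents() {
  # $1 = directory containing the service plists.
  for plist in "$1"/*.plist; do
    [ -e "$plist" ] || continue          # no-op if the glob matched nothing
    if [ "${DRY_RUN:-0}" = 1 ]; then
      echo "launchctl load -w $plist"    # print instead of loading (off-macOS)
    else
      launchctl load -w "$plist"
    fi
  done
}

# Demo with a stand-in plist so the loop can be exercised anywhere.
mkdir -p /tmp/homeai-agents
touch /tmp/homeai-agents/com.homeai.wyoming-stt.plist
DRY_RUN=1 load_agents /tmp/homeai-agents
```

On current macOS, `launchctl bootstrap gui/$(id -u) <plist>` is the newer equivalent of `load -w`; either works for per-user agents.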