# Phase 4 — OpenClaw Tool Calling Resolution
## Problem Statement

OpenClaw needed Ollama to return structured `tool_calls` in API responses. The issue was a template mismatch:

- **llama3.3:70b** outputs `<|python_tag|>exec {...}` (Llama's trained format)
- **qwen3:32b** had template issues causing 400 errors
- Ollama's template parser couldn't match the model output to the expected tool-call format
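For contrast, a working structured response from Ollama's `/api/chat` endpoint carries the call in a `tool_calls` array rather than raw text. A sketch of the shape (the entity and argument values here are illustrative):

```json
{
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "call_service",
          "arguments": {
            "domain": "light",
            "service": "turn_on",
            "entity_id": "light.reading_lamp"
          }
        }
      }
    ]
  }
}
```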
## Solution Implemented

**Option A: Pull qwen2.5:7b** — Ollama ships with a working tool-call template for this model.

### What Was Done
#### 1. Model Deployment

- Pulled `qwen2.5:7b` (~4.7GB) from the Ollama registry
- Model includes native tool-calling support with a proper chat template
- Fast inference (~2-3s per response)
#### 2. OpenClaw Configuration

Updated `~/.openclaw/openclaw.json`:

```json
{
  "models": {
    "providers": {
      "ollama": {
        "models": [
          {
            "id": "qwen2.5:7b",
            "name": "qwen2.5:7b",
            "contextWindow": 32768,
            "maxTokens": 4096
          },
          {
            "id": "llama3.3:70b",
            "name": "llama3.3:70b",
            "contextWindow": 32768,
            "maxTokens": 4096
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/qwen2.5:7b"
      }
    }
  }
}
```
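After editing the config, it is worth validating the JSON before restarting OpenClaw. A minimal sketch using `python3 -m json.tool` (run here against a throwaway sample; in practice, point it at `~/.openclaw/openclaw.json`):

```shell
# Write a sample fragment, then validate it; any JSON syntax error exits non-zero.
cat > /tmp/openclaw-check.json <<'EOF'
{"agents": {"defaults": {"model": {"primary": "ollama/qwen2.5:7b"}}}}
EOF
python3 -m json.tool /tmp/openclaw-check.json > /dev/null && echo "config JSON is valid"
```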
#### 3. HASS_TOKEN Setup

- Fixed `~/.homeai/hass_token` (removed the trailing comment from the file)
- Token is properly configured in the launchd plist: `com.homeai.openclaw.plist`
- HA API connectivity verified: `https://10.0.0.199:8123/api/` ✓
#### 4. Tool Calling Verification

**Direct Ollama API test:**

```bash
curl -s http://localhost:11434/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5:7b",
    "messages": [{"role": "user", "content": "Turn on the reading lamp"}],
    "tools": [{"type": "function", "function": {"name": "call_service", ...}}],
    "stream": false
  }'
```

**Result:** ✓ Returns a proper `tool_calls` array with structured function calls

**OpenClaw agent test:**

```bash
openclaw agent --message "Turn on the study shelves light" --agent main
```

**Result:** ✓ Agent successfully executed the command via the home-assistant skill
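When debugging, the tool name can be pulled out of a saved `/api/chat` response with a one-liner. A sketch against a canned response string (the field names follow Ollama's chat API; the payload itself is illustrative):

```shell
# Canned /api/chat response containing one structured tool call.
response='{"message":{"role":"assistant","tool_calls":[{"function":{"name":"call_service","arguments":{"domain":"light","service":"turn_on"}}}]}}'
# Extract the first tool call's function name.
echo "$response" | python3 -c 'import json, sys; r = json.load(sys.stdin); print(r["message"]["tool_calls"][0]["function"]["name"])'
```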
#### 5. Home Assistant Skill Testing

- Tested `turn_on` command: ✓ Light turned on
- Tested `turn_off` command: ✓ Light turned off
- State updates verified via HA API: ✓ Confirmed
## Current Status

### ✓ Completed Tasks (Phase 4)

- [x] Pull qwen2.5:7b model
- [x] Configure OpenClaw to use qwen2.5:7b as primary model
- [x] Wire HASS_TOKEN (`~/.homeai/hass_token`)
- [x] Test home-assistant skill with real HA entities
- [x] Verify tool calling works end-to-end

### Available Models

- `qwen2.5:7b` — Primary (tool calling enabled) ✓
- `llama3.3:70b` — Fallback (available but not primary)
### Next Steps (Phase 4 Remaining)

- [ ] Set up mem0 with Chroma backend, test semantic recall
- [ ] Write memory backup launchd job
- [ ] Build morning briefing n8n workflow
- [ ] Build notification router n8n workflow
- [ ] Verify full voice → agent → HA action flow
- [ ] Add OpenClaw to Uptime Kuma monitors
## Technical Notes

### Why qwen2.5:7b Works

1. **Native Template Support**: Ollama's registry includes a proper chat template for qwen2.5
2. **Tool Calling Format**: Model output matches Ollama's expected tool-call structure
3. **No Template Tuning Needed**: Unlike llama3.3:70b, no custom TEMPLATE block is required
4. **Performance**: A 7B model is fast enough for real-time HA control
### Token File Issue

The `~/.homeai/hass_token` file had trailing content from the `.env` comment. Fixed by:

1. Extracting the clean token from `.env` using `awk '{print $1}'`
2. Writing it with `printf` (the earlier `echo -n` was being interpreted literally)
3. Verifying the token length: 183 bytes (consistent with the expected JWT)
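The fix above can be sketched end-to-end with a throwaway file (the paths and token value here are illustrative; the real token lives in `~/.homeai/hass_token`):

```shell
# A token line as it appeared in .env, with a trailing comment.
echo 'eyJhbGciOi_example_token  # long-lived HA token' > /tmp/raw_token_line
# Keep only the first whitespace-delimited field and write it without a trailing newline.
printf '%s' "$(awk '{print $1}' /tmp/raw_token_line)" > /tmp/hass_token
# The byte count should now match the bare token exactly (no comment, no newline).
wc -c < /tmp/hass_token
```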
### HA API Connectivity

- HA runs on `https://10.0.0.199:8123` (HTTPS, not HTTP)
- curl requires the `-k` flag to skip SSL verification (self-signed cert)
- Token authentication working: `Authorization: Bearer <token>`
## Files Modified

- `~/.openclaw/openclaw.json` — Updated model configuration
- `~/.homeai/hass_token` — Fixed token file
- `TODO.md` — Marked completed tasks
## Verification Commands

```bash
# Check model availability
ollama list | grep qwen2.5

# Test tool calling directly
curl -s http://localhost:11434/api/chat \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5:7b", "messages": [...], "tools": [...], "stream": false}'

# Test OpenClaw agent
openclaw agent --message "Turn on the study shelves light" --agent main

# Verify HA connectivity
curl -sk -H "Authorization: Bearer $(cat ~/.homeai/hass_token)" \
  https://10.0.0.199:8123/api/

# Test home-assistant skill
HASS_TOKEN=$(cat ~/.homeai/hass_token) \
  ~/gitea/homeai/homeai-agent/skills/home-assistant/ha-ctl \
  on light.study_shelves
```
## Summary

The Phase 4 tool-calling issue is **RESOLVED**. OpenClaw can now:

- ✓ Receive structured tool calls from qwen2.5:7b
- ✓ Execute home-assistant skill commands
- ✓ Control HA entities (lights, switches, etc.)
- ✓ Provide natural language responses

The system is ready for the next phase: memory integration and workflow automation.