Initial commit.

Basic Docker deployment with local LLM integration and simple game state.
Aodhan Collins
2025-08-17 19:31:33 +01:00
commit 912b205699
30 changed files with 2476 additions and 0 deletions

4
.env.example Normal file

@@ -0,0 +1,4 @@
PORTAINER_URL=http://10.0.0.199:9000
PORTAINER_USERNAME=yourusername
PORTAINER_PASSWORD=yourpassword
PORTAINER_ENDPOINT=yourendpoint # Optional

66
.gitignore vendored Normal file

@@ -0,0 +1,66 @@
# Python
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
*.pyd
*.pyo
# Virtual environments
.venv/
venv/
env/
ENV/
# Packaging
build/
dist/
*.egg-info/
.eggs/
pip-wheel-metadata/
# Testing and coverage
.coverage
.coverage.*
htmlcov/
.tox/
.nox/
.pytest_cache/
.cache/
.mypy_cache/
.pytype/
.ruff_cache/
.pyre/
# Editor and OS files
.DS_Store
Thumbs.db
.idea/
.vscode/
*.swp
*.swo
# Environment variables
.env
.env.*
!.env.example
# Logs and temp
logs/
*.log
tmp/
temp/
# Application data (runtime)
data/sessions/
# Node (if ever used for frontends)
node_modules/
# pyenv
.python-version
# Jupyter
.ipynb_checkpoints/

35
Dockerfile Normal file

@@ -0,0 +1,35 @@
# Use Python 3.9 slim image as base
FROM python:3.9-slim
# Set working directory
WORKDIR /app
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
# Install system dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements file
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create a non-root user
RUN useradd --create-home --shell /bin/bash app \
&& chown -R app:app /app
USER app
# Expose port for web interface
EXPOSE 8000
# Command to run the FastAPI web app
CMD ["uvicorn", "web.app:app", "--host", "0.0.0.0", "--port", "8000"]

107
README.md Normal file

@@ -0,0 +1,107 @@
# Text-Based LLM Interaction System
A Python-based system for interacting with an LLM running in LM Studio, through either a terminal interface or a FastAPI web UI.
## Setup Guide
For detailed instructions on setting up pyenv and virtual environments, see [setup_guide.md](setup_guide.md).
## Project Structure
```
text_adventure/
├── main.py                  # Entry point (terminal client)
├── config.py                # Configuration settings
├── llm_client.py            # LLM communication
├── interface.py             # Text input/output
├── conversation.py          # Conversation history management
├── game_config.py           # Loader for config/game_config.yaml
├── game_state.py            # Deterministic game state engine
├── config/game_config.yaml  # Narrator prompt and scenario
├── state/                   # Initial world state (JSON files)
├── web/                     # FastAPI web UI (app.py, static assets)
├── test_interface.py        # Test script for interface
├── deploy_to_portainer.py   # Portainer deployment script
└── README.md                # This file
```
## Setup
### Option 1: Traditional Setup
1. Ensure you have Python 3.9+ installed (the Docker image and setup guide use Python 3.9)
2. Install required dependencies:
```bash
pip install -r requirements.txt
```
3. Make sure LM Studio is running on 10.0.0.200:1234
### Option 2: Docker Setup (Recommended)
1. Install Docker and Docker Compose
2. Build and run the application:
```bash
docker-compose up --build
```
3. Make sure LM Studio is running on 10.0.0.200:1234 and accessible from the Docker container
### Option 3: Portainer Deployment
1. Ensure you have a Portainer instance running at 10.0.0.199:9000
2. Configure your `.env` file with Portainer credentials (see `.env.example`)
3. Run the deployment script:
```bash
python deploy_to_portainer.py
```
4. Or set environment variables and run:
```bash
export PORTAINER_URL=http://10.0.0.199:9000
export PORTAINER_USERNAME=admin
export PORTAINER_PASSWORD=yourpassword
python deploy_to_portainer.py
```
## Usage
To run the terminal client:
```bash
python main.py
```
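To run the FastAPI web UI locally (this is the command the Docker image runs by default):
```bash
uvicorn web.app:app --host 0.0.0.0 --port 8000
```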
## Testing
To test the complete system:
```bash
python test_system.py
```
To test just the interface:
```bash
python test_interface.py
```
To test connection to LM Studio:
```bash
python test_llm_connection.py
```
To test message exchange with LLM:
```bash
python test_llm_exchange.py
```
## Configuration
The system is configured to connect to LM Studio at `http://10.0.0.200:1234`. You can modify the `config.py` file to change this setting.
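As a quick sanity check of that address, you can query the models endpoint directly (a minimal sketch using the same `/v1/models` route the client's connection test hits; adjust host and port to your environment):
```python
import requests

# Change the host/port to match config.py in your environment.
resp = requests.get("http://10.0.0.200:1234/v1/models", timeout=5)
resp.raise_for_status()
print(resp.json())
```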
## Components
### Main Application (main.py)
The entry point that ties all components together.
### Configuration (config.py)
Contains settings for connecting to LM Studio.
### LLM Client (llm_client.py)
Handles communication with the LM Studio API.
### Interface (interface.py)
Manages text input and output with the user.
### Conversation Manager (conversation.py)
Keeps track of the conversation history between user and LLM.
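Taken together, a minimal round-trip through these components (mirroring what `main.py` does, without the game engine) looks like this:
```python
from config import Config
from conversation import ConversationManager
from llm_client import LLMClient

config = Config()
client = LLMClient(config)
conv = ConversationManager()

conv.add_user_message("Hello!")                  # record the user turn
reply = client.get_response(conv.get_history())  # send full history to LM Studio
conv.add_assistant_message(reply)                # record the assistant turn
print(reply)
```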

89
architecture.md Normal file

@@ -0,0 +1,89 @@
# Text-Based LLM Interaction System Architecture
## Overview
This document outlines the architecture for a text-based system that allows users to interact with an LLM running on LM Studio.
## Components
### 1. User Interface Layer
- **Text Input Handler**: Captures user input from terminal
- **Text Output Display**: Shows LLM responses to user
- **Session Manager**: Manages conversation history
### 2. Communication Layer
- **LLM Client**: Handles HTTP communication with LM Studio
- **API Interface**: Formats requests/responses according to LM Studio's API
### 3. Core Logic Layer
- **Message Processor**: Processes user input and LLM responses
- **Conversation History**: Maintains context between messages (see the sketch below)
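Concretely, the history is a plain list of role/content dictionaries, the same shape the chat completions API consumes (the message contents here are illustrative):
```python
# A seeded history after one exchange; the full list is resent on each request
# so the model retains context between turns.
history = [
    {"role": "system", "content": "You are the NARRATOR for a text adventure."},
    {"role": "user", "content": "search the floor"},
    {"role": "assistant", "content": "You pry at the loose flagstone..."},
]
```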
## Data Flow
```mermaid
graph TD
A[User] --> B[Text Input Handler]
B --> C[Message Processor]
C --> D[LLM Client]
D --> E[LM Studio Server]
E --> D
D --> C
C --> F[Text Output Display]
F --> A
```
## Technical Details
### LM Studio API
- Endpoint: http://10.0.0.200:1234/v1/chat/completions
- Method: POST
- Content-Type: application/json
### Request Format
```json
{
"model": "model_name",
"messages": [
{"role": "user", "content": "user message"}
],
"temperature": 0.7,
"max_tokens": -1
}
```
### Response Format
```json
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "model_name",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "response message"
},
"finish_reason": "stop"
}
]
}
```
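Putting the two formats together, a minimal Python sketch of one exchange (the model name is a placeholder for whatever LM Studio has loaded):
```python
import requests

payload = {
    "model": "model_name",  # placeholder; list real names via GET /v1/models
    "messages": [{"role": "user", "content": "user message"}],
    "temperature": 0.7,
    "max_tokens": -1,
}
r = requests.post(
    "http://10.0.0.200:1234/v1/chat/completions",
    json=payload,
    timeout=30,
)
r.raise_for_status()
# The assistant's reply lives at choices[0].message.content
print(r.json()["choices"][0]["message"]["content"])
```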
## Project Structure
```
text_adventure/
├── main.py # Entry point
├── config.py # Configuration settings
├── llm_client.py # LLM communication
├── interface.py # Text input/output
└── conversation.py # Conversation history management
```
## Implementation Plan
1. Create basic text input/output interface
2. Implement LLM client for LM Studio communication
3. Add conversation history management
4. Integrate components
5. Test functionality

32
config.py Normal file

@@ -0,0 +1,32 @@
#!/usr/bin/env python3
"""
Configuration settings for the text-based LLM interaction system.
"""
class Config:
"""Configuration class for the LLM interaction system."""
def __init__(self):
"""Initialize configuration settings."""
# LM Studio server settings
self.LM_STUDIO_HOST = "10.0.0.200"
self.LM_STUDIO_PORT = 1234
self.API_BASE_URL = f"http://{self.LM_STUDIO_HOST}:{self.LM_STUDIO_PORT}/v1"
self.CHAT_COMPLETIONS_ENDPOINT = f"{self.API_BASE_URL}/chat/completions"
# Default model settings
self.DEFAULT_MODEL = "default_model" # Will be updated based on available models
self.TEMPERATURE = 0.7
self.MAX_TOKENS = -1 # -1 means no limit
# Request settings
self.REQUEST_TIMEOUT = 30 # seconds
def get_api_url(self):
"""Get the base API URL for LM Studio."""
return self.API_BASE_URL
def get_chat_completions_url(self):
"""Get the chat completions endpoint URL."""
return self.CHAT_COMPLETIONS_ENDPOINT

68
config/game_config.yaml Normal file

@@ -0,0 +1,68 @@
# Game LLM behavior configuration (Locked Room Test)
system_prompt: |
You are the NARRATOR for a deterministic text adventure. A separate ENGINE
controls all world state and outcomes. Your job is to vividly describe only
what the ENGINE decided.
If you receive ENGINE CONTEXT (JSON) containing ENGINE_OUTCOME and state:
- Base your narration strictly on those facts; do not invent or contradict them.
- Do NOT change inventory, unlock doors, reveal items, or otherwise alter state.
- Use 2-5 sentences, second person, present tense. Be concise and vivid.
- If an action is impossible, explain why using the provided facts.
- You may list relevant observations.
Never reveal hidden mechanics. Maintain internal consistency with prior events
and the scenario facts below.
scenario:
title: "Locked Room Test"
setting: |
A small stone chamber lit by a thin shaft of light. The north wall holds a
heavy wooden door with iron bands. The floor is made of worn flagstones;
one near the center looks slightly loose.
objectives:
- Find the hidden brass key.
- Take the key and add it to your inventory.
- Unlock the door using the brass key.
- Open the door and exit the chamber to complete the scenario.
constraints:
- The LLM is a narrator only; the ENGINE determines outcomes.
- The key starts hidden and must be revealed by searching before it can be taken.
- The door begins locked and can only be unlocked with the brass key.
- The scenario completes when the door is opened and the player exits.
style:
tone: "Tense, grounded, concise"
reading_level: "General audience"
facts:
- Door starts locked (door_locked=true) and closed (door_open=false).
- A brass key is hidden beneath a slightly loose flagstone (key_hidden=true).
- Inventory starts empty.
locations:
- "Stone chamber"
inventory_start: []
rules:
- Do not contradict established scenario facts or ENGINE outcomes.
- Keep inventory accurate only as reported by the ENGINE events/state.
- If the player attempts a dangerous or impossible action, clarify and explain why.
- When multiple actions are issued, focus on the first and summarize the rest as pending or ask for a choice.
- Do not roll dice; infer outcomes logically from context and constraints.
start_message: |
A narrow beam of daylight slices through the dust, picking out motes that drift
above worn flagstones. A heavy wooden door reinforced with iron faces north; its
lock glints, unmoved for years. Near the center of the floor, one flagstone sits
just a touch askew, as if something beneath has lifted it by a hair.
What do you do?

79
conversation.py Normal file

@@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""
Conversation history management for the LLM interaction system.
"""
class ConversationManager:
"""Manages conversation history between user and LLM."""
def __init__(self):
"""Initialize the conversation manager."""
self.history = []
def add_user_message(self, message):
"""Add a user message to the conversation history.
Args:
message (str): The user's message
"""
self.history.append({
"role": "user",
"content": message
})
def add_system_message(self, message):
"""Add a system message to the conversation history.
Args:
message (str): The system message
"""
self.history.append({
"role": "system",
"content": message
})
def add_assistant_message(self, message):
"""Add an assistant message to the conversation history.
Args:
message (str): The assistant's message
"""
self.history.append({
"role": "assistant",
"content": message
})
def get_history(self):
"""Get the complete conversation history.
Returns:
list: List of message dictionaries
"""
return self.history
def clear_history(self):
"""Clear the conversation history."""
self.history = []
def get_last_user_message(self):
"""Get the last user message from history.
Returns:
str: The last user message, or None if no user messages
"""
for message in reversed(self.history):
if message["role"] == "user":
return message["content"]
return None
def get_last_assistant_message(self):
"""Get the last assistant message from history.
Returns:
str: The last assistant message, or None if no assistant messages
"""
for message in reversed(self.history):
if message["role"] == "assistant":
return message["content"]
return None

271
deploy_to_portainer.py Normal file

@@ -0,0 +1,271 @@
#!/usr/bin/env python3
"""
Deployment script for deploying the text-adventure app to Portainer.
"""
import requests
import json
import sys
import os
from pathlib import Path
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
class PortainerDeployer:
"""Class to handle deployment to Portainer."""
def __init__(self, portainer_url, username, password):
"""Initialize the Portainer deployer.
Args:
portainer_url (str): URL of the Portainer instance (e.g., http://10.0.0.199:9000)
username (str): Portainer username
password (str): Portainer password
"""
self.portainer_url = portainer_url.rstrip('/')
self.username = username
self.password = password
self.auth_token = None
self.endpoint_id = None
def authenticate(self):
"""Authenticate with Portainer and get JWT token.
Returns:
bool: True if authentication successful, False otherwise
"""
try:
url = f"{self.portainer_url}/api/auth"
payload = {
"username": self.username,
"password": self.password
}
response = requests.post(url, json=payload)
response.raise_for_status()
data = response.json()
self.auth_token = data.get('jwt')
if self.auth_token:
print("✓ Successfully authenticated with Portainer")
return True
else:
print("✗ Failed to get authentication token")
return False
except requests.exceptions.RequestException as e:
print(f"✗ Error authenticating with Portainer: {e}")
return False
def get_endpoints(self):
"""Get list of Docker endpoints.
Returns:
list: List of endpoints or None if failed
"""
try:
url = f"{self.portainer_url}/api/endpoints"
headers = {"Authorization": f"Bearer {self.auth_token}"}
response = requests.get(url, headers=headers)
response.raise_for_status()
endpoints = response.json()
return endpoints
except requests.exceptions.RequestException as e:
print(f"✗ Error getting endpoints: {e}")
return None
def select_endpoint(self, endpoint_name=None):
"""Select a Docker endpoint to deploy to.
Args:
endpoint_name (str, optional): Name of specific endpoint to use
Returns:
str: Endpoint ID or None if failed
"""
endpoints = self.get_endpoints()
if not endpoints:
return None
if endpoint_name:
# Find specific endpoint by name
for endpoint in endpoints:
if endpoint.get('Name') == endpoint_name:
self.endpoint_id = endpoint.get('Id')
print(f"✓ Selected endpoint: {endpoint_name} (ID: {self.endpoint_id})")
return self.endpoint_id
print(f"✗ Endpoint '{endpoint_name}' not found")
return None
else:
# Use first endpoint if no specific one requested
if endpoints:
endpoint = endpoints[0]
self.endpoint_id = endpoint.get('Id')
print(f"✓ Selected endpoint: {endpoint.get('Name')} (ID: {self.endpoint_id})")
return self.endpoint_id
else:
print("✗ No endpoints found")
return None
def deploy_stack(self, stack_name, compose_file_path):
"""Deploy a Docker stack using a compose file.
Args:
stack_name (str): Name for the stack
compose_file_path (str): Path to docker-compose.yml file
Returns:
bool: True if deployment successful, False otherwise
"""
try:
if not self.endpoint_id:
print("✗ No endpoint selected")
return False
# Read compose file
if not os.path.exists(compose_file_path):
print(f"✗ Compose file not found: {compose_file_path}")
return False
with open(compose_file_path, 'r') as f:
compose_content = f.read()
# Deploy stack
url = f"{self.portainer_url}/api/stacks"
headers = {"Authorization": f"Bearer {self.auth_token}"}
params = {
"type": 2, # Swarm stack
"method": "string",
"endpointId": self.endpoint_id
}
payload = {
"name": stack_name,
"StackFileContent": compose_content
}
response = requests.post(url, headers=headers, params=params, json=payload)
if response.status_code == 200:
print(f"✓ Stack '{stack_name}' deployed successfully")
return True
else:
print(f"✗ Failed to deploy stack: {response.status_code} - {response.text}")
return False
except Exception as e:
print(f"✗ Error deploying stack: {e}")
return False
def deploy_container(self, container_name, image_name, network_mode="default"):
"""Deploy a single container.
Args:
container_name (str): Name for the container
image_name (str): Docker image to use
network_mode (str): Network mode for the container
Returns:
bool: True if deployment successful, False otherwise
"""
try:
if not self.endpoint_id:
print("✗ No endpoint selected")
return False
# Deploy container
url = f"{self.portainer_url}/api/endpoints/{self.endpoint_id}/docker/containers/create"
headers = {"Authorization": f"Bearer {self.auth_token}"}
# Container configuration
payload = {
"Image": image_name,
"name": container_name,
"HostConfig": {
"NetworkMode": network_mode,
"RestartPolicy": {
"Name": "unless-stopped"
}
},
"Tty": True,
"OpenStdin": True
}
# Create container
response = requests.post(url, headers=headers, json=payload)
if response.status_code == 201:
container_data = response.json()
container_id = container_data.get('Id')
print(f"✓ Container '{container_name}' created successfully")
# Start container
start_url = f"{self.portainer_url}/api/endpoints/{self.endpoint_id}/docker/containers/{container_id}/start"
start_response = requests.post(start_url, headers=headers)
if start_response.status_code == 204:
print(f"✓ Container '{container_name}' started successfully")
return True
else:
print(f"✗ Failed to start container: {start_response.status_code}")
return False
else:
print(f"✗ Failed to create container: {response.status_code} - {response.text}")
return False
except Exception as e:
print(f"✗ Error deploying container: {e}")
return False
def main():
"""Main function to deploy the application to Portainer."""
print("Deploying text-adventure app to Portainer...")
print("=" * 50)
# Configuration from .env file or environment variables
PORTAINER_URL = os.environ.get('PORTAINER_URL', 'http://10.0.0.199:9000')
PORTAINER_USERNAME = os.environ.get('PORTAINER_USERNAME', 'admin')
PORTAINER_PASSWORD = os.environ.get('PORTAINER_PASSWORD', 'password')
ENDPOINT_NAME = os.environ.get('PORTAINER_ENDPOINT', None)
print(f"Using Portainer URL: {PORTAINER_URL}")
print(f"Using Username: {PORTAINER_USERNAME}")
# Create deployer instance
deployer = PortainerDeployer(PORTAINER_URL, PORTAINER_USERNAME, PORTAINER_PASSWORD)
# Authenticate
if not deployer.authenticate():
sys.exit(1)
# Select endpoint
if not deployer.select_endpoint(ENDPOINT_NAME):
sys.exit(1)
# Deploy as container (simpler approach for this app)
print("\nDeploying as container...")
success = deployer.deploy_container(
container_name="text-adventure-app",
image_name="text-adventure:latest",
network_mode="host" # Use host network to access LM Studio
)
if success:
print("\n✓ Deployment completed successfully!")
print("You can now access your container through Portainer")
else:
print("\n✗ Deployment failed!")
sys.exit(1)
if __name__ == "__main__":
main()

20
docker-compose.yml Normal file

@@ -0,0 +1,20 @@
version: '3.8'
services:
text-adventure:
build: .
container_name: text-adventure-app
stdin_open: true
tty: true
volumes:
# Mount the current directory to /app for development
- .:/app
environment:
# Environment variables can be set here
- PYTHONUNBUFFERED=1
# To reach a service on the host machine you can use host networking
# (Linux-only). Uncomment the next line and remove the ports mapping
# below, since published ports are ignored in host mode:
# network_mode: "host"
ports:
- "8000:8000"

60
game_config.py Normal file

@@ -0,0 +1,60 @@
#!/usr/bin/env python3
"""
Game LLM behavior configuration loader.
Loads YAML configuration that defines:
- system_prompt: Injected as the first "system" message for chat completions
- scenario, rules: Arbitrary metadata available to the app if needed
- start_message: Initial assistant message shown to the user
"""
from __future__ import annotations
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, Optional, Union
import yaml
@dataclass
class GameConfig:
system_prompt: str = ""
scenario: Dict[str, Any] = field(default_factory=dict)
rules: list[str] = field(default_factory=list)
start_message: str = ""
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "GameConfig":
return cls(
system_prompt=str(data.get("system_prompt", "") or ""),
scenario=dict(data.get("scenario", {}) or {}),
rules=list(data.get("rules", []) or []),
start_message=str(data.get("start_message", "") or ""),
)
def load_game_config(path: Union[str, Path] = "config/game_config.yaml") -> GameConfig:
"""
Load the game configuration from YAML. Returns defaults if the file is missing or invalid.
Args:
path: Path to the YAML config file.
Returns:
GameConfig: Parsed configuration or defaults.
"""
p = Path(path)
if not p.exists():
# Return defaults if config is missing
return GameConfig()
try:
with p.open("r", encoding="utf-8") as f:
raw = yaml.safe_load(f) or {}
if not isinstance(raw, dict):
return GameConfig()
return GameConfig.from_dict(raw)
except Exception:
# Fail-closed to defaults to keep the app usable
return GameConfig()

285
game_state.py Normal file

@@ -0,0 +1,285 @@
#!/usr/bin/env python3
"""
Deterministic game state engine.
The LLM is responsible ONLY for narrative description.
All world logic and outcomes are computed here and persisted per session.
Scenario (Stage 1):
- Single room
- Locked door
- Hidden key (revealed by searching)
- Player must find key, add it to inventory, unlock and open the door to complete
State bootstrap:
- Loads canonical initial state from JSON in ./state:
* items.json (key)
* locks.json (door lock)
* exits.json (door/exit)
* room.json (room descriptors and references to ids)
* containers.json (e.g., the loose flagstone that can hide/reveal the key)
These files may cross-reference each other via ids; this class resolves them
and produces a consolidated in-memory state used by the engine.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Dict, Any, Optional
from pathlib import Path
import json
@dataclass
class GameState:
# World facts
room_description: str = (
"A dim stone chamber with worn flagstones and a heavy wooden door to the north. "
"Dust gathers in the corners, and one flagstone near the center looks slightly loose."
)
door_description: str = "A heavy wooden door reinforced with iron; its lock glints, unmoved for years."
door_locked: bool = True
door_open: bool = False
door_id: Optional[int] = None
lock_id: Optional[int] = None
lock_key_id: Optional[int] = None
# Key lifecycle
key_description: str = "A brass key with a tarnished surface."
key_hidden: bool = True # Not yet discoverable by name
key_revealed: bool = False # Revealed by searching
key_taken: bool = False
key_id: Optional[int] = None
# Container (e.g., loose flagstone) that may hide the key
container_id: Optional[int] = None
# Player
inventory: List[str] = field(default_factory=list)
# Exits metadata exposed to UI/LLM (direction -> {id,type})
exits: Dict[str, Any] = field(default_factory=dict)
# Progress
completed: bool = False
def to_public_dict(self) -> Dict[str, Any]:
"""Minimal canonical state that the LLM may see (facts only)."""
return {
"room": {
"description": self.room_description,
"exits": {k: dict(v) for k, v in self.exits.items()},
},
"door": {
"id": self.door_id,
"description": self.door_description,
"locked": self.door_locked,
"open": self.door_open,
"lock_id": self.lock_id,
"key_id": self.lock_key_id,
},
"key": {
"id": self.key_id,
"description": self.key_description,
"revealed": self.key_revealed,
"taken": self.key_taken,
},
"inventory": list(self.inventory),
"completed": self.completed,
}
# ---------- Bootstrap from JSON files ----------
@classmethod
def from_files(cls, state_dir: str | Path = "state") -> "GameState":
"""
Create a GameState initialized from JSON files in state_dir.
Files are optional; sensible defaults are used if missing.
Expected files:
- items.json: key item (id, type, description, hidden, revealed, taken)
- locks.json: door lock (id, type, description, locked, open, key_id)
- exits.json: door/exit (id, type, description, lock_id, open/locked overrides)
- room.json: room (id, type, description, exits {dir: exit_id}, items [ids], containers [ids])
- containers.json: container that can hide/reveal item (id, hidden, revealed, openable, open, description)
"""
base = Path(state_dir)
def _load_json(p: Path) -> Dict[str, Any]:
try:
if p.exists() and p.stat().st_size > 0:
with p.open("r", encoding="utf-8") as f:
data = json.load(f)
return data if isinstance(data, dict) else {}
except Exception:
pass
return {}
items = _load_json(base / "items.json")
locks = _load_json(base / "locks.json")
exits = _load_json(base / "exits.json")
room = _load_json(base / "room.json")
containers = _load_json(base / "containers.json")
# Resolve room description
default_room = (
"A dim stone chamber with worn flagstones and a heavy wooden door to the north. "
"Dust gathers in the corners, and one flagstone near the center looks slightly loose."
)
room_description = str(room.get("description", default_room)) if isinstance(room, dict) else default_room
# Resolve exits metadata from room references (direction -> exit_id)
exits_meta: Dict[str, Any] = {}
if isinstance(room, dict) and isinstance(room.get("exits"), dict):
for direction, ex_id in room["exits"].items():
exits_meta[direction] = {"id": ex_id, "type": (exits.get("type", "door") if isinstance(exits, dict) else "door")}
# Resolve door description/flags. Prefer locks.json, fallback to exits.json
door_description = (
str(locks.get("description"))
if isinstance(locks, dict) and "description" in locks
else (str(exits.get("description")) if isinstance(exits, dict) and "description" in exits else
"A heavy wooden door reinforced with iron; its lock glints, unmoved for years.")
)
door_locked = bool(locks.get("locked", True)) if isinstance(locks, dict) else bool(exits.get("locked", True)) if isinstance(exits, dict) else True
door_open = bool(locks.get("open", False)) if isinstance(locks, dict) else bool(exits.get("open", False)) if isinstance(exits, dict) else False
door_id = int(exits.get("id")) if isinstance(exits, dict) and "id" in exits else None
lock_id = int(locks.get("id")) if isinstance(locks, dict) and "id" in locks else (int(exits.get("lock_id")) if isinstance(exits, dict) and "lock_id" in exits else None)
lock_key_id = int(locks.get("key_id")) if isinstance(locks, dict) and "key_id" in locks else None
# Resolve key description/flags and ids
key_description = str(items.get("description", "A brass key with a tarnished surface.")) if isinstance(items, dict) else "A brass key with a tarnished surface."
key_hidden = bool(items.get("hidden", True)) if isinstance(items, dict) else True
key_revealed = bool(items.get("revealed", False)) if isinstance(items, dict) else False
key_taken = bool(items.get("taken", False)) if isinstance(items, dict) else False
key_id = int(items.get("id")) if isinstance(items, dict) and "id" in items else None
# Container influence (if the referenced container houses the key)
container_id = None
if isinstance(room, dict) and isinstance(room.get("containers"), list) and len(room["containers"]) > 0:
container_id = room["containers"][0]
if isinstance(containers, dict):
# If a single container is defined and either not referenced or the id matches, merge visibility flags
if container_id is None or containers.get("id") == container_id:
container_hidden = bool(containers.get("hidden", False))
container_revealed = bool(containers.get("revealed", False))
# Hidden if either marks hidden and not revealed yet
key_hidden = (key_hidden or container_hidden) and not (key_revealed or container_revealed)
key_revealed = key_revealed or container_revealed
return cls(
room_description=room_description,
door_description=door_description,
door_locked=door_locked,
door_open=door_open,
door_id=door_id,
lock_id=lock_id,
lock_key_id=lock_key_id,
key_description=key_description,
key_hidden=key_hidden,
key_revealed=key_revealed,
key_taken=key_taken,
key_id=key_id,
container_id=container_id,
exits=exits_meta,
)
# ------------- Intent handling -------------
def apply_action(self, user_text: str) -> Dict[str, Any]:
"""
Parse a user action, update state deterministically, and return an ENGINE_OUTCOME
suitable for feeding into the LLM narrator.
Returns:
dict with:
- events: List[str] describing factual outcomes (not narrative prose)
"""
text = (user_text or "").strip().lower()
events: List[str] = []
if not text:
return {"events": ["No action provided."]}
# Simple keyword intent parsing
def has(*words: str) -> bool:
return all(w in text for w in words)
# Inventory check
if has("inventory") or has("items") or has("bag"):
inv = ", ".join(self.inventory) if self.inventory else "empty"
events.append(f"Inventory checked; current items: {inv}.")
return {"events": events}
# Search actions reveal the key (once). This branch is checked before the
# generic look/examine branch so that "look around" / "look closely" reach it.
if has("search") or has("inspect") or has("check") or has("look around") or has("look closely"):
if not self.key_revealed and self.key_hidden:
self.key_revealed = True
self.key_hidden = False
events.append("A small brass key is revealed beneath a loose flagstone.")
else:
events.append("Search performed; nothing new is revealed.")
return {"events": events}
# Look/examine room
if has("look") or has("examine") or has("observe") or has("describe"):
events.append("Player surveys the room; no state change.")
return {"events": events}
# Take/pick up the key
if ("key" in text) and (has("take") or has("pick") or has("grab") or has("get")):
if not self.key_revealed and not self.key_taken:
events.append("Key not visible; cannot take what has not been revealed.")
elif self.key_taken:
events.append("Key already in inventory; no change.")
else:
self.key_taken = True
if "brass key" not in self.inventory:
self.inventory.append("brass key")
events.append("Player picks up the brass key and adds it to inventory.")
return {"events": events}
# Unlock door with key
if ("door" in text) and (has("unlock") or has("use key") or (has("use") and "key" in text)):
if self.door_open:
events.append("Door is already open; unlocking unnecessary.")
elif not self.key_taken:
events.append("Player lacks the brass key; door remains locked.")
elif not self.door_locked:
events.append("Door already unlocked; no change.")
elif (self.lock_key_id is not None and self.key_id is not None and self.lock_key_id != self.key_id):
events.append("The brass key does not fit this lock; the door remains locked.")
else:
self.door_locked = False
events.append("Door is unlocked with the brass key.")
return {"events": events}
# Open door / go through door
if ("door" in text) and (has("open") or has("go through") or has("enter")):
if self.door_open:
events.append("Door is already open; state unchanged.")
elif self.door_locked:
events.append("Door is locked; opening fails.")
else:
self.door_open = True
self.completed = True
events.append("Door is opened and the player exits the chamber. Scenario complete.")
return {"events": events}
# Use key on door (explicit phrasing). Note: the unlock branch above already
# matches "use" + "key" + "door", so this branch is a redundant safeguard.
if has("use", "key") and ("door" in text):
if not self.key_taken:
events.append("Player attempts to use a key they do not have.")
elif self.door_locked and (self.lock_key_id is not None and self.key_id is not None and self.lock_key_id != self.key_id):
events.append("The brass key does not fit this lock; the door remains locked.")
elif self.door_locked:
self.door_locked = False
events.append("Door is unlocked with the brass key.")
else:
events.append("Door already unlocked; no change.")
return {"events": events}
# Fallback: no matching intent
events.append("No recognized action; state unchanged.")
return {"events": events}

44
interface.py Normal file

@@ -0,0 +1,44 @@
#!/usr/bin/env python3
"""
Text interface for the LLM interaction system.
Handles input/output operations with the user.
"""
class TextInterface:
"""Handles text-based input and output operations."""
def __init__(self):
"""Initialize the text interface."""
pass
def get_user_input(self):
"""Get input from the user.
Returns:
str: The user's input text
"""
try:
user_input = input("> ")
return user_input
except EOFError:
# Handle Ctrl+D (EOF) gracefully
return "quit"
def display_response(self, response):
"""Display the LLM response to the user.
Args:
response (str): The response text to display
"""
print(response)
print() # Add a blank line for readability
def display_system_message(self, message):
"""Display a system message to the user.
Args:
message (str): The system message to display
"""
print(f"[System] {message}")
print()

87
llm_client.py Normal file

@@ -0,0 +1,87 @@
#!/usr/bin/env python3
"""
LLM client for communicating with LM Studio.
"""
import requests
import json
class LLMClient:
"""Client for communicating with LM Studio."""
def __init__(self, config):
"""Initialize the LLM client.
Args:
config (Config): Configuration object
"""
self.config = config
self.session = requests.Session()
# requests.Session has no global "timeout" attribute; store the value and
# pass it explicitly with each request instead.
self.timeout = self.config.REQUEST_TIMEOUT
def get_response(self, messages):
"""Get a response from the LLM.
Args:
messages (list): List of message dictionaries
Returns:
str: The LLM response text
"""
try:
# Prepare the request payload
payload = {
"model": self.config.DEFAULT_MODEL,
"messages": messages,
"temperature": self.config.TEMPERATURE,
"max_tokens": self.config.MAX_TOKENS
}
# Send request to LM Studio
response = self.session.post(
self.config.get_chat_completions_url(),
headers={"Content-Type": "application/json"},
data=json.dumps(payload),
timeout=self.timeout,
)
# Raise an exception for bad status codes
response.raise_for_status()
# Parse the response
response_data = response.json()
# Extract the assistant's message
assistant_message = response_data["choices"][0]["message"]["content"]
return assistant_message
except requests.exceptions.RequestException as e:
raise Exception(f"Error communicating with LM Studio: {e}")
except (KeyError, IndexError) as e:
raise Exception(f"Error parsing LM Studio response: {e}")
except json.JSONDecodeError as e:
raise Exception(f"Error decoding JSON response: {e}")
def test_connection(self):
"""Test the connection to LM Studio.
Returns:
bool: True if connection successful, False otherwise
"""
try:
# Try to get available models (simple connection test)
response = self.session.get(f"{self.config.get_api_url()}/models", timeout=self.timeout)
response.raise_for_status()
return True
except requests.exceptions.RequestException:
return False
def update_model(self, model_name):
"""Update the model used for completions.
Args:
model_name (str): Name of the model to use
"""
self.config.DEFAULT_MODEL = model_name

95
main.py Normal file

@@ -0,0 +1,95 @@
#!/usr/bin/env python3
"""
Main entry point for the text-based LLM interaction system.
"""
import sys
import json
from interface import TextInterface
from llm_client import LLMClient
from conversation import ConversationManager
from config import Config
from game_config import load_game_config
from game_state import GameState
def main():
"""Main function to run the text-based LLM interaction system."""
print("Text-Based LLM Interaction System")
print("Type 'quit' to exit the program")
print("-" * 40)
# Initialize components
config = Config()
interface = TextInterface()
conversation_manager = ConversationManager()
llm_client = LLMClient(config)
gs = GameState.from_files("state")
# Load game behavior config and seed conversation
gamecfg = load_game_config()
if gamecfg.system_prompt:
conversation_manager.add_system_message(gamecfg.system_prompt)
if gamecfg.start_message:
interface.display_response(gamecfg.start_message)
# Main interaction loop
while True:
try:
# Get user input
user_input = interface.get_user_input()
# Check for exit command
if user_input.lower() in ['quit', 'exit', 'q']:
print("Goodbye!")
break
# Add user message to conversation
conversation_manager.add_user_message(user_input)
# Apply deterministic game logic first
engine_outcome = gs.apply_action(user_input) # {"events": [...]}
# Provide a transient system message with canonical facts for narration
narrator_directive = {
"ENGINE_OUTCOME": {
"events": engine_outcome.get("events", []),
"state": gs.to_public_dict(),
},
"NARRATION_RULES": [
"Narrate strictly according to ENGINE_OUTCOME. Do not invent state.",
"Do not add items, unlock objects, or change inventory; the engine already did that.",
"Use 2-5 sentences, present tense, second person. Be concise and vivid.",
"If the action was impossible, explain why using the facts provided.",
],
}
transient_system = {
"role": "system",
"content": "ENGINE CONTEXT (JSON): " + json.dumps(narrator_directive),
}
# Get response from LLM with engine context
messages = list(conversation_manager.get_history()) + [transient_system]
response = llm_client.get_response(messages)
# Display response
interface.display_response(response)
# Add assistant message to conversation
conversation_manager.add_assistant_message(response)
# End scenario if completed
if gs.completed:
interface.display_system_message("Scenario complete. You unlocked the door and escaped.")
break
except KeyboardInterrupt:
print("\nGoodbye!")
break
except Exception as e:
print(f"An error occurred: {e}")
break
if __name__ == "__main__":
main()

6
requirements.txt Normal file

@@ -0,0 +1,6 @@
requests>=2.25.0
python-dotenv>=0.19.0
fastapi>=0.111.0
uvicorn[standard]>=0.30.0
aiofiles>=23.2.1
PyYAML>=6.0.1

283
setup_guide.md Normal file

@@ -0,0 +1,283 @@
# Setup Guide for Text-Based LLM Interaction System (Linux)
This guide describes a clean Linux-first setup (tested on Ubuntu/Debian). It removes macOS/Windows specifics and focuses on a reliable developer workflow with pyenv, virtual environments, Docker, and Portainer.
## Prerequisites
- Git
- curl or wget
- build tools (gcc, make, etc.)
- Docker Engine and Docker Compose plugin
Recommended (optional but helpful):
- net-tools or iproute2 for networking diagnostics
- ca-certificates
On Ubuntu/Debian you can install Docker and the Compose plugin via apt:
```bash
sudo apt update
sudo apt install -y docker.io docker-compose-v2  # the plugin is named docker-compose-plugin in Docker's own apt repo
# Allow running docker without sudo (re-login required)
sudo usermod -aG docker "$USER"
```
## Installing pyenv (Ubuntu/Debian)
Install dependencies required to build CPython:
```bash
sudo apt update
sudo apt install -y make build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \
libffi-dev liblzma-dev
```
Install pyenv:
```bash
curl https://pyenv.run | bash
```
Add pyenv to your shell (bash; recommended on Linux):
```bash
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.profile
echo 'eval "$(pyenv init --path)"' >> ~/.profile
# Interactive shells
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
# Reload your shell
source ~/.profile
source ~/.bashrc
```
Using zsh on Linux? Add to ~/.zprofile and ~/.zshrc instead:
```bash
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.zprofile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zprofile
echo 'eval "$(pyenv init --path)"' >> ~/.zprofile
echo 'eval "$(pyenv init -)"' >> ~/.zshrc
source ~/.zprofile
source ~/.zshrc
```
## Installing Python with pyenv
```bash
# List available Python versions
pyenv install --list
# Install a specific Python version (project default)
pyenv install 3.9.16
# Set it as your global default (optional)
pyenv global 3.9.16
```
Note: If you prefer a newer interpreter and your tooling supports it, Python 3.11.x also works well.
## Creating a Virtual Environment
Choose ONE of the methods below.
### Method 1: Using pyenv-virtualenv (recommended)
```bash
# Install pyenv-virtualenv plugin
git clone https://github.com/pyenv/pyenv-virtualenv.git "$(pyenv root)"/plugins/pyenv-virtualenv
# Initialize in your interactive shell
echo 'eval "$(pyenv virtualenv-init -)"' >> ~/.bashrc
source ~/.bashrc
# Create and activate a virtual environment
pyenv virtualenv 3.9.16 text-adventure
pyenv local text-adventure # auto-activates in this directory
# or activate manually
pyenv activate text-adventure
```
### Method 2: Using Python's built-in venv
```bash
# Use the selected Python version in this directory
pyenv local 3.9.16
# Create and activate venv
python -m venv .venv
source .venv/bin/activate
```
## Installing Project Dependencies
With your virtual environment active:
```bash
python -m pip install --upgrade pip setuptools wheel
pip install -r requirements.txt
```
## Verifying Setup
```bash
# Check Python version
python --version
# Check installed packages
pip list
# Run basic tests
python test_system.py
# Optionally:
python test_llm_connection.py
python test_llm_exchange.py
```
## Docker (Linux)
Docker provides a clean, reproducible environment.
### Build and Run
```bash
# Build the image
docker build -t text-adventure .
# Run using host networking (Linux-only)
docker run -it --rm --network host -v "$PWD":/app text-adventure
```
### Docker Compose (v2 CLI)
Update [docker-compose.yml](docker-compose.yml) as needed. On Linux you may use host networking:
```yaml
services:
text-adventure:
build: .
network_mode: "host" # Linux-only
```
Then:
```bash
docker compose up --build
# Run detached
docker compose up -d --build
# Stop and remove
docker compose down
```
### Connecting to LM Studio from a container
LM Studio is assumed to run on the Linux host (default in [config.py](config.py)). Prefer using the special hostname host.docker.internal inside containers.
Option A (recommended when not using host network): add an extra_hosts mapping using Docker's host-gateway:
```yaml
services:
text-adventure:
build: .
extra_hosts:
- "host.docker.internal:host-gateway"
```
Then in [config.py](config.py), set:
```python
self.LM_STUDIO_HOST = "host.docker.internal"
self.LM_STUDIO_PORT = 1234
```
Option B (Linux-only): use host networking (the container shares the host's network namespace). In this case keep LM_STUDIO_HOST as 127.0.0.1 or the host's IP address.
Fallback: If neither applies, you can use your Docker bridge gateway IP (often 172.17.0.1), but this can vary by system.
Connectivity quick check from inside the container:
```bash
docker compose exec text-adventure python - <<'PY'
import requests
import sys
url = "http://host.docker.internal:1234/v1/models"
try:
r = requests.get(url, timeout=5)
print(r.status_code, r.text[:200])
except Exception as e:
print("ERROR:", e)
sys.exit(1)
PY
```
## Portainer Deployment (Linux)
Portainer simplifies Docker management via the web UI.
### Prerequisites
- Portainer instance reachable (e.g., http://10.0.0.199:9000)
- Valid Portainer credentials
- Docker image available (local or registry)
Create a local env file and never commit secrets:
```bash
cp .env.example .env
# edit .env and set PORTAINER_URL, PORTAINER_USERNAME, PORTAINER_PASSWORD
```
### Deploy
The script [deploy_to_portainer.py](deploy_to_portainer.py) deploys a single container using host networking, which works on Linux:
```bash
python deploy_to_portainer.py
```
You can also override environment variables:
```bash
export PORTAINER_URL=http://10.0.0.199:9000
export PORTAINER_USERNAME=admin
export PORTAINER_PASSWORD=yourpassword
python deploy_to_portainer.py
```
After deployment, verify connectivity to LM Studio using the container exec method shown above.
## Troubleshooting (Linux)
- pyenv: command not found
- Ensure the init lines exist in ~/.profile and ~/.bashrc, then restart your terminal:
- eval "$(pyenv init --path)" in ~/.profile
- eval "$(pyenv init -)" in ~/.bashrc
- Python build errors (e.g., "zlib not available")
- Confirm all build deps are installed (see Installing pyenv).
- Virtual environment activation issues
- For venv: source .venv/bin/activate
- For pyenv-virtualenv: pyenv activate text-adventure
- Docker permission errors (e.g., "permission denied" / cannot connect to Docker daemon)
- Add your user to the docker group and re-login:
sudo usermod -aG docker "$USER"
- docker compose not found
- Install the Compose plugin: sudo apt install docker-compose-plugin
- Container cannot reach LM Studio
- If using extra_hosts: verify host-gateway mapping and LM_STUDIO_HOST=host.docker.internal in [config.py](config.py).
- If using host network: ensure LM Studio is listening on 0.0.0.0 or localhost as appropriate.
- Try the connectivity check snippet above; if it fails, verify firewall rules and that LM Studio is running.
## Notes
- The LM Studio address 10.0.0.200:1234 in [config.py](config.py) is a placeholder. Adjust it to your environment as described above.
- Do not commit your [.env](.env) file. Use [.env.example](.env.example) as a template.
- Use docker compose (v2) commands instead of the deprecated docker-compose (v1) binary.

12
state/containers.json Normal file

@@ -0,0 +1,12 @@
{
"id": 1,
"type": "recessed",
"hidden": true,
"revealed": false,
"openable": true,
"open": false,
"lock_id": 0,
"weight": 1,
"description": "A flagstone that looks slightly loose.",
"contents": [1]
}

10
state/exits.json Normal file

@@ -0,0 +1,10 @@
{
"id": 1,
"type": "door",
"description": "A heavy wooden door reinforced with iron faces north; its lock glints, unmoved for years.",
"lock_id": 1,
"locked": true,
"openable": true,
"open": false,
"key": "key"
}

8
state/items.json Normal file

@@ -0,0 +1,8 @@
{
"id": 1,
"type": "key",
"description": "A brass key with a tarnished surface.",
"hidden": true,
"revealed": false,
"taken": false
}

9
state/locks.json Normal file

@@ -0,0 +1,9 @@
{
"id": 1,
"type": "door_lock",
"description": "A simple wooden door lock.",
"locked": true,
"openable": true,
"open": false,
"key_id": 1
}

10
state/room.json Normal file

@@ -0,0 +1,10 @@
{
"id": "1",
"type": "room",
"description": "A dim stone chamber with worn flagstones and a heavy wooden door to the north. Dust gathers in the corners, and one flagstone near the center looks slightly loose.",
"exits": {
"north": 1
},
"items": [],
"containers": [1]
}

38
test_interface.py Normal file

@@ -0,0 +1,38 @@
#!/usr/bin/env python3
"""
Test script for the text interface.
"""
from interface import TextInterface
def test_interface():
"""Test the text interface functionality."""
print("Testing Text Interface")
print("Type 'quit' to exit the test")
print("-" * 30)
# Create interface instance
interface = TextInterface()
# Test loop
while True:
try:
# Get user input
user_input = interface.get_user_input()
# Check for exit command
if user_input.lower() in ['quit', 'exit', 'q']:
print("Exiting test...")
break
# Display response
interface.display_response(f"You entered: {user_input}")
except KeyboardInterrupt:
print("\nTest interrupted!")
break
if __name__ == "__main__":
test_interface()

38
test_llm_connection.py Normal file

@@ -0,0 +1,38 @@
#!/usr/bin/env python3
"""
Test script for connecting to LM Studio.
"""
from config import Config
from llm_client import LLMClient
def test_connection():
"""Test the connection to LM Studio."""
print("Testing connection to LM Studio...")
print("-" * 30)
# Create config and client
config = Config()
llm_client = LLMClient(config)
# Display connection details
print(f"Host: {config.LM_STUDIO_HOST}")
print(f"Port: {config.LM_STUDIO_PORT}")
print(f"API URL: {config.get_api_url()}")
print(f"Chat Completions URL: {config.get_chat_completions_url()}")
print()
# Test connection
try:
success = llm_client.test_connection()
if success:
print("✓ Connection to LM Studio successful!")
else:
print("✗ Failed to connect to LM Studio")
except Exception as e:
print(f"✗ Error testing connection: {e}")
if __name__ == "__main__":
test_connection()

53
test_llm_exchange.py Normal file

@@ -0,0 +1,53 @@
#!/usr/bin/env python3
"""
Test script for basic message exchange with LLM.
"""
from config import Config
from llm_client import LLMClient
from conversation import ConversationManager
def test_message_exchange():
"""Test basic message exchange with LLM."""
print("Testing message exchange with LLM...")
print("-" * 40)
# Create components
config = Config()
llm_client = LLMClient(config)
conversation_manager = ConversationManager()
# Test connection first
try:
success = llm_client.test_connection()
if not success:
print("✗ Failed to connect to LM Studio")
return
else:
print("✓ Connected to LM Studio")
except Exception as e:
print(f"✗ Error testing connection: {e}")
return
# Add a test message to conversation
test_message = "Hello, are you there?"
conversation_manager.add_user_message(test_message)
print(f"Sending message: {test_message}")
# Get response from LLM
try:
response = llm_client.get_response(conversation_manager.get_history())
print(f"Received response: {response}")
# Add response to conversation
conversation_manager.add_assistant_message(response)
print("\n✓ Message exchange successful!")
except Exception as e:
print(f"✗ Error in message exchange: {e}")
if __name__ == "__main__":
test_message_exchange()

76
test_system.py Normal file

@@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""
Test script for the complete system workflow.
"""
import sys
import os
# Add the current directory to Python path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from config import Config
from llm_client import LLMClient
from interface import TextInterface
from conversation import ConversationManager
def test_system():
"""Test the complete system workflow."""
print("Testing complete system workflow...")
print("=" * 40)
try:
# Create all components
print("1. Initializing components...")
config = Config()
interface = TextInterface()
conversation_manager = ConversationManager()
llm_client = LLMClient(config)
print(" ✓ Components initialized")
# Test connection
print("\n2. Testing connection to LM Studio...")
success = llm_client.test_connection()
if success:
print(" ✓ Connected to LM Studio")
else:
print(" ✗ Failed to connect to LM Studio")
return
# Test message exchange
print("\n3. Testing message exchange...")
test_message = "Hello! Can you tell me what you are?"
conversation_manager.add_user_message(test_message)
print(f" Sending: {test_message}")
response = llm_client.get_response(conversation_manager.get_history())
conversation_manager.add_assistant_message(response)
print(f" Received: {response}")
print(" ✓ Message exchange successful")
# Test conversation history
print("\n4. Testing conversation history...")
history = conversation_manager.get_history()
if len(history) == 2:
print(" ✓ Conversation history maintained")
else:
print(" ✗ Conversation history issue")
return
# Test interface (simulated)
print("\n5. Testing interface components...")
interface.display_system_message("Interface test successful")
print(" ✓ Interface components working")
print("\n" + "=" * 40)
print("✓ All tests passed! System is ready for use.")
except Exception as e:
print(f"\n✗ Error during system test: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
test_system()

255
web/app.py Normal file

@@ -0,0 +1,255 @@
#!/usr/bin/env python3
"""
FastAPI web frontend for the text-based LLM interaction system.
Serves a simple web UI and exposes an API to interact with the LLM.
"""
from __future__ import annotations
import json
from pathlib import Path
from typing import Optional, Tuple
from uuid import uuid4
from threading import Lock
from fastapi import FastAPI, Cookie, HTTPException
from fastapi.responses import FileResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from pydantic import BaseModel
from config import Config
from llm_client import LLMClient
from conversation import ConversationManager
from game_config import load_game_config
from game_state import GameState
# Application setup
app = FastAPI(title="Text Adventure - Web UI")
BASE_DIR = Path(__file__).resolve().parent
STATIC_DIR = BASE_DIR / "static"
STATIC_DIR.mkdir(parents=True, exist_ok=True)
SESSIONS_DIR = (BASE_DIR.parent / "data" / "sessions")
SESSIONS_DIR.mkdir(parents=True, exist_ok=True)
# Mount /static for assets (requires 'aiofiles' in env for async file serving)
app.mount("/static", StaticFiles(directory=str(STATIC_DIR)), name="static")
# Models
class ChatRequest(BaseModel):
message: str
# Globals
CONFIG = Config()
CLIENT = LLMClient(CONFIG)
GAMECFG = load_game_config()
_SESSIONS: dict[str, ConversationManager] = {}
_GAME_STATES: dict[str, GameState] = {}
_SESSIONS_LOCK = Lock()
SESSION_COOKIE_NAME = "session_id"
def _get_or_create_session(session_id: Optional[str]) -> Tuple[str, ConversationManager, bool]:
"""
Return an existing conversation manager by session_id,
or create a new session if not present.
Returns (session_id, manager, created_flag).
"""
created = False
with _SESSIONS_LOCK:
if not session_id or session_id not in _SESSIONS:
session_id = uuid4().hex
cm = ConversationManager()
# Seed with system prompt if configured
if getattr(GAMECFG, "system_prompt", None):
cm.add_system_message(GAMECFG.system_prompt)
_SESSIONS[session_id] = cm
created = True
manager = _SESSIONS[session_id]
return session_id, manager, created
def _ensure_state(session_id: str) -> GameState:
"""
Ensure a GameState exists for a given session and return it.
Loads from disk if present, otherwise creates a new state and persists it.
"""
with _SESSIONS_LOCK:
if session_id not in _GAME_STATES:
path = SESSIONS_DIR / f"{session_id}.json"
gs = None
if path.exists():
try:
with path.open("r", encoding="utf-8") as f:
data = json.load(f) or {}
gs = GameState()
for field in [
"room_description",
"door_description",
"door_locked",
"door_open",
"door_id",
"lock_id",
"lock_key_id",
"key_description",
"key_hidden",
"key_revealed",
"key_taken",
"key_id",
"container_id",
"inventory",
"exits",
"completed",
]:
if field in data:
setattr(gs, field, data[field])
except Exception:
gs = None
if gs is None:
gs = GameState.from_files("state")
try:
with path.open("w", encoding="utf-8") as f:
json.dump(gs.__dict__, f)
except Exception:
pass
_GAME_STATES[session_id] = gs
return _GAME_STATES[session_id]
# Routes
@app.get("/", response_class=FileResponse)
def index() -> FileResponse:
"""
Serve the main web UI.
"""
index_path = STATIC_DIR / "index.html"
if not index_path.exists():
# Provide a minimal fallback page if index.html is missing
# This should not happen in normal usage.
fallback = """<!doctype html>
<html>
<head><meta charset="utf-8"><title>Text Adventure</title></head>
<body><h1>Text Adventure Web UI</h1><p>index.html is missing.</p></body>
</html>
"""
tmp_path = STATIC_DIR / "_fallback_index.html"
tmp_path.write_text(fallback, encoding="utf-8")
return FileResponse(str(tmp_path))
return FileResponse(str(index_path))
@app.get("/api/health")
def health() -> dict:
"""
Health check endpoint.
"""
return {"status": "ok"}
@app.get("/api/session")
def session_info(
response: Response,
session_id: Optional[str] = Cookie(default=None, alias=SESSION_COOKIE_NAME),
) -> JSONResponse:
"""
Ensure a session exists, set cookie if needed, and return scenario metadata and start message.
Also returns the current public game state snapshot.
"""
sid, conv, created = _get_or_create_session(session_id)
if created:
response.set_cookie(
key=SESSION_COOKIE_NAME,
value=sid,
httponly=True,
samesite="lax",
max_age=7 * 24 * 3600,
path="/",
)
gs = _ensure_state(sid)
payload = {
"session_id": sid,
"created": created,
"start_message": getattr(GAMECFG, "start_message", "") or "",
"scenario": getattr(GAMECFG, "scenario", {}) or {},
"rules": getattr(GAMECFG, "rules", []) or [],
"state": gs.to_public_dict(),
}
return JSONResponse(payload)
@app.post("/api/chat")
def chat(
req: ChatRequest,
response: Response,
session_id: Optional[str] = Cookie(default=None, alias=SESSION_COOKIE_NAME),
) -> JSONResponse:
"""
Accept a user message, apply deterministic game logic to update state,
then ask the LLM to narrate the outcome. Maintains a server-side session.
"""
message = req.message.strip()
if not message:
raise HTTPException(status_code=400, detail="Message cannot be empty")
sid, conv, created = _get_or_create_session(session_id)
# Set session cookie if new
if created:
response.set_cookie(
key=SESSION_COOKIE_NAME,
value=sid,
httponly=True,
samesite="lax",
max_age=7 * 24 * 3600, # 7 days
path="/",
)
# Determine outcome via the game engine, then request narration
try:
gs = _ensure_state(sid)
conv.add_user_message(message)
engine_outcome = gs.apply_action(message) # {"events": [...]}
# Build a transient system message with canonical facts for the narrator
narrator_directive = {
"ENGINE_OUTCOME": {
"events": engine_outcome.get("events", []),
"state": gs.to_public_dict(),
},
"NARRATION_RULES": [
"Narrate strictly according to ENGINE_OUTCOME. Do not invent state.",
"Do not add items, unlock objects, or change inventory; the engine already did that.",
"Use 2-5 sentences, present tense, second person. Be concise and vivid.",
"If the action was impossible, explain why using the facts provided.",
],
}
transient_system = {
"role": "system",
"content": "ENGINE CONTEXT (JSON): " + json.dumps(narrator_directive),
}
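        # The directive rides along only for this single LLM call; it is never
        # stored in the conversation history, so each turn gets fresh facts.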
messages = list(conv.get_history()) + [transient_system]
reply = CLIENT.get_response(messages)
conv.add_assistant_message(reply)
# Persist updated state
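        # Best-effort write: a failure here loses only the on-disk snapshot,
        # not the in-memory session.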
try:
with (SESSIONS_DIR / f"{sid}.json").open("w", encoding="utf-8") as f:
json.dump(gs.__dict__, f)
except Exception:
pass
return JSONResponse({
"reply": reply,
"completed": gs.completed,
"events": engine_outcome.get("events", []),
"state": gs.to_public_dict(),
})
    except Exception as e:
        # Anything that fails in the engine or the LLM call lands here and is
        # surfaced as a gateway error. The exception text is included to ease
        # debugging; consider logging it instead of returning it in production.
        raise HTTPException(status_code=502, detail=f"LLM backend error: {e}") from e

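The routes above are easiest to sanity-check end to end with a small client script. The sketch below is a minimal smoke test, not part of the application itself: it assumes the server is reachable on localhost:8000, that `requests` is installed, and that LM Studio is up for the `/api/chat` call. `requests.Session` stores and replays the session cookie automatically, which is what keeps both requests in the same conversation.

```python
# Minimal smoke test for the web API (assumes a running server on :8000).
import requests

BASE = "http://localhost:8000"

with requests.Session() as http:
    # Cheap liveness probe; the frontend polls this same endpoint.
    assert http.get(f"{BASE}/api/health").json()["status"] == "ok"

    # First contact creates the server-side session and sets the cookie,
    # which requests.Session stores and replays on the next call.
    info = http.get(f"{BASE}/api/session").json()
    print("start:", info.get("start_message", ""))

    # Same cookie, same conversation: the engine updates state, the LLM narrates.
    res = http.post(f"{BASE}/api/chat", json={"message": "look around"})
    res.raise_for_status()
    data = res.json()
    print("narration:", data["reply"])
    print("events:", data["events"])
```

If LM Studio is unreachable, the final call returns the 502 raised in `chat()`, which doubles as a quick check of the error path.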
115
web/static/app.js Normal file
View File

@@ -0,0 +1,115 @@
(() => {
const el = {
statusDot: document.getElementById("statusDot"),
messages: document.getElementById("messages"),
form: document.getElementById("chatForm"),
input: document.getElementById("messageInput"),
sendBtn: document.getElementById("sendBtn"),
tplUser: document.getElementById("msg-user"),
tplAssistant: document.getElementById("msg-assistant"),
};
const state = {
sending: false,
};
function setStatus(ok) {
el.statusDot.classList.toggle("ok", !!ok);
el.statusDot.classList.toggle("err", !ok);
}
async function healthCheck() {
try {
const res = await fetch("/api/health", { cache: "no-store" });
setStatus(res.ok);
} catch {
setStatus(false);
}
}
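  // Render one chat bubble by cloning the matching <template> from index.html.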
function appendMessage(role, text) {
const tpl = role === "user" ? el.tplUser : el.tplAssistant;
const node = tpl.content.cloneNode(true);
const bubble = node.querySelector(".bubble");
bubble.textContent = text;
el.messages.appendChild(node);
el.messages.scrollTop = el.messages.scrollHeight;
}
function setSending(sending) {
state.sending = sending;
el.input.disabled = sending;
el.sendBtn.disabled = sending;
el.sendBtn.textContent = sending ? "Sending..." : "Send";
}
async function sendMessage(text) {
setSending(true);
try {
const res = await fetch("/api/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
credentials: "same-origin",
body: JSON.stringify({ message: text }),
});
if (!res.ok) {
let detail = "";
try {
const data = await res.json();
detail = data.detail || res.statusText;
} catch {
detail = res.statusText;
}
throw new Error(detail || `HTTP ${res.status}`);
}
const data = await res.json();
appendMessage("assistant", data.reply ?? "");
setStatus(true);
} catch (err) {
appendMessage("assistant", `Error: ${err.message || err}`);
setStatus(false);
} finally {
setSending(false);
}
}
el.form.addEventListener("submit", async (e) => {
e.preventDefault();
const text = (el.input.value || "").trim();
if (!text || state.sending) return;
appendMessage("user", text);
el.input.value = "";
await sendMessage(text);
});
// Submit on Enter, allow Shift+Enter for newline (if we switch to textarea later)
el.input.addEventListener("keydown", (e) => {
if (e.key === "Enter" && !e.shiftKey) {
e.preventDefault();
el.form.requestSubmit();
}
});
// Initial status check
healthCheck();
// Initialize session and show start message if configured
(async function initSession() {
try {
const res = await fetch("/api/session", { credentials: "same-origin" });
if (res.ok) {
const data = await res.json();
if (data.start_message) {
appendMessage("assistant", data.start_message);
}
}
} catch (_) {
// no-op
}
})();
// Periodic health check
setInterval(healthCheck, 15000);
})();

46
web/static/index.html Normal file
View File

@@ -0,0 +1,46 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>Text Adventure - Web UI</title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="stylesheet" href="/static/styles.css" />
</head>
<body>
<header class="app-header">
<h1>Text Adventure</h1>
<div class="status" id="statusDot" title="Backend status"></div>
</header>
<main class="container">
<section id="chat" class="chat">
<div id="messages" class="messages" aria-live="polite"></div>
</section>
<form id="chatForm" class="input-row" autocomplete="off">
<input
id="messageInput"
type="text"
placeholder="Type your message..."
aria-label="Message"
required
/>
<button id="sendBtn" type="submit">Send</button>
</form>
</main>
<template id="msg-user">
<div class="msg msg-user">
<div class="bubble"></div>
</div>
</template>
<template id="msg-assistant">
<div class="msg msg-assistant">
<div class="bubble"></div>
</div>
</template>
<script src="/static/app.js" defer></script>
</body>
</html>

175
web/static/styles.css Normal file
View File

@@ -0,0 +1,175 @@
:root {
--bg: #0e1116;
--panel: #161b22;
--muted: #8b949e;
--text: #e6edf3;
--accent: #2f81f7;
--accent-2: #3fb950;
--danger: #f85149;
--bubble-user: #1f6feb22;
--bubble-assistant: #30363d;
--radius: 10px;
--shadow: 0 8px 24px rgba(0,0,0,0.25);
}
* { box-sizing: border-box; }
html, body {
height: 100%;
}
body {
margin: 0;
font-family: ui-sans-serif, system-ui, -apple-system, Segoe UI, Roboto, Ubuntu, Cantarell, Noto Sans, "Helvetica Neue", Arial, "Apple Color Emoji", "Segoe UI Emoji";
background: linear-gradient(180deg, #0e1116 0%, #0b0e13 100%);
color: var(--text);
display: flex;
flex-direction: column;
}
.app-header {
display: flex;
align-items: center;
gap: 12px;
padding: 14px 18px;
background: rgba(22,27,34,0.8);
backdrop-filter: blur(6px);
position: sticky;
top: 0;
z-index: 10;
border-bottom: 1px solid #21262d;
}
.app-header h1 {
font-size: 16px;
margin: 0;
letter-spacing: 0.3px;
color: var(--text);
font-weight: 600;
}
.status {
width: 10px;
height: 10px;
border-radius: 999px;
background: var(--muted);
box-shadow: 0 0 0 1px #21262d inset, 0 0 8px rgba(0,0,0,0.35);
}
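/* Dot states toggled by setStatus() in app.js */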
.status.ok { background: var(--accent-2); box-shadow: 0 0 0 1px #2e7d32 inset, 0 0 16px rgba(63,185,80,0.6); }
.status.err { background: var(--danger); box-shadow: 0 0 0 1px #7f1d1d inset, 0 0 16px rgba(248,81,73,0.6); }
.container {
width: 100%;
max-width: 900px;
margin: 18px auto 24px;
padding: 0 14px;
display: flex;
flex-direction: column;
gap: 12px;
flex: 1 1 auto;
}
.chat {
background: var(--panel);
border: 1px solid #21262d;
border-radius: var(--radius);
min-height: 420px;
max-height: calc(100vh - 230px);
overflow: hidden;
display: flex;
flex-direction: column;
box-shadow: var(--shadow);
}
.messages {
flex: 1 1 auto;
overflow-y: auto;
padding: 16px;
display: flex;
flex-direction: column;
gap: 12px;
}
.msg {
display: flex;
align-items: flex-start;
}
.msg-user { justify-content: flex-end; }
.msg-assistant { justify-content: flex-start; }
.msg .bubble {
max-width: 78%;
padding: 10px 12px;
line-height: 1.35;
border-radius: 14px;
font-size: 14px;
border: 1px solid #30363d;
word-wrap: break-word;
word-break: break-word;
white-space: pre-wrap;
}
.msg-user .bubble {
background: var(--bubble-user);
border-color: #1f6feb55;
color: var(--text);
}
.msg-assistant .bubble {
background: var(--bubble-assistant);
border-color: #30363d;
color: var(--text);
}
.input-row {
display: flex;
gap: 10px;
background: var(--panel);
border: 1px solid #21262d;
border-radius: var(--radius);
padding: 10px;
box-shadow: var(--shadow);
}
.input-row input[type="text"] {
flex: 1 1 auto;
background: #0d1117;
color: var(--text);
border: 1px solid #30363d;
border-radius: 8px;
padding: 12px 12px;
outline: none;
transition: border-color 0.2s ease, box-shadow 0.2s ease;
}
.input-row input[type="text"]:focus {
border-color: var(--accent);
box-shadow: 0 0 0 3px rgba(47,129,247,0.25);
}
.input-row button {
flex: 0 0 auto;
background: linear-gradient(180deg, #238636 0%, #2ea043 100%);
color: #fff;
border: 1px solid #2ea043;
border-radius: 8px;
padding: 0 16px;
font-weight: 600;
cursor: pointer;
min-width: 92px;
transition: transform 0.05s ease-in-out, filter 0.2s ease;
}
.input-row button:hover { filter: brightness(1.05); }
.input-row button:active { transform: translateY(1px); }
.input-row button:disabled {
cursor: not-allowed;
opacity: 0.7;
filter: grayscale(0.2);
}
@media (max-width: 640px) {
.messages { padding: 12px; }
.msg .bubble { max-width: 90%; }
}