Text-Based LLM Interaction System Architecture
Overview
This document outlines the architecture for a text-based system that lets users converse with an LLM served locally by LM Studio.
Components
1. User Interface Layer
- Text Input Handler: Captures user input from terminal
- Text Output Display: Shows LLM responses to user
- Session Manager: Manages conversation history
2. Communication Layer
- LLM Client: Handles HTTP communication with LM Studio
- API Interface: Formats requests/responses according to LM Studio's API
3. Core Logic Layer
- Message Processor: Processes user input and LLM responses
- Conversation History: Maintains context between messages
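The Conversation History component could be sketched as a small class. The name `ConversationHistory` and its methods are illustrative, not taken from the source; the message shape matches the API's `messages` list shown later in this document.

```python
class ConversationHistory:
    """Keeps the ordered list of chat messages so each request carries context."""

    def __init__(self, system_prompt=None):
        self.messages = []
        if system_prompt:
            # An optional system message steers the model for the whole session.
            self.messages.append({"role": "system", "content": system_prompt})

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

    def as_payload(self):
        # Return a copy in the shape expected by the chat/completions API.
        return list(self.messages)
```

On each turn, the Message Processor would append the user's input, send the full list, then append the model's reply, so context accumulates naturally.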
Data Flow
graph TD
    A[User] --> B[Text Input Handler]
    B --> C[Message Processor]
    C --> D[LLM Client]
    D --> E[LM Studio Server]
    E --> D
    D --> C
    C --> F[Text Output Display]
    F --> A
Technical Details
LM Studio API
- Endpoint: http://10.0.0.200:1234/v1/chat/completions
- Method: POST
- Content-Type: application/json
Request Format
{
  "model": "model_name",
  "messages": [
    {"role": "user", "content": "user message"}
  ],
  "temperature": 0.7,
  "max_tokens": -1
}
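A minimal sketch of the LLM Client building and POSTing this payload, using only the standard library. The endpoint, model name, and parameters come from this document; the function names are illustrative, and error handling is kept minimal.

```python
import json
import urllib.request

LM_STUDIO_URL = "http://10.0.0.200:1234/v1/chat/completions"

def build_payload(messages, model="model_name", temperature=0.7, max_tokens=-1):
    # max_tokens of -1 asks LM Studio for an unbounded completion.
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def send_chat(payload, timeout=60):
    request = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))
```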
Response Format
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "model_name",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "response message"
      },
      "finish_reason": "stop"
    }
  ]
}
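Pulling the assistant's text out of this response shape is a one-line lookup; the field names below come from the example above, and `extract_reply` is an illustrative helper name.

```python
def extract_reply(response):
    # The assistant's text lives at choices[0].message.content.
    return response["choices"][0]["message"]["content"]

# Sample response in the format documented above.
sample = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "model_name",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "response message"},
            "finish_reason": "stop",
        }
    ],
}
```

A production client should also check `finish_reason`: anything other than "stop" (e.g. a length cutoff) means the text may be truncated.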
Project Structure
text_adventure/
├── main.py # Entry point
├── config.py # Configuration settings
├── llm_client.py # LLM communication
├── interface.py # Text input/output
└── conversation.py # Conversation history management
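main.py could wire these modules together roughly as below. The `send` function is injected so a single turn can be exercised without a running LM Studio server; `run_turn` and the parameter names are illustrative, not part of the source.

```python
def run_turn(history, user_text, send):
    """One conversation turn: record input, call the LLM, record the reply.

    history is a mutable list of {"role", "content"} dicts;
    send is a callable that POSTs the messages (e.g. llm_client.send_chat)
    and returns the parsed JSON response.
    """
    history.append({"role": "user", "content": user_text})
    response = send(history)
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```

interface.py would then loop: read a line from the terminal, call `run_turn`, and print the returned reply until the user exits.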
Implementation Plan
- Create basic text input/output interface
- Implement LLM client for LM Studio communication
- Add conversation history management
- Integrate components
- Test functionality