# OpenRouter Integration Guide

EVE now uses **OpenRouter** as the unified AI provider, giving you access to multiple models through a single API key.

## Why OpenRouter?

- **One API Key**: Access GPT-4, Claude, Llama, Gemini, and 100+ other models
- **Pay-as-you-go**: Only pay for what you use, no subscriptions
- **Model Flexibility**: Switch between models in real-time
- **Cost Effective**: Competitive pricing across all providers

## Getting Your API Key

1. Visit [OpenRouter](https://openrouter.ai/keys)
2. Sign in with Google or GitHub
3. Create a new API key
4. Copy the key (starts with `sk-or-v1-...`)

## Setting Up EVE

### Option 1: Using the UI (Recommended)

1. Launch EVE: `npm run tauri:dev`
2. Click "Configure Settings" on the welcome screen
3. Paste your OpenRouter API key
4. Click "Save & Close"

### Option 2: Using a `.env` File

1. Copy the example file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` and add your key:

   ```
   VITE_OPENROUTER_API_KEY=sk-or-v1-your-actual-key-here
   ```

3. Restart the application

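Whichever option you use, the app needs a single resolution rule for the key. A minimal sketch of that precedence (a key saved in Settings wins over the environment variable); `resolveApiKey` is a hypothetical helper for illustration, not EVE's actual code:

```typescript
// Hypothetical helper (not EVE's actual code): prefer the key saved in
// Settings, otherwise fall back to the VITE_OPENROUTER_API_KEY env variable.
function resolveApiKey(
  settingsKey: string | undefined,
  envKey: string | undefined
): string {
  const key = (settingsKey ?? "").trim() || (envKey ?? "").trim();
  if (!key) {
    throw new Error("OpenRouter API key not found");
  }
  return key;
}

// In the app this would be called roughly as:
// resolveApiKey(settings.apiKey, import.meta.env.VITE_OPENROUTER_API_KEY)
console.log(resolveApiKey(undefined, "sk-or-v1-example"));
```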
## Available Models

EVE provides quick access to these popular models:

### OpenAI

- **GPT-4 Turbo**: Best overall performance
- **GPT-4**: High quality, slower
- **GPT-3.5 Turbo**: Fast and cost-effective

### Anthropic (Claude)

- **Claude 3 Opus**: Best for complex tasks
- **Claude 3 Sonnet**: Balanced performance
- **Claude 3 Haiku**: Fast responses

### Google

- **Gemini Pro**: Google's latest model
- **Gemini Pro Vision**: Supports images (future feature)

### Meta (Llama)

- **Llama 3 70B**: Powerful open model
- **Llama 3 8B**: Very fast responses

### Other

- **Mistral Medium**: European alternative
- **Mixtral 8x7B**: Mixture of experts model

## Usage

1. **Select a Model**: Click the model selector in the header
2. **Start Chatting**: Type your message and press Enter
3. **Switch Models**: Change models anytime mid-conversation

## Pricing

OpenRouter uses pay-per-token pricing. Approximate costs:

- **GPT-3.5 Turbo**: ~$0.002 per 1K tokens
- **GPT-4 Turbo**: ~$0.01 per 1K tokens
- **Claude 3 Haiku**: ~$0.0008 per 1K tokens
- **Llama 3**: Often free or very cheap

Check current pricing: [OpenRouter Pricing](https://openrouter.ai/models)

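Per-token billing is easy to estimate up front. A quick sketch using the approximate rates above (the rates are illustrative and drift over time; check the pricing page for current numbers):

```typescript
// Estimate the cost of a request from a token count and a per-1K-token rate.
// Rates mirror the approximate figures listed above, not live pricing.
function estimateCostUSD(tokens: number, pricePer1KTokens: number): number {
  return (tokens / 1000) * pricePer1KTokens;
}

// e.g. a 1,500-token exchange on GPT-3.5 Turbo at ~$0.002 per 1K tokens:
console.log(estimateCostUSD(1500, 0.002).toFixed(4)); // ≈ $0.0030
```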
## Features Implemented

- ✅ **Chat Interface**: Full conversation support with history
- ✅ **Model Selection**: Switch between 10+ popular models
- ✅ **Settings Panel**: Configure API keys and parameters
- ✅ **Temperature Control**: Adjust response creativity
- ✅ **Max Tokens**: Control response length
- ✅ **Persistent Settings**: Settings saved locally

## API Client Features

The `OpenRouterClient` in `src/lib/openrouter.ts` provides:

- **Simple Chat**: One-line method for quick responses
- **Streaming**: Real-time token-by-token responses (ready for future use)
- **Error Handling**: Graceful error messages
- **Model Discovery**: Fetch all available models dynamically

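For orientation, a chat call ultimately boils down to a single POST against OpenRouter's OpenAI-compatible endpoint, with the model, temperature, and max-tokens settings carried in the request body. The sketch below is a simplified illustration of that flow, not the actual `OpenRouterClient` implementation:

```typescript
// Simplified illustration of a chat call under the hood.
// Not the actual OpenRouterClient code from src/lib/openrouter.ts.
interface ChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  temperature?: number;
  max_tokens?: number;
}

function buildChatRequest(model: string, userMessage: string): ChatRequest {
  return {
    model,
    messages: [{ role: "user", content: userMessage }],
    temperature: 0.7, // the "Temperature Control" setting
    max_tokens: 1024, // the "Max Tokens" setting
  };
}

async function chat(apiKey: string, req: ChatRequest): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```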
## Troubleshooting

### "OpenRouter API key not found"

- Make sure you've set the key in Settings or `.env`
- The key should start with `sk-or-v1-`
- Restart the app after adding the key

### "API error: 401"

- Your API key is invalid
- Get a new key from [OpenRouter](https://openrouter.ai/keys)

### "API error: 429"

- Rate limit exceeded
- Wait a moment and try again
- Check your OpenRouter account balance

### Response is cut off

- Increase "Max Tokens" in Settings
- Some models have lower output-token limits

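For 429s specifically, "wait a moment and try again" can be automated with exponential backoff. A small sketch of the idea (illustrative only; EVE does not necessarily retry this way):

```typescript
// Illustrative exponential backoff for rate-limited (HTTP 429) calls.
// Delay doubles each attempt: 500ms, 1s, 2s, 4s, ... capped at 8s.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 4): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const isRateLimit = err instanceof Error && err.message.includes("429");
      if (!isRateLimit || attempt + 1 >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```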
## Next Steps

With OpenRouter integrated, you can now:

1. ✅ Chat with multiple AI models
2. 🚧 Add voice integration (Phase 2)
3. 🚧 Add avatar system (Phase 3)
4. 🚧 Implement streaming responses for real-time output
5. 🚧 Add conversation export/import

## Development Notes

The OpenRouter integration includes:

- **Type-safe API client** (`src/lib/openrouter.ts`)
- **Zustand state management** (`src/stores/chatStore.ts`, `src/stores/settingsStore.ts`)
- **React components** (`src/components/ChatInterface.tsx`, etc.)
- **Persistent storage** (Settings saved to localStorage via Zustand persist)

All model IDs and parameters are fully typed for autocomplete and safety.

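"Fully typed" model IDs can be pictured as a string-literal union, so a typo in a model name fails at compile time rather than as a runtime API error. A sketch of the pattern (the union below is illustrative; the real list lives in `src/lib/openrouter.ts`, and current IDs are at openrouter.ai/models):

```typescript
// Illustrative string-literal union following OpenRouter's
// "provider/model" ID convention. Not the actual list from the codebase.
type ModelId =
  | "openai/gpt-4-turbo"
  | "openai/gpt-3.5-turbo"
  | "anthropic/claude-3-opus"
  | "anthropic/claude-3-haiku"
  | "meta-llama/llama-3-70b-instruct"
  | "mistralai/mixtral-8x7b-instruct";

function displayName(id: ModelId): string {
  // Strip the provider prefix and prettify the slug.
  return id.split("/")[1].replace(/-/g, " ");
}

console.log(displayName("openai/gpt-4-turbo")); // "gpt 4 turbo"
```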
---

**Ready to start chatting!** 🚀