OpenRouter Integration Guide
EVE now uses OpenRouter as the unified AI provider, giving you access to multiple models through a single API key.
Why OpenRouter?
- One API Key: Access GPT-4, Claude, Llama, Gemini, and 100+ other models
- Pay-as-you-go: Only pay for what you use, no subscriptions
- Model Flexibility: Switch between models in real-time
- Cost Effective: Competitive pricing across all providers
Getting Your API Key
- Visit OpenRouter (https://openrouter.ai)
- Sign in with Google or GitHub
- Create a new API key
- Copy the key (it starts with `sk-or-v1-...`)
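If you want to sanity-check the key before wiring it into EVE, a small standalone script is enough. The sketch below assumes OpenRouter's key-status endpoint (`GET https://openrouter.ai/api/v1/auth/key`); treat the exact path as an assumption and confirm it against OpenRouter's docs — any authenticated request works the same way.

```typescript
// Hypothetical standalone check, not part of EVE's codebase.
// Assumes OpenRouter's key-status endpoint; verify the path in their docs.
async function checkOpenRouterKey(apiKey: string): Promise<void> {
  const res = await fetch("https://openrouter.ai/api/v1/auth/key", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (res.status === 401) {
    throw new Error("Key rejected (401) - double-check the sk-or-v1-... value");
  }
  console.log("Key accepted:", await res.json());
}

checkOpenRouterKey("sk-or-v1-...").catch(console.error);
```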
Setting Up EVE
Option 1: Using the UI (Recommended)
- Launch EVE: `npm run tauri:dev`
- Click "Configure Settings" on the welcome screen
- Paste your OpenRouter API key
- Click "Save & Close"
Option 2: Using .env File
- Copy the example file: `cp .env.example .env`
- Edit `.env` and add your key: `VITE_OPENROUTER_API_KEY=sk-or-v1-your-actual-key-here`
- Restart the application
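Because the variable uses the `VITE_` prefix, Vite exposes it to the frontend at build time through `import.meta.env`. A minimal sketch of how the key might be read — the helper name is illustrative, not EVE's actual code:

```typescript
// Illustrative helper - EVE's real lookup lives in its settings store / client code.
export function getOpenRouterKey(): string {
  const key = import.meta.env.VITE_OPENROUTER_API_KEY as string | undefined;
  if (!key) {
    throw new Error("OpenRouter API key not found - set it in Settings or .env");
  }
  return key;
}
```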
Available Models
EVE provides quick access to these popular models:
OpenAI
- GPT-4 Turbo: Best overall performance
- GPT-4: High quality, slower
- GPT-3.5 Turbo: Fast and cost-effective
Anthropic (Claude)
- Claude 3 Opus: Best for complex tasks
- Claude 3 Sonnet: Balanced performance
- Claude 3 Haiku: Fast responses
Google (Gemini)
- Gemini Pro: Google's latest model
- Gemini Pro Vision: Supports images (future feature)
Meta (Llama)
- Llama 3 70B: Powerful open model
- Llama 3 8B: Very fast responses
Other
- Mistral Medium: European alternative
- Mixtral 8x7B: Mixture of experts model
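On OpenRouter each of these is addressed by a provider-prefixed model ID. The IDs below are assumptions based on OpenRouter's usual `provider/model` naming; confirm them against the live model list before relying on them.

```typescript
// Assumed OpenRouter model IDs (provider/model); confirm against /api/v1/models.
export const QUICK_MODELS = {
  "gpt-4-turbo": "openai/gpt-4-turbo",
  "claude-3-opus": "anthropic/claude-3-opus",
  "claude-3-haiku": "anthropic/claude-3-haiku",
  "gemini-pro": "google/gemini-pro",
  "llama-3-70b": "meta-llama/llama-3-70b-instruct",
  "mixtral-8x7b": "mistralai/mixtral-8x7b-instruct",
} as const;

export type QuickModelId = (typeof QUICK_MODELS)[keyof typeof QUICK_MODELS];
```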
Usage
- Select a Model: Click the model selector in the header
- Start Chatting: Type your message and press Enter
- Switch Models: Change models anytime mid-conversation
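Under the hood, switching models mid-conversation just means re-sending the accumulated message history with a different `model` field; the conversation itself is provider-agnostic. A rough sketch of what a follow-up request body might look like (the request shape is OpenRouter's OpenAI-compatible chat schema; the model ID is an assumption):

```typescript
// Sketch only: the first turn was answered by one model, the follow-up goes to another.
// Only the model string changes - the messages array carries the context.
const history = [
  { role: "user", content: "Summarise this repo in one sentence." },
  { role: "assistant", content: "EVE is a desktop AI assistant built with Tauri." },
  { role: "user", content: "Now expand that into a paragraph." },
];

const followUpRequest = {
  model: "anthropic/claude-3-sonnet", // assumed ID, see the model list above
  messages: history,
};
```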
Pricing
OpenRouter uses pay-per-token pricing. Approximate costs:
- GPT-3.5 Turbo: ~$0.002 per 1K tokens
- GPT-4 Turbo: ~$0.01 per 1K tokens
- Claude 3 Haiku: ~$0.0008 per 1K tokens
- Llama 3: Often free or very cheap
Check current pricing: OpenRouter Pricing
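As a rough worked example using the figures above: a 500-token prompt plus a 500-token reply on GPT-3.5 Turbo is about 1K tokens, i.e. roughly $0.002; the same exchange on GPT-4 Turbo is roughly $0.01. A tiny estimator using those approximate rates (real OpenRouter pricing varies per model and between prompt and completion tokens):

```typescript
// Rough cost estimator using the approximate per-1K-token rates listed above.
const APPROX_USD_PER_1K: Record<string, number> = {
  "openai/gpt-3.5-turbo": 0.002,
  "openai/gpt-4-turbo": 0.01,
  "anthropic/claude-3-haiku": 0.0008,
};

function estimateCost(model: string, totalTokens: number): number {
  const rate = APPROX_USD_PER_1K[model] ?? 0;
  return (totalTokens / 1000) * rate;
}

console.log(estimateCost("openai/gpt-4-turbo", 1000)); // ~0.01
```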
Features Implemented
✅ Chat Interface: Full conversation support with history
✅ Model Selection: Switch between 10+ popular models
✅ Settings Panel: Configure API keys and parameters
✅ Temperature Control: Adjust response creativity
✅ Max Tokens: Control response length
✅ Persistent Settings: Settings saved locally
API Client Features
The `OpenRouterClient` in `src/lib/openrouter.ts` provides:
- Simple Chat: One-line method for quick responses
- Streaming: Real-time token-by-token responses (ready for future use)
- Error Handling: Graceful error messages
- Model Discovery: Fetch all available models dynamically
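This guide doesn't document the client's exact method signatures, so treat the following as a sketch of the underlying request such a wrapper makes: OpenRouter's OpenAI-compatible `POST /api/v1/chat/completions` endpoint, with the temperature and max-token parameters the Settings panel exposes.

```typescript
// Sketch of the raw request an OpenRouterClient-style wrapper makes.
// Parameter names follow the OpenAI-compatible schema OpenRouter accepts.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function chatOnce(
  apiKey: string,
  model: string,
  messages: ChatMessage[],
  temperature = 0.7,
  maxTokens = 1024,
): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages, temperature, max_tokens: maxTokens }),
  });
  if (!res.ok) {
    // 401 = invalid key, 429 = rate limit or credit issue (see Troubleshooting below)
    throw new Error(`API error: ${res.status}`);
  }
  const data = await res.json();
  return data.choices[0].message.content;
}
```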
Troubleshooting
"OpenRouter API key not found"
- Make sure you've set the key in Settings or `.env`
- The key should start with `sk-or-v1-`
- Restart the app after adding the key
"API error: 401"
- Your API key is invalid
- Get a new key from OpenRouter
"API error: 429"
- Rate limit exceeded
- Wait a moment and try again
- Check your OpenRouter account balance
Response is cut off
- Increase "Max Tokens" in Settings
- Some models have lower limits
Next Steps
With OpenRouter integrated, you can now:
- ✅ Chat with multiple AI models
- 🚧 Add voice integration (Phase 2)
- 🚧 Add avatar system (Phase 3)
- 🚧 Implement streaming responses for real-time output (see the sketch after this list)
- 🚧 Add conversation export/import
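For the streaming item above, OpenRouter supports the OpenAI-style `stream: true` flag, which returns server-sent events. The sketch below assumes the usual `data: {json}` / `data: [DONE]` framing; verify the details against OpenRouter's streaming docs before building on it.

```typescript
// Sketch: stream tokens from OpenRouter's SSE response (stream: true).
// Assumes the OpenAI-style "data: {json}" / "data: [DONE]" chunk framing.
async function streamChat(
  apiKey: string,
  model: string,
  prompt: string,
  onToken: (token: string) => void,
): Promise<void> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice(6)).choices?.[0]?.delta?.content;
      if (delta) onToken(delta);
    }
  }
}
```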
Development Notes
The OpenRouter integration includes:
- Type-safe API client (`src/lib/openrouter.ts`)
- Zustand state management (`src/stores/chatStore.ts`, `src/stores/settingsStore.ts`)
- React components (`src/components/ChatInterface.tsx`, etc.)
- Persistent storage (settings saved to localStorage via Zustand persist)
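Zustand's `persist` middleware is what keeps settings across restarts (it writes to localStorage by default). A minimal sketch of a persisted settings store — the field names are illustrative, not necessarily those in `src/stores/settingsStore.ts`:

```typescript
// Illustrative persisted store; field names are assumptions, not EVE's actual shape.
import { create } from "zustand";
import { persist } from "zustand/middleware";

interface SettingsState {
  apiKey: string;
  model: string;
  temperature: number;
  maxTokens: number;
  setApiKey: (key: string) => void;
}

export const useSettingsStore = create<SettingsState>()(
  persist(
    (set) => ({
      apiKey: "",
      model: "openai/gpt-4-turbo",
      temperature: 0.7,
      maxTokens: 1024,
      setApiKey: (apiKey) => set({ apiKey }),
    }),
    { name: "eve-settings" }, // localStorage key
  ),
);
```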
All model IDs and parameters are fully typed for autocomplete and safety.
Ready to start chatting! 🚀