# Xendi AI - Autonomous AI Coding Assistant
<div align="center">

**Your intelligent coding companion powered by 8 AI providers**

[VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=RL-Dev.xendi-ai) • [MIT License](https://opensource.org/licenses/MIT) • [Visual Studio Code](https://code.visualstudio.com/)

[Features](#features) • [Installation](#installation) • [Usage](#usage) • [Configuration](#configuration) • [Keyboard Shortcuts](#keyboard-shortcuts)

</div>

---
## 🚀 Features

### 🤖 **Autonomous Agent Mode**

- Works like GitHub Copilot - explores, analyzes, and fixes code autonomously
- Multi-step task execution without manual intervention
- Automatically reads files, finds bugs, and implements fixes
- Tests changes and iterates until the task is complete
- Continuously explains its thinking process like a senior developer

### 💬 **Interactive Chat Mode**

- Ask questions about your code
- Get explanations and suggestions
- Code generation and refactoring assistance
- Maintains conversation context across all messages
- Full markdown rendering with syntax highlighting
- **NEW**: Real-time streaming responses

### 🌐 **8 AI Provider Support**

Choose from multiple AI providers to suit your needs:

- **LM Studio** - Run models locally on your machine
- **Ollama** - Simple local AI with easy model management
- **OpenAI** - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- **OpenRouter** - Access to 100+ models (GPT-4, Claude, Gemini, etc.)
- **Anthropic** - Claude 3.5 Sonnet, Claude 3 Opus
- **Cohere** - Command R+, Command R
- **Groq** - Ultra-fast inference with Llama, Mixtral
- **Mistral** - Mistral Large, Mistral Medium
### ⚡ **Productivity Features (NEW!)**

- **Keyboard Shortcuts**: 15+ shortcuts for a faster workflow
  - `Ctrl+Enter` - Send message
  - `Ctrl+K` - New chat
  - `Ctrl+L` - Clear chat
  - `Ctrl+/` - Focus input
  - [See all shortcuts](#keyboard-shortcuts)
- **Token Counter**: Live token counting with cost estimation per model
- **Theme Sync**: Automatic light/dark theme sync with VS Code
- **Smart Context**: Preserved conversation history

### 🌍 **Universal Language Support**

Supports **all programming languages**:

- JavaScript/TypeScript, Python, Java, C++, C#, Go, Rust
- PHP, Ruby, Swift, Kotlin, Dart, and more

### 🛠️ **Powerful Tools**

- **File Operations**: Read, write, search, and modify files
- **Code Analysis**: Search patterns, find bugs, analyze structure
- **Shell Commands**: Run tests, builds, install packages
- **Web Search**: Google search and web content fetching
- **Memory**: Save and recall information across sessions

### 🔒 **Privacy First**

- Option to run **100% locally** with LM Studio or Ollama
- Your code never leaves your machine when using local providers
- No telemetry, no tracking
- Choose between local and cloud providers based on your needs

### 💾 **Smart Conversation Management**

- Auto-generated titles from the first message
- Timestamp display (relative time or date/time)
- Delete conversations with confirmation
- Context preservation across all messages
- Quick "New Conversation" button in chat

### 🎨 **Multilingual Interface**

- English and German UI
- Easy language switching in settings

---
## 📦 Installation

### Prerequisites

1. **Visual Studio Code** (v1.83.0 or higher)
2. **Choose your AI provider**:
   - **Local (Free)**:
     - LM Studio: [lmstudio.ai](https://lmstudio.ai)
     - Ollama: [ollama.ai](https://ollama.ai)
   - **Cloud (API Key Required)**:
     - OpenAI: [openai.com](https://openai.com)
     - OpenRouter: [openrouter.ai](https://openrouter.ai)
     - Anthropic: [anthropic.com](https://anthropic.com)
     - Cohere: [cohere.com](https://cohere.com)
     - Groq: [groq.com](https://groq.com)
     - Mistral: [mistral.ai](https://mistral.ai)

### Install Extension

**Method 1: From VSIX File**

1. Download the `xendi-ai-1.2.1.vsix` file
2. Open VS Code
3. Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X)
4. Click the "..." menu (top right)
5. Select "Install from VSIX..."
6. Choose the downloaded `.vsix` file

**Method 2: From VS Code Marketplace** (when published)

1. Open VS Code
2. Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X)
3. Search for "Xendi AI"
4. Click Install

### Setup Local Providers (Free)

#### LM Studio

1. Download and install LM Studio
2. Download a model (recommended: `Qwen/Qwen2.5-Coder-7B-Instruct`)
3. Start the local server (default: `http://localhost:1234`)
4. In Xendi AI Settings, select "LM Studio" as AI Provider
5. Leave Server URL empty (uses default); an optional connectivity check is sketched below
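
If the model dropdown stays empty, it helps to confirm the server is actually reachable before digging into the extension. A minimal check you can run outside VS Code, assuming LM Studio's default address and its OpenAI-compatible `/v1/models` endpoint:

```python
# Optional: confirm the LM Studio server is reachable and see which models it exposes.
# Assumes the default address http://localhost:1234 from step 3 above.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:1234/v1/models", timeout=5) as response:
    data = json.load(response)

for model in data.get("data", []):
    print(model.get("id"))
```

If this prints nothing or fails to connect, start the local server in LM Studio and load a model first.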
#### Ollama

1. Download and install Ollama
2. Pull a model: `ollama pull qwen2.5-coder:7b`
3. The Ollama server runs automatically on `http://localhost:11434`
4. In Xendi AI Settings, select "Ollama" as AI Provider
5. Leave Server URL empty (uses default); see the optional check below
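
To double-check that Ollama is running and the model was pulled, you can query its local API directly (outside VS Code); this assumes the default address from step 3:

```python
# Optional: list the models Ollama has pulled locally via its /api/tags endpoint.
# Assumes Ollama's default address http://localhost:11434.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as response:
    data = json.load(response)

for model in data.get("models", []):
    print(model.get("name"))  # e.g. "qwen2.5-coder:7b"
```

Alternatively, `ollama list` in a terminal shows the same information.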
### Setup Cloud Providers (API Key Required)

#### OpenAI

1. Create an account on [openai.com](https://openai.com)
2. Get your API key from the dashboard
3. In Xendi AI Settings:
   - Select "OpenAI" as AI Provider
   - Enter your API key (a quick way to verify the key is sketched below)
   - Leave Server URL empty (uses default)
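
If you want to verify the key outside VS Code first, a small request against OpenAI's models endpoint is enough; it assumes the key is exported as an `OPENAI_API_KEY` environment variable:

```python
# Optional key check: a successful response from /v1/models means the key is accepted.
# Assumes the key is exported as OPENAI_API_KEY (adjust if you store it elsewhere).
import json
import os
import urllib.request

request = urllib.request.Request(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
)
with urllib.request.urlopen(request, timeout=10) as response:
    data = json.load(response)

print(f"{len(data.get('data', []))} models available to this key")
```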
#### Anthropic (Claude)

1. Create an account on [anthropic.com](https://anthropic.com)
2. Get your API key
3. In Xendi AI Settings:
   - Select "Anthropic" as AI Provider
   - Enter your API key (an optional key check is sketched below)
   - Leave Server URL empty (uses default)
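
Anthropic's API uses its own request shape (an `x-api-key` header plus an `anthropic-version` header) rather than the OpenAI format. If you want to confirm the key before using it in the extension, a tiny request looks roughly like this; it consumes a few billable tokens, and the model name and `ANTHROPIC_API_KEY` environment variable are assumptions you may need to adjust:

```python
# Optional key check against the Anthropic Messages API. Sends a tiny request,
# so it uses a few tokens. Model name and env var are placeholders to adjust.
import json
import os
import urllib.request

body = json.dumps({
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 8,
    "messages": [{"role": "user", "content": "ping"}],
}).encode()

request = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    data=body,
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
)
with urllib.request.urlopen(request, timeout=15) as response:
    print(json.load(response)["content"][0]["text"])
```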
#### OpenRouter (Access 100+ Models)

1. Create an account on [openrouter.ai](https://openrouter.ai)
2. Get your API key from the dashboard
3. In Xendi AI Settings:
   - Select "OpenRouter" as AI Provider
   - Enter your API key
   - Leave Server URL empty (uses default)

---
## 🎯 Usage

### Quick Start

1. **Open Xendi AI**: Click the Xendi AI icon in the Activity Bar (left sidebar)
2. **Configure Provider**: Go to the Settings tab and select your AI provider
3. **Select Model**: Choose your loaded model from the dropdown
4. **Choose Mode**:
   - **Chat Mode**: Ask questions and get answers with streaming responses
   - **Agent Mode**: Let the AI autonomously complete tasks with explanations
5. **Start Chatting**: Click the "+ New" button to create a new conversation

### Agent Mode Examples

**Fix bugs autonomously:**

```
Fix the TypeScript compilation errors
```

**Create new features:**

```
Create a REST API with Express.js including CRUD operations for users
```

**Refactor code:**

```
Refactor the authentication module to use async/await
```

**Debug issues:**

```
The login is not working, find and fix the issue
```

### Chat Mode Examples

**Code explanation:**

```
Explain what this function does
```

**Code generation:**

```
Write a function to validate email addresses
```

**Debugging help:**

```
Why am I getting a null pointer exception here?
```

---
## ⚙️ Configuration

### Settings

Access settings via the **Settings** tab in the Xendi AI sidebar:

- **AI Provider**: Choose from 8 providers (LM Studio, Ollama, OpenAI, etc.)
- **API Key**: Your API key for cloud providers (required for OpenAI, Anthropic, etc.)
- **Server URL**: Custom server address (leave empty to use provider defaults; a request sketch follows this list)
  - LM Studio: `http://localhost:1234` (default)
  - Ollama: `http://localhost:11434` (default)
  - OpenAI: `https://api.openai.com/v1` (default)
  - OpenRouter: `https://openrouter.ai/api/v1` (default)
  - Anthropic: `https://api.anthropic.com/v1` (default)
  - Cohere: `https://api.cohere.ai/v1` (default)
  - Groq: `https://api.groq.com/openai/v1` (default)
  - Mistral: `https://api.mistral.ai/v1` (default)
- **Language**: Interface language (English/German)
- **Auto-approve Tools**: Enable automatic execution of specific tools in Agent mode
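
Several of the defaults above (LM Studio, OpenAI, OpenRouter, Groq, Mistral) are OpenAI-compatible endpoints, which is why a single Server URL field covers them; Anthropic and Cohere use their own APIs. As a rough illustration, not the extension's actual code, this is the kind of request such an endpoint accepts; the base URL, API key, and model name are placeholders:

```python
# Illustration only: the OpenAI-compatible providers listed above accept this
# request shape at <Server URL>/chat/completions. Anthropic and Cohere differ.
# base_url, api_key, and the model name are placeholders, not Xendi AI internals.
import json
import urllib.request

base_url = "http://localhost:1234/v1"   # swap in any OpenAI-compatible default above
api_key = "sk-..."                      # local providers ignore this value

body = json.dumps({
    "model": "qwen2.5-coder-7b-instruct",
    "messages": [{"role": "user", "content": "Say hello"}],
    "temperature": 0.2,    # within the 0.1-0.3 range recommended below
    "max_tokens": 2000,
}).encode()

request = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=body,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
)
with urllib.request.urlopen(request, timeout=60) as response:
    print(json.load(response)["choices"][0]["message"]["content"])
```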
### Recommended Model Settings

**For Local Providers (LM Studio/Ollama):**

- **Temperature**: 0.1 - 0.3 (lower = more deterministic)
- **Max Tokens**: 2000+
- **Context Length**: 4096+ (higher = better for large files)

**For Cloud Providers:**

- Models are pre-configured with optimal settings
- Streaming enabled for real-time responses

### Best Models by Provider

| Provider | Model | Best For |
|----------|-------|----------|
| **LM Studio** | Qwen 2.5 Coder 7B | General coding, debugging |
| **Ollama** | qwen2.5-coder:7b | Code generation, analysis |
| **OpenAI** | GPT-4 Turbo | Complex reasoning, large context |
| **OpenRouter** | Claude 3.5 Sonnet | Code refactoring, architecture |
| **Anthropic** | Claude 3.5 Sonnet | Extended context, code review |
| **Groq** | llama-3.1-70b | Ultra-fast responses |
| **Cohere** | Command R+ | Multilingual code tasks |
| **Mistral** | Mistral Large | European data compliance |

---
## ⌨️ Keyboard Shortcuts

Boost your productivity with these shortcuts:

| Shortcut | Action | Description |
|----------|--------|-------------|
| `Ctrl+Enter` (Cmd+Enter) | Send Message | Send your prompt |
| `Ctrl+K` (Cmd+K) | New Chat | Start new conversation |
| `Ctrl+L` (Cmd+L) | Clear Chat | Clear current chat |
| `Ctrl+/` (Cmd+/) | Focus Input | Focus the input field |
| `Ctrl+Shift+C` | Copy Response | Copy last AI response |
| `Escape` | Cancel Request | Cancel ongoing request |
| `Ctrl+↑` (Cmd+↑) | Previous Message | Navigate message history up |
| `Ctrl+↓` (Cmd+↓) | Next Message | Navigate message history down |
| `Ctrl+1` (Cmd+1) | Chat Tab | Switch to Chat tab |
| `Ctrl+2` (Cmd+2) | Conversations Tab | Switch to Conversations |
| `Ctrl+3` (Cmd+3) | Logs Tab | Switch to Logs |
| `Ctrl+4` (Cmd+4) | Settings Tab | Switch to Settings |
| `Ctrl+Shift+M` | Toggle Mode | Switch Chat/Agent mode |

---
## 📊 Token Counter & Cost Estimation

Monitor your usage and costs in real-time:

- **Live Token Counting**: See tokens as you type
- **Cost Estimation**: Per-model pricing calculations
- **Session Statistics**: Total tokens and costs per session
- **Accurate Estimates**: Different rates for code vs. text (a rough sketch of the idea follows this section)

Supports pricing for:

- GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Claude 3.5 Sonnet, Claude 3 Opus
- Cohere Command R+, Command R
- And more
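
The counts are estimates rather than exact tokenizer output. As a rough illustration of the idea, and not the extension's actual rate table, a character-based estimate combined with per-model prices could look like this; the chars-per-token heuristic and the rates are placeholder values:

```python
# Illustration of estimate-based cost tracking. The ~4 chars/token heuristic and
# the per-1K-token rates are placeholders, not Xendi AI's real pricing table.
EXAMPLE_RATES_PER_1K_INPUT_TOKENS = {
    "gpt-4-turbo": 0.01,        # placeholder USD figures for illustration only
    "claude-3-5-sonnet": 0.003,
}

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude estimate; code usually needs more tokens per character than prose."""
    return max(1, round(len(text) / chars_per_token))

def estimate_cost(text: str, model: str) -> float:
    rate = EXAMPLE_RATES_PER_1K_INPUT_TOKENS.get(model, 0.0)
    return estimate_tokens(text) / 1000 * rate

print(estimate_cost("def add(a, b):\n    return a + b", "gpt-4-turbo"))
```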
---

## 🔧 Available Tools

| Tool | Description | Example |
|------|-------------|---------|
| `listDirectory` | List directory contents | Explore project structure |
| `glob` | Find files by pattern | Find all TypeScript files |
| `readFile` | Read file contents | Analyze code |
| `writeFile` | Create/update files | Generate new code |
| `replace` | Replace text in files | Fix bugs, refactor |
| `runShellCommand` | Execute commands | Run tests, build |
| `searchFileContent` | Search across files | Find specific code |
| `readManyFiles` | Read multiple files | Analyze related code |
| `google_web_search` | Search the web | Research solutions |
| `web_fetch` | Fetch web content | Read documentation |
| `save_memory` | Store information | Remember context |

---
## 🐛 Troubleshooting

### Models not loading?

- **Local Providers**: Ensure LM Studio/Ollama is running
- **Cloud Providers**: Verify that your API key is correct
- Check the AI Provider and Server URL in Settings
- For Anthropic/Cohere: Models are pre-configured; just make sure your API key is valid

### Agent not working correctly?

- Use a capable model (Qwen 2.5 Coder, GPT-4, or Claude 3.5 Sonnet recommended)
- Check the Logs tab for error messages
- Ensure the model supports JSON output (most modern models do)

### Streaming not working?

- Streaming is enabled for all providers except Ollama
- For local providers, ensure you're using a recent model
- Check the Developer Console for errors (Help → Toggle Developer Tools)

### Keyboard shortcuts not working?

- Ensure the Xendi AI panel is focused
- Check for conflicts with other extensions
- Shortcuts are initialized when the webview loads

---
## 📝 License

MIT License

Copyright © 2025 Robin Oliver Lucas (RL-Dev)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---
## 👨‍💻 Author & Support

**Developer**: Robin Oliver Lucas
**Publisher**: RL-Dev
**Website**: [https://rl-dev.de](https://rl-dev.de)

### Support & Feedback

- 📧 Contact: [https://rl-dev.de](https://rl-dev.de)
- 💡 Feature requests and bug reports are welcome via the website

---

## 🙏 Acknowledgments

- Built with 8 AI provider APIs for maximum flexibility
- Inspired by GitHub Copilot and Claude
- Community feedback and contributions

---

<div align="center">

**Made with ❤️ by RL-Dev**

[rl-dev.de](https://rl-dev.de)

*Choose your AI provider. Keep your workflow.*

</div>