# Change Log

All notable changes to the "Xendi AI" extension will be documented in this file.

## [1.2.1] - 2025-10-03

### 🧹 Packaging Improvements

#### Bug Fixes
- Fixed "nul" file causing unsafe extraction error
- Removed temporary build folders from VSIX package
- Cleaned up .vscodeignore to exclude development files

#### Package Optimization
- Removed `temp_extract/` and `temp_latest/` directories
- Excluded `*.log` files
- Excluded `IMPROVEMENTS_ROADMAP.md` (development doc)
- Excluded `tsconfig.json.disabled`
- Package size reduced from 2.8 MB to 2.73 MB

---

## [1.2.0] - 2025-10-03

### 🌐 Multi-Provider Support - Major Release

#### ✨ 8 AI Provider Support
- **LM Studio** - Run models locally on your machine (existing)
- **Ollama** - Simple local AI with easy model management (existing)
- **OpenAI** - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo (NEW)
- **OpenRouter** - Access to 100+ models including GPT-4, Claude, Gemini (NEW)
- **Anthropic** - Claude 3.5 Sonnet, Claude 3 Opus (NEW)
- **Cohere** - Command R+, Command R (NEW)
- **Groq** - Ultra-fast inference with Llama, Mixtral (NEW)
- **Mistral** - Mistral Large, Mistral Medium (NEW)

#### ⚡ Productivity Features
- **Real-time Streaming**: Streaming responses for all providers except Ollama
- **Keyboard Shortcuts**: 15+ shortcuts for maximum productivity
  - `Ctrl+Enter` - Send message
  - `Ctrl+K` - New chat
  - `Ctrl+L` - Clear chat
  - `Ctrl+/` - Focus input
  - `Ctrl+Shift+C` - Copy last response
  - `Escape` - Cancel request
  - `Ctrl+↑/↓` - Navigate message history
  - `Ctrl+1/2/3/4` - Switch tabs
  - `Ctrl+Shift+M` - Toggle Chat/Agent mode
- **Token Counter**: Live token counting with cost estimation per model
  - Per-model pricing for GPT-4, Claude, Cohere, etc.
  - Session statistics tracking
  - Accurate estimates for code vs. text
- **Theme Sync**: Automatic light/dark theme sync with VS Code

#### 🎨 UI/UX Improvements
- **Provider Selection**: AI Provider dropdown in Settings tab
- **API Key Management**: Secure API key input for cloud providers
- **Fixed Minimum Sizes**: Consistent UI sizing (600px body, 400px tabs, 200px chat)
- **Better Tab Switching**: Fixed tab navigation bugs
- **Null-safe JavaScript**: Comprehensive null checks for all DOM elements

#### 🔧 Technical Improvements
- **Provider-specific Authentication**:
  - Bearer tokens for OpenAI, OpenRouter, Groq, Mistral
  - `x-api-key` header for Anthropic
  - Authorization header for Cohere
- **Smart Default URLs**: Each provider has pre-configured default endpoints
- **Default Models**: Pre-configured models for Anthropic and Cohere
- **Streaming Implementation**: Full SSE (Server-Sent Events) parsing for real-time responses
- **Optimized Logging**: Structured logging for better debugging
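The provider-specific headers above can be sketched roughly as follows. This is a minimal illustration, not the extension's actual source; the header names follow the providers' public API conventions, everything else (function and type names) is an assumption:

```typescript
// Sketch: build the auth headers each provider family expects.
type Provider =
  | "openai" | "openrouter" | "groq" | "mistral"
  | "anthropic" | "cohere" | "lmstudio" | "ollama";

function buildAuthHeaders(provider: Provider, apiKey: string): Record<string, string> {
  switch (provider) {
    case "openai":
    case "openrouter":
    case "groq":
    case "mistral":
      // OpenAI-compatible APIs take a Bearer token.
      return { Authorization: `Bearer ${apiKey}` };
    case "anthropic":
      // Anthropic uses a dedicated x-api-key header plus a version header.
      return { "x-api-key": apiKey, "anthropic-version": "2023-06-01" };
    case "cohere":
      // Cohere also authenticates via the Authorization header.
      return { Authorization: `Bearer ${apiKey}` };
    default:
      // Local providers (LM Studio, Ollama) need no key.
      return {};
  }
}
```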

#### 🐛 Bug Fixes
- Fixed tab switching not working (ID mismatch resolved)
- Fixed null reference errors in addEventListener calls
- Fixed TypeScript compilation overwriting extension.js
- Fixed streaming cursor animation
- Fixed settings not persisting correctly

#### 📚 Documentation
- Completely overhauled README.md with 8 provider setup guides
- Added keyboard shortcuts documentation
- Added token counter and cost estimation docs
- Updated troubleshooting section
- Added a provider comparison table

### 🔄 Migration Notes
- Existing LM Studio and Ollama configurations will continue to work
- New users can choose any of the 8 supported providers
- API keys are stored securely in VS Code globalState
- No breaking changes to existing functionality

---

## [1.1.3] - 2025-10-02

### 🐍 Python Virtual Environment Support
- **Automatic venv Detection**: Agent now detects `venv/`, `.venv/`, and `env/` directories
- **Auto-Activation**: Python virtual environments are automatically activated before running commands
- **Seamless pip/python**: No need to manually activate the venv - just use `pip install` or `python` directly
- **Smart Command Wrapping**: Commands are wrapped with `source venv/bin/activate &&` automatically

### 🔧 Developer Experience
- **Agent knows about venv**: Updated system prompt to inform the agent about automatic venv activation
- **Better Python Development**: Agent can now install packages and run Python code without environment issues
- **Cross-Project Support**: Works with any Python project structure (venv, .venv, or env)

---

## [1.1.2] - 2025-10-02

### 🎯 Agent Improvements
- **Fixed Topic Switching**: Agent now stays focused on one task until completion
- **No More Task Jumping**: Agent won't suggest or start additional work unless requested
- **Laser Focus**: Clear instructions to work ONLY on the current user request

### 🎨 UI Improvements
- **Command Result Dropdowns**: All agent tool outputs now display in collapsible dropdowns
- **Tool Icons**: Visual icons for each command type (🔍 glob, 📄 readFile, ⚡ runShellCommand, etc.)
- **Cleaner Agent Chat**: Command results are collapsed by default for better readability
- **Consistent Design**: Dropdown style matches the thinking process display

### 🐛 Bug Fixes
- Fixed agent wandering off-topic during multi-step tasks
- Improved command output formatting in agent mode

---

## [1.1.1] - 2025-10-02

### 🚀 Performance Improvements
- **Optimized API Requests**: Added connection keep-alive and request timeouts (60s) for faster communication
- **Reduced Token Limits**: Decreased max_tokens from 2000 to 1000 for faster response times
- **Better Server Communication**: Improved fetch configuration with abort controllers
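A 60-second timeout via AbortController, as described above, typically looks like this. A generic sketch, not the extension's code; URL and payload are placeholders, and connection keep-alive is already the default behavior of Node's fetch/undici connection pool:

```typescript
// Sketch: create an AbortSignal that fires after timeoutMs.
function timeoutSignal(timeoutMs: number): AbortSignal {
  const controller = new AbortController();
  setTimeout(() => controller.abort(), timeoutMs);
  return controller.signal;
}

// Attach the signal to the request; firing it cancels the in-flight fetch.
async function requestWithTimeout(url: string, body: unknown, timeoutMs = 60_000): Promise<unknown> {
  // fetch is global in Node >= 18; the cast keeps the sketch compiling with older typings.
  return (globalThis as any).fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: timeoutSignal(timeoutMs),
  });
}
```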

### 🎨 UI Improvements
- **Redesigned Statistics Panel**: New clean, professional design matching the extension theme
- **Removed Emojis**: Statistics now use text labels (FILES, LINES, TOKENS) instead of emoji icons
- **Enhanced Styling**: Monospace font for values, uppercase labels, purple accent colors
- **Better Layout**: Vertical stat items with clear separation and improved readability

### 🐛 Bug Fixes
- Fixed statistics panel visual consistency with extension design
- Improved stats visibility toggle with CSS classes

---

## [1.1.0] - 2025-10-02

### ✨ New Features

#### 🧠 Thinking Process Display
- **AI Reasoning Visualization**: Models that output a thinking process (using `<think>` tags) now show an expandable dropdown
- **Transparent AI Decisions**: See exactly how the AI reasons through problems
- **Collapsible Interface**: Keeps chat clean while making reasoning available on demand
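Splitting a model response into its `<think>` block and the visible answer can be sketched as follows; this is illustrative, and the extension's actual parsing may differ:

```typescript
// Sketch: separate <think>…</think> reasoning from the visible answer.
interface ParsedResponse {
  thinking: string | null; // reasoning shown in the collapsible dropdown
  answer: string;          // the part rendered as the normal chat message
}

function splitThinking(raw: string): ParsedResponse {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    return { thinking: null, answer: raw.trim() };
  }
  return {
    thinking: match[1].trim(),
    answer: raw.replace(match[0], "").trim(),
  };
}
```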

#### 📊 Agent Statistics & Monitoring
- **Real-time Stats Display**: See files created/modified, lines added/deleted, tokens used
- **Task Plan View**: Visual overview of the agent's execution plan and progress
- **File Diff Links**: Clickable file paths with diff view (like GitHub Copilot)
- **Performance Metrics**: Track token usage and operation efficiency

#### 🔄 Ollama Support
- **Multiple LLM Backends**: Now supports both LM Studio and Ollama
- **Server Type Selection**: Easy switching between LM Studio (`localhost:1234`) and Ollama (`localhost:11434`)
- **Auto-detection**: Automatically handles different API formats

#### 💬 Enhanced Conversation Management
- **Auto-generated Titles**: Titles automatically created from the first message
- **Timestamp Display**: Shows relative time ("5 Min.", "2 days") or full date/time
- **Context Preservation**: Full conversation history maintained across all messages
- **Delete with Confirmation**: Safe conversation deletion with German/English prompts
- **Quick Actions**: "New Conversation" button in chat for instant access
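Relative timestamps like those above can be produced with a small helper. A sketch under assumed thresholds; the extension's exact cut-offs and its localized strings (e.g. "5 Min.") are not documented here:

```typescript
// Sketch: format a timestamp as relative time, falling back to a full date.
function formatTimestamp(then: Date, now: Date = new Date()): string {
  const diffMin = Math.floor((now.getTime() - then.getTime()) / 60_000);
  if (diffMin < 60) return `${diffMin} min`;
  const diffHours = Math.floor(diffMin / 60);
  if (diffHours < 24) return `${diffHours} h`;
  const diffDays = Math.floor(diffHours / 24);
  if (diffDays < 7) return `${diffDays} days`;
  return then.toLocaleString(); // older entries: full date/time
}
```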

#### 🎨 Markdown Rendering
- **Full Markdown Support**: Renders code blocks, lists, headers, bold, italic
- **Syntax Highlighting**: Language-specific code block rendering
- **Clean Formatting**: Professional display of AI responses

#### 🌐 Context Reset Commands
- `vergiss alles` (German: "forget everything")
- `alles vergessen` (German: "forget everything")
- `reset context` (English)
- `clear context` (English)
- `forget everything` (English)
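Recognizing the reset phrases above could be as simple as the sketch below; the phrase list matches the changelog, while the normalization and matching rules are assumptions:

```typescript
// Sketch: detect a context-reset command in either language.
const RESET_PHRASES = [
  "vergiss alles",
  "alles vergessen",
  "reset context",
  "clear context",
  "forget everything",
];

function isContextReset(message: string): boolean {
  const normalized = message.trim().toLowerCase();
  return RESET_PHRASES.includes(normalized);
}
```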

### 🐛 Bug Fixes
- Fixed conversation context not persisting between messages
- Fixed agent losing context during multi-step tasks
- Improved error handling for network issues

### 🔧 Technical Improvements
- Better API compatibility (LM Studio + Ollama)
- Optimized conversation history handling
- Enhanced markdown rendering engine
- Improved statistics tracking

---

## [1.0.0] - 2025-10-02

### 🎉 Initial Release

**Xendi AI - Autonomous AI Coding Assistant for VS Code**

### ✨ Features

#### 🤖 Agent Mode
- **Autonomous task completion** - Works like GitHub Copilot/Claude
- **Multi-step workflow** - Explores, analyzes, fixes, and tests automatically
- **Smart tool execution** - Automatically reads files, finds bugs, implements fixes
- **Continuous iteration** - Keeps working until the task is 100% complete
- **Workspace awareness** - Knows project structure and context

#### 💬 Chat Mode
- Interactive AI assistant for coding questions
- Code explanations and suggestions
- Real-time code generation

#### 🛠️ Powerful Tools
- **File Operations**: Read, write, search, replace
- **Code Analysis**: Pattern search, bug detection
- **Shell Commands**: Run tests, builds, install packages
- **Web Integration**: Google search, web content fetching
- **Memory**: Save and recall information

#### 🌍 Universal Language Support
- JavaScript/TypeScript
- Python
- Java, C++, C#
- Go, Rust
- PHP, Ruby, Swift, Kotlin
- And many more

#### 🔒 Privacy & Security
- 100% local execution with LM Studio
- No cloud services
- No telemetry or tracking
- Your code stays on your machine

#### 🎨 User Experience
- Clean, intuitive interface
- Multilingual UI (English/German)
- Comprehensive logging system
- Auto-approve tools for a seamless workflow
- Model selection and switching

### 🔧 Technical Details

- **LM Studio Integration**: Seamless connection to the local LLM server
- **Workspace Tools**: Full VS Code workspace API integration
- **Conversation History**: Persistent chat and agent conversations
- **Error Handling**: Robust error detection and recovery
- **Performance**: Optimized file operations with caching

### 📝 Configuration

- Customizable LM Studio server URL
- Language selection (English/German)
- Auto-approve settings for tools
- Model temperature and token settings

### 🎯 Best For

- **Debugging**: Find and fix bugs automatically
- **Refactoring**: Improve code structure and quality
- **Feature Development**: Build new features end-to-end
- **Learning**: Understand code and get explanations
- **Testing**: Run and fix test failures

### 🙏 Credits

- Built with the LM Studio API
- Inspired by GitHub Copilot and Claude
- Community feedback and testing

---

**Developer**: Robin Oliver Lucas (RL-Dev)
**License**: MIT
**Website**: https://rl-dev.de

---

For support and feedback:
- GitHub Issues: https://github.com/RL-Dev/xendi-ai/issues
- Website: https://rl-dev.de

---

MIT License

Copyright (c) 2025 Robin Oliver Lucas (RL-Dev)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

# Xendi AI - Autonomous AI Coding Assistant

<div align="center">

**Your intelligent coding companion powered by 8 AI providers**

[](https://marketplace.visualstudio.com/items?itemName=RL-Dev.xendi-ai)
[](https://opensource.org/licenses/MIT)
[](https://code.visualstudio.com/)

[Features](#features) • [Installation](#installation) • [Usage](#usage) • [Configuration](#configuration) • [Keyboard Shortcuts](#keyboard-shortcuts)

</div>

---

## 🚀 Features

### 🤖 **Autonomous Agent Mode**
- Works like GitHub Copilot - explores, analyzes, and fixes code autonomously
- Multi-step task execution without manual intervention
- Automatically reads files, finds bugs, and implements fixes
- Tests changes and iterates until task completion
- Continuously explains its thinking process like a senior developer

### 💬 **Interactive Chat Mode**
- Ask questions about your code
- Get explanations and suggestions
- Code generation and refactoring assistance
- Maintains conversation context across all messages
- Full markdown rendering with syntax highlighting
- **NEW**: Real-time streaming responses

### 🌐 **8 AI Provider Support**
Choose from multiple AI providers to suit your needs:
- **LM Studio** - Run models locally on your machine
- **Ollama** - Simple local AI with easy model management
- **OpenAI** - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- **OpenRouter** - Access to 100+ models (GPT-4, Claude, Gemini, etc.)
- **Anthropic** - Claude 3.5 Sonnet, Claude 3 Opus
- **Cohere** - Command R+, Command R
- **Groq** - Ultra-fast inference with Llama, Mixtral
- **Mistral** - Mistral Large, Mistral Medium

### ⚡ **Productivity Features (NEW!)**
- **Keyboard Shortcuts**: 15+ shortcuts for faster workflow
  - `Ctrl+Enter` - Send message
  - `Ctrl+K` - New chat
  - `Ctrl+L` - Clear chat
  - `Ctrl+/` - Focus input
  - [See all shortcuts](#keyboard-shortcuts)
- **Token Counter**: Live token counting with cost estimation per model
- **Theme Sync**: Automatic light/dark theme sync with VS Code
- **Smart Context**: Preserved conversation history

### 🌍 **Universal Language Support**
Supports **all programming languages**:
- JavaScript/TypeScript, Python, Java, C++, C#, Go, Rust
- PHP, Ruby, Swift, Kotlin, Dart, and more

### 🛠️ **Powerful Tools**
- **File Operations**: Read, write, search, and modify files
- **Code Analysis**: Search patterns, find bugs, analyze structure
- **Shell Commands**: Run tests, builds, install packages
- **Web Search**: Google search and web content fetching
- **Memory**: Save and recall information across sessions

### 🔒 **Privacy First**
- Option to run **100% locally** with LM Studio or Ollama
- Your code never leaves your machine when using local providers
- No telemetry, no tracking
- Choose between local and cloud providers based on your needs

### 💾 **Smart Conversation Management**
- Auto-generated titles from the first message
- Timestamp display (relative time or date/time)
- Delete conversations with confirmation
- Context preservation across all messages
- Quick "New Conversation" button in chat

### 🎨 **Multilingual Interface**
- English and German UI
- Easy language switching in settings

---

## 📦 Installation

### Prerequisites
1. **Visual Studio Code** (v1.83.0 or higher)
2. **Choose your AI provider**:
   - **Local (Free)**:
     - LM Studio: [lmstudio.ai](https://lmstudio.ai)
     - Ollama: [ollama.ai](https://ollama.ai)
   - **Cloud (API Key Required)**:
     - OpenAI: [openai.com](https://openai.com)
     - OpenRouter: [openrouter.ai](https://openrouter.ai)
     - Anthropic: [anthropic.com](https://anthropic.com)
     - Cohere: [cohere.com](https://cohere.com)
     - Groq: [groq.com](https://groq.com)
     - Mistral: [mistral.ai](https://mistral.ai)

### Install Extension

**Method 1: From VSIX File**
1. Download the `xendi-ai-1.2.1.vsix` file
2. Open VS Code
3. Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X)
4. Click the "..." menu (top right)
5. Select "Install from VSIX..."
6. Choose the downloaded `.vsix` file

**Method 2: From VS Code Marketplace** (when published)
1. Open VS Code
2. Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X)
3. Search for "Xendi AI"
4. Click Install

### Setup Local Providers (Free)

#### LM Studio
1. Download and install LM Studio
2. Download a model (recommended: `Qwen/Qwen2.5-Coder-7B-Instruct`)
3. Start the local server (default: `http://localhost:1234`)
4. In Xendi AI Settings, select "LM Studio" as AI Provider
5. Leave Server URL empty (uses default)

#### Ollama
1. Download and install Ollama
2. Pull a model: `ollama pull qwen2.5-coder:7b`
3. Ollama's server runs automatically on `http://localhost:11434`
4. In Xendi AI Settings, select "Ollama" as AI Provider
5. Leave Server URL empty (uses default)

### Setup Cloud Providers (API Key Required)

#### OpenAI
1. Create an account on [openai.com](https://openai.com)
2. Get your API key from the dashboard
3. In Xendi AI Settings:
   - Select "OpenAI" as AI Provider
   - Enter your API key
   - Leave Server URL empty (uses default)

#### Anthropic (Claude)
1. Create an account on [anthropic.com](https://anthropic.com)
2. Get your API key
3. In Xendi AI Settings:
   - Select "Anthropic" as AI Provider
   - Enter your API key
   - Leave Server URL empty (uses default)

#### OpenRouter (Access 100+ Models)
1. Create an account on [openrouter.ai](https://openrouter.ai)
2. Get your API key from the dashboard
3. In Xendi AI Settings:
   - Select "OpenRouter" as AI Provider
   - Enter your API key
   - Leave Server URL empty (uses default)

---

## 🎯 Usage

### Quick Start

1. **Open Xendi AI**: Click the Xendi AI icon in the Activity Bar (left sidebar)
2. **Configure Provider**: Go to the Settings tab and select your AI provider
3. **Select Model**: Choose your loaded model from the dropdown
4. **Choose Mode**:
   - **Chat Mode**: Ask questions and get answers with streaming responses
   - **Agent Mode**: Let the AI autonomously complete tasks with explanations
5. **Start Chatting**: Click the "+ Neu" ("New") button to create a new conversation

### Agent Mode Examples

**Fix bugs autonomously:**
```
Fix the TypeScript compilation errors
```

**Create new features:**
```
Create a REST API with Express.js including CRUD operations for users
```

**Refactor code:**
```
Refactor the authentication module to use async/await
```

**Debug issues:**
```
The login is not working, find and fix the issue
```

### Chat Mode Examples

**Code explanation:**
```
Explain what this function does
```

**Code generation:**
```
Write a function to validate email addresses
```

**Debugging help:**
```
Why am I getting a null pointer exception here?
```

---

## ⚙️ Configuration

### Settings

Access settings via the **Settings** tab in the Xendi AI sidebar:

- **AI Provider**: Choose from 8 providers (LM Studio, Ollama, OpenAI, etc.)
- **API Key**: Your API key for cloud providers (required for OpenAI, Anthropic, etc.)
- **Server URL**: Custom server address (leave empty to use provider defaults)
  - LM Studio: `http://localhost:1234` (default)
  - Ollama: `http://localhost:11434` (default)
  - OpenAI: `https://api.openai.com/v1` (default)
  - OpenRouter: `https://openrouter.ai/api/v1` (default)
  - Anthropic: `https://api.anthropic.com/v1` (default)
  - Cohere: `https://api.cohere.ai/v1` (default)
  - Groq: `https://api.groq.com/openai/v1` (default)
  - Mistral: `https://api.mistral.ai/v1` (default)
- **Language**: Interface language (English/German)
- **Auto-approve Tools**: Enable automatic execution of specific tools in Agent mode
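The defaults above can be captured in a simple lookup. A sketch only: the URLs mirror the list above, while the resolution logic and names are assumptions:

```typescript
// Sketch: resolve the effective server URL from settings, falling back to defaults.
const DEFAULT_URLS: Record<string, string> = {
  lmstudio: "http://localhost:1234",
  ollama: "http://localhost:11434",
  openai: "https://api.openai.com/v1",
  openrouter: "https://openrouter.ai/api/v1",
  anthropic: "https://api.anthropic.com/v1",
  cohere: "https://api.cohere.ai/v1",
  groq: "https://api.groq.com/openai/v1",
  mistral: "https://api.mistral.ai/v1",
};

function resolveServerUrl(provider: string, customUrl?: string): string {
  // An explicit (non-empty) Server URL setting always wins.
  if (customUrl && customUrl.trim() !== "") return customUrl.trim();
  const fallback = DEFAULT_URLS[provider];
  if (!fallback) throw new Error(`Unknown provider: ${provider}`);
  return fallback;
}
```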

### Recommended Model Settings

**For Local Providers (LM Studio/Ollama):**
- **Temperature**: 0.1 - 0.3 (lower = more deterministic)
- **Max Tokens**: 2000+
- **Context Length**: 4096+ (higher = better for large files)

**For Cloud Providers:**
- Models are pre-configured with optimal settings
- Streaming enabled for real-time responses

### Best Models by Provider

| Provider | Model | Best For |
|----------|-------|----------|
| **LM Studio** | Qwen 2.5 Coder 7B | General coding, debugging |
| **Ollama** | qwen2.5-coder:7b | Code generation, analysis |
| **OpenAI** | GPT-4 Turbo | Complex reasoning, large context |
| **OpenRouter** | Claude 3.5 Sonnet | Code refactoring, architecture |
| **Anthropic** | Claude 3.5 Sonnet | Extended context, code review |
| **Groq** | llama-3.1-70b | Ultra-fast responses |
| **Cohere** | Command R+ | Multilingual code tasks |
| **Mistral** | Mistral Large | European data compliance |

---

## ⌨️ Keyboard Shortcuts

Boost your productivity with these shortcuts:

| Shortcut | Action | Description |
|----------|--------|-------------|
| `Ctrl+Enter` (Cmd+Enter) | Send Message | Send your prompt |
| `Ctrl+K` (Cmd+K) | New Chat | Start new conversation |
| `Ctrl+L` (Cmd+L) | Clear Chat | Clear current chat |
| `Ctrl+/` (Cmd+/) | Focus Input | Focus the input field |
| `Ctrl+Shift+C` | Copy Response | Copy last AI response |
| `Escape` | Cancel Request | Cancel ongoing request |
| `Ctrl+↑` (Cmd+↑) | Previous Message | Navigate message history up |
| `Ctrl+↓` (Cmd+↓) | Next Message | Navigate message history down |
| `Ctrl+1` (Cmd+1) | Chat Tab | Switch to Chat tab |
| `Ctrl+2` (Cmd+2) | Conversations Tab | Switch to Conversations |
| `Ctrl+3` (Cmd+3) | Logs Tab | Switch to Logs |
| `Ctrl+4` (Cmd+4) | Settings Tab | Switch to Settings |
| `Ctrl+Shift+M` | Toggle Mode | Switch Chat/Agent mode |
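In a webview, shortcuts like these are typically dispatched from a single keydown listener. The sketch below is illustrative, not the extension's source; the chord format and handler names are hypothetical:

```typescript
// Minimal event shape so the helper is testable outside the DOM.
interface KeyLike { key: string; ctrlKey: boolean; metaKey: boolean; shiftKey: boolean; }

// Normalize Ctrl (Windows/Linux) and Cmd (macOS) into one "mod+" prefix.
function shortcutKey(e: KeyLike): string {
  const mod = e.ctrlKey || e.metaKey ? "mod+" : "";
  const shift = e.shiftKey ? "shift+" : "";
  return mod + shift + e.key.toLowerCase();
}

// Look up and run the handler for a chord; returns whether one matched.
function dispatchShortcut(e: KeyLike, handlers: Record<string, () => void>): boolean {
  const handler = handlers[shortcutKey(e)];
  if (!handler) return false;
  handler(); // in the real webview this would also call e.preventDefault()
  return true;
}
```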

---

## 📊 Token Counter & Cost Estimation

Monitor your usage and costs in real-time:

- **Live Token Counting**: See tokens as you type
- **Cost Estimation**: Per-model pricing calculations
- **Session Statistics**: Total tokens and costs per session
- **Accurate Estimates**: Different rates for code vs. text

Supports pricing for:
- GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Claude 3.5 Sonnet, Claude 3 Opus
- Cohere Command R+, Command R
- And more
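A rough version of this estimation is sketched below. The per-1K prices and the chars-per-token heuristic are illustrative assumptions, not the extension's actual rates:

```typescript
// Sketch: estimate tokens (~4 chars/token heuristic) and cost from per-model pricing.
const PRICE_PER_1K_TOKENS: Record<string, number> = {
  "gpt-4-turbo": 0.01,        // illustrative USD price per 1K input tokens
  "claude-3-5-sonnet": 0.003, // illustrative
};

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function estimateCost(text: string, model: string): number {
  const rate = PRICE_PER_1K_TOKENS[model] ?? 0; // unknown/local models cost nothing
  return (estimateTokens(text) / 1000) * rate;
}
```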

---

## 🔧 Available Tools

| Tool | Description | Example |
|------|-------------|---------|
| `listDirectory` | List directory contents | Explore project structure |
| `glob` | Find files by pattern | Find all TypeScript files |
| `readFile` | Read file contents | Analyze code |
| `writeFile` | Create/update files | Generate new code |
| `replace` | Replace text in files | Fix bugs, refactor |
| `runShellCommand` | Execute commands | Run tests, build |
| `searchFileContent` | Search across files | Find specific code |
| `readManyFiles` | Read multiple files | Analyze related code |
| `google_web_search` | Search the web | Research solutions |
| `web_fetch` | Fetch web content | Read documentation |
| `save_memory` | Store information | Remember context |

---

## 🐛 Troubleshooting

### Models not loading?
- **Local Providers**: Ensure LM Studio/Ollama is running
- **Cloud Providers**: Verify your API key is correct
- Check the AI Provider and Server URL in Settings
- For Anthropic/Cohere: Models are pre-configured, just ensure the API key is valid

### Agent not working correctly?
- Use a capable model (Qwen 2.5 Coder, GPT-4, or Claude 3.5 Sonnet recommended)
- Check the Logs tab for error messages
- Ensure the model supports JSON output (most modern models do)

### Streaming not working?
- Streaming is enabled for all providers except Ollama
- For local providers, ensure you're using a recent model
- Check the Developer Console for errors (Help → Toggle Developer Tools)

### Keyboard shortcuts not working?
- Ensure the Xendi AI panel is focused
- Check for conflicts with other extensions
- Shortcuts are initialized when the webview loads

---

## 📝 License

MIT License

Copyright © 2025 Robin Oliver Lucas (RL-Dev)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

## 👨‍💻 Author & Support

**Developer**: Robin Oliver Lucas
**Publisher**: RL-Dev
**Website**: [https://rl-dev.de](https://rl-dev.de)

### Support & Feedback
- 📧 Contact: [https://rl-dev.de](https://rl-dev.de)
- 💡 Feature requests and bug reports welcome via website

---

## 🙏 Acknowledgments

- Built with 8 AI provider APIs for maximum flexibility
- Inspired by GitHub Copilot and Claude
- Community feedback and contributions

---

<div align="center">

**Made with ❤️ by RL-Dev**

[](https://rl-dev.de)

*Choose your AI provider. Keep your workflow.*

</div>