# Change Log

All notable changes to the "Xendi AI" extension will be documented in this file.

## [1.2.1] - 2025-10-03

### 🧹 Packaging Improvements

#### Bug Fixes
- Fixed "nul" file causing unsafe extraction error
- Removed temporary build folders from VSIX package
- Cleaned up .vscodeignore to exclude development files
#### Package Optimization

- Removed `temp_extract/` and `temp_latest/` directories
- Excluded `*.log` files
- Excluded `IMPROVEMENTS_ROADMAP.md` (development doc)
- Excluded `tsconfig.json.disabled`
- Package size reduced from 2.8 MB to 2.73 MB
## [1.2.0] - 2025-10-03

### 🌐 Multi-Provider Support - Major Release

### ✨ 8 AI Provider Support
- LM Studio - Run models locally on your machine (existing)
- Ollama - Simple local AI with easy model management (existing)
- OpenAI - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo (NEW)
- OpenRouter - Access to 100+ models including GPT-4, Claude, Gemini (NEW)
- Anthropic - Claude 3.5 Sonnet, Claude 3 Opus (NEW)
- Cohere - Command R+, Command R (NEW)
- Groq - Ultra-fast inference with Llama, Mixtral (NEW)
- Mistral - Mistral Large, Mistral Medium (NEW)
### ⚡ Productivity Features

- Real-time Streaming: Streaming responses for all cloud providers (except Ollama)
- Keyboard Shortcuts: 15+ shortcuts for maximum productivity
  - `Ctrl+Enter` - Send message
  - `Ctrl+K` - New chat
  - `Ctrl+L` - Clear chat
  - `Ctrl+/` - Focus input
  - `Ctrl+Shift+C` - Copy last response
  - `Escape` - Cancel request
  - `Ctrl+↑/↓` - Navigate message history
  - `Ctrl+1/2/3/4` - Switch tabs
  - `Ctrl+Shift+M` - Toggle Chat/Agent mode
- Token Counter: Live token counting with cost estimation per model
  - Per-model pricing for GPT-4, Claude, Cohere, etc.
  - Session statistics tracking
  - Accurate estimates for code vs. text
- Theme Sync: Automatic light/dark theme sync with VS Code
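The per-model cost estimate described above can be sketched roughly as follows. This is an illustrative assumption, not the extension's actual code: the price table values and the 4-characters-per-token heuristic are placeholders.

```javascript
// Hypothetical per-model pricing table (USD per 1K tokens).
// Values here are illustrative, not the extension's real rates.
const PRICING_PER_1K_TOKENS = {
  'gpt-4': { input: 0.03, output: 0.06 },
  'claude-3-5-sonnet': { input: 0.003, output: 0.015 },
};

// Rough token estimate: ~4 characters per token for English text.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Estimate the cost of one exchange; returns null for unknown models.
function estimateCost(model, inputText, outputText) {
  const price = PRICING_PER_1K_TOKENS[model];
  if (!price) return null;
  const inputTokens = estimateTokens(inputText);
  const outputTokens = estimateTokens(outputText);
  return (inputTokens / 1000) * price.input + (outputTokens / 1000) * price.output;
}
```

A character-count heuristic like this tends to overestimate tokens for dense code and underestimate for prose, which is presumably why the changelog calls out separate estimates for code vs. text.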
### 🎨 UI/UX Improvements
- Provider Selection: AI Provider dropdown in Settings tab
- API Key Management: Secure API key input for cloud providers
- Fixed Minimum Sizes: Consistent UI sizing (600px body, 400px tabs, 200px chat)
- Better Tab Switching: Fixed tab navigation bugs
- Null-safe JavaScript: Comprehensive null checks for all DOM elements
### 🔧 Technical Improvements

- Provider-specific Authentication:
  - Bearer tokens for OpenAI, OpenRouter, Groq, Mistral
  - `x-api-key` header for Anthropic
  - `Authorization` header for Cohere
- Smart Default URLs: Each provider has pre-configured default endpoints
- Default Models: Pre-configured models for Anthropic and Cohere
- Streaming Implementation: Full SSE (Server-Sent Events) parsing for real-time responses
- Optimized Logging: Structured logging for better debugging
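The provider-specific authentication schemes listed above could be assembled along these lines. This is a minimal sketch: the header names follow the changelog, but the function shape, provider identifiers, and the Anthropic version string are assumptions for illustration.

```javascript
// Sketch: build request headers for each provider's auth scheme.
// Provider names and function shape are assumed for illustration.
function buildAuthHeaders(provider, apiKey) {
  switch (provider) {
    case 'openai':
    case 'openrouter':
    case 'groq':
    case 'mistral':
      // Standard Bearer token in the Authorization header.
      return { Authorization: `Bearer ${apiKey}` };
    case 'anthropic':
      // Anthropic expects the key in x-api-key plus an API version header.
      return { 'x-api-key': apiKey, 'anthropic-version': '2023-06-01' };
    case 'cohere':
      // Cohere also takes the key via the Authorization header.
      return { Authorization: `Bearer ${apiKey}` };
    default:
      // Local servers (LM Studio, Ollama) need no API key.
      return {};
  }
}
```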
### 🐛 Bug Fixes

- Fixed tab switching not working (ID mismatch resolved)
- Fixed null reference errors in `addEventListener` calls
- Fixed TypeScript compilation overwriting extension.js
- Fixed streaming cursor animation
- Fixed settings not persisting correctly
### 📚 Documentation
- Completely overhauled README.md with 8 provider setup guides
- Added keyboard shortcuts documentation
- Added token counter and cost estimation docs
- Updated troubleshooting section
- Added a provider comparison table
### 🔄 Migration Notes
- Existing LM Studio and Ollama configurations will continue to work
- New users can choose any of the 8 supported providers
- API keys are stored securely in VS Code globalState
- No breaking changes to existing functionality
## [1.1.3] - 2025-10-02

### 🐍 Python Virtual Environment Support
- Automatic venv Detection: The agent now detects `venv/`, `.venv/`, and `env/` directories
- Auto-Activation: Python virtual environments are automatically activated before running commands
- Seamless pip/python: No need to manually activate the venv - just use `pip install` or `python` directly
- Smart Command Wrapping: Commands are wrapped with `source venv/bin/activate &&` automatically
### 🔧 Developer Experience
- Agent knows about venv: Updated system prompt to inform agent about automatic venv activation
- Better Python Development: Agent can now install packages and run Python code without environment issues
- Cross-Project Support: Works with any Python project structure (venv, .venv, or env)
## [1.1.2] - 2025-10-02

### 🎯 Agent Improvements
- Fixed Topic Switching: Agent now stays focused on one task until completion
- No More Task Jumping: Agent won't suggest or start additional work unless requested
- Laser Focus: Clear instructions to work ONLY on the current user request
### 🎨 UI Improvements
- Command Result Dropdowns: All agent tool outputs now display in collapsible dropdowns
- Tool Icons: Visual icons for each command type (🔍 glob, 📄 readFile, ⚡ runShellCommand, etc.)
- Cleaner Agent Chat: Command results are collapsed by default for better readability
- Consistent Design: Dropdown style matches thinking process display
### 🐛 Bug Fixes
- Fixed agent wandering off-topic during multi-step tasks
- Improved command output formatting in agent mode
## [1.1.1] - 2025-10-02

### 🚀 Performance Improvements

- Optimized API Requests: Added connection keep-alive and request timeouts (60s) for faster communication
- Reduced Token Limits: Decreased `max_tokens` from 2000 to 1000 for faster response times
- Better Server Communication: Improved fetch configuration with abort controllers
### 🎨 UI Improvements
- Redesigned Statistics Panel: New clean, professional design matching extension theme
- Removed Emojis: Statistics now use text labels (FILES, LINES, TOKENS) instead of emoji icons
- Enhanced Styling: Monospace font for values, uppercase labels, purple accent colors
- Better Layout: Vertical stat items with clear separation and improved readability
### 🐛 Bug Fixes
- Fixed statistics panel visual consistency with extension design
- Improved stats visibility toggle with CSS classes
## [1.1.0] - 2025-10-02

### ✨ New Features

#### 🧠 Thinking Process Display
- AI Reasoning Visualization: Models that output a thinking process (using `<think>` tags) now show an expandable dropdown
- Transparent AI Decisions: See exactly how the AI reasons through problems
- Collapsible Interface: Keeps the chat clean while making reasoning available on demand
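Splitting a model response into reasoning and answer, as described above, might look like the following. This is a sketch under the assumption of at most one `<think>` block per response; the extension's actual parser is not shown in this changelog.

```javascript
// Extract a <think>...</think> reasoning block from a model response.
// Returns the reasoning text (for the collapsible dropdown) and the
// remaining answer text separately.
function splitThinking(response) {
  const match = response.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { thinking: null, answer: response.trim() };
  return {
    thinking: match[1].trim(),
    answer: response.replace(match[0], '').trim(),
  };
}
```

With `thinking` separated out, the UI can render it collapsed by default and show only `answer` in the main chat bubble.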
#### 📊 Agent Statistics & Monitoring
- Real-time Stats Display: See files created/modified, lines added/deleted, tokens used
- Task Plan View: Visual overview of agent's execution plan and progress
- File Diff Links: Clickable file paths with diff view (like GitHub Copilot)
- Performance Metrics: Track token usage and operation efficiency
#### 🔄 Ollama Support

- Multiple LLM Backends: Now supports both LM Studio and Ollama
- Server Type Selection: Easy switching between LM Studio (`localhost:1234`) and Ollama (`localhost:11434`)
- Auto-detection: Automatically handles different API formats
#### 💬 Enhanced Conversation Management
- Auto-generated Titles: Titles automatically created from first message
- Timestamp Display: Shows relative time ("5 Min.", "2 days") or full date/time
- Context Preservation: Full conversation history maintained across all messages
- Delete with Confirmation: Safe conversation deletion with German/English prompts
- Quick Actions: "New Conversation" button in chat for instant access
#### 🎨 Markdown Rendering
- Full Markdown Support: Renders code blocks, lists, headers, bold, italic
- Syntax Highlighting: Language-specific code block rendering
- Clean Formatting: Professional display of AI responses
#### 🌐 Context Reset Commands

- `vergiss alles` (German)
- `alles vergessen` (German)
- `reset context` (English)
- `clear context` (English)
- `forget everything` (English)
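Detecting these reset phrases could be as simple as a normalized set lookup. The phrases come from the list above; the normalization rules (case folding, trailing punctuation) are an assumption about how matching might work.

```javascript
// Reset phrases recognized by the extension, as listed above.
const RESET_COMMANDS = new Set([
  'vergiss alles',     // German
  'alles vergessen',   // German
  'reset context',     // English
  'clear context',     // English
  'forget everything', // English
]);

// Check whether a user message is a context-reset command.
// Sketch: case-insensitive, ignores surrounding whitespace and
// trailing punctuation.
function isResetCommand(message) {
  const normalized = message.trim().toLowerCase().replace(/[.!?]+$/, '');
  return RESET_COMMANDS.has(normalized);
}
```

When a match is found, the extension would drop the stored conversation history before sending the next request, so the model starts from a clean context.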
### 🐛 Bug Fixes
- Fixed conversation context not persisting between messages
- Fixed agent losing context during multi-step tasks
- Improved error handling for network issues
### 🔧 Technical Improvements
- Better API compatibility (LM Studio + Ollama)
- Optimized conversation history handling
- Enhanced markdown rendering engine
- Improved statistics tracking
## [1.0.0] - 2025-10-02

### 🎉 Initial Release

Xendi AI - Autonomous AI Coding Assistant for VS Code

### ✨ Features

#### 🤖 Agent Mode
- Autonomous task completion - Works like GitHub Copilot/Claude
- Multi-step workflow - Explores, analyzes, fixes, and tests automatically
- Smart tool execution - Automatically reads files, finds bugs, implements fixes
- Continuous iteration - Keeps working until task is 100% complete
- Workspace awareness - Knows project structure and context
#### 💬 Chat Mode
- Interactive AI assistant for coding questions
- Code explanations and suggestions
- Real-time code generation
#### 🛠️ Powerful Tools
- File Operations: Read, write, search, replace
- Code Analysis: Pattern search, bug detection
- Shell Commands: Run tests, builds, install packages
- Web Integration: Google search, web content fetching
- Memory: Save and recall information
#### 🌍 Universal Language Support
- JavaScript/TypeScript
- Python
- Java, C++, C#
- Go, Rust
- PHP, Ruby, Swift, Kotlin
- And many more...
#### 🔒 Privacy & Security
- 100% local execution with LM Studio
- No cloud services
- No telemetry or tracking
- Your code stays on your machine
#### 🎨 User Experience
- Clean, intuitive interface
- Multilingual UI (English/German)
- Comprehensive logging system
- Auto-approve tools for seamless workflow
- Model selection and switching
### 🔧 Technical Details

- LM Studio Integration: Seamless connection to a local LLM server
- Workspace Tools: Full VS Code workspace API integration
- Conversation History: Persistent chat and agent conversations
- Error Handling: Robust error detection and recovery
- Performance: Optimized file operations with caching
### 📝 Configuration
- Customizable LM Studio server URL
- Language selection (English/German)
- Auto-approve settings for tools
- Model temperature and token settings
### 🎯 Best For
- Debugging: Find and fix bugs automatically
- Refactoring: Improve code structure and quality
- Feature Development: Build new features end-to-end
- Learning: Understand code and get explanations
- Testing: Run and fix test failures
### 🙏 Credits
- Built with LM Studio API
- Inspired by GitHub Copilot and Claude
- Community feedback and testing
**Developer:** Robin Oliver Lucas (RL-Dev)
**License:** MIT
**Website:** https://rl-dev.de
For support and feedback:
- GitHub Issues: https://github.com/RL-Dev/xendi-ai/issues
- Website: https://rl-dev.de