Changelog

Release history and development roadmap. Uneven AI follows semantic versioning.

The first public npm release is v0.4.0 — April 11, 2026. Versions v0.1–v0.3 represent the internal development history shipped as part of that release.
v0.1 · RELEASED · Early 2026
  • Rust core scaffold + napi-rs bridge
  • TypeScript public API + type declarations
  • Basic CLI: uneven-ai init, uneven-ai ask
  • Local LLM inference via Candle (LLaMA 3.2 1B Q8)
  • External brain providers: OpenAI, Claude, Gemini, Ollama
v0.2 · RELEASED · Early 2026
  • Embeddings generation via Candle (1024-dim, L2 normalized)
  • Vector store integration (usearch HNSW)
  • File and directory indexing (Rust)
  • Database connectors: PostgreSQL, MySQL, SQLite, MongoDB
  • URL loader and docs scraping (undici + cheerio)
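Because the embeddings are L2-normalized before they enter the vector store, cosine similarity collapses to a plain dot product, which is what HNSW indexes such as usearch compute fastest. A minimal sketch of that normalization step (function names are illustrative, not part of Uneven's API):

```typescript
// Normalize a vector to unit length (L2 norm = 1). Illustrative only;
// the real pipeline does this in Rust over 1024-dim Candle embeddings.
function l2Normalize(vec: number[]): number[] {
  const norm = Math.sqrt(vec.reduce((s, x) => s + x * x, 0));
  if (norm === 0) return vec.slice(); // avoid division by zero
  return vec.map((x) => x / norm);
}

// On unit vectors, the dot product equals cosine similarity.
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}
```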
v0.3 · RELEASED · Early 2026
  • Terminal watcher via tokio::process
  • Stack trace error parser (TS, JS, Python, Rust, Go, Java, PHP, Ruby)
  • Retrieval-Augmented Fix (RAF) — KB semantic search before pattern-match fallback
  • Auto-fix engine via similar crate (surgical diffs)
  • File watcher with auto re-indexing (notify)
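Per language, the stack trace parser's job amounts to recognizing frame lines and extracting function, file, line, and column. A hedged sketch for JS/TS-style frames only; the interface and regex below are assumptions, not the shipped parser:

```typescript
// One frame of a V8-style stack trace. Field names are illustrative.
interface Frame { fn: string; file: string; line: number; col: number }

// Matches lines like: "    at doWork (/app/src/index.ts:42:13)"
const FRAME_RE = /^\s*at (.+?) \((.+?):(\d+):(\d+)\)$/;

function parseJsFrame(text: string): Frame | null {
  const m = FRAME_RE.exec(text);
  if (!m) return null; // not a frame line; the real parser tries other shapes
  return { fn: m[1], file: m[2], line: Number(m[3]), col: Number(m[4]) };
}
```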
v0.4 · RELEASED · April 2026
  • Pentester static mode (OWASP Top 10, secrets, CVEs, injections, headers)
  • Pentester active mode with SHA-256 signed authorization scope
  • Malware scanner: 8 categories, CI-compatible exit code (uneven scan)
  • AI data analyst: natural language → SQL → Excel/dashboard (uneven analyze)
  • CI/CD headless pipeline with GitHub Actions integration (uneven ci)
  • Session state protocol (.uneven/session.json)
  • Security report generator: HTML + Markdown output
  • License system: free / Pro tiers with machine fingerprinting and offline grace period
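One plausible way to bind an active-pentest authorization scope to a SHA-256 signature is an HMAC over the serialized scope, verified in constant time. The field names and construction below are assumptions, not Uneven's actual scheme:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";
import { Buffer } from "node:buffer";

// Hypothetical authorization scope for an active pentest run.
interface Scope { target: string; expires: string }

// Sign the serialized scope with HMAC-SHA-256 (illustrative construction).
function signScope(scope: Scope, key: string): string {
  return createHmac("sha256", key).update(JSON.stringify(scope)).digest("hex");
}

// Recompute and compare in constant time to resist timing attacks.
function verifyScope(scope: Scope, sig: string, key: string): boolean {
  const expected = signScope(scope, key);
  if (expected.length !== sig.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(sig));
}
```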
v0.7 · RELEASED · April 2026
  • Process lock protocol — exclusive .uneven/uneven.lock prevents concurrent Uneven instances from corrupting state
  • Atomic index saves — a crash during indexing no longer corrupts the previously saved index
  • Logger write queue — serializes all log writes to prevent race conditions
  • Throttled security scanners — pentest runs at most 3 concurrent filesystem walkers (was 8)
  • Timeout kill switch — all LLM inference, DB queries, web fetches and git operations have hard deadlines
  • Index preview — cost and time estimate per brain provider before committing to indexing
  • User-selectable DB tables for data analysis + LLM-suggested analysis ideas
  • Excel folder reading for data analysis (uneven analyze)
  • Humanized CLI help screen — plain-language descriptions for every command
v0.11.1 · MASTER SERIES · April 2026
  • Hardware Acceleration: Automatic GPU/CUDA offloading (32 layers max) for lightning-fast local inference
  • Ethical Guardrails: Built-in defensive system that prevents malware, exploit, and cracking generation
  • Advanced Document Intelligence: Native support for PDF, Word (.docx), Excel (.xlsx), and CSV Pro (industrial parsing)
  • Strategic AI Recommendations: Guided defaults favouring local AI and Google Gemini Flash for privacy and cost efficiency
  • System Diagnostics: New `uneven info` command for real-time hardware, VRAM, and version auditing
  • Memory Hardening: Silent model auto-unload (10min) and anti-loop safety (5-try ceiling)
  • Local Data Analyst: Direct natural language queries over local CSV and Excel files without DB requirements
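The anti-loop safety ceiling can be sketched as a bounded retry wrapper: after five failed attempts the last error is surfaced instead of retrying forever. The helper name and synchronous shape are illustrative; the real engine is presumably async:

```typescript
// Give a repair routine at most `maxTries` attempts, then stop looping
// and report the last failure. Illustrative helper, not Uneven's API.
function withRetryCeiling<T>(attempt: () => T, maxTries = 5): T {
  let lastErr: unknown;
  for (let tries = 0; tries < maxTries; tries++) {
    try {
      return attempt();
    } catch (err) {
      lastErr = err; // remember the failure and retry
    }
  }
  throw lastErr; // ceiling reached: surface the error instead of spinning
}
```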
v1.0 · RELEASED · April 2026
  • Stable public release — production-ready across all core features
  • Streaming AI responses — output appears token by token instead of waiting for the full answer
  • Human-centric fix approval — alertOnly mode suggests fixes for your review before applying anything
  • Dynamic autofix toggle — switch between autonomous and assistant mode mid-session with --autofix
  • Faster knowledge base — significantly smaller index files and faster search queries
  • SBOM generation (uneven sbom) — export a software bill of materials for your project
  • Improved provider support — updated model defaults and correct API endpoints for all external providers
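Token-by-token streaming maps naturally onto an async generator: the consumer renders each chunk as it arrives instead of waiting for the full answer. A toy sketch with a faked source, since the real transport depends on the brain provider:

```typescript
// Fake token stream: yields one word at a time. In practice each chunk
// would arrive from the provider's streaming API.
async function* streamTokens(answer: string): AsyncGenerator<string> {
  for (const token of answer.split(" ")) {
    yield token;
  }
}

// Consume the stream; a CLI would print each token as it arrives.
async function collect(gen: AsyncGenerator<string>): Promise<string[]> {
  const out: string[] = [];
  for await (const t of gen) out.push(t);
  return out;
}
```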
v1.0.8 · RELEASED · April 2026
  • Conversational shell — run uneven-ai with no arguments to open a natural-language interface in any language; available on the free tier
  • GPU auto-upgrade — uneven-ai init detects your GPU (NVIDIA CUDA, Apple Metal) and installs the optimised binary automatically; no manual steps or recompilation
  • New commands: explain <file> (explain any file in plain language), docs <file> (generate Markdown documentation for a file), review (AI code review — Pro), askf <task> (AI writes or edits files — Pro)
  • Free-tier conversational responses — shell responses respect your configured token limit; bring your own API key and chat with no query caps
  • Background task concurrency limit — shell safely queues concurrent background jobs to prevent resource exhaustion
  • Local model stability — concurrent inference requests are now handled correctly; timed-out requests no longer produce partial output
  • Config write safety — configuration changes are now written atomically to prevent data loss on unexpected shutdown
  • Race condition fixes — concurrent knowledge base saves and session state operations are now correctly serialised
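The serialisation fixes boil down to a write queue that chains each job onto the previous promise, so two concurrent saves can never interleave. A minimal sketch; the class and method names are illustrative:

```typescript
// Serialises async jobs: each enqueued job starts only after the previous
// one settles, whether it resolved or rejected.
class WriteQueue {
  private tail: Promise<void> = Promise.resolve();

  enqueue(job: () => Promise<void>): Promise<void> {
    // Chain on both fulfillment and rejection so one failed save
    // does not stall every later one.
    this.tail = this.tail.then(job, job);
    return this.tail;
  }
}
```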
v1.1.8 · RELEASED · April 2026
  • 3-Layer Privacy Protection: Enhanced data safety suite with Schema Filtering, Semantic SQL Auditing, and Live Result Redaction
  • Forensic Audit Logs: Detailed local history of AI actions and system fixes stored in structured Markdown format
  • Autonomous Security Pentester: Validated engine for detecting hardcoded secrets, injections, and infrastructure header flaws
  • Universal Data Context: Seamlessly index and query .csv and .xlsx files for automated data processing tasks
v1.1.9 · RELEASED · April 2026
  • Attacker Attribution: Active pentest probes now silently transmit the origin IP for trace purposes
  • Threat Detection: Terminal watcher identifies incoming Uneven probes and alerts the administrator
  • License Hardening: Active Pentester mode now requires a Team tier license
  • Shell Polish: Improved Portuguese intent classification and fixed false success log notifications
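Live Result Redaction in v1.1.8 can be sketched as a pass that scrubs sensitive-looking values before query rows reach the model. The patterns below are assumptions for illustration, not the shipped rules:

```typescript
// Illustrative redaction patterns: emails and API-key-shaped tokens.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const KEY_RE = /\b(?:sk|pk)-[A-Za-z0-9]{16,}\b/g;

// Replace sensitive-looking substrings in every field of a result row.
function redactRow(row: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [k, v] of Object.entries(row)) {
    out[k] = v.replace(EMAIL_RE, "[REDACTED]").replace(KEY_RE, "[REDACTED]");
  }
  return out;
}
```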