Uneven AI runs on your machine — no telemetry, no cloud lock-in. It watches your terminal, fixes errors in real time, audits security continuously and logs everything. Now with a conversational shell, GPU auto-detection and legacy COBOL and Assembly support.
```shell
npm install uneven-ai
```

Uneven AI spawns alongside your dev process, reading stdout and stderr in real time. The moment an error appears, it diagnoses the cause, applies a surgical fix and writes a detailed entry to .uneven-ai/log.md — with file, line, column, diff and recommendation.
Every power works locally, in real time, configured via a single TypeScript object.
Run uneven-ai with no arguments to open a natural-language shell. Type in any language — Uneven understands your intent and routes it to the right command. No flags to memorise. Free tier.
free · any language

Parses stack traces with file, line and column precision. Supports TS/JS, Python, Rust, Go, Java, PHP, Ruby, COBOL and Assembly. Applies surgical, minimal diffs — never rewrites more than needed.
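To make the file/line/column extraction above concrete, here is a minimal sketch of a stack-frame parser for V8-style (Node.js) traces. The function name, interface and regex are illustrative assumptions, not Uneven AI's internals.

```typescript
// Illustrative sketch only, not Uneven AI's actual parser.
// Matches V8-style frames such as: "    at handler (/src/app.ts:42:7)"
interface StackLocation {
  file: string
  line: number
  column: number
}

function parseStackLine(frame: string): StackLocation | null {
  // Trailing "path:line:column", optionally wrapped in parentheses.
  const match = frame.match(/\(?([^()\s]+):(\d+):(\d+)\)?\s*$/)
  if (!match) return null
  return {
    file: match[1],
    line: Number(match[2]),
    column: Number(match[3]),
  }
}

// parseStackLine('    at handler (/src/app.ts:42:7)')
// → { file: '/src/app.ts', line: 42, column: 7 }
```

A real fix engine would parse the whole trace and per-language frame formats; this only shows the per-frame idea.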
bank-ready · surgical diff

Spawns alongside any running process — npm, cargo, python, node, any command. Captures stdout and stderr in real time. Zero CPU overhead when idle.
real-time · zero overhead

Static and active security scanning with built-in Ethical Guardrails. Proactively blocks malware generation and exploit creation while enforcing strict authorization scopes for active testing.
ethical guardrails · owasp

LLaMA 3.2 runs fully offline with GPU acceleration. On first run, Uneven detects your hardware and installs the optimised binary for NVIDIA CUDA or Apple Metal — no manual steps. Swap to any cloud brain via one config line.
gpu auto-install · offline

Indexes PDF, Excel (.xlsx), Word (.docx) and CSV files into a semantic knowledge base alongside your source code. Query business logic and documentation in natural language.
pdf · xlsx · docx · csv

Every event is recorded with timestamps, file paths and code diffs. Forensic activity history is stored in structured Markdown — fully auditable and kept locally. Nothing ever leaves your machine.
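As a sketch of what a structured Markdown entry like this could look like, here is a hypothetical formatter. The field names and layout are assumptions; the real log schema may differ.

```typescript
// Hypothetical sketch of a structured Markdown log entry.
// Field names and layout are assumptions, not Uneven AI's actual schema.
interface LogEntry {
  timestamp: string
  file: string
  line: number
  column: number
  diff: string
  recommendation: string
}

function toMarkdown(entry: LogEntry): string {
  // Indent the diff by four spaces so it renders as a Markdown code block.
  const indentedDiff = entry.diff
    .split('\n')
    .map((l) => '    ' + l)
    .join('\n')
  return [
    `## ${entry.timestamp} ${entry.file}:${entry.line}:${entry.column}`,
    '',
    indentedDiff,
    '',
    `Recommendation: ${entry.recommendation}`,
  ].join('\n')
}
```

Appending entries like this to a single Markdown file keeps the history both human-readable and greppable.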
markdown log · local only

Detects reverse shells, obfuscation, crypto mining, compromised dependencies and supply chain attacks. Audits AI-generated code suggestions for maximum safety. CI-compatible exit codes.
uneven scan

Natural language → Insights. Connect to PostgreSQL, MySQL, SQLite or MongoDB. 3-Layer Security (Schema Filter, SQL Audit, Result Redaction) ensures sensitive data never reaches the AI or your screen.
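The three layers are only named in the text; as a minimal sketch, here is what the third layer, result redaction, could do before rows reach the model or the screen. The sensitive-column pattern and mask format are illustrative assumptions.

```typescript
// Illustrative sketch of a result-redaction pass (layer 3).
// The sensitive-column pattern and mask string are assumptions.
const SENSITIVE = /password|secret|token|ssn|card/i

type Row = Record<string, unknown>

function redactRows(rows: Row[]): Row[] {
  return rows.map((row) =>
    Object.fromEntries(
      Object.entries(row).map(([key, value]) =>
        // Mask any column whose name looks sensitive; pass others through.
        SENSITIVE.test(key) ? [key, '***REDACTED***'] : [key, value]
      )
    )
  )
}
```

A production version would also consult the schema filter's allowlist rather than rely on column-name heuristics alone.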
3-layer security · nl → sql

Headless pipeline: TypeScript typecheck → malware scan → test suite. Exit code 0 = pass, 1 = fail. Writes step summaries to GitHub Actions.
uneven ci · github actions

Uneven runs a local LLM by default — zero cloud, zero API key. The brain is pluggable. Swap to any provider via a single config line without changing anything else.
Runs LLaMA 3.2 fully on your machine. GPU auto-detected and optimised on first run — NVIDIA CUDA and Apple Metal supported. 100% offline, no API key, no subscriptions. Recommended for privacy.
provider: 'local'

Gemini 2.0 Flash & Pro. The best cloud-value brain. Superior performance at minimal cost. Recommended as the primary cloud fallback.
provider: 'gemini'

Delegates inference to a local Ollama daemon. Supports any model in the Ollama library. Requires Ollama installed on your machine.
provider: 'ollama'

GPT-4o and O1 models. Industry-standard intelligence. Connect your OpenAI account while keeping your project data local and safe.
provider: 'openai'

Claude 3.5 Sonnet and Opus. Exceptional coding reasoning. Best for complex architectural tasks when cloud usage is preferred.
provider: 'claude'

```typescript
import { UnevenConfig } from 'uneven-ai'

const config: UnevenConfig = {

  // brain engine
  brain: {
    provider: 'local', // or 'openai' | 'claude' | 'gemini'
    model: 'llama3.2',
    temperature: 0.3,
  },

  // knowledge sources
  knowledge: {
    dirs: ['./src', './docs'],
    files: ['./README.md'],
    db: { url: process.env.DATABASE_URL },
    urls: ['https://docs.example.com'],
  },

  // terminal watcher
  watch: {
    terminal: 'npm run dev',
    autoFix: true,
    confirmBeforeFix: false,
  },

  // security analysis
  pentester: {
    enabled: true,
    mode: 'active',
    target: 'http://localhost:3000',
    static: { owasp: true, secrets: true, deps: true },
    bruteforce: { enabled: true, detectRateLimit: true },
  },

  // log
  log: {
    path: './.uneven-ai/log.md',
    includeDiff: true,
  },

}

export default config
```
Install Uneven AI globally or as a dev dependency. The Rust core compiles automatically on first install via napi-rs.
```shell
npm install uneven-ai
```

Scaffolds uneven-ai.config.ts, creates the .uneven-ai/ folder and downloads the default LLM model (~1.5GB, one time only).
```shell
npx uneven-ai init
```

Indexes your knowledge base, spawns alongside your dev process and starts watching. The full agent in one command.
```shell
npx uneven-ai start
```

Uneven's Master Series now features **Modular Security Domains**. By isolating business logic from infrastructure, we ensure that sensitive code never leaves your local machine. Every AI suggestion is audited against a defensive constitution so that malware and exploits never reach your source code.
⚠ Active mode should only be used on systems you own or have explicit authorization to test.
Opens the conversational shell. Type in any language — Uneven understands your intent and routes it to the right command. Available on the free tier.
Scaffolds uneven-ai.config.ts, creates the .uneven-ai/ folder, downloads the default LLM model (~1.5GB, one time only) and auto-detects your GPU (NVIDIA CUDA, Apple Metal) to install the optimised binary.
Indexes all configured knowledge sources — local files, database tables and external URLs — into the local vector store.
Starts the full agent: terminal watcher, file watcher, knowledge base and pentester if configured. The primary entry point.
Starts only the terminal watcher. Reads stdout/stderr of your dev process in real time. Pass --cmd to specify the process.
Queries the knowledge base directly from the terminal. The agent searches your indexed project, DB and docs to answer. Unlimited queries on all plans.
Explains any file in plain language — purpose, key functions, dependencies and caveats. Free tier.
Generates Markdown documentation for a source file and writes it alongside it. Free tier.
AI writes or edits files to accomplish the described task, running in the background. Pro feature.
AI code review of the latest commit. Pass --commit <hash>, --from <ref> or --staged for custom scope. Pro feature.
Runs the security analysis engine manually. Performs static and/or active testing based on your pentester config.
Scans for malware and compromised dependencies. Detects reverse shells, obfuscation, supply chain attacks and typosquatting. Exit code 1 on critical/high findings (CI-compatible).
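To illustrate the "exit code 1 on critical/high findings" contract, here is a tiny sketch mapping scan findings to an exit code. The Finding shape and severity names are assumptions, not Uneven AI's actual report type.

```typescript
// Illustrative sketch: maps scan findings to the CI-compatible exit-code
// contract described above (1 on critical/high findings, 0 otherwise).
// The Finding shape is an assumption, not Uneven AI's actual report type.
type Severity = 'low' | 'medium' | 'high' | 'critical'

interface Finding {
  rule: string
  severity: Severity
}

function scanExitCode(findings: Finding[]): number {
  const blocking = findings.some(
    (f) => f.severity === 'critical' || f.severity === 'high'
  )
  return blocking ? 1 : 0
}
```

Because the contract is just an exit code, any CI system can gate a merge on it without parsing the report.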
AI data analyst interactive REPL. Natural language → SQL → execute → Excel/dashboard. Pass --db <url> for PostgreSQL, MySQL, SQLite or MongoDB. --package-exe exports as a portable .exe.
Headless CI pipeline: TypeScript typecheck → malware scan → test suite. Exits 0 on pass, 1 on fail. Use --github to write step summaries to GitHub Actions.
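The pass/fail semantics of the pipeline can be sketched as sequential steps that short-circuit on the first failure. The runner below is a stand-in under that assumption, not the real implementation; the step names come from the text.

```typescript
// Stand-in sketch: runs named steps in order and maps the outcome to
// the CI contract described above (exit 0 on pass, 1 on fail).
type Step = { name: string; run: () => boolean }

function runPipeline(steps: Step[]): number {
  for (const step of steps) {
    if (!step.run()) return 1 // first failing step fails the whole pipeline
  }
  return 0
}

// The three steps named above, with trivially passing bodies:
const code = runPipeline([
  { name: 'typecheck', run: () => true },
  { name: 'malware scan', run: () => true },
  { name: 'tests', run: () => true },
])
// code === 0
```

Short-circuiting keeps CI fast: a typecheck failure skips the scan and test suite entirely.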
Starts a local HTTP/HTTPS server so Discord bots, Slack bots and external scripts can send natural-language messages to Uneven and receive JSON responses. Configure the port, bearer-token auth and HTTPS in uneven-ai.config.ts.
Displays the current activity log in the terminal. Shows all errors, fixes and security findings.
Clears the knowledge base and re-indexes everything from scratch. Use when the knowledge base is stale.
```typescript
import { Uneven } from 'uneven-ai'

const ai = new Uneven(config)

await ai.init()      // index + setup
await ai.watch()     // start agent
await ai.pentest()   // run security
await ai.ask('...')  // query KB
await ai.stop()      // cleanup

// Events
ai.on('error-detected', fn)
ai.on('fix-applied', fn)
ai.on('pentest-finding', fn)
ai.on('indexed', fn)
```