# @opencodereview/cli

> AI code quality scanner for the terminal — detect hallucinated packages, phantom dependencies, stale APIs, and logic gaps in seconds. Open-source, runs locally, zero API cost.

## ✨ Why?

AI code assistants generate code fast — but they hallucinate packages, reference outdated APIs, and leave logic gaps. `open-code-review` catches these AI-specific defects before they ship.
## 🚀 Quick Start

```bash
# One-off scan, no install needed
npx @opencodereview/cli scan .

# Or install globally
npm install -g @opencodereview/cli
ocr scan .
```

That's it. The CLI scans your project and prints a quality report to the terminal.
## 📦 Installation

```bash
# Global install
npm install -g @opencodereview/cli

# Or run without installing
npx @opencodereview/cli scan .
```

The CLI provides two binary names: `open-code-review` and `ocr` (shorthand).
## 📋 Commands

### `scan [path]` — Scan for AI-generated defects (V4, default)

```bash
ocr scan .                                       # scan the current directory
ocr scan ./src                                   # scan a specific path
ocr scan . --sla L2                              # standard-depth scan
ocr scan . --sla L3                              # deep scan
ocr scan . --diff                                # only files changed vs origin/main
ocr scan . --format json --output report.json
ocr scan . --format sarif --output report.sarif
ocr scan . --format html --output report.html
ocr scan . --format markdown
ocr scan . --locale zh                           # Chinese output
ocr scan . --exclude "**/test/**,**/*.test.*"
ocr scan . --offline                             # skip registry verification
ocr scan . --no-score                            # list issues without scoring
```
### `scan-v3 [paths...]` — Legacy V3 scan

```bash
ocr scan-v3 ./src --threshold 80 --format json
ocr scan-v3 ./src --heal
```
### `init` — Create configuration file

```bash
ocr init
```
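The scan options below map naturally onto config keys. The following `.ocrrc.yml` is a hypothetical sketch — the key names are assumed to mirror the CLI flags; run `ocr init` and inspect the generated file for the actual schema.

```yaml
# Hypothetical .ocrrc.yml — key names assumed to mirror the scan flags
sla: L2
locale: en
format: terminal
exclude:
  - "**/test/**"
  - "**/*.test.*"
offline: false
```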
### `login` — Set up license key

```bash
ocr login
```
### `config` — View or update configuration

```bash
ocr config show
ocr config set license AICV-XXXX-...
ocr config set cloud-url https://...
ocr config set api-key your-key
```
## ⚙️ V4 Scan Options

| Option | Description | Default |
| --- | --- | --- |
| `--sla <level>` | SLA level: `L1` (fast), `L2` (standard), `L3` (deep) | `L1` |
| `--locale <locale>` | Output language: `en`, `zh` | `en` |
| `--format <fmt>` | Output format: `terminal`, `json`, `sarif`, `markdown`, `html` | `terminal` |
| `--diff` | Scan only changed files (vs `origin/main`) | off |
| `--base <ref>` | Base branch for diff | `origin/main` |
| `--head <ref>` | Head branch for diff | `HEAD` |
| `--config <path>` | Custom config file path | `.ocrrc.yml` |
| `--offline` | Skip registry verification | off |
| `--include <patterns>` | File patterns to include (comma-separated) | (auto-detect) |
| `--exclude <patterns>` | File patterns to exclude (comma-separated) | (none) |
| `--ai-local-model <name>` | Ollama model for L2/L3 | (default) |
| `--ai-local-url <url>` | Ollama base URL | `http://localhost:11434` |
| `--ai-remote-provider` | Remote AI provider: `openai`, `anthropic` | — |
| `--ai-remote-model <name>` | Remote AI model name | — |
| `--ai-remote-key <key>` | Remote AI API key | — |
| `--no-score` | Skip scoring, just list issues | off |
| `--json` | Shorthand for `--format json` | off |
| `--output <path>` | Write report to file | (stdout) |
| `--license <key>` | License key | — |
### Environment Variables

| Variable | Description |
| --- | --- |
| `OCR_API_KEY` | Remote AI API key |
| `OCR_SLA` | Default SLA level |
| `OCR_LOCALE` | Default locale |
| `OCR_OLLAMA_URL` | Ollama base URL |
| `OCR_OLLAMA_MODEL` | Ollama model name |
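These variables set defaults so repeated runs don't need the corresponding flags every time. A minimal sketch (the values are examples):

```bash
# Environment variables act as defaults for the corresponding CLI flags
export OCR_SLA=L2                              # same effect as --sla L2
export OCR_LOCALE=en                           # same effect as --locale en
export OCR_OLLAMA_URL=http://localhost:11434   # Ollama base URL for L2/L3
echo "defaults: SLA=$OCR_SLA locale=$OCR_LOCALE"
# ocr scan .   # would now pick up the defaults set above
```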
## 📊 Output Formats

### Terminal (default)

```
Open Code Review V4
SLA: L1 | Locale: en

Scanning...
Found 3 issue(s) in 12 file(s)

🔴 error    src/auth.ts:12  Package `@supabase/auth-helpers` not found in registry
⚠️ warning  src/date.ts:5   Deprecated API `moment().format()` used
ℹ️ info     src/api.ts:23   Unused variable `tempResult`

Score: 78/100 (C) — Threshold: 70 ✅ Passed
```
### JSON

```bash
ocr scan . --format json
```
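A common use for the JSON output is gating CI on the score. The sketch below assumes the report has a numeric `score` field (suggested by the terminal output, but not documented here — inspect your own `report.json`); the stand-in file exists only so the example runs without the scanner.

```bash
# Stand-in report so the example is self-contained; a real one comes from:
#   ocr scan . --format json --output report.json
echo '{"score": 78, "issues": []}' > report.json

# Naive extraction of an assumed "score" field; prefer jq or a real JSON
# parser in production pipelines.
score=$(grep -o '"score": *[0-9][0-9]*' report.json | grep -o '[0-9][0-9]*')
echo "score=$score"
if [ "$score" -lt 70 ]; then
  echo "Quality gate failed (score $score < 70)" >&2
  exit 1
fi
```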
### SARIF

```bash
ocr scan . --format sarif --output report.sarif
```
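SARIF output pairs naturally with GitHub code scanning, which surfaces findings as PR annotations. An illustrative CI fragment (the upload step is the standard `github/codeql-action/upload-sarif` action, not part of this tool):

```yaml
# Illustrative: publish the SARIF report to GitHub code scanning
- name: Scan
  run: npx @opencodereview/cli scan . --format sarif --output report.sarif
- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: report.sarif
```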
### HTML

```bash
ocr scan . --format html --output report.html
```
## 🔗 GitHub Action Integration

Open Code Review also runs as a GitHub Action. Use it in CI to automatically review every PR:

```yaml
- name: Open Code Review
  uses: raye-deng/open-code-review@v1
  with:
    sla: L1
    threshold: 70
    github-token: ${{ secrets.GITHUB_TOKEN }}
```

Or use the CLI directly in your workflow:

```yaml
- name: Scan with CLI
  run: npx @opencodereview/cli scan . --format json --output report.json
```
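Put together, a minimal workflow file might look like this. The workflow name, trigger, and artifact step are illustrative additions; only the scan step itself comes from the docs above.

```yaml
# .github/workflows/code-review.yml — illustrative; adjust triggers to taste
name: Open Code Review
on: [pull_request]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan with CLI
        run: npx @opencodereview/cli scan . --format json --output report.json
      - name: Upload report
        uses: actions/upload-artifact@v4
        with:
          name: ocr-report
          path: report.json
```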
## 📋 Scan Levels

| Level | Checks | Speed | AI required |
| --- | --- | --- | --- |
| L1 | AST analysis: hallucinated packages, stale APIs, dead code, logic gaps | ⚡ ~5s | No |
| L2 | L1 + embedding recall for deeper pattern matching | 🚀 ~30s | Optional (Ollama) |
| L3 | L2 + LLM deep analysis for nuanced code review | 🐢 ~2min | Yes (Ollama / Cloud) |
## 🔒 Privacy

- **L1 & L2 (TF-IDF):** 100% local — no external API calls
- **L2 (Ollama) / L3:** your code only goes to your own Ollama server or your chosen cloud API
- We never see your code

## 📜 License

- **Personal & open-source:** free under BSL 1.1
- **Commercial:** license required — see codes.evallab.ai
- Converts to Apache 2.0 on 2030-03-11

## Links