
miii-cli

The high-performance local AI coding agent for your terminal. Automate complex workflows with local LLMs.

Source: npm · Version: 1.2.0 · Weekly downloads: 1.4K · Maintainers: 1

Miii — Local-First AI Coding Agent

The only coding CLI that runs fully local or cloud — any model, zero lock-in, zero monthly bill.

MIII Demo


Miii is a fully autonomous coding agent that runs entirely on your machine. It plans, edits files, runs your tests, searches the web, indexes your codebase semantically, and iterates until the job is done — all without a single byte of your code leaving your network.

Zero subscription. Zero cloud dependency. Zero Python overhead. 176 KB total.

npm install -g miii-cli && miii

Why Miii Exists

Claude Code is impressive. It's also cloud-only, costs $20–200/month, and sends every line of your codebase to a server you don't control.

OpenCode and Codex CLI have the same problem — they're all cloud-first, all locked to specific providers, and all charge you indefinitely for the privilege of reading your private code.

Miii flips the model. Run on Ollama: $0/month, fully offline, code never leaves your machine. Switch to Anthropic or OpenAI when you need cloud power. Change providers live inside the app — no config files, no restarts.

Your compute. Your data. Your rules.

How Miii Compares

|                          | Miii   | Claude Code | OpenCode | Codex CLI | Aider      |
|--------------------------|--------|-------------|----------|-----------|------------|
| Monthly cost             | $0     | $20–200     | API cost | API cost  | $0         |
| Bundle size              | 176 KB | ~50 MB      | ~30 MB   | ~20 MB    | ~200 MB    |
| Local / offline (Ollama) | ✅     | ❌          | partial  | ❌        | ⚠️         |
| Air-gapped               | ✅     | ❌          | ❌       | ❌        | ❌         |
| Switch provider live     | ✅     | ❌          | ❌       | ❌        | ❌         |
| File checkpoints (undo)  | ✅     | ❌          | ❌       | ❌        | ❌         |
| Permission gates         | ✅     | ✅          | partial  | ✅        | ❌         |
| MCP client               | ✅     | ✅          | ✅       | ❌        | ❌         |
| Semantic codebase index  | ✅     | ❌          | ❌       | ❌        | ❌         |
| Skill/extension system   | ✅     | plugins     | ❌       | ❌        | ❌         |
| Startup time             | <100ms | ~2s         | ~1s      | ~1s       | ~4s        |
| License                  | MIT    | Proprietary | MIT      | MIT       | Apache 2.0 |

What Miii Actually Does

This isn't autocomplete. Miii is a full autonomous agent loop:

  • You describe a goal
  • Miii reads your codebase, plans the changes, edits the files
  • It asks your permission before touching anything (edit, delete, run commands)
  • It runs your test suite automatically after every change
  • If tests fail, it reads the error, fixes the code, re-runs
  • It repeats until the work is done — and checkpoints every file so you can abort safely
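The loop above can be sketched in a few lines. This is an illustrative TypeScript sketch, not Miii's real internals; `applyEdit` and `runTests` are hypothetical stand-ins:

```typescript
// Minimal sketch of an edit -> test -> retry agent loop.
// applyEdit and runTests are stand-ins for the agent's real tools.

type Result = { ok: boolean; error?: string };

function agentLoop(
  applyEdit: (attempt: number) => void,
  runTests: () => Result,
  maxRetries = 3,
): Result {
  let last: Result = { ok: false };
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    applyEdit(attempt);       // edit files, informed by the last failure
    last = runTests();        // re-run the suite after every change
    if (last.ok) return last; // done: tests pass
  }
  return last;                // surface the failure after maxRetries
}

// Toy run: the "fix" only works on the second attempt.
let calls = 0;
const result = agentLoop(
  () => { calls++; },
  () => (calls >= 2 ? { ok: true } : { ok: false, error: "1 test failed" }),
);
console.log(result.ok, calls); // true 2
```

The real agent also plans, asks permission, and checkpoints files, but this retry skeleton is the core of the behavior described above.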

What a Session Looks Like

> refactor the auth module to use JWT instead of sessions

  ● Researching: refactor auth module to use JWT
  ● Reading src/auth/session.ts
    Read 42 lines
  ● Reading src/middleware/auth.ts
    Read 28 lines

  ─ plan (2 actions)
    ◦ edit_file src/auth/session.ts
    ◦ edit_file src/middleware/auth.ts

  ⚠ edit_file  src/auth/session.ts   y approve  n deny
  > y

  ● edit_file src/auth/session.ts
    Wrote 12 lines
  ● edit_file src/middleware/auth.ts
    Wrote 8 lines
  ● run_tests
    ✅ Tests passed

  ─ refactor done — 2 file(s) processed

Killer Features

🔒 **Privacy-First, Local by Default.** Run on Ollama and your code never leaves your machine. No account. No API key. No monthly bill. Switch to Anthropic or OpenAI when you need it — one command, live, mid-session.

🔄 **Live Provider Switching.** Type /config to open an interactive picker. Arrow-navigate between Ollama, Anthropic, and OpenAI-compatible endpoints. Change model, API key, base URL, or Tavily key without restarting. Config saves automatically.

🛡 **Permission Gates + File Checkpoints.** Miii asks before every edit, delete, or shell command — just like Claude Code. Every file is checkpointed before it's touched. Hit Esc to abort and all changes roll back automatically.
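The checkpoint-and-rollback idea can be illustrated in miniature. This is a hypothetical sketch using an in-memory map as a stand-in for the filesystem, not Miii's actual implementation:

```typescript
// Snapshot a file's content before the first edit; restore all snapshots on abort.

class Checkpoints {
  private snapshots = new Map<string, string>();

  snapshot(path: string, current: string): void {
    // Keep only the oldest copy so repeated edits still roll back to the start.
    if (!this.snapshots.has(path)) this.snapshots.set(path, current);
  }

  rollback(files: Map<string, string>): void {
    for (const [path, original] of this.snapshots) files.set(path, original);
    this.snapshots.clear();
  }
}

// Toy in-memory "filesystem".
const files = new Map([["src/auth.ts", "session-based auth"]]);
const cp = new Checkpoints();

cp.snapshot("src/auth.ts", files.get("src/auth.ts")!);
files.set("src/auth.ts", "jwt-based auth"); // the agent edits the file
cp.rollback(files);                          // user hits Esc: restore everything

console.log(files.get("src/auth.ts")); // "session-based auth"
```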

πŸ” Semantic Codebase Indexing Build a vector index of your entire codebase using local embeddings. Ask "where is the auth logic?" and Miii finds it by meaning, not keyword. No data leaves your machine.

🧠 **Deep Think Engine.** Before answering complex questions, Miii runs a constrained research phase — reading files, checking git history, searching the web — then synthesizes a grounded answer.

🌐 **Real-Time Web Access.** Tavily-powered web search, built in. Ask about breaking changes in a library you just upgraded. Get an answer that's actually current.

🛠 **Surgical File Editing.** patch_file replaces exact strings in your files. No full rewrites. No formatting destruction. Exactly the change, nothing more.
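Exact-string patching of this kind is simple to sketch. The `patchFile` helper below is hypothetical, written to illustrate the behavior described, not Miii's actual tool:

```typescript
// Replace exactly one occurrence of oldText; fail loudly if the target
// is missing or matches more than once (an ambiguous patch is unsafe).

function patchFile(content: string, oldText: string, newText: string): string {
  const first = content.indexOf(oldText);
  if (first === -1) throw new Error("target string not found");
  if (content.indexOf(oldText, first + 1) !== -1)
    throw new Error("target string is ambiguous (multiple matches)");
  return content.slice(0, first) + newText + content.slice(first + oldText.length);
}

const src = "const auth = useSessions();";
const patched = patchFile(src, "useSessions()", "useJwt()");
console.log(patched); // "const auth = useJwt();"
```

Requiring a unique match is what keeps this style of edit "surgical": the model must quote enough surrounding context to pin down exactly one location.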

πŸ” Self-Healing Test Loop Runs npm test after every file change. If something breaks, reads the failure trace and fixes it autonomously β€” up to 3 retries before surfacing the issue.

📂 **Persistent Sessions.** Pick up exactly where you left off. Named sessions mean your context, history, and goal survive terminal restarts.

📦 **Skill System.** Extend Miii with plain Markdown files or npm packages. Ship reusable agent behaviors as versioned packages your whole team can pull.

🔌 **MCP Client.** Connect any MCP-compatible tool server. Miii discovers tools automatically and makes them available to the agent.

Get Running in 60 Seconds

# 1. Start Ollama and pull a model
ollama pull qwen2.5-coder:7b

# 2. Install Miii
npm install -g miii-cli

# 3. Go to your project and start
cd your-project
miii

No API keys. No account. No sign-up form. First run walks you through setup interactively.

Power Commands

| Command | What it does |
|---|---|
| `/config` | Open interactive picker — change provider, model, API key, base URL, Tavily key live |
| `/think <question>` | Deep research: reads files + web, then answers |
| `/refactor <goal>` | Autonomous multi-file refactor with test validation |
| `/index build` | Build semantic vector index of your codebase |
| `/index search <query>` | Find code by meaning, not string match |
| `/git review` | AI reviews your current diff for bugs and issues |
| `/git commit <msg>` | Stage everything and commit in one shot |
| `/plan <topic>` | Structured planning mode before you write a line |
| `/model <name>` | Hot-swap your LLM mid-conversation |
| `/session <name>` | Switch between named project sessions |
| `/watch <path>` | Monitor files for changes and trigger agent reactions |
| `@filename` | Inject any file directly into context |

Semantic Codebase Indexing

For large codebases, Miii builds and queries a local vector index — no third-party APIs, no embeddings sent anywhere.

# Pull an embedding model (one time)
ollama pull nomic-embed-text

# Index your project
/index build

# The agent calls search_codebase automatically when it needs to find code by concept
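Conceptually, a lookup like search_codebase ranks indexed chunks by vector similarity to the query's embedding. Here is a toy sketch of that ranking step; the file names and 3-dimensional vectors are made up for illustration, and real embeddings come from the embedding model:

```typescript
// Rank indexed code chunks by cosine similarity to a query embedding.

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Tiny fake index: each chunk stores a file path and its embedding.
const index = [
  { file: "src/auth/jwt.ts", vec: [0.9, 0.1, 0.0] },
  { file: "src/ui/button.ts", vec: [0.0, 0.2, 0.9] },
];

const query = [0.8, 0.2, 0.1]; // embedding of "where is the auth logic?"
const best = [...index].sort(
  (a, b) => cosine(b.vec, query) - cosine(a.vec, query),
)[0];
console.log(best.file); // "src/auth/jwt.ts"
```

A real index stores hundreds of dimensions per chunk and persists to disk, but the ranking is the same idea: nearest vectors, not matching strings.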

Configuration

Interactive (recommended): type /config inside Miii to open the picker.

File-based: drop a .miii.json in your project root, or use ~/.config/miii/config.json for global settings:

{
  "model": "qwen2.5-coder:7b",
  "provider": "ollama",
  "baseUrl": "http://localhost:11434",
  "gitContext": true,
  "embedModel": "nomic-embed-text"
}

Providers: ollama (local, free) · anthropic (Claude API) · openai-compat (OpenAI or any compatible endpoint)
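For illustration, a layered config like this can be resolved with simple object spreads, project settings over global ones over built-in defaults. The precedence order here is an assumption for the sketch, not documented Miii behavior:

```typescript
// Resolve a final config from defaults, a global file, and a project file.
// Later spreads win, so project-level keys override everything else.

type Config = { provider: string; model: string; baseUrl: string };

const defaults: Config = {
  provider: "ollama",
  model: "qwen2.5-coder:7b",
  baseUrl: "http://localhost:11434",
};

function resolveConfig(
  globalCfg: Partial<Config>,  // e.g. parsed ~/.config/miii/config.json
  projectCfg: Partial<Config>, // e.g. parsed ./.miii.json
): Config {
  return { ...defaults, ...globalCfg, ...projectCfg };
}

const cfg = resolveConfig({ model: "llama3" }, { provider: "anthropic" });
console.log(cfg.provider, cfg.model); // "anthropic" "llama3"
```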

Build from Source

git clone https://github.com/maruakshay/miii-cli
cd miii-cli && npm install && npm run build && npm link

Who Should Use Miii

  • Privacy-conscious developers — won't send proprietary code to Anthropic or OpenAI
  • Cost-sensitive teams — API bills compound; Ollama is $0
  • Air-gapped environments — regulated industries, defense, offline infra
  • Model experimenters — want to try llama3, mistral, qwen, Claude side-by-side without switching tools

The Bottom Line

The AI coding tools you're paying for right now will raise their prices, change their terms, and keep reading your code. Miii won't. It's MIT licensed, runs locally, and gets better every time Ollama ships a new model.

If this saves you time or money, star the repo — it's the only metric that tells other engineers this is worth their attention.

⭐ Star on GitHub

Built by @maruakshay — open to PRs, issues, and model recommendations.

License

MIT — do whatever you want with it.

Keywords

ai-coding-assistant

Package last updated on 15 May 2026
