✅ QA writes tests, security scans vulnerabilities — after every build
✅ Judgment middleware blocks placeholder code before it reaches disk
✅ Each agent has territory — backend can't touch frontend files and vice versa
Quickstart
# Install globally via NPM
npm install -g sajicode
# Or run directly without installing
npx sajicode
Run with your preferred model
# Local (no API key needed)
sajicode -p ollama -m llama3.1:70b
# Cloud providers
sajicode -p openai -m gpt-4.1
sajicode -p google -m gemini-2.5-flash
sajicode -p anthropic -m claude-sonnet-4-20250514
Headless mode for CI/CD pipelines
sajicode build "Fix the login bug and write tests" --headless
Environment variables
# Set the API key for your chosen provider
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="..."            # or GEMINI_API_KEY
export ANTHROPIC_API_KEY="sk-ant-..."
export TAVILY_API_KEY="tvly-..."       # optional — enables web search
WhatsApp Integration
Send coding tasks from your phone. SajiCode connects directly to WhatsApp — no third-party service, no API key, just scan a QR code.
# Start SajiCode with WhatsApp channel
sajicode --channels whatsapp
First run: A QR code appears in your terminal. Scan it with WhatsApp (Settings → Linked Devices → Link a Device). Auth is saved globally to ~/.sajicode/whatsapp-auth/ — you scan once and it works across all projects.
After that: Send any message from WhatsApp → SajiCode processes it → replies directly in the chat.
Long responses are chunked to fit WhatsApp's 4096 char limit
CLI still works normally alongside WhatsApp
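The chunking mentioned above can be sketched roughly as follows (`chunkMessage` is a hypothetical helper for illustration, not part of SajiCode's actual API):

```javascript
// Split a long reply so each piece fits WhatsApp's 4096-character
// message limit. Prefers breaking on a newline so code stays readable.
const WHATSAPP_LIMIT = 4096;

function chunkMessage(text, limit = WHATSAPP_LIMIT) {
  const chunks = [];
  let rest = text;
  while (rest.length > limit) {
    // Look for the last newline inside the limit; fall back to a hard cut.
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = limit;
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut);
  }
  chunks.push(rest);
  return chunks;
}
```

Joining the chunks back together reproduces the original message, so nothing is lost in transit.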
Two modes (configurable):
| Mode | Who uses it | What it does |
| --- | --- | --- |
| Admin Mode (default) | You, the developer | Send coding tasks from your phone → SajiCode builds them |
| Personal Bot Mode | Your contacts | AI assistant that learns your chat style and replies on your behalf |
In Admin Mode, only your own messages are processed as commands. In Personal Bot Mode, incoming messages from contacts are handled by a personalized AI that adapts to your tone and conversation patterns.
Configure in .sajicode/config.json:
// Admin Mode (default) — send coding tasks from your phone
{
  "whatsapp": { "enabled": true, "mode": "admin" }
}

// Personal Bot Mode — AI replies to your contacts in your style
{
  "whatsapp": {
    "enabled": true,
    "mode": "personal",
    "personalBotPrompt": "Reply like Rahees — friendly, direct, use emojis sometimes."
  }
}
When enabled is true in config, WhatsApp starts automatically — no --channels flag needed.
Coming soon: Discord and Telegram channels using the same adapter pattern.
How It Works
Step 1 — You describe what to build
>_ build a fullstack task management app with Express API, SQLite, and React dashboard
Step 2 — PM Agent architects the solution
The PM scans your codebase with collect_repo_map, creates architecture.md with system diagrams, API tables, and file ownership map, then presents the plan.
Step 3 — Parallel delegation to specialists
PM Agent
├─▶ Backend Lead → "Build Express REST API in src/routes/, src/models/"
├─▶ Frontend Lead → "Build React dashboard in src/components/, src/pages/"
│
├─▶ QA Lead → "Write tests for all endpoints and components"
├─▶ Security Lead → "Audit for XSS, injection, hardcoded secrets"
└─▶ Review Agent → "Final quality gate — no TODOs, no broken imports"
Step 4 — Each lead delegates further
Backend Lead spawns api-builder and db-designer to work concurrently. Frontend Lead spawns component-builder and style-designer. The work is parallel at every level.
Step 5 — Production-ready output
Every file is validated by the judgment middleware (no placeholder code allowed), tested by QA, audited by security, and reviewed before the task completes.
Architecture
┌──────────────┐
│ PM Agent │
└──────┬───────┘
│
┌────────┬─────────┼─────────┬────────┐
│ │ │ │ │
┌────▼────┐┌──▼────┐┌───▼────┐┌───▼────┐┌──▼────┐
│ Backend ││ Front ││ QA ││ Secur. ││ Deploy│
│ Lead ││ Lead ││ Lead ││ Lead ││ Lead │
└────┬────┘└───┬───┘└───┬────┘└───┬────┘└───┬───┘
│ │ │ │ │
api db comp style unit integ vuln dep docker ci
bldr dsgn bldr dsgn tstr tstr scan aud spec spec
1 PM + 6 Leads + 10 Sub-agents = 17 agents total
Each agent has:
Owned directories — files it can create/modify
Forbidden paths — files it must never touch
Persistent memory — remembers what it built across sessions
Skills — 21 expert skill files covering full-stack development, AI engineering, system architecture, debugging, and more
The PM delegates to multiple leads simultaneously, and each lead further delegates to sub-agents. No waterfall — everything that can run in parallel does.
Judgment Middleware — Zero Placeholder Code
A 3-layer protection system that wraps every tool call:
Risk assessment — warns on destructive operations (rm -rf, drop table) and sensitive paths (.env, credentials)
Placeholder blocking — blocks write_file if the content contains TODO, FIXME, placeholder stubs, or empty function bodies. The agent is forced to write real code
Loop detection — detects when an agent calls the same tool 3+ times identically and breaks the loop
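The placeholder and loop checks above can be sketched as follows (a simplified illustration, assuming pattern matching and call counting; the real middleware wraps every tool call and is more thorough):

```javascript
// Layer 2: reject obvious placeholder code before it reaches disk.
const PLACEHOLDER_PATTERNS = [/\bTODO\b/, /\bFIXME\b/, /\{\s*\}\s*$/m];

function blocksWrite(content) {
  return PLACEHOLDER_PATTERNS.some((re) => re.test(content));
}

// Layer 3: flag an agent that issues the same tool call identically
// maxRepeats times in a row so the loop can be broken.
function makeLoopDetector(maxRepeats = 3) {
  const counts = new Map();
  return (toolName, args) => {
    const key = toolName + JSON.stringify(args);
    const n = (counts.get(key) || 0) + 1;
    counts.set(key, n);
    return n >= maxRepeats; // true → break the loop
  };
}
```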
Human-in-the-Loop (HITL)
Optional approval system for shell commands and file deletions:
Safe commands (like npm install) are auto-approved. Dangerous ones require your explicit approval.
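Conceptually this is an allowlist-style gate; a minimal sketch (the prefixes and patterns here are illustrative, not SajiCode's actual lists):

```javascript
// Commands matching a known-safe prefix pass through; destructive
// patterns or anything unrecognized waits for human approval.
const SAFE_PREFIXES = ["npm install", "npm test", "git status", "ls"];
const DANGEROUS = [/rm\s+-rf/, /drop\s+table/i];

function needsApproval(command) {
  if (DANGEROUS.some((re) => re.test(command))) return true;
  return !SAFE_PREFIXES.some((p) => command.startsWith(p));
}
```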
Multi-Provider LLM Support
| Provider | Flag | Example models |
| --- | --- | --- |
| Ollama (local) | `-p ollama` | `deepseek-v3.1:671b-cloud`, `llama3.1:70b` |
| OpenAI | `-p openai` | `gpt-4.1`, `gpt-4o` |
| Google | `-p google` | `gemini-2.5-flash`, `gemini-2.5-pro` |
| Anthropic | `-p anthropic` | `claude-sonnet-4-20250514` |
Codebase Intelligence
The collect_repo_map tool scans your entire project and extracts function/class/interface signatures across 7 languages (TypeScript, JavaScript, Python, Go, Java, Rust, Ruby). Agents get a ~50 token/file condensed map instead of reading 500+ tokens per file.
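As a rough illustration of the idea (not SajiCode's actual parser), a signature extractor for one of those languages might look like:

```javascript
// Pull top-level function/class/interface signatures out of
// JavaScript/TypeScript source, ignoring the bodies entirely.
function extractSignatures(source) {
  const sigRe =
    /^\s*(?:export\s+)?(?:async\s+)?(function\s+\w+\s*\([^)]*\)|class\s+\w+|interface\s+\w+)/gm;
  return [...source.matchAll(sigRe)].map((m) => m[1].trim());
}
```

Keeping only the signatures is what shrinks each file to a handful of tokens in the condensed map.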
Persistent Memory System
.sajicode/
├── config.json # Model, HITL, and risk settings
├── architecture.md # Current project architecture plan
├── whats_done.md # Shared team log — append-only
├── memories/ # Long-term user preferences
│ └── preferences.md
├── agents/ # Per-agent structured JSON memory
│ ├── backend-lead.json
│ ├── frontend-lead.json
│ └── ...
└── mcp-servers.json # MCP server configurations
Every agent's memory persists across sessions. When you start a new thread, agents remember what they built before, what contracts they established with other agents, and your preferences.
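As an illustration, a per-agent file such as .sajicode/agents/backend-lead.json might look roughly like this (field names and values are hypothetical, not the exact schema):

```json
{
  "agent": "backend-lead",
  "built": ["src/routes/tasks.js", "src/models/task.js"],
  "contracts": {
    "frontend-lead": "REST endpoints under /api/tasks return { id, title, done }"
  },
  "notes": ["User prefers SQLite for local development"]
}
```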
Headless & CI/CD Mode
SajiCode isn't just an interactive CLI — it's designed to run completely unattended in your deployment pipelines. Using the --headless flag, you can trigger agents to write tests, review code, or audit security vulnerabilities automatically within GitHub Actions or your preferred CI. Pre- and post-action hooks let you integrate these agent workflows deeply into your existing build systems.
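For example, a GitHub Actions job invoking SajiCode headlessly might look like this (the workflow name, schedule, and secret name are placeholders you would adapt to your repo):

```yaml
name: nightly-security-audit
on:
  schedule:
    - cron: "0 3 * * *"
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g sajicode
      - run: sajicode build "Audit for vulnerabilities and report findings" --headless
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```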
Advanced Engineering Capabilities
Intelligent Version Control: Full suite of git tools (commit, branch, diff, checkpoint) wired directly into the PM and domain leads. Combined with built-in file snapshot and undo tracking, agents can checkpoint progress and revert mistakes automatically.
Enterprise-Grade Memory: Advanced CompositeBackend persistent memory architecture spans multiple sessions and storage layers. This is fortified by a Summarization Middleware that actively condenses context to prevent LLM window collapse during long tasks.
Optimized Execution Engine: A real-time streaming progress dashboard gives you full visibility into all 17 agents. Sub-agents are optimized with strict response limits (500 words max) and enhanced context briefing delegation to guarantee agile, concise outputs.
TDD-First Architecture: Test-Driven Development is baked natively into the QA and domain leads' workflow, ensuring every newly generated feature or module is backed by comprehensive testing before completion.
21 Expert Skills
Skills are modular knowledge files following the Agent Skills specification. Agents read them on-demand via progressive disclosure — loading only what's needed for the current task.
{{projectPath}} is automatically replaced with your project's absolute path. MCP tools are injected into the PM agent and available immediately.
CLI Reference
Command
Action
/init
Scans project and generates SAJICODE.md context file
/status
Shows session info — thread, model, context, HITL status
/undo <file>
Undoes the last file change made by an agent and restores from snapshot
/snapshots
Lists recent file snapshots taken by agents
/help
Lists all available commands
/clear
Clears the terminal
/exit
Gracefully shuts down all agents and MCP connections
CLI Flags
sajicode [options]
-p, --provider <name> LLM provider (ollama, openai, google, anthropic)
-m, --model <name> Model name
-c, --channels <list> Comma-separated channels to start (whatsapp)
-H, --headless Run in headless mode (no UI, ideal for CI/CD)
Examples:
# Terminal only (default)
sajicode -p ollama -m llama3.1:70b
# Terminal + WhatsApp
sajicode -p openai -m gpt-4.1 --channels whatsapp
# Headless mode for CI/CD pipeline
sajicode build "Fix the login bug and write tests" --headless
We found that sajicode demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.
Package last updated on 05 Mar 2026