npm: polyai-agent · latest version 1.0.1 · 1 maintainer
polyai-agent

An AI-powered vibe coder, live service debugger, and agent session manager, usable as a CLI or as an importable library.

Supports OpenAI, Anthropic, and Google Gemini with streaming responses.

Features

| Feature | Description |
| --- | --- |
| 🤖 Vibe Coder | AI pair-programmer that reads your project context and generates/refactors code |
| 🔍 Live Debugger | Attach to running services (log files, Docker, processes) and get real-time AI analysis |
| 📦 Speckits | 7 prebuilt agent configurations: vibe-coder, debugger, code-review, doc-writer, test-writer, refactor, security-audit |
| 🛡 Guardrails | Per-session rules the AI must follow (allowed paths, denied operations, style rules, custom rules) |
| 💾 Sessions | Named, persistent sessions with full conversation history and file-change tracking |
| 🌐 Web UI | Built-in web dashboard to view, manage, and export sessions |
| 📚 Library API | Importable TypeScript module for programmatic use |

Architecture & Flow

High-Level Architecture

flowchart TD
    User([👤 User]) --> CLI["CLI\n<code>ai-agent</code>"]
    User --> LibAPI["Library API\n<code>import { AgentCLI }</code>"]

    CLI --> Chat["<code>chat</code>\nvibe-coder / speckit"]
    CLI --> Debug["<code>debug</code>\nLive Debugger"]
    CLI --> UI["<code>ui</code>\nWeb Dashboard"]
    CLI --> SessionCmd["<code>session</code>\nSession Manager"]
    CLI --> Speckit["<code>speckit</code>\nSpeckit Browser"]
    CLI --> Config["<code>config</code>\nConfiguration"]

    Chat --> SM[Session Manager]
    LibAPI --> SM

    SM --> Guardrails[🛡 Guardrails]
    SM --> SpeckitConf[📦 Speckit Config]
    SM --> Providers

    Debug --> LiveDbg[Live Debugger]
    LiveDbg --> Providers
    LiveDbg --> LogSrc[Log Sources]

    LogSrc --> LogFile[📄 Log File]
    LogSrc --> DockerSrc[🐳 Docker Container]
    LogSrc --> ProcSrc[⚙️ Process / Command]
    LogSrc --> HTTPSrc[🌐 HTTP Poll]

    Providers[AI Providers] --> OpenAI[OpenAI]
    Providers --> Anthropic[Anthropic]
    Providers --> Gemini[Google Gemini]

    SM --> Storage[(💾 &lt;home&gt;/.polyai-agent/\nsessions)]

    UI --> WebServer["Express + Socket.IO\nWeb Server"]
    WebServer --> SM
    SessionCmd --> SM

Chat Session Flow

sequenceDiagram
    actor U as User
    participant CLI as CLI
    participant SM as Session Manager
    participant SK as Speckit
    participant G as Guardrails
    participant P as AI Provider

    U->>CLI: ai-agent chat --speckit vibe-coder --session my-project
    CLI->>SM: Create or resume session "my-project"
    SM->>SK: Load speckit system prompt (vibe-coder)
    SM->>G: Inject guardrail rules into system prompt

    loop Interactive conversation
        U->>CLI: Enter message / code request
        CLI->>P: Send (system prompt + history + message)
        P-->>CLI: Stream response tokens
        CLI-->>U: Display streamed response
        CLI->>SM: Record turn + any file changes
    end

    U->>CLI: /exit
    CLI->>SM: Persist session to <home>/.polyai-agent/sessions/
    SM-->>U: Session saved ✓
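The "system prompt + history + message" payload in the diagram above can be sketched as plain TypeScript. This is an illustrative model only, not the package's actual internals; `buildRequest` is a hypothetical helper showing how a speckit prompt, guardrail rules, and conversation history could be combined into one request:

```typescript
// Illustrative sketch — not polyai-agent's real implementation.
type Role = "system" | "user" | "assistant";
interface Message { role: Role; content: string; }

// Hypothetical helper: merge the speckit's system prompt with guardrail rules,
// then append the running history and the new user message.
function buildRequest(
  speckitPrompt: string,
  guardrails: string[],
  history: Message[],
  userMessage: string
): Message[] {
  const system =
    speckitPrompt +
    (guardrails.length
      ? "\n\nRules you must follow:\n- " + guardrails.join("\n- ")
      : "");
  return [
    { role: "system", content: system },
    ...history,
    { role: "user", content: userMessage },
  ];
}
```

Each turn, the full message array is sent to the provider, which is why guardrails constrain every response rather than only the first one.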

Live Debugger Flow

flowchart LR
    subgraph Sources["Log Sources"]
        LF[📄 Log File]
        DC[🐳 Docker Logs]
        PR[⚙️ Process stdout]
        HP[🌐 HTTP Health Poll]
    end

    subgraph Debugger["Live Debugger"]
        Collector[Log Collector]
        Batcher["Batch Buffer\n(configurable size)"]
        Analyzer[AI Analysis]
    end

    subgraph Output["Output"]
        Terminal[Terminal / onAnalysis callback]
        Session[Session Turn Record]
    end

    LF --> Collector
    DC --> Collector
    PR --> Collector
    HP --> Collector

    Collector --> Batcher
    Batcher -->|"batch full or timeout"| Analyzer
    Analyzer -->|AI Provider| Terminal
    Analyzer --> Session
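The "batch full or timeout" step above can be sketched as a small buffer class. This is an assumption about the mechanism, not the package's real `LiveDebugger` code: collected log lines accumulate until either the batch fills or a timer fires, then the batch is handed to the analysis callback:

```typescript
// Illustrative sketch of a batch buffer — not polyai-agent's actual class.
class BatchBuffer {
  private lines: string[] = [];
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private batchSize: number,
    private flushMs: number,
    private onFlush: (batch: string[]) => void
  ) {}

  // Collect a log line; flush when the batch fills, otherwise arm a timeout.
  push(line: string): void {
    this.lines.push(line);
    if (this.lines.length >= this.batchSize) {
      this.flush();
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushMs);
    }
  }

  // Emit the pending batch (no-op when empty) and clear any armed timer.
  flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = undefined; }
    if (this.lines.length === 0) return;
    const batch = this.lines;
    this.lines = [];
    this.onFlush(batch);
  }
}
```

Batching like this keeps AI calls infrequent on chatty services while the timeout still surfaces sparse logs promptly.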

Installation

# Global install (recommended for CLI use)
npm install -g polyai-agent

# Dev dependency (for programmatic use)
npm install --save-dev polyai-agent

Quick Start

Set your API key

export OPENAI_API_KEY=sk-...
# or
export ANTHROPIC_API_KEY=sk-ant-...
# or
export GEMINI_API_KEY=AIza...

Start coding

ai-agent chat

Debug a live service

ai-agent debug --file /var/log/myapp.log
ai-agent debug --docker my-container
ai-agent debug --cmd "node server.js"

Launch Web UI

ai-agent ui
# Open http://localhost:3000

CLI Reference

Usage: ai-agent [options] [command]

Commands:
  chat [options]     Start an interactive chat session (vibe coder mode)
  speckit [name]     List or run a prebuilt speckit
  debug [options]    Attach to a live service and start AI-assisted debugging
  session [options]  Manage sessions (list, delete, export)
  ui [options]       Launch the Web UI
  config [options]   Configure default settings

Options:
  -V, --version      output the version number
  -h, --help         display help for command

ai-agent chat

ai-agent chat [options]

Options:
  -p, --provider <provider>  AI provider (openai|anthropic|gemini)
  -m, --model <model>        Model name (e.g. gpt-4o)
  -s, --session <name>       Session name — creates or resumes (default: "default")
  -k, --speckit <speckit>    Speckit to use (default: vibe-coder)
  -g, --guardrail <rule>     Add a guardrail rule (repeatable)
  --context                  Inject project directory structure as context

Interactive commands

Inside a chat session:

| Command | Action |
| --- | --- |
| `/exit` or `/quit` | End session and save |
| `/save` | Save current session |
| `/turns` | Show conversation history |
| `/context` | Inject current project context |

ai-agent speckit

ai-agent speckit           # list all speckits
ai-agent speckit vibe-coder  # show details of a speckit

ai-agent debug

ai-agent debug --file /var/log/app.log         # Watch a log file
ai-agent debug --docker my-container           # Docker container logs
ai-agent debug --cmd "node server.js"          # Attach to a process
ai-agent debug --batch 30 --session my-debug   # Custom batch size

ai-agent session

ai-agent session --list           # List all sessions
ai-agent session --delete <id>    # Delete a session
ai-agent session --export <id>    # Print session JSON

ai-agent ui

ai-agent ui               # Start on default port 3000
ai-agent ui --port 8080   # Custom port

ai-agent config

ai-agent config --show              # Show current config
ai-agent config --provider openai   # Set default provider
ai-agent config --model gpt-4o      # Set default model
ai-agent config --port 3000         # Set default Web UI port

Speckits

Speckits are pre-configured agent personas. Use --speckit <name> with chat.

| Name | Description |
| --- | --- |
| vibe-coder | Full-stack AI pair programmer (default) |
| debugger | Root-cause analysis and targeted code fixes |
| code-review | OWASP/quality review with severity grading |
| doc-writer | JSDoc, README, and OpenAPI docs generation |
| test-writer | Unit and integration test generation |
| refactor | Structural refactoring without changing behavior |
| security-audit | OWASP Top 10 security vulnerability scan |

ai-agent chat --speckit security-audit

Guardrails

Guardrails are rules injected into the AI's system prompt to constrain its behavior.

# Only allow changes in src/
ai-agent chat -g "Only modify files within the src/ directory"

# Enforce code style
ai-agent chat -g "Always use TypeScript strict mode" -g "Prefer async/await over callbacks"

# Multiple guardrails
ai-agent chat \
  -g "Never delete files" \
  -g "Always write unit tests for new functions" \
  -g "Use camelCase for all variable names"

Guardrail types (programmatic API)

import { createGuardrail } from 'polyai-agent';

createGuardrail('allow-paths', ['./src', './tests'])
createGuardrail('deny-paths', ['./node_modules', './.env'])
createGuardrail('deny-operations', ['delete', 'overwrite'])
createGuardrail('max-tokens', 2000)
createGuardrail('style', 'Use functional programming patterns')
createGuardrail('custom', 'Always add JSDoc to exported functions')
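One plausible way to model these guardrail types is a discriminated union that renders each guardrail into a prompt rule. This is a hypothetical sketch; `Guardrail` and `renderGuardrail` are illustrative names, and the real `createGuardrail` implementation may differ:

```typescript
// Hypothetical model of the guardrail types listed above — not the package's API.
type Guardrail =
  | { type: "allow-paths" | "deny-paths"; value: string[] }
  | { type: "deny-operations"; value: string[] }
  | { type: "max-tokens"; value: number }
  | { type: "style" | "custom"; value: string };

// Turn a structured guardrail into a natural-language rule for the system prompt.
function renderGuardrail(g: Guardrail): string {
  switch (g.type) {
    case "allow-paths":
      return `Only modify files under: ${g.value.join(", ")}`;
    case "deny-paths":
      return `Never read or modify: ${g.value.join(", ")}`;
    case "deny-operations":
      return `Forbidden operations: ${g.value.join(", ")}`;
    case "max-tokens":
      return `Keep responses under ${g.value} tokens`;
    case "style":
    case "custom":
      return g.value;
  }
}
```

A discriminated union gives exhaustive type checking: adding a new guardrail type forces every renderer to handle it.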

Configuration File

Create .polyai-agent.json in your project root:

{
  "provider": "openai",
  "model": "gpt-4o",
  "port": 3000,
  "guardrails": [
    { "type": "custom", "value": "Always use TypeScript" }
  ]
}

Or ~/.polyai-agent/config.json for global settings.

API keys are never stored in config files — use environment variables:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AIza...
AI_PROVIDER=openai
AI_MODEL=gpt-4o
AI_AGENT_PORT=3000
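With settings arriving from CLI flags, environment variables, and config files, a precedence rule is needed. The sketch below assumes the conventional order (CLI flag, then env var, then config file, then built-in default); the package's actual precedence is not documented here, so treat this as an illustration only:

```typescript
// Assumed precedence: CLI flag > env var > config file > default.
interface FileConfig { provider?: string; model?: string; port?: number; }

function resolveProvider(
  cliFlag: string | undefined,
  env: Record<string, string | undefined>,
  file: FileConfig
): string {
  return cliFlag ?? env.AI_PROVIDER ?? file.provider ?? "openai";
}
```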

Library / Programmatic API

import { AgentCLI, createGuardrail } from 'polyai-agent';

// Create an agent instance
const agent = new AgentCLI({
  provider: 'openai',   // or 'anthropic', 'gemini'
  model: 'gpt-4o',
  apiKey: process.env.OPENAI_API_KEY,
});

// One-shot chat
const response = await agent.chat('Write a hello world in Rust');
console.log(response);

// Session-based chat with guardrails
const session = agent.createSession({
  name: 'my-project',
  speckit: 'vibe-coder',
  guardrails: [
    createGuardrail('allow-paths', ['./src']),
    createGuardrail('custom', 'Always add TypeScript types'),
  ],
});

const turn = await session.chat('Add a user authentication middleware');
console.log(turn.assistantMessage);

// Apply a file change
session.applyFileChange('./src/middleware/auth.ts', '// new content...');

// Revert the change
session.revertTurnChanges(turn.id);

// Save session
agent.sessionManager.persistSession(session);

Live Debugger API

import { AgentCLI, LiveDebugger } from 'polyai-agent';

const agent = new AgentCLI({ provider: 'openai' });
const session = agent.createSession({ name: 'debug', speckit: 'debugger' });

// "debugger" is a JavaScript reserved word, hence the trailing underscore
const debugger_ = new LiveDebugger({
  session,
  batchSize: 20,
  onLog: (line) => console.log(line),
  onAnalysis: (analysis) => console.log('AI:', analysis),
});

// Watch a log file
debugger_.watchLogFile('/var/log/app.log');

// Or connect to a service
debugger_.connectToService({ type: 'docker', container: 'my-app' });
debugger_.connectToService({ type: 'process', command: 'node', args: ['server.js'] });
debugger_.connectToService({ type: 'http-poll', url: 'http://localhost:8080/health' });

// Stop
process.on('SIGINT', () => debugger_.stop());

Web Server API

import { AgentCLI, createWebServer } from 'polyai-agent';

const agent = new AgentCLI({ provider: 'openai' });
const server = createWebServer({ port: 3000, sessionManager: agent.sessionManager });
await server.start();

Web UI

Start with ai-agent ui and open http://localhost:3000.

  • Sessions Dashboard — view all sessions, status, provider, model
  • Session Detail — browse conversation turns, guardrails, file changes
  • Settings — configure default provider and model
  • Export — download any session as JSON
  • Real-time updates — via Socket.IO

Providers & Models

| Provider | Env Variable | Recommended Models |
| --- | --- | --- |
| OpenAI | OPENAI_API_KEY | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| Anthropic | ANTHROPIC_API_KEY | claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022 |
| Google Gemini | GEMINI_API_KEY | gemini-1.5-pro, gemini-1.5-flash |
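A reasonable default is to infer the provider from whichever API key is set, matching the env variables above. `detectProvider` is an illustrative helper, not part of the package API, and the checking order here is an assumption:

```typescript
// Hypothetical helper: pick a provider from the first API key found.
function detectProvider(env: Record<string, string | undefined>): string | undefined {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.GEMINI_API_KEY) return "gemini";
  return undefined; // no key set — caller should prompt or error
}
```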

Development

git clone https://github.com/fury-r/ai-agent-cli.git
cd ai-agent-cli
npm install
npm run build
npm test
npm run dev -- chat   # run CLI in dev mode

License

MIT

Keywords

ai


Package last updated on 21 Mar 2026
