network-ai

AI agent orchestration framework for TypeScript/Node.js - 15 adapters (LangChain, AutoGen, CrewAI, OpenAI Assistants, LlamaIndex, Semantic Kernel, Haystack, DSPy, Agno, MCP, OpenClaw, A2A, Codex, MiniMax + streaming variants). Built-in CLI, security, swar

Source: npm
Version: 4.6.1
Weekly downloads: 1.6K (1.95%)
Maintainers: 1

Network-AI

TypeScript/Node.js multi-agent orchestrator — shared state, guardrails, budgets, and cross-framework coordination


Network-AI is a TypeScript/Node.js multi-agent orchestrator that adds coordination, guardrails, and governance to any AI agent stack.

  • Shared blackboard with locking — atomic propose → validate → commit prevents race conditions and split-brain failures across parallel agents
  • Guardrails and budgets — FSM governance, per-agent token ceilings, HMAC audit trails, and permission gating
  • 15 adapters — LangChain (+ streaming), AutoGen, CrewAI, OpenAI Assistants, LlamaIndex, Semantic Kernel, Haystack, DSPy, Agno, MCP, Custom (+ streaming), OpenClaw, A2A, Codex, and MiniMax — no glue code, no lock-in
  • Persistent project memory (Layer 3) — context_manager.py injects decisions, goals, stack, milestones, and banned patterns into every system prompt so agents always have full project context

The silent failure mode in multi-agent systems: parallel agents writing to the same key use last-write-wins by default — one agent's result silently overwrites another's mid-flight. The outcome is split-brain state: double-spends, contradictory decisions, corrupted context, no error thrown. Network-AI's propose → validate → commit mutex prevents this at the coordination layer, before any write reaches shared state.
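The failure mode above is easy to reproduce outside any framework. The sketch below is a hypothetical illustration (a naive in-memory store, not the Network-AI API): two concurrent agents read-modify-write the same key, and last-write-wins silently drops one patch.

```typescript
// Hypothetical illustration — a naive shared store with no locking.
// Both "agents" read the current value, pause, then write back a merge.
const store = new Map<string, Record<string, unknown>>();

async function naiveAgent(patch: Record<string, unknown>): Promise<void> {
  const current = store.get('report') ?? {};       // 1. read
  await new Promise((r) => setTimeout(r, 10));     // 2. both agents pause here
  store.set('report', { ...current, ...patch });   // 3. write — clobbers the other's patch
}

async function main(): Promise<Record<string, unknown>> {
  await Promise.all([
    naiveAgent({ analysis: 'done' }),
    naiveAgent({ review: 'approved' }),
  ]);
  // Both read the empty object before either wrote, so the second
  // commit overwrites the first. No error is thrown — one patch is gone.
  return store.get('report')!;
}

main().then((report) => console.log(report)); // only one of the two keys survives
```

A propose → validate → commit mutex closes exactly this window: the read and the write happen inside one exclusive critical section, so the second writer sees the first writer's result.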

Use Network-AI as:

  • A TypeScript/Node.js library — import { createSwarmOrchestrator } from 'network-ai'
  • An MCP server — npx network-ai-server --port 3001
  • A CLI — network-ai bb get status / network-ai audit tail
  • An OpenClaw skill — clawhub install network-ai

5-minute quickstart →  |  Architecture →  |  All adapters →  |  Benchmarks →

Try the control-plane stress test — no API key, ~3 seconds:

npx ts-node examples/08-control-plane-stress-demo.ts

Runs priority preemption, AuthGuardian permission gating, FSM governance, and compliance monitoring against a live swarm. No external services required.

If it saves you from a race condition, a ⭐ helps others find it.

Why teams use Network-AI

| Problem | How Network-AI solves it |
| --- | --- |
| Race conditions in parallel agents | Atomic blackboard: propose → validate → commit with file-system mutex |
| Agent overspend / runaway costs | FederatedBudget — hard per-agent token ceilings with live spend tracking |
| No visibility into what agents did | HMAC-signed audit log on every write, permission grant, and FSM transition |
| Locked into one AI framework | 15 adapters — mix LangChain + AutoGen + CrewAI + Codex + MiniMax + custom in one swarm |
| Agents escalating beyond their scope | AuthGuardian — scoped permission tokens required before sensitive operations |
| Agents lack project context between runs | ProjectContextManager (Layer 3) — inject decisions, goals, stack, and milestones into every system prompt |

Architecture

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#1e293b', 'primaryTextColor': '#e2e8f0', 'primaryBorderColor': '#475569', 'lineColor': '#94a3b8', 'clusterBkg': '#0f172a', 'clusterBorder': '#334155', 'edgeLabelBackground': '#1e293b', 'edgeLabelColor': '#cbd5e1', 'titleColor': '#e2e8f0'}}}%%
flowchart TD
    classDef app        fill:#1e3a5f,stroke:#3b82f6,color:#bfdbfe,font-weight:bold
    classDef security   fill:#451a03,stroke:#d97706,color:#fde68a
    classDef routing    fill:#14532d,stroke:#16a34a,color:#bbf7d0
    classDef quality    fill:#3b0764,stroke:#9333ea,color:#e9d5ff
    classDef blackboard fill:#0c4a6e,stroke:#0284c7,color:#bae6fd
    classDef adapters   fill:#064e3b,stroke:#059669,color:#a7f3d0
    classDef audit      fill:#1e293b,stroke:#475569,color:#94a3b8
    classDef context    fill:#3b1f00,stroke:#b45309,color:#fef3c7

    App["Your Application"]:::app
    App -->|"createSwarmOrchestrator()"| SO

    PC["ProjectContextManager\n(Layer 3 — persistent memory)\ngoals · stack · decisions\nmilestones · banned"]:::context
    PC -->|"injected into system prompt"| SO

    subgraph SO["SwarmOrchestrator"]
        AG["AuthGuardian\n(permission gating)"]:::security
        AR["AdapterRegistry\n(route tasks to frameworks)"]:::routing
        QG["QualityGateAgent\n(validate blackboard writes)"]:::quality
        BB["SharedBlackboard\n(shared agent state)\npropose → validate → commit\nfilesystem mutex"]:::blackboard
        AD["Adapters — plug any framework in, swap freely\nLangChain · AutoGen · CrewAI · MCP · LlamaIndex · …"]:::adapters

        AG -->|"grant / deny"| AR
        AR -->|"tasks dispatched"| AD
        AD -->|"writes results"| BB
        QG -->|"validates"| BB
    end

    SO --> AUDIT["data/audit_log.jsonl"]:::audit

FederatedBudget is a standalone export — instantiate it separately and optionally wire it to a blackboard backend for cross-node token budget enforcement.
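The enforcement idea behind a hard per-agent token ceiling fits in a few lines. This is a hypothetical stand-in to show the mechanism, not the FederatedBudget API — class name, method names, and shapes here are made up:

```typescript
// Hypothetical sketch of per-agent token ceilings — illustrative only,
// not the FederatedBudget class exported by network-ai.
class TokenBudget {
  private spent = new Map<string, number>();

  constructor(private ceilings: Record<string, number>) {}

  // Record spend atomically in-process; reject before the ceiling is crossed,
  // so a failed call never counts against the agent.
  spend(agentId: string, tokens: number): void {
    const used = this.spent.get(agentId) ?? 0;
    const ceiling = this.ceilings[agentId] ?? Infinity;
    if (used + tokens > ceiling) {
      throw new Error(`budget exceeded for ${agentId}: ${used + tokens} > ${ceiling}`);
    }
    this.spent.set(agentId, used + tokens);
  }

  status(agentId: string): { used: number; ceiling: number } {
    return {
      used: this.spent.get(agentId) ?? 0,
      ceiling: this.ceilings[agentId] ?? Infinity,
    };
  }
}

const budget = new TokenBudget({ analyst: 1000 });
budget.spend('analyst', 800);     // ok: 800 <= 1000
try {
  budget.spend('analyst', 500);   // would reach 1300 > 1000 — rejected
} catch (e) {
  console.log((e as Error).message);
}
console.log(budget.status('analyst')); // { used: 800, ceiling: 1000 }
```

The "federated" part — sharing these counters across nodes via a blackboard backend — is what the real export adds on top of this local check.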

Full architecture, FSM journey, and handoff protocol

Install

npm install network-ai

No native dependencies, no build step. Adapters are dependency-free (BYOC — bring your own client).

Use as MCP Server

Start the server (no config required, zero dependencies):

npx network-ai-server --port 3001
# or from source:
npx ts-node bin/mcp-server.ts --port 3001

Then wire any MCP-compatible client to it.

Claude Desktop — add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "network-ai": {
      "url": "http://localhost:3001/sse"
    }
  }
}

Cursor / Cline / any SSE-based MCP client — point to the same URL:

{
  "mcpServers": {
    "network-ai": {
      "url": "http://localhost:3001/sse"
    }
  }
}

Verify it's running:

curl http://localhost:3001/health   # { "status": "ok", "tools": <n>, "uptime": <ms> }
curl http://localhost:3001/tools    # full tool list

Tools exposed over MCP:

  • blackboard_read / blackboard_write / blackboard_list / blackboard_delete / blackboard_exists
  • budget_status / budget_spend / budget_reset — federated token tracking
  • token_create / token_validate / token_revoke — HMAC-signed permission tokens
  • audit_query — query the append-only audit log
  • config_get / config_set — live orchestrator configuration
  • agent_list / agent_spawn / agent_stop — agent lifecycle
  • fsm_transition — write FSM state transitions to the blackboard

Each tool takes an agent_id parameter — all writes are identity-verified and namespace-scoped, exactly as they are in the TypeScript API.

Options: --no-budget, --no-token, --no-control, --ceiling <n>, --board <name>, --audit-log <path>.

CLI

Control Network-AI directly from the terminal — no server required. The CLI imports the same core engine used by the MCP server.

# One-off commands (no server needed)
npx ts-node bin/cli.ts bb set status running --agent cli
npx ts-node bin/cli.ts bb get status
npx ts-node bin/cli.ts bb snapshot

# After npm install -g network-ai:
network-ai bb list
network-ai audit tail          # live-stream the audit log
network-ai auth token my-bot --resource blackboard

| Command group | What it controls |
| --- | --- |
| network-ai bb | Blackboard — get, set, delete, list, snapshot, propose, commit, abort |
| network-ai auth | AuthGuardian — issue tokens, revoke, check permissions |
| network-ai budget | FederatedBudget — spend status, set ceiling |
| network-ai audit | Audit log — print, live-tail, clear |

Global flags on every command: --data <path> (data directory, default ./data) · --json (machine-readable output)

→ Full reference in QUICKSTART.md § CLI

Two agents, one shared state — without race conditions

The real differentiator is coordination. Here is what no single-framework solution handles: two agents writing to the same resource concurrently, atomically, without corrupting each other.

import { LockedBlackboard, CustomAdapter, createSwarmOrchestrator } from 'network-ai';

const board   = new LockedBlackboard('.');
const adapter = new CustomAdapter();

// Agent 1: writes its analysis result atomically
adapter.registerHandler('analyst', async () => {
  const id = board.propose('report:status', { phase: 'analysis', complete: true }, 'analyst');
  board.validate(id, 'analyst');
  board.commit(id);                           // file-system mutex — no race condition possible
  return { result: 'analysis written' };
});

// Agent 2: runs concurrently, writes to its own key safely
adapter.registerHandler('reviewer', async () => {
  const id = board.propose('report:review', { approved: true }, 'reviewer');
  board.validate(id, 'reviewer');
  board.commit(id);
  const analysis = board.read('report:status');
  return { result: `reviewed phase=${analysis?.phase}` };
});

createSwarmOrchestrator({ adapters: [{ adapter }] });

// Both fire concurrently — mutex guarantees no write is ever lost
await Promise.all([
  adapter.executeAgent('analyst',  { action: 'run', params: {} }, { agentId: 'analyst' }),
  adapter.executeAgent('reviewer', { action: 'run', params: {} }, { agentId: 'reviewer' }),
]);

console.log(board.read('report:status'));   // { phase: 'analysis', complete: true }
console.log(board.read('report:review'));   // { approved: true }

Add budgets, permissions, and cross-framework agents with the same pattern. → QUICKSTART.md

Demo — Control-Plane Stress Test (no API key)

Runs in ~3 seconds. Proves the coordination primitives without any LLM calls.

npm run demo -- --08

What it shows: atomic blackboard locking, priority preemption (priority-3 wins over priority-0 on same key), AuthGuardian permission gate (blocked → justified → granted with token), FSM hard-stop at 700 ms, live compliance violation capture (TOOL_ABUSE, TURN_TAKING, RESPONSE_TIMEOUT, JOURNEY_TIMEOUT), and FederatedBudget tracking — all without a single API call.

Control Plane Demo

8-agent AI pipeline (requires OPENAI_API_KEY — builds a Payment Processing Service end-to-end):

npm run demo -- --07

Code Review Swarm Demo

Adapter System

15 adapters, zero adapter dependencies. You bring your own SDK objects.

| Adapter | Framework / Protocol | Register method |
| --- | --- | --- |
| CustomAdapter | Any function or HTTP endpoint | registerHandler(name, fn) |
| LangChainAdapter | LangChain | registerAgent(name, runnable) |
| AutoGenAdapter | AutoGen / AG2 | registerAgent(name, agent) |
| CrewAIAdapter | CrewAI | registerAgent or registerCrew |
| MCPAdapter | Model Context Protocol | registerTool(name, handler) |
| LlamaIndexAdapter | LlamaIndex | registerQueryEngine(), registerChatEngine() |
| SemanticKernelAdapter | Microsoft Semantic Kernel | registerKernel(), registerFunction() |
| OpenAIAssistantsAdapter | OpenAI Assistants | registerAssistant(name, config) |
| HaystackAdapter | deepset Haystack | registerPipeline(), registerAgent() |
| DSPyAdapter | Stanford DSPy | registerModule(), registerProgram() |
| AgnoAdapter | Agno (formerly Phidata) | registerAgent(), registerTeam() |
| OpenClawAdapter | OpenClaw | registerSkill(name, skillRef) |
| A2AAdapter | Google A2A Protocol | registerRemoteAgent(name, url) |
| CodexAdapter | OpenAI Codex / gpt-4o / Codex CLI | registerCodexAgent(name, config) |
| MiniMaxAdapter | MiniMax LLM API (M2.5 / M2.5-highspeed) | registerAgent(name, config) |

Streaming variants (drop-in replacements with .stream() support):

| Adapter | Extends | Streaming source |
| --- | --- | --- |
| LangChainStreamingAdapter | LangChainAdapter | Calls .stream() on the Runnable if available; falls back to .invoke() |
| CustomStreamingAdapter | CustomAdapter | Pipes AsyncIterable<string> handlers; falls back to single-chunk for plain Promises |

Extend BaseAdapter (or StreamingBaseAdapter for streaming) to add your own in minutes. See references/adapter-system.md.
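The CustomAdapter row above ("any function or HTTP endpoint, registerHandler(name, fn)") implies a very small contract. The stand-in below is self-contained and illustrative only — it mimics the register/execute shape described in this README, but it is not the class shipped by network-ai:

```typescript
// Minimal stand-in for the adapter contract described above —
// illustrative only, not the CustomAdapter exported by network-ai.
type Handler = (input: unknown) => Promise<{ result: unknown }>;

class MiniCustomAdapter {
  private handlers = new Map<string, Handler>();

  // Register any async function under a name, as in registerHandler(name, fn).
  registerHandler(name: string, fn: Handler): void {
    this.handlers.set(name, fn);
  }

  // Route a task to the named handler; unknown names fail loudly
  // instead of silently dropping the task.
  async executeAgent(name: string, input: unknown): Promise<{ result: unknown }> {
    const fn = this.handlers.get(name);
    if (!fn) throw new Error(`no handler registered for ${name}`);
    return fn(input);
  }
}

const miniAdapter = new MiniCustomAdapter();

// A handler can wrap anything — a local function, an SDK call, or fetch()
// against an HTTP endpoint. The registry only sees the async function.
miniAdapter.registerHandler('echo', async (input) => ({ result: input }));

miniAdapter.executeAgent('echo', { hello: 'swarm' })
  .then((out) => console.log(out)); // { result: { hello: 'swarm' } }
```

Every framework-specific adapter in the table is this same pattern with a different registration surface, which is why bringing your own SDK objects (BYOC) keeps the adapters dependency-free.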

Works with LangGraph, CrewAI, and AutoGen

Network-AI is the coordination layer you add on top of your existing stack. Keep your LangChain chains, CrewAI crews, and AutoGen agents — and add shared state, governance, and budgets around them.

| Capability | Network-AI | LangGraph | CrewAI | AutoGen |
| --- | --- | --- | --- | --- |
| Cross-framework agents in one swarm | ✅ 15 built-in adapters | ⚠️ Nodes can call any code; no adapter abstraction | ⚠️ Extensible via tools; CrewAI-native agents only | ⚠️ Extensible via plugins; AutoGen-native agents only |
| Atomic shared state (conflict-safe) | ✅ propose → validate → commit mutex | ⚠️ State passed between nodes; last-write-wins | ⚠️ Shared memory available; no conflict resolution | ⚠️ Shared context available; no conflict resolution |
| Hard token ceiling per agent | ✅ FederatedBudget (first-class API) | ⚠️ Via callbacks / custom middleware | ⚠️ Via callbacks / custom middleware | ⚠️ Built-in token tracking in v0.4+; no swarm-level ceiling |
| Permission gating before sensitive ops | ✅ AuthGuardian (built-in) | ⚠️ Possible via custom node logic | ⚠️ Possible via custom tools | ⚠️ Possible via custom middleware |
| Append-only audit log | ✅ plain JSONL (data/audit_log.jsonl) | ⚠️ Not built-in | ⚠️ Not built-in | ⚠️ Not built-in |
| Encryption at rest | ✅ AES-256-GCM (TypeScript layer) | ⚠️ Not built-in | ⚠️ Not built-in | ⚠️ Not built-in |
| Language | TypeScript / Node.js | Python | Python | Python |

Testing

npm run test:all          # All suites in sequence
npm test                  # Core orchestrator
npm run test:security     # Security module
npm run test:adapters     # All 15 adapters
npm run test:streaming    # Streaming adapters
npm run test:a2a          # A2A protocol adapter
npm run test:codex        # Codex adapter
npm run test:priority     # Priority & preemption
npm run test:cli          # CLI layer

1,449 passing assertions across 18 test suites (npm run test:all):

| Suite | Assertions | Covers |
| --- | --- | --- |
| test-phase4.ts | 147 | FSM governance, compliance monitor, adapter integration |
| test-phase5f.ts | 127 | SSE transport, McpCombinedBridge, extended MCP tools |
| test-phase5g.ts | 121 | CRDT backend, vector clocks, bidirectional sync |
| test-phase6.ts | 121 | MCP server, control-plane tools, audit tools |
| test-adapters.ts | 140 | All 15 adapters, registry routing, integration, edge cases |
| test-phase5d.ts | 117 | Pluggable backend (Redis, CRDT, Memory) |
| test-standalone.ts | 88 | Blackboard, auth, integration, persistence, parallelisation, quality gate |
| test-phase5e.ts | 87 | Federated budget tracking |
| test-phase5c.ts | 73 | Named multi-blackboard, isolation, backend options |
| test-codex.ts | 51 | Codex adapter: chat, completion, CLI, BYOC client, error paths |
| test-minimax.ts | 50 | MiniMax adapter: lifecycle, registration, chat mode, temperature clamping |
| test-priority.ts | 64 | Priority preemption, conflict resolution, backward compat |
| test-a2a.ts | 35 | A2A protocol: register, execute, mock fetch, error paths |
| test-streaming.ts | 32 | Streaming adapters, chunk shapes, fallback, collectStream |
| test-phase5b.ts | 55 | Pluggable backend part 2, consistency levels |
| test-phase5.ts | 42 | Named multi-blackboard base |
| test-security.ts | 34 | Tokens, sanitization, rate limiting, encryption, audit |
| test-cli.ts | 65 | CLI layer: bb, auth, budget, audit commands |

Documentation

| Doc | Contents |
| --- | --- |
| QUICKSTART.md | Installation, first run, CLI reference, PowerShell guide, Python scripts CLI |
| ARCHITECTURE.md | Race condition problem, FSM design, handoff protocol, project structure |
| BENCHMARKS.md | Provider performance, rate limits, local GPU, max_completion_tokens guide |
| SECURITY.md | Security module, permission system, trust levels, audit trail |
| ENTERPRISE.md | Evaluation checklist, stability policy, security summary, integration entry points |
| AUDIT_LOG_SCHEMA.md | Audit log field reference, all event types, scoring formula |
| ADOPTERS.md | Known adopters — open a PR to add yourself |
| INTEGRATION_GUIDE.md | End-to-end integration walkthrough |
| references/adapter-system.md | Adapter architecture, writing custom adapters |
| references/auth-guardian.md | Permission scoring, resource types |
| references/trust-levels.md | Trust level configuration |

Use with Claude, ChatGPT & Codex

Three integration files are included in the repo root:

| File | Use |
| --- | --- |
| claude-tools.json | Claude API tool use & OpenAI Codex — drop into the tools array |
| openapi.yaml | Custom GPT Actions — import directly in the GPT editor |
| claude-project-prompt.md | Claude Projects — paste into Custom Instructions |

Claude API / Codex:

import tools from './claude-tools.json' assert { type: 'json' };
// Pass tools array to anthropic.messages.create({ tools }) or OpenAI chat completions

Custom GPT Actions: In the GPT editor → Actions → Import from URL, or paste the contents of openapi.yaml. Set the server URL to your running npx network-ai-server --port 3001 instance.

Claude Projects: Copy the contents of claude-project-prompt.md (below the horizontal rule) into a Claude Project's Custom Instructions field. No server required for instruction-only mode.

Contributing

  • Fork → feature branch → npm run test:all → pull request
  • Bugs and feature requests via Issues

MIT License — LICENSE  ·  CHANGELOG  ·  CONTRIBUTING  ·  RSS

Keywords

multi-agent · agent orchestration · AI agents · agentic AI · agentic workflow · TypeScript · Node.js · LangGraph · CrewAI · AutoGen · MCP · model-context-protocol · LlamaIndex · Semantic Kernel · OpenAI Assistants · Haystack · DSPy · Agno · OpenClaw · ClawHub · shared state · blackboard pattern · atomic commits · guardrails · token budgets · permission gating · audit trail · agent coordination · agent handoffs · governance · cost-awareness

Package last updated on 12 Mar 2026
