opencode-swarm-plugin

Multi-agent swarm coordination for OpenCode - break tasks into parallel subtasks, spawn worker agents, learn from outcomes.

🌐 Website: swarmtools.ai
πŸ“š Full Documentation: swarmtools.ai/docs


 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•—    β–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ•—   β–ˆβ–ˆβ–ˆβ•—
 β–ˆβ–ˆβ•”β•β•β•β•β•β–ˆβ–ˆβ•‘    β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ–ˆβ–ˆβ•— β–ˆβ–ˆβ–ˆβ–ˆβ•‘
 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘ β–ˆβ•— β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•”β–ˆβ–ˆβ–ˆβ–ˆβ•”β–ˆβ–ˆβ•‘
 β•šβ•β•β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘β–ˆβ–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•”β•β•β–ˆβ–ˆβ•—β–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘
 β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ•‘β•šβ–ˆβ–ˆβ–ˆβ•”β–ˆβ–ˆβ–ˆβ•”β•β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘  β–ˆβ–ˆβ•‘β–ˆβ–ˆβ•‘ β•šβ•β• β–ˆβ–ˆβ•‘
 β•šβ•β•β•β•β•β•β• β•šβ•β•β•β•šβ•β•β• β•šβ•β•  β•šβ•β•β•šβ•β•  β•šβ•β•β•šβ•β•     β•šβ•β•

Prerequisites

Bun is required. The CLI uses Bun-specific APIs and won't run with Node.js alone.

# Install Bun (if you don't have it)
curl -fsSL https://bun.sh/install | bash

See bun.sh for other installation methods (Homebrew, npm, etc.).

Quickstart (<2 minutes)

1. Install

bun install -g opencode-swarm-plugin@latest
swarm setup

Note: You can also use npm install -g, but Bun must be installed to run the CLI.

Claude Code Plugin (Marketplace)

/plugin

Choose Marketplace β†’ opencode-swarm-plugin β†’ Install.

GitHub marketplace (this repo):

/plugin marketplace add joelhooks/swarm-tools
/plugin install swarm@swarm-tools

Global install (npm):

# After `npm install -g opencode-swarm-plugin`
swarm claude install

Project-local config (standalone):

swarm claude init

MCP auto-launch: Claude Code starts MCP servers declared in the plugin mcpServers config automatically. You only need swarm mcp-serve when debugging outside Claude Code.

MCP Troubleshooting (Marketplace Install)

If Claude Code reports an MCP failure or no swarm tools appear, the plugin's build artifacts are likely missing.

  • From the repo root, build the plugin:
    bun install
    bun turbo build --filter=opencode-swarm-plugin
    
  • Confirm packages/opencode-swarm-plugin/dist/ exists.
  • Reinstall the plugin from /plugin and restart OpenCode.

2. Initialize in Your Project

cd your-project
swarm init

3. Run Your First Swarm

# Inside OpenCode
/swarm "Add user authentication with OAuth"

What happens:

  • Task decomposed into parallel subtasks (coordinator queries past similar tasks)
  • Worker agents spawn with file reservations
  • Progress tracked with auto-checkpoints at 25/50/75%
  • Completion runs bug scans, releases file locks, records learnings

Done. You're swarming.

How Swarms Get Smarter Over Time

Swarms learn from outcomes. Every completed subtask records what worked and what failed - then injects that wisdom into future prompts.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                        SWARM LEARNING LOOP                              β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                         β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”         β”‚
β”‚   β”‚  TASK    │───▢│ DECOMPOSE│───▢│  EXECUTE │───▢│ COMPLETE β”‚         β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜         β”‚
β”‚        β–²               β”‚               β”‚               β”‚                β”‚
β”‚        β”‚               β–Ό               β–Ό               β–Ό                β”‚
β”‚        β”‚         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”            β”‚
β”‚        β”‚         β”‚           EVENT STORE                   β”‚            β”‚
β”‚        β”‚         β”‚  subtask_outcome, eval_finalized, ...   β”‚            β”‚
β”‚        β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜            β”‚
β”‚        β”‚                           β”‚                                    β”‚
β”‚        β”‚                           β–Ό                                    β”‚
β”‚        β”‚         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”            β”‚
β”‚        β”‚         β”‚         INSIGHTS LAYER                  β”‚            β”‚
β”‚        β”‚         β”‚  Strategy | File | Pattern insights     β”‚            β”‚
β”‚        β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜            β”‚
β”‚        β”‚                           β”‚                                    β”‚
β”‚        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                    β”‚
β”‚                  (injected into next decomposition)                     β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The Insights Layer

swarm-insights (src/swarm-insights.ts) is the data aggregation layer that queries historical outcomes and semantic memory to provide context-efficient summaries for coordinator and worker agents.

Three insight types:

| Type | What It Tracks | Used By |
| --- | --- | --- |
| StrategyInsight | Success rates by decomposition strategy (file-based, feature-based, risk-based) | Coordinators |
| FileInsight | File-specific failure patterns and gotchas from past subtasks | Workers |
| PatternInsight | Common failure patterns across all subtasks (type errors, timeouts, conflicts) | Coordinators |

Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         DATA FLOW                                       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                         β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚   Event Store   β”‚     β”‚ Semantic Memory β”‚     β”‚  Anti-Patterns  β”‚   β”‚
β”‚  β”‚  (libSQL)       β”‚     β”‚  (Ollama/FTS)   β”‚     β”‚  (Registry)     β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚           β”‚                       β”‚                       β”‚            β”‚
β”‚           β–Ό                       β–Ό                       β–Ό            β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚                    INSIGHTS AGGREGATION                         β”‚   β”‚
β”‚  β”‚                                                                 β”‚   β”‚
β”‚  β”‚  getStrategyInsights()  getFileInsights()  getPatternInsights() β”‚   β”‚
β”‚  β”‚         β”‚                      β”‚                    β”‚           β”‚   β”‚
β”‚  β”‚         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜           β”‚   β”‚
β”‚  β”‚                                β–Ό                                β”‚   β”‚
β”‚  β”‚                    formatInsightsForPrompt()                    β”‚   β”‚
β”‚  β”‚                    (token-budgeted output)                      β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚                                   β”‚                                    β”‚
β”‚           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”            β”‚
β”‚           β–Ό                       β–Ό                       β–Ό            β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚   Coordinator   β”‚     β”‚     Worker      β”‚     β”‚     Worker      β”‚   β”‚
β”‚  β”‚   (strategy +   β”‚     β”‚  (file-specific β”‚     β”‚  (file-specific β”‚   β”‚
β”‚  β”‚    patterns)    β”‚     β”‚    gotchas)     β”‚     β”‚    gotchas)     β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

API Reference

For coordinators (strategy selection):

import { getStrategyInsights, getPatternInsights, formatInsightsForPrompt } from "opencode-swarm-plugin";

const strategies = await getStrategyInsights(swarmMail, task);
// Returns: [{ strategy: "file-based", successRate: 85.5, totalAttempts: 12, recommendation: "..." }]

const patterns = await getPatternInsights(swarmMail);
// Returns: [{ pattern: "type_error", frequency: 5, recommendation: "Add type annotations" }]

const summary = formatInsightsForPrompt({ strategies, patterns }, { maxTokens: 500 });
// Injected into decomposition prompt

For workers (file-specific context):

import { getFileInsights, formatInsightsForPrompt } from "opencode-swarm-plugin";

const fileInsights = await getFileInsights(swarmMail, ["src/auth.ts", "src/db.ts"]);
// Returns: [{ file: "src/auth.ts", failureCount: 3, lastFailure: "2025-12-20T...", gotchas: [...] }]

const summary = formatInsightsForPrompt({ files: fileInsights }, { maxTokens: 300 });
// Injected into worker prompt

Caching (5-minute TTL):

import { getCachedInsights, clearInsightsCache } from "opencode-swarm-plugin";

const insights = await getCachedInsights(swarmMail, "strategies:auth-task", async () => ({
  strategies: await getStrategyInsights(swarmMail, "add auth"),
}));

clearInsightsCache(); // Force fresh computation

Token Budgets

| Agent Type | Max Tokens | What's Included |
| --- | --- | --- |
| Coordinator | 500 | Top 3 strategies + top 3 patterns |
| Worker | 300 | Top 5 files with gotchas |

Recommendation Thresholds

Strategy success rates map to recommendations:

| Success Rate | Recommendation |
| --- | --- |
| β‰₯80% | "performing well" |
| 60-79% | "moderate - monitor for issues" |
| 40-59% | "low success - consider alternatives" |
| <40% | "AVOID - high failure rate" |
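A minimal sketch of this mapping (the function name and shape are illustrative, not the plugin's actual API):

```typescript
// Illustrative only: maps a success rate (0-100) to the recommendation
// strings from the table above. Not the plugin's real implementation.
function recommendationFor(successRate: number): string {
  if (successRate >= 80) return "performing well";
  if (successRate >= 60) return "moderate - monitor for issues";
  if (successRate >= 40) return "low success - consider alternatives";
  return "AVOID - high failure rate";
}

console.log(recommendationFor(85.5)); // "performing well"
```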

Data Sources

| Source | What It Provides | Query Pattern |
| --- | --- | --- |
| Event Store | subtask_outcome events with strategy, success, files_touched, error_type | SQL aggregation |
| Semantic Memory | File-specific learnings from past debugging | Semantic search (TODO) |
| Anti-Pattern Registry | Patterns with >60% failure rate | Direct lookup |

See swarmtools.ai/docs/insights for full details.

Semantic Memory (for pattern learning)

brew install ollama
ollama serve &
ollama pull mxbai-embed-large

Without Ollama, memory falls back to full-text search (still works, just less semantic).

Historical Context (CASS)

Queries past AI sessions for similar decompositions:

git clone https://github.com/Dicklesworthstone/coding_agent_session_search
cd coding_agent_session_search
pip install -e .
cass index  # Run periodically to index new sessions

Core Concepts

The Hive 🐝

Work items (cells) stored in .hive/ and synced to git. Each cell is a unit of work - think GitHub issue but local-first.

Cell IDs: Project-prefixed for clarity (e.g., swarm-mail-lf2p4u-abc123 not generic bd-xxx)

The Swarm

Parallel agents coordinated via Swarm Mail (message passing + file reservations). Coordinator spawns workers β†’ workers reserve files β†’ do work β†’ report progress β†’ complete with verification.

Learning

  • Pattern maturity tracks what decomposition strategies work
  • Confidence decay fades unreliable patterns (90-day half-life)
  • Anti-pattern inversion auto-marks failing approaches to avoid
  • Outcome tracking learns from speed, errors, retries

Checkpoint & Recovery

Auto-saves progress at milestones. Survives context death or crashes. Data stored in embedded libSQL (no external DB needed).

When checkpoints happen:

  • Auto at 25%, 50%, 75% progress
  • Before risky operations (via swarm_checkpoint)
  • On errors (captures error context for recovery)

Recovery: swarm_recover(project_key, epic_id) returns full context to resume work.

Tools Reference

Always-on guidance: Coordinator and worker prompts (plus compaction resumes) include an always-on guidance skill. It sets instruction priority and tool order: swarm plugin tools β†’ Read/Edit β†’ search β†’ Bash. Model defaults differ: GPT-5.2-code prefers strict checklists and minimal output; Opus 4.5 allows brief rationale.

Hive (Work Item Tracking)

| Tool | Purpose |
| --- | --- |
| hive_create | Create cell with type-safe validation |
| hive_create_epic | Atomic epic + subtasks creation |
| hive_query | Query with filters |
| hive_update | Update status/description/priority |
| hive_close | Close with reason |
| hive_start | Mark in-progress |
| hive_ready | Get next unblocked cell |
| hive_sync | Sync to git |

Migration Note: beads_* tools still work but show deprecation warnings. Update to hive_* tools.

Swarm Mail (Agent Coordination)

| Tool | Purpose |
| --- | --- |
| swarmmail_init | Initialize session |
| swarmmail_send | Send message to agents |
| swarmmail_inbox | Fetch inbox (context-safe) |
| swarmmail_read_message | Fetch one message body |
| swarmmail_reserve | Reserve files for exclusive edit |
| swarmmail_release | Release reservations |

Swarm Orchestration

| Tool | Purpose |
| --- | --- |
| swarm_select_strategy | Analyze task, recommend strategy |
| swarm_decompose | Generate decomposition prompt (queries CASS) |
| swarm_validate_decomposition | Validate response, detect conflicts |
| swarm_subtask_prompt | Generate worker agent prompt |
| swarm_status | Get swarm progress by epic ID |
| swarm_progress | Report subtask progress |
| swarm_complete | Complete subtask (releases reservations) |
| swarm_checkpoint | Save progress snapshot (auto at 25/50/75%) |
| swarm_recover | Resume from checkpoint |
| swarm_review | Generate review prompt for coordinator |
| swarm_review_feedback | Send approval/rejection to worker (3-strike) |

Semantic Memory (Persistent Learning)

Vector embeddings for persistent agent learnings. Uses libSQL native vector support via sqlite-vec extension + Ollama for embeddings.

| Tool | Purpose |
| --- | --- |
| semantic-memory_store | Store learnings (with auto-tag/auto-link/entity extraction) |
| semantic-memory_find | Search by semantic similarity |
| semantic-memory_get | Get specific memory by ID |
| semantic-memory_validate | Validate memory accuracy (resets 90-day decay) |
| semantic-memory_list | List stored memories |
| semantic-memory_remove | Delete outdated/incorrect memories |

Wave 1-3 Smart Operations:

// Simple store (always adds new)
semantic-memory_store(information="OAuth tokens need 5min buffer before expiry")

// Store with auto-tagging (LLM extracts tags)
semantic-memory_store(
  information="OAuth tokens need 5min buffer",
  metadata='{"autoTag": true}'
)
// Returns: { id: "mem-abc123", autoTags: { tags: ["auth", "oauth", "tokens"], confidence: 0.85 } }

// Store with auto-linking (links to related memories)
semantic-memory_store(
  information="Token refresh race condition fixed",
  metadata='{"autoLink": true}'
)
// Returns: { id: "mem-def456", links: [{ memory_id: "mem-abc123", link_type: "related", score: 0.82 }] }

// Store with entity extraction (builds knowledge graph)
semantic-memory_store(
  information="Joel prefers TypeScript for Next.js projects",
  metadata='{"extractEntities": true}'
)
// Returns: { id: "mem-ghi789", entities: [{ name: "Joel", type: "person" }, { name: "TypeScript", type: "technology" }] }

// Combine all smart features
semantic-memory_store(
  information="OAuth tokens need 5min buffer to avoid race conditions",
  metadata='{"autoTag": true, "autoLink": true, "extractEntities": true}'
)

// Search memories
semantic-memory_find(query="token refresh issues", limit=5)

// Validate memory (resets 90-day decay timer)
semantic-memory_validate(id="mem-abc123")

Graceful Degradation: All smart operations fall back to heuristics if LLM/Ollama unavailable:

  • Auto-tagging returns undefined (no tags added)
  • Auto-linking returns undefined (no links created)
  • Entity extraction returns empty arrays
  • Vector search falls back to full-text search (FTS5)

Requires Ollama for smart operations:

brew install ollama
ollama serve &
ollama pull mxbai-embed-large

See swarm-mail README for full API details.

Skills (Knowledge Injection)

| Tool | Purpose |
| --- | --- |
| skills_list | List available skills |
| skills_use | Load skill into context |
| skills_read | Read skill content |
| skills_create | Create new skill |

Bundled skills:

  • testing-patterns - 25 dependency-breaking techniques, characterization tests
  • swarm-coordination - Multi-agent decomposition, file reservations
  • cli-builder - Argument parsing, help text, subcommands
  • system-design - Architecture decisions, module boundaries
  • learning-systems - Confidence decay, pattern maturity
  • skill-creator - Meta-skill for creating new skills

What's New in v0.33

  • Pino logging infrastructure - Structured JSON logs with daily rotation to ~/.config/swarm-tools/logs/
  • Compaction hook instrumented - 14 log points across all phases (START, GATHER, RENDER, DECIDE, COMPLETE)
  • swarm log CLI - Query/tail logs with module, level, and time filters
  • Analytics queries - 5 pre-built queries based on Four Golden Signals (latency, traffic, errors, saturation, conflicts)

v0.32

  • libSQL storage (embedded SQLite) replaced PGLite - no external DB needed
  • 95% integration test coverage - checkpoint/recovery proven with 9 tests
  • Coordinator review gate - swarm_review + swarm_review_feedback with 3-strike rule
  • Smart ID resolution - partial hashes work like git (mjhgw0g matches opencode-swarm-monorepo-lf2p4u-mjhgw0ggt00)
  • Auto-sync at key events - no more forgotten hive_sync calls
  • Project-prefixed cell IDs - swarm-mail-xxx instead of generic bd-xxx

Architecture

Built on swarm-mail event sourcing primitives. Data stored in libSQL (embedded SQLite).

src/
β”œβ”€β”€ hive.ts                # Work item tracking integration
β”œβ”€β”€ swarm-mail.ts          # Agent coordination tools
β”œβ”€β”€ swarm-orchestrate.ts   # Coordinator logic (spawns workers)
β”œβ”€β”€ swarm-decompose.ts     # Task decomposition strategies
β”œβ”€β”€ swarm-insights.ts      # Historical insights aggregation (strategy/file/pattern)
β”œβ”€β”€ swarm-review.ts        # Review gate for completed work
β”œβ”€β”€ skills.ts              # Knowledge injection system
β”œβ”€β”€ learning.ts            # Pattern maturity, outcomes
β”œβ”€β”€ anti-patterns.ts       # Anti-pattern detection
β”œβ”€β”€ structured.ts          # JSON parsing utilities
└── schemas/               # Zod validation schemas

Development

# From monorepo root
bun turbo build --filter=opencode-swarm-plugin
bun turbo test --filter=opencode-swarm-plugin
bun turbo typecheck --filter=opencode-swarm-plugin

# Or from this directory
bun run build
bun test
bun run typecheck

Release (Changesets)

Create changeset files manually (avoid bunx changeset).

# From monorepo root
cat > .changeset/your-change.md << 'EOF'
---
"opencode-swarm-plugin": patch
---

Describe the change
EOF

git add .changeset/your-change.md
git commit -m "chore: add changeset"
git push

Changesets CI opens a release PR. Merge it to publish via npm OIDC.

Evaluation Pipeline

Test decomposition quality and coordinator discipline with Evalite (TypeScript-native eval framework):

# Run all evals
bun run eval:run

# Run specific suites
bun run eval:decomposition    # Task decomposition quality
bun run eval:coordinator      # Coordinator protocol compliance
bun run eval:compaction       # Compaction prompt quality

# Check eval status (progressive gates)
swarm eval status [eval-name]

# View history with trends
swarm eval history

Progressive Gates:

Phase             Runs    Gate Behavior
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Bootstrap         <10     βœ… Always pass (collect data)
Stabilization     10-50   ⚠️  Warn on >10% regression
Production        >50     ❌ Fail on >5% regression
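The gate policy above amounts to a small decision function. A hedged sketch (names are illustrative, not the eval pipeline's real API):

```typescript
// Illustrative sketch of the progressive gate policy: gate behavior
// depends on how many eval runs exist and how large the regression is.
type GateResult = "pass" | "warn" | "fail";

function evalGate(runCount: number, regressionPct: number): GateResult {
  if (runCount < 10) return "pass"; // Bootstrap: always pass, collect data
  if (runCount <= 50) return regressionPct > 10 ? "warn" : "pass"; // Stabilization
  return regressionPct > 5 ? "fail" : "pass"; // Production
}
```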

What gets evaluated:

| Eval Suite | Measures | Data Source |
| --- | --- | --- |
| swarm-decomposition | Subtask independence, complexity balance, coverage, clarity | Fixtures + .opencode/eval-data.jsonl |
| coordinator-session | Violation count, spawn efficiency, review thoroughness | ~/.config/swarm-tools/sessions/*.jsonl |
| compaction-prompt | ID specificity, actionability, identity, forbidden tools | Session compaction events |

Learning Feedback Loop:

When eval scores drop >15% from baseline, failure context is automatically stored to semantic memory. Future prompts query these learnings for context.

Data capture locations:

  • Decomposition inputs/outputs: .opencode/eval-data.jsonl
  • Eval history: .opencode/eval-history.jsonl
  • Coordinator sessions: ~/.config/swarm-tools/sessions/*.jsonl
  • Subtask outcomes: swarm-mail database

See evals/README.md for full architecture, scorer details, CI integration, and how to write new evals.

Maintenance

Clean test session files:

# Remove test session files from global sessions directory
./scripts/clean-test-sessions.sh

# Dry run (see what would be deleted)
./scripts/clean-test-sessions.sh --dry-run

Test session files (test*.jsonl, no-context*.jsonl, timing-test*.jsonl) accumulate in ~/.config/swarm-tools/sessions/ during development. Run this script periodically to clean them up.

CLI Reference

Setup & Configuration

swarm setup        # Interactive installer for all dependencies
swarm setup -y     # Non-interactive mode (auto-migrate stray databases)
swarm doctor       # Check dependency health (CASS, UBS, Ollama)
swarm init         # Initialize hive in current project
swarm config       # Show config file paths
swarm update       # Update swarm plugin and bundled skills
swarm migrate      # Migrate from legacy PGLite to libSQL
swarm version      # Show version info

Database Consolidation: swarm setup automatically detects and migrates stray databases (.opencode/swarm.db, .hive/swarm-mail.db, nested package databases) to the global database at ~/.config/swarm-tools/swarm.db. Use -y flag to migrate without prompting.

Observability Commands

swarm query - SQL analytics with presets

# Execute custom SQL query against event store
swarm query --sql "SELECT * FROM events WHERE type='worker_spawned' LIMIT 10"

# Use preset query (10+ presets available)
swarm query --preset failed_decompositions
swarm query --preset duration_by_strategy
swarm query --preset file_conflicts
swarm query --preset worker_success_rate
swarm query --preset review_rejections
swarm query --preset blocked_tasks
swarm query --preset agent_activity
swarm query --preset event_frequency
swarm query --preset error_patterns
swarm query --preset compaction_stats

# Output formats
swarm query --preset failed_decompositions --format table  # Default
swarm query --preset duration_by_strategy --format csv
swarm query --preset file_conflicts --format json

swarm dashboard - Live terminal UI

# Launch dashboard (auto-refresh every 1s)
swarm dashboard

# Focus on specific epic
swarm dashboard --epic mjmas3zxlmg

# Custom refresh rate (milliseconds)
swarm dashboard --refresh 2000

Dashboard shows:

  • Active workers and their current tasks
  • Progress bars for in-progress work
  • File reservations (who owns what)
  • Recent messages between agents
  • Error alerts

swarm replay - Event replay with timing

# Replay epic at normal speed
swarm replay mjmas3zxlmg

# Fast playback
swarm replay mjmas3zxlmg --speed 2x
swarm replay mjmas3zxlmg --speed instant

# Filter by event type
swarm replay mjmas3zxlmg --type worker_spawned,task_completed

# Filter by agent
swarm replay mjmas3zxlmg --agent DarkHawk

# Time range filters
swarm replay mjmas3zxlmg --since "2025-12-25T10:00:00"
swarm replay mjmas3zxlmg --until "2025-12-25T12:00:00"

# Combine filters
swarm replay mjmas3zxlmg --speed 2x --type worker_spawned --agent BlueLake

swarm export - Data export for analysis

# Export all events as JSON (stdout)
swarm export

# Export specific epic
swarm export --epic mjmas3zxlmg

# Export formats
swarm export --format json --output events.json
swarm export --format csv --output events.csv
swarm export --format otlp --output events.otlp  # OpenTelemetry Protocol

# Pipe to jq for filtering
swarm export --format json | jq '.[] | select(.type=="worker_spawned")'

swarm stats - Health metrics

# Last 7 days (default)
swarm stats

# Custom time period
swarm stats --since 24h
swarm stats --since 30m

# JSON output for scripting
swarm stats --json

swarm history - Activity timeline

# Last 10 swarms (default)
swarm history

# More results
swarm history --limit 20

# Filter by status
swarm history --status success
swarm history --status failed
swarm history --status in_progress

# Filter by strategy
swarm history --strategy file-based
swarm history --strategy feature-based

# Verbose mode (show subtasks)
swarm history --verbose

swarm log - Query/tail logs

# Recent logs (last 50 lines)
swarm log

# Filter by module
swarm log compaction

# Filter by level
swarm log --level error
swarm log --level warn

# Time filters
swarm log --since 30s
swarm log --since 5m
swarm log --since 2h

# JSON output
swarm log --json

# Limit output
swarm log --limit 100

# Watch mode (live tail)
swarm log --watch
swarm log --watch --interval 500  # Poll every 500ms

swarm log sessions - View coordinator sessions

# List all sessions
swarm log sessions

# View specific session
swarm log sessions <session_id>

# Most recent session
swarm log sessions --latest

# Filter by event type
swarm log sessions --type DECISION
swarm log sessions --type VIOLATION
swarm log sessions --type OUTCOME
swarm log sessions --type COMPACTION

# JSON output for jq
swarm log sessions --json

Debug Logging

Use DEBUG env var to enable swarm debug logs:

# All swarm logs
DEBUG=swarm:* swarm dashboard

# Coordinator only
DEBUG=swarm:coordinator swarm replay <epic-id>

# Workers only
DEBUG=swarm:worker swarm export

# Swarm mail only
DEBUG=swarm:mail swarm query --preset agent_activity

# Multiple namespaces (comma-separated)
DEBUG=swarm:coordinator,swarm:worker swarm dashboard

Namespaces:

| Namespace | What It Logs |
| --- | --- |
| swarm:* | All swarm activity |
| swarm:coordinator | Coordinator decisions (spawn, review, approve/reject) |
| swarm:worker | Worker progress, reservations, completions |
| swarm:mail | Inter-agent messages, inbox/outbox activity |

Observability Architecture

Swarm uses event sourcing for complete observability. Every coordination action is an event - nothing is lost, everything is queryable.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    OBSERVABILITY FLOW                                   β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                         β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                                         β”‚
β”‚  β”‚   Agent    β”‚  swarmmail_init()                                       β”‚
β”‚  β”‚  (Worker)  β”‚  swarmmail_reserve(paths=["src/auth.ts"])               β”‚
β”‚  β”‚            β”‚  swarm_progress(status="in_progress")                   β”‚
β”‚  β”‚            β”‚  swarm_complete(...)                                    β”‚
β”‚  β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜                                                         β”‚
β”‚        β”‚                                                                β”‚
β”‚        β–Ό                                                                β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”             β”‚
β”‚  β”‚              libSQL Event Store                        β”‚             β”‚
β”‚  β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚             β”‚
β”‚  β”‚  β”‚ events table (append-only)                       β”‚  β”‚             β”‚
β”‚  β”‚  β”‚ β”œβ”€ id, type, timestamp, project_key, data       β”‚  β”‚             β”‚
β”‚  β”‚  β”‚ β”œβ”€ agent_registered, message_sent, ...          β”‚  β”‚             β”‚
β”‚  β”‚  β”‚ └─ task_started, task_progress, task_completed  β”‚  β”‚             β”‚
β”‚  β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚             β”‚
β”‚  β”‚                                                         β”‚             β”‚
β”‚  β”‚  Automatic Projections (materialized views):            β”‚             β”‚
β”‚  β”‚  β”œβ”€ agents (who's registered)                           β”‚             β”‚
β”‚  β”‚  β”œβ”€ messages (agent inbox/outbox)                       β”‚             β”‚
β”‚  β”‚  β”œβ”€ reservations (file locks)                           β”‚             β”‚
β”‚  β”‚  └─ swarm_contexts (checkpoints)                        β”‚             β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜             β”‚
β”‚                    β”‚                                                    β”‚
β”‚       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                          β”‚
β”‚       β–Ό            β–Ό            β–Ό            β–Ό                          β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                     β”‚
β”‚  β”‚  swarm  β”‚ β”‚  swarm  β”‚ β”‚  swarm   β”‚ β”‚  swarm   β”‚                     β”‚
β”‚  β”‚  query  β”‚ β”‚  stats  β”‚ β”‚ dashboardβ”‚ β”‚  replay  β”‚                     β”‚
β”‚  β”‚  (SQL)  β”‚ β”‚ (counts)β”‚ β”‚   (TUI)  β”‚ β”‚ (time)   β”‚                     β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                     β”‚
β”‚                                                                         β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”            β”‚
β”‚  β”‚  Analytics Layer (Golden Signals)                       β”‚            β”‚
β”‚  β”‚  β”œβ”€ Latency: avg task duration, P50/P95/P99             β”‚            β”‚
β”‚  β”‚  β”œβ”€ Traffic: events/sec, message rate                   β”‚            β”‚
β”‚  β”‚  β”œβ”€ Errors: task failures, violations                   β”‚            β”‚
β”‚  β”‚  β”œβ”€ Saturation: file conflicts, blocked tasks           β”‚            β”‚
β”‚  β”‚  └─ Conflicts: reservation collisions, deadlocks        β”‚            β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜            β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
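The projection idea in the diagram above can be sketched in a few lines: replaying the append-only log rebuilds a materialized view. The event shapes and field names below are illustrative assumptions, not the plugin's exact schema (see `swarm-mail/src/streams/events.ts` for the real Zod definitions):

```typescript
// Sketch: folding an append-only event log into a projection.
// Event and field names here are assumptions for illustration.
type SwarmEvent =
  | { type: "agent_registered"; agent: string }
  | { type: "file_reserved"; agent: string; paths: string[] }
  | { type: "file_released"; agent: string; paths: string[] };

// Replaying events rebuilds the current file locks (the "reservations" view).
function projectReservations(events: SwarmEvent[]): Map<string, string> {
  const locks = new Map<string, string>(); // path -> holding agent
  for (const e of events) {
    if (e.type === "file_reserved") {
      for (const p of e.paths) locks.set(p, e.agent);
    } else if (e.type === "file_released") {
      for (const p of e.paths) locks.delete(p);
    }
  }
  return locks;
}
```

Because the log is append-only, any projection can be dropped and rebuilt from scratch by replaying events, which is what makes `swarm replay` possible.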

Key Event Types

Agent Lifecycle:

| Event Type | When It Fires | Used For |
| --- | --- | --- |
| `agent_registered` | Agent calls `swarmmail_init()` | Agent discovery, project tracking |
| `agent_active` | Periodic heartbeat | Last-seen tracking |

Messages:

| Event Type | When It Fires | Used For |
| --- | --- | --- |
| `message_sent` | Agent sends swarm mail | Coordination, thread tracking |
| `message_read` | Agent reads message | Read receipts |
| `message_acked` | Agent acknowledges | Confirmation tracking |
| `thread_created` | First message in thread | Thread lifecycle |
| `thread_activity` | Thread stats update | Unread counts, participants |

File Reservations:

| Event Type | When It Fires | Used For |
| --- | --- | --- |
| `file_reserved` | Agent reserves files | Conflict detection, lock management |
| `file_released` | Agent releases files | Lock cleanup, reservation tracking |
| `file_conflict` | Reservation collision | Conflict resolution, deadlock detection |

Task Tracking:

| Event Type | When It Fires | Used For |
| --- | --- | --- |
| `task_started` | Agent starts cell work | Progress tracking, timeline |
| `task_progress` | Agent reports milestone | Real-time monitoring, ETA |
| `task_completed` | Agent calls `swarm_complete()` | Outcome tracking, learning signals |
| `task_blocked` | Agent hits blocker | Dependency tracking, alerts |

Swarm Coordination:

| Event Type | When It Fires | Used For |
| --- | --- | --- |
| `swarm_started` | Coordinator begins epic | Swarm lifecycle tracking |
| `worker_spawned` | Coordinator spawns worker | Worker tracking, spawn order |
| `worker_completed` | Worker finishes subtask | Outcome tracking, duration |
| `review_started` | Coordinator begins review | Review tracking, attempts |
| `review_completed` | Review finishes | Approval/rejection tracking |
| `swarm_completed` | All subtasks done | Epic completion, success rate |
| `decomposition_generated` | Task decomposed | Strategy tracking, subtask planning |
| `subtask_outcome` | Subtask finishes | Learning signals, scope violations |

Checkpoints & Recovery:

| Event Type | When It Fires | Used For |
| --- | --- | --- |
| `swarm_checkpointed` | Auto at 25/50/75% or manual | Recovery, context preservation |
| `swarm_recovered` | Resume from checkpoint | Recovery tracking, checkpoint age |
| `checkpoint_created` | Checkpoint saved | Checkpoint lifecycle |
| `context_compacted` | Context compaction runs | Context compression tracking |

Validation & Learning:

| Event Type | When It Fires | Used For |
| --- | --- | --- |
| `validation_started` | Validation begins | Validation lifecycle |
| `validation_issue` | Validation finds issue | Issue tracking, debugging |
| `validation_completed` | Validation finishes | Pass/fail tracking |
| `human_feedback` | Human accepts/modifies | Human-in-loop learning |

Full Schema: See `swarm-mail/src/streams/events.ts` for the complete Zod schemas (30+ event types).
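As a rough sketch of how these lifecycle events get consumed downstream, pairing each `task_started` with its `task_completed` yields per-task durations. Field names here follow the tables above but are assumptions, not the exact event schema:

```typescript
// Illustrative: derive task durations from start/complete event pairs.
// `bead_id` and `timestamp` (Unix ms) are assumed field names.
type TaskEvent = { type: string; timestamp: number; bead_id: string };

function taskDurations(events: TaskEvent[]): Map<string, number> {
  const started = new Map<string, number>();   // bead_id -> start time
  const durations = new Map<string, number>(); // bead_id -> duration ms
  for (const e of events) {
    if (e.type === "task_started") started.set(e.bead_id, e.timestamp);
    if (e.type === "task_completed") {
      const t0 = started.get(e.bead_id);
      if (t0 !== undefined) durations.set(e.bead_id, e.timestamp - t0);
    }
  }
  return durations;
}
```

This is the same aggregation the latency queries below perform in SQL; the event log supports either style.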

Analytics Queries

Pre-built queries based on the Four Golden Signals observability framework:

Latency (how fast):

```sql
-- Average task duration by type
-- (durations live in the JSON payload; MAX is an upper bound, not a true P99)
SELECT
  json_extract(data, '$.type') AS task_type,
  AVG(json_extract(data, '$.duration_ms')) AS avg_duration,
  MAX(json_extract(data, '$.duration_ms')) AS max_duration
FROM events
WHERE type = 'task_completed'
GROUP BY task_type;
```

Traffic (how much):

```sql
-- Events per hour
SELECT
  strftime('%Y-%m-%d %H:00', datetime(timestamp/1000, 'unixepoch')) AS hour,
  COUNT(*) AS event_count
FROM events
GROUP BY hour
ORDER BY hour DESC
LIMIT 24;
```

Errors (what's broken):

```sql
-- Failed tasks with reasons
SELECT
  json_extract(data, '$.bead_id') AS task,
  json_extract(data, '$.reason') AS failure_reason,
  timestamp
FROM events
WHERE type = 'task_completed'
  AND json_extract(data, '$.success') = 0
ORDER BY timestamp DESC;
```

Saturation (resource contention):

```sql
-- File reservation conflicts
SELECT
  json_extract(data, '$.paths') AS file_paths,
  COUNT(*) AS conflict_count,
  GROUP_CONCAT(json_extract(data, '$.agent_name')) AS agents
FROM events
WHERE type = 'file_reserved'
GROUP BY file_paths
HAVING COUNT(*) > 1;
```

Conflicts (deadlocks, collisions):

```sql
-- Long-held reservations (expiry window >10s; expires_at lives in the payload)
SELECT
  json_extract(data, '$.agent_name') AS agent,
  json_extract(data, '$.paths') AS paths,
  json_extract(data, '$.expires_at') - timestamp AS hold_window_ms
FROM events
WHERE type = 'file_reserved'
  AND json_extract(data, '$.expires_at') - timestamp > 10000
ORDER BY hold_window_ms DESC;
```
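SQLite-compatible engines like libSQL ship no built-in percentile aggregate, so `MAX` only approximates tail latency. A true P50/P95/P99 is easiest to compute client-side after fetching raw durations; a minimal nearest-rank sketch:

```typescript
// Nearest-rank percentile over raw durations fetched from the event store.
// Assumes 0 < p <= 100; illustrative helper, not part of the swarm CLI.
function percentile(values: number[], p: number): number {
  if (values.length === 0) throw new Error("empty input");
  const sorted = [...values].sort((a, b) => a - b);
  // Nearest-rank method: rank = ceil(p/100 * n), 1-indexed.
  const rank = Math.max(1, Math.ceil((p / 100) * sorted.length));
  return sorted[rank - 1];
}
```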

Run these via:

```bash
swarm query --preset golden-signals
swarm query --preset compaction-health
swarm query --preset file-conflicts
```

Getting Started with Debugging

Scenario 1: Task is stuck "in_progress" forever

```bash
# 1. Find the task in events
swarm query --sql "SELECT * FROM events WHERE json_extract(data, '$.bead_id') = 'mjmas411jtj' ORDER BY timestamp"

# 2. Check for file reservation conflicts
swarm query --preset file_conflicts

# 3. Replay to see execution timeline
swarm replay mjmas3zxlmg --agent WorkerName

# 4. Check if agent is still registered
swarm stats

# 5. Enable debug logging for live tracking
DEBUG=swarm:worker swarm dashboard --epic mjmas3zxlmg
```

Scenario 2: High failure rate for a specific epic

```bash
# 1. Get stats by epic
swarm query --sql "SELECT type, COUNT(*) FROM events WHERE json_extract(data, '$.epic_id') = 'mjmas3zxlmg' GROUP BY type"

# 2. Find failures with reasons
swarm query --sql "SELECT * FROM events WHERE type = 'task_completed' AND json_extract(data, '$.epic_id') = 'mjmas3zxlmg' AND json_extract(data, '$.success') = 0"

# 3. Export for analysis
swarm export --epic mjmas3zxlmg --format csv > failures.csv

# 4. Check coordinator session for violations
swarm log sessions --type VIOLATION --json
```

Scenario 3: Performance regression (tasks slower than before)

```bash
# 1. Check latency trends
swarm query --preset duration_by_strategy

# 2. Compare with historical baselines
swarm history --limit 50

# 3. Identify bottlenecks
swarm dashboard --epic mjmas3zxlmg --refresh 2

# 4. Analyze worker spawn efficiency
swarm query --preset worker_success_rate
```

Scenario 4: File reservation conflicts

```bash
# 1. Check active locks
swarm query --preset file_conflicts

# 2. See who's holding what
swarm dashboard  # Shows file locks section

# 3. View full conflict history
swarm query --sql "SELECT * FROM events WHERE type = 'file_conflict' ORDER BY timestamp DESC LIMIT 20"

# 4. Replay to see conflict sequence
swarm replay mjmas3zxlmg --type file_reserved,file_released,file_conflict
```

Scenario 5: Coordinator not spawning workers

```bash
# 1. Check coordinator session for violations
swarm log sessions --latest --type DECISION,VIOLATION

# 2. Verify decomposition was generated
swarm query --sql "SELECT * FROM events WHERE type = 'decomposition_generated' ORDER BY timestamp DESC LIMIT 5"

# 3. Debug coordinator logic
DEBUG=swarm:coordinator swarm replay mjmas3zxlmg

# 4. Check for blocked tasks
swarm query --preset blocked_tasks
```

Event Store Schema

```sql
CREATE TABLE events (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  type TEXT NOT NULL,                    -- Event discriminator
  project_key TEXT NOT NULL,             -- Project path (for multi-project filtering)
  timestamp INTEGER NOT NULL,            -- Unix ms
  sequence INTEGER GENERATED ALWAYS AS (id) STORED,
  data TEXT NOT NULL,                    -- JSON payload (event-specific fields)
  created_at TEXT DEFAULT (datetime('now'))
);

-- Indexes for fast queries
CREATE INDEX idx_events_project_key ON events(project_key);
CREATE INDEX idx_events_type ON events(type);
CREATE INDEX idx_events_timestamp ON events(timestamp);
CREATE INDEX idx_events_project_type ON events(project_key, type);
```
Event payload examples:

```jsonc
// agent_registered event
{
  "type": "agent_registered",
  "project_key": "/path/to/project",
  "timestamp": 1703001234567,
  "data": "{\"agent_name\":\"BlueLake\",\"program\":\"opencode\",\"model\":\"claude-sonnet-4\",\"task_description\":\"mjmas411jtj: Update READMEs\"}"
}

// task_completed event
{
  "type": "task_completed",
  "project_key": "/path/to/project",
  "timestamp": 1703001299999,
  "data": "{\"agent_name\":\"BlueLake\",\"bead_id\":\"mjmas411jtj\",\"summary\":\"Updated both READMEs with CLI reference and event schema\",\"files_touched\":[\"packages/opencode-swarm-plugin/README.md\",\"packages/swarm-mail/README.md\"],\"success\":true}"
}
```
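Note that the payload is double-encoded: the row is JSON, and its `data` field is itself a JSON string. Reading a payload therefore takes two parses. An illustrative sketch (real rows come from the libSQL driver, and the field types are assumptions based on the schema above):

```typescript
// A row from the events table: `data` holds a stringified JSON payload.
interface EventRow {
  type: string;
  project_key: string;
  timestamp: number; // Unix ms
  data: string;      // JSON-encoded, event-specific fields
}

// Second parse: unwrap the payload string into a usable object.
function decodeEvent(row: EventRow): { type: string; payload: Record<string, unknown> } {
  return { type: row.type, payload: JSON.parse(row.data) };
}
```

Keeping payloads as opaque strings is what lets the table stay append-only with a fixed schema while event shapes evolve; `json_extract` in the queries above performs the same unwrapping on the SQL side.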

Database Location

```bash
# libSQL database path
~/.config/swarm-tools/libsql/<project-hash>/swarm.db

# Find your project's database
swarm config  # Shows database path for current project
```

Further Reading

"High-variability sequencing of whole-task problems."
β€” 4C/ID Instructional Design Model

License

MIT

Keywords

opencode

Package last updated on 06 Feb 2026
