bs-buster

CLI for evaluating AI agent harnesses across GitHub Copilot, Claude Code, Codex, and custom agent workflows.

Latest version: 0.3.0 · Source: npm · Maintainers: 1 · Weekly downloads: 79 (+146.88%)

BS Buster

Don't blame the model. Measure the harness.


47 deterministic checks | Passive observation | 4 harness targets | Ablation attribution

BS Buster is a CLI for evaluating AI agent harnesses. It helps teams measure the orchestration layer behind GitHub Copilot, Claude Code, Codex, and custom agent workflows so you can separate model problems from harness problems.

Best for: AI engineering teams, agent platform builders, developer tooling teams, and anyone comparing coding assistants in real workflows.

BS Buster hero banner

Install

Install locally in your project:

npm install bs-buster

That's it. No build step, no config files, no Docker. One command.

After installing, all commands use npx:

npx bs-buster init
npx bs-buster start
npx bs-buster stop
npx bs-buster report

Or install globally:

npm install -g bs-buster
bs-buster init    # no npx needed when global
Pixel Buster arcade cabinet

Insert coin. Bust BS.

Quick Start

Note: If you installed locally (not globally), you must use npx to run commands. Running bs-buster start directly won't work — use npx bs-buster start instead.

1. Initialize (one time)

npx bs-buster init
BS Buster Setup
Don't blame the model. Measure the harness.
───────────────────────────────────────────

Scanning for installed harnesses...

  [✓] Claude Code     Found .jsonl files in ~/.claude/projects (high confidence)
  [✓] GitHub Copilot  Found extension(s) in ~/.vscode-insiders (high confidence)
  [✗] OpenAI Codex    Not detected
  [✓] Generic         Filesystem watcher available for any harness

Select your primary harness:
  1) Claude Code (recommended)
  2) GitHub Copilot
  3) Generic

Enter number [1]: 2

Configuration saved to .bs-buster/config.json

Auto-detects Claude Code, Codex, and Copilot (including VS Code Insiders). Also checks your project root for .claude/, CLAUDE.md, and .github/copilot-instructions.md. Saves config so all future commands need zero flags.
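The wizard's detection step amounts to probing well-known filesystem locations. A minimal TypeScript sketch of that idea, assuming hypothetical names (`detectHarnesses`, `Detection`) rather than bs-buster's actual internals; the probed paths mirror the ones the transcript above prints:

```typescript
// Hypothetical sketch of init-style harness auto-detection: probe
// well-known paths and report what was found. Not bs-buster's real API.
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

interface Detection {
  harness: string;
  evidence: string; // the path that was probed
  found: boolean;
}

function detectHarnesses(projectRoot: string): Detection[] {
  const home = homedir();
  const probes: Array<[string, string]> = [
    ["Claude Code", join(home, ".claude", "projects")],
    ["GitHub Copilot", join(home, ".vscode-insiders")],
    ["Claude Code (project)", join(projectRoot, "CLAUDE.md")],
    ["GitHub Copilot (project)", join(projectRoot, ".github", "copilot-instructions.md")],
  ];
  return probes.map(([harness, evidence]) => ({
    harness,
    evidence,
    found: existsSync(evidence),
  }));
}
```

The real detector presumably also inspects harness-specific config and extension directories; the point is that detection is cheap filesystem probing, not model calls.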

2. Observe

npx bs-buster start       # Starts background observer
# ... use your harness normally ...
npx bs-buster stop        # Stops observer, finalizes data

3. Report

npx bs-buster report      # Generates an HTML report

The report includes:

  • Overall score (0–100) and letter grade (A+ through F)
  • Per-pillar pass rates across 47 checks
  • Attribution analysis: harness failures vs. model failures vs. ambiguous
  • Observation coverage: what data the observer could and couldn't capture
  • Cross-pillar interactions: emergent capabilities between pillars
  • Ranked recommendations for improving your harness

Report formats: --format html (default), --format json, --format markdown
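For illustration, here is a score-to-grade mapping of the kind the report performs. The cutoffs below are an assumption (bs-buster's actual thresholds are not documented here), chosen so they agree with the case-study scores later in this README (91.1 → A-, 83.7 → B):

```typescript
// Assumed grade cutoffs for mapping a 0-100 score to a letter grade.
// Illustrative only; bs-buster's real thresholds may differ.
function letterGrade(score: number): string {
  if (score >= 97) return "A+";
  if (score >= 93) return "A";
  if (score >= 90) return "A-";
  if (score >= 87) return "B+";
  if (score >= 83) return "B";
  if (score >= 80) return "B-";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}
```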

Ghostbusters-style attribution animation showing model blame being diagnosed as a harness issue

Run the checks. Find the actual culprit.

CLI Reference

npx bs-buster <command> [options]
| Command | Description |
|---|---|
| `npx bs-buster init` | Auto-detect harnesses, save config |
| `npx bs-buster start` | Start background observer |
| `npx bs-buster stop` | Stop observer, finalize events |
| `npx bs-buster status` | Check if observer is running |
| `npx bs-buster report` | Generate evaluation report |
| `npx bs-buster sessions` | List past observation sessions |

| Option | Description | Default |
|---|---|---|
| `--target, -t` | `claude-code`, `codex`, `copilot`, `generic` | auto (from init) |
| `--dir, -d` | Workspace directory to watch | `.` |
| `--format, -f` | Report format: `html`, `json`, `markdown` | `html` |
| `--session, -s` | Session ID for report | latest |
| `--harness-name` | Label the harness being tested (e.g., "Ruflo Swarm") | auto |
| `--harness-desc` | Custom description for the harness in reports | auto |
# Override saved config:
npx bs-buster start --target codex --dir ./my-project
npx bs-buster report --session abc123 --format json

The Problem

The AI industry has a hand-waving problem. When an agent fails — hallucinating, looping, destroying production data — the diagnosis is always the same: "the model needs to be smarter."

This is the model attribution error, and it is the most expensive misdiagnosis in AI engineering.

Same model, different harness = different outcomes. The model was fine. The orchestration wasn't.

"Your agent's reliability problem is not a model problem. We can prove it."

| The BS Claim | What People Say | What the Data Shows |
|---|---|---|
| Model Blame | "We need a smarter model" | Same model, different harness = different outcomes |
| Benchmark Theater | "Our model scores 92% on SWE-bench" | Benchmarks test models in isolation. Production agents fail at the harness layer |
| Prompt Engineering | "We just need better prompts" | Prompts break on model updates. Harness guarantees persist across models |
| Context Window Copium | "We need a bigger context window" | You have 200K tokens and filled them with raw dumps. Lifecycle management was nonexistent |
| Scale Solves Everything | "Next year's model will fix it" | Next year's model still needs stopping rules, tool schemas, and a policy layer |
| Alignment Hand-Wave | "The model doesn't understand consequences" | You handed it `rm -rf` and `git push --force` with no permission gate |
Jurassic Park inspired pixel art declaring a big pile of BS

The industry's default diagnosis, visualized.

Real-World Case Study: Harness Quality Matters

Two real harness evaluations. Same evaluation framework. Different orchestration maturity.

| Metric | Ruflo Swarm (A-) | ATV Agent Orchestrator (B) |
|---|---|---|
| Underlying Assistant | Claude Code | GitHub Copilot |
| Overall Score | 91.1 / 100 | 83.7 / 100 |
| Grade | A- | B |
| Observation Coverage | 100% | 75% (no token tracking) |
| Turns / Tool Calls | 12 turns, 16 tool calls | 381 turns, 103 tool calls |
| Worst Pillar | Context Assembly at 87.5% (4 of 5 pillars at 100%) | Loop Discipline at 33.3% (4 of 5 pillars at 100%) |
| Attribution (harness vs model) | 0% harness failures; all minor issues attributed to model behavior | 0% harness failures; eval framework-specific checks were corrected |
| Infinite Loop Signature | Zero infinite loops, zero idle turns, explicit termination | 381 turns with 99% duplicate tool calls, 26 idle turns, no checkpoints |

Don't blame the model. Measure the harness. Both Claude Code and GitHub Copilot are capable coding assistants -- the difference in scores reflects harness orchestration maturity, not model quality. Ruflo's mature harness design -- iteration ceilings, idle detection, progress tracking -- prevented the failure modes that ATV's orchestrator still exhibits. The remaining gap is a harness engineering gap: loop discipline and checkpoint strategy, not model intelligence.

Full reports: Ruflo Swarm (A-) | ATV Agent Orchestrator (B) | Comparative Analysis

HAL 9000 themed pixel art apologizing for missing policy enforcement

When the harness is missing a policy layer, even HAL has receipts.

The Five Pillars

| # | Pillar | What It Measures | Checks |
|---|---|---|---|
| 1 | Context Assembly | System prompt, tool declarations, state injection | 7 |
| 2 | Tool Integrity | Schema validation, error messages, self-correction | 8 |
| 3 | Loop Discipline | Stopping rules, iteration ceilings, stall detection | 10 |
| 4 | Policy Enforcement | Permission gates, destructive action blocking | 8 |
| 5 | Context Lifecycle | Token management, compaction, delegation | 8 |
|   | Cross-Pillar | Emergent interactions between pillars | 6 |
The five pillars diagram for the BS Buster framework
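The per-pillar pass rates above feed a weighted composite score (`src/eval/scoring/` in the project tree). A sketch of that shape, with the caveat that the actual weights and formula are not documented here and the equal weighting below is an assumption:

```typescript
// Illustrative weighted composite over per-pillar pass rates.
// PillarResult and the weighting scheme are assumptions, not
// bs-buster's actual scoring engine.
interface PillarResult {
  pillar: string;
  passed: number; // checks passed
  total: number;  // checks run
  weight: number; // relative pillar weight
}

function compositeScore(pillars: PillarResult[]): number {
  const totalWeight = pillars.reduce((sum, p) => sum + p.weight, 0);
  const weighted = pillars.reduce(
    (sum, p) => sum + p.weight * (p.passed / p.total),
    0
  );
  return (weighted / totalWeight) * 100; // 0-100 scale
}
```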

How It Works

BS Buster uses passive observation — it watches your harness operate in its natural environment and reconstructs what happened from side effects. No synthetic sandboxes, no test doubles.

npx bs-buster start     →  Observer watches harness in background
  (use your harness normally)
npx bs-buster stop      →  Finalize event collection
npx bs-buster report    →  Reconstruct → 47 Checks → Attribution → Score → HTML Report
Evaluation flow diagram from passive observation to scoring and reporting

Observation Pipeline

fs.watch / file tailing
    → Observer emits ObserverEvents
        → EventCollector writes JSONL to disk
            → reconstructOutput() builds AgentOutput
                → evaluateObservation() runs 47 checks
                    → Scoring engine → HarnessReport
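The JSONL stage is simply one JSON object per line, so reconstruction begins by parsing the lines back into typed events. A minimal sketch, with assumed field names (`type`, `timestamp`) rather than bs-buster's real event schema:

```typescript
// Illustrative JSONL parsing step of the pipeline: each line of the
// events file is one serialized ObserverEvent. Field names are assumed.
interface ObserverEvent {
  type: string;      // e.g. "tool_call", "file_change"
  timestamp: number; // epoch millis
  [key: string]: unknown;
}

function parseJsonlEvents(jsonl: string): ObserverEvent[] {
  return jsonl
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as ObserverEvent);
}
```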

Every check is a pure function: (AgentOutput) → { pass, detail }. No model calls. No randomness. Reproducible across runs.
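A hypothetical check in that shape: a loop-discipline-style detector for identical consecutive tool calls, the infinite-loop signature the case study highlights. The types and logic here are illustrative only, not bs-buster's actual code:

```typescript
// Sketch of one deterministic check as a pure function
// (AgentOutput) -> { pass, detail }. All names are assumptions.
interface ToolCall {
  name: string;
  args: string; // serialized arguments
}

interface AgentOutput {
  toolCalls: ToolCall[];
}

interface CheckResult {
  pass: boolean;
  detail: string;
}

function checkNoDuplicateToolCalls(output: AgentOutput): CheckResult {
  let duplicates = 0;
  for (let i = 1; i < output.toolCalls.length; i++) {
    const prev = output.toolCalls[i - 1];
    const curr = output.toolCalls[i];
    if (prev.name === curr.name && prev.args === curr.args) duplicates++;
  }
  return {
    pass: duplicates === 0,
    detail:
      duplicates === 0
        ? "no duplicate consecutive tool calls"
        : `${duplicates} duplicate consecutive tool call(s) detected`,
  };
}
```

Because the input is a plain data structure and the function makes no model calls, the same observed session always produces the same result.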

Supported Harnesses

| Target | Observer Method | Captures |
|---|---|---|
| Claude Code | JSONL tail + workspace fs.watch | Model responses, tool calls, token usage, file changes |
| Codex | JSON output tail + workspace fs.watch | Model responses, tool calls, token usage, file changes |
| Copilot | Workspace snapshot + fs.watch + .git watcher | File changes, git activity |
| Generic | Workspace fs.watch with glob patterns | File changes (any harness) |
Ghostbusters-style harness attribution animation

Programmatic API

import {
  evaluateObservation,
  reconstructOutput,
  EventCollector,
  generateHtmlReport,
} from "bs-buster";

// Read collected events
const events = EventCollector.readEvents("path/to/session.events.jsonl");

// Reconstruct agent output from raw events
const result = reconstructOutput(events, sessionId, target, harnessId, modelId);

// Run 47-check evaluation
const report = await evaluateObservation(
  result.output,
  "claude-code",
  result.observation_coverage
);

console.log(`Score: ${report.overall_score} (${report.overall_grade})`);

// Generate standalone HTML report
const html = generateHtmlReport({
  session_id: sessionId,
  target: "claude-code",
  observation_coverage: result.observation_coverage,
  warnings: result.warnings,
  evaluation: report,
  summary: { turns: 830, tool_calls: 976, total_tokens: 366570, duration_ms: 0 },
});

Project Structure

src/
├── cli/                                 # CLI entry point
│   ├── index.ts                         #   Argument parsing, 6 commands
│   ├── init-wizard.ts                   #   Guided setup with auto-detection
│   ├── harness-detector.ts              #   Scans system for installed harnesses
│   ├── html-report.ts                   #   Self-contained HTML report generator
│   ├── config.ts                        #   .bs-buster/config.json persistence
│   └── daemon.ts                        #   Background process with PID management
├── observer/                            # Passive observation engine
│   ├── types.ts                         #   HarnessTarget, ObserverEvent, HarnessObserver
│   ├── eval-bridge.ts                   #   Bridges observation → eval pipeline
│   ├── event-collector.ts               #   JSONL event writer/reader
│   ├── observer-registry.ts             #   Factory for target-specific observers
│   ├── observers/                       #   Per-harness observer implementations
│   │   ├── claude-code.observer.ts      #     JSONL tail + workspace watcher
│   │   ├── codex.observer.ts            #     JSON output tail + workspace watcher
│   │   ├── copilot.observer.ts          #     Workspace snapshot + fs.watch + git
│   │   └── generic.observer.ts          #     Glob-filtered filesystem watcher
│   └── reconstruction/
│       └── output-builder.ts            #   Events → AgentOutput reconstruction
└── eval/                                # Evaluation engine
    ├── types.ts                         #   40+ interfaces
    ├── check-registry.ts                #   Registration, lookup, lazy ESM loading
    ├── checks/                          #   47 deterministic checks
    ├── scoring/                         #   Weighted composite scoring + attribution
    ├── reporters/                       #   JSON & Markdown renderers
    └── harnesses/                       #   Ablation testing

Documentation

| Document | What It Is |
|---|---|
| Agent Harness Whitepaper | The thesis: five pillars framework, failure modes, architectural patterns |
| BS Buster Philosophy | How 47 checks kill the model attribution error |
| Architecture | System architecture, module design, data flow |
| Eval Methodology | Phil Schmid adaptation for harness testing |
| Pillar Strategy | Per-pillar check design and metrics |
| Comparative Analysis | Real-world A- vs B harness comparison: Ruflo Swarm vs ATV Agent Orchestrator |
Retro pixel arcade cabinet for BS Buster

Requirements

  • Node.js ≥ 18
  • One runtime dependency: zod (schema validation)
  • Works on macOS, Linux, and Windows

47 checks that don't lie.

npm · GitHub · Issues

MIT License


Package last updated on 10 Mar 2026
