@probelabs/visor
AI workflow engine for code review, assistants, and automation — orchestrate checks, MCP tools, and AI providers with YAML-driven pipelines. Runs as a GitHub Action, CLI, Slack bot, Telegram bot, or HTTP API.
Visor is an open-source workflow engine that lets you define multi-step AI pipelines in YAML. Wire up shell commands, AI providers, MCP tools, HTTP calls, and custom scripts into dependency-aware DAGs — then run them from your terminal, CI, Slack, Telegram, Email, WhatsApp, Teams, or an HTTP endpoint.
What you get out of the box:
Providers for ai, command, script, mcp, utcp, http, claude-code, a2a, github, memory, workflow, and more.

| Goal | Start here | Example |
|---|---|---|
| Code review on PRs | Guide: Code Review Pipeline | quick-start-tags.yaml |
| AI agent with tools | Guide: AI Agent | ai-custom-tools-simple.yaml |
| Multi-step automation | Workflow Creation Guide | enhanced-config.yaml |
| Chat assistant / Bot | Bot Integrations | teams-assistant.yaml |
| Run shell commands + AI | Command Provider | ai-with-bash.yaml |
| Connect MCP tools | MCP Provider | mcp-provider-example.yaml |
| Call tools via UTCP | UTCP Provider | utcp-provider-example.yaml |
| Add API integrations (TDD) | Guide: TDD Assistant Workflows | workable.tests.yaml |
First time? Run `npx visor init` to scaffold a working config, then `npx visor` to run it.
Requirements: Node.js 18+ (CI runs Node 20).
```bash
# Install
npm i -D @probelabs/visor

# Scaffold a starter config (pick a template)
npx visor init               # interactive picker
npx visor init code-review   # PR review pipeline
npx visor init agent         # AI agent with tools
npx visor init automation    # multi-step pipeline
npx visor init assistant     # chat assistant / Slack bot

# Run
npx visor                    # run all steps
npx visor --tags fast        # run steps tagged "fast"
npx visor validate           # check config for errors
```

Or one-off without installing: `npx -y @probelabs/visor@latest --check all --output table`
A minimal config (`.visor.yaml`):

```yaml
version: "1.0"
steps:
  security:
    type: ai
    prompt: "Identify security issues in changed files"
    tags: ["fast", "security"]
  run-tests:
    type: command
    exec: npm test
    depends_on: [security]
  notify:
    type: http
    method: POST
    url: https://hooks.slack.com/...
    body: '{ "text": "Tests {{ outputs[''run-tests''].status }}" }'
    depends_on: [run-tests]
```
```yaml
# .github/workflows/visor.yml
name: Visor
on:
  pull_request: { types: [opened, synchronize] }
  issues: { types: [opened] }
  issue_comment: { types: [created] }
permissions:
  contents: read
  pull-requests: write
  issues: write
  checks: write
jobs:
  visor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: probelabs/visor@v1
        env:
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
```

Tip: Pin releases for stability with `@v1`. For bleeding-edge, use `@nightly`.
Visor ships with a built-in assistant framework — three composable workflows for building AI-powered assistants with skills, tools, and multi-repo code exploration. Import them with a single line:
```yaml
version: "1.0"
imports:
  - visor://assistant.yaml
checks:
  chat:
    type: workflow
    workflow: assistant
    assume: ["true"]
    args:
      question: "{{ conversation.current.text }}"
      system_prompt: "You are a helpful engineering assistant."
      intents:
        - id: chat
          description: general Q&A or small talk
        - id: code_help
          description: questions about code or architecture
          default_skills: [code-explorer]
      skills:
        - id: code-explorer
          description: needs codebase exploration or code search
      tools:
        code-talk:
          workflow: code-talk
          inputs:
            projects:
              - id: backend
                repo: my-org/backend
                description: Backend API service
            allowed_commands: ['git:log:*', 'git:diff:*']
    on_success:
      goto: chat
```
| Workflow | What it does |
|---|---|
| assistant | Full AI assistant — intent routing, dynamic skill activation, tool orchestration, knowledge injection, bash command control |
| code-talk | Multi-repo code exploration — routes questions to repos, checks out code, explores with tools, returns answers with file references and confidence scoring |
| intent-router | Lightweight intent classification — picks intent, rewrites question, selects skills/tags |
The visor:// protocol resolves to bundled workflows shipped with the package — no network fetch needed.
Learn more: docs/assistant-workflows.md | Examples: code-talk-workflow · code-talk-as-tool · intent-router
Visor runs the same YAML config across multiple surfaces:
| Mode | How to run | Best for |
|---|---|---|
| CLI | visor --check all --output table | Local dev, CI pipelines |
| GitHub Action | uses: probelabs/visor@v1 | PR reviews, issue triage, annotations |
| Slack bot | visor --slack --config .visor.yaml | Team assistants, ChatOps |
| Telegram bot | visor --telegram --config .visor.yaml | Personal assistants, group bots |
| Email bot | visor --email --config .visor.yaml | Email assistants, threaded conversations |
| WhatsApp bot | visor --whatsapp --config .visor.yaml | WhatsApp assistants, customer support |
| Teams bot | visor --teams --config .visor.yaml | Enterprise assistants, team ChatOps |
| HTTP server | http_server: { enabled: true, port: 8080 } | Webhooks, API integrations |
See Bot Integrations for a comparison of all bot transports.
Additional modes:

- `visor --tui` — interactive terminal UI
- `import { runChecks } from '@probelabs/visor/sdk'` — Node.js SDK

```bash
# CLI examples
visor --check all --output table
visor --tags fast,local --max-parallelism 5
visor --analyze-branch-diff              # PR-style diff analysis
visor --event pr_updated                 # Simulate GitHub events
visor --tui --config ./workflow.yaml     # Interactive TUI
visor --debug-server --debug-port 3456   # Live web debugger
visor config snapshots                   # Config version history
visor validate                           # Validate config
visor test --progress compact            # Run integration tests
```
Run modes: Default is CLI mode everywhere. For GitHub-specific behavior (comments, checks, annotations), run with --mode github-actions or set mode: github-actions in the Action. Force CLI mode inside Actions with VISOR_MODE=cli.
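As a sketch of the Action-side setting (the `mode` value comes from the note above; treating it as a `with:` input is an assumption — check the Action reference for the exact input name):

```yaml
# Sketch only: assumes `mode` is passed as an Action input
- uses: probelabs/visor@v1
  with:
    mode: github-actions
  env:
    GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
    # VISOR_MODE: cli   # would force CLI mode instead
```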
See docs/commands.md for the full CLI reference.
Trigger reviews and assistant actions via comments on PRs or issues:
```
/review                        # Re-run all checks
/review --check security       # Re-run specific check
/visor how does caching work?  # Ask the built-in assistant
```
Learn more: docs/commands.md
| Concept | What it is |
|---|---|
| Step (or Check) | Unit of work — a shell command, AI call, HTTP request, script, etc. |
| Provider | How a step runs: ai, command, script, mcp, utcp, http, claude-code, github, memory, workflow, … |
| depends_on | Execution order — independents run in parallel, dependents wait. |
| forEach | Fan-out — transform output into an array, run dependents per item. |
| Routing | on_fail, on_success, goto, retry — conditional flow with loop safety. |
| Transform | Reshape output with Liquid templates or JavaScript before passing downstream. |
| Schema | JSON Schema that validates step output (e.g., code-review). |
| Template | Renders validated output into Markdown/table for PR comments. |
| Group | Which PR comment a step posts into. |
| Tags | Label steps and filter with --tags fast,local. |
| Events | Trigger steps on PRs, issues, comments, webhooks, or cron schedules. |
| Provider | Description | Example use |
|---|---|---|
| ai | Multi-provider AI (Gemini, Claude, OpenAI, Bedrock) | Code review, analysis, generation |
| command | Shell commands with Liquid templating | Run tests, build, lint |
| script | JavaScript in a secure sandbox | Transform data, custom logic |
| mcp | MCP tool execution (stdio/SSE/HTTP) | External tool integration |
| utcp | UTCP tool execution (HTTP/CLI/SSE) | Direct tool calling via manuals |
| claude-code | Claude Code SDK with MCP tools | Deep code analysis, refactoring |
| http | HTTP output/webhook sender | Notify Slack, trigger CI |
| http_input | Webhook receiver | Accept external events |
| http_client | HTTP API client | Call external APIs |
| github | GitHub operations (labels, comments, checks) | Label PRs, post reviews |
| memory | Key-value store (get/set/append/increment) | State across steps |
| workflow | Reusable sub-workflows from files/URLs | Compose pipelines |
| human-input | Interactive prompts (TUI/Slack) | Approvals, user input |
| log / logger | Structured logging | Debug, audit trail |
| noop | No-op placeholder | Orchestration nodes |
| git-checkout | Git operations (clone, checkout, worktree) | Multi-repo workflows |
See docs/pluggable.md for building custom providers.
Steps without dependencies run in parallel waves. depends_on enforces ordering:
```yaml
steps:
  fetch-data:
    type: command
    exec: curl -s https://api.example.com/data
  analyze:
    type: ai
    prompt: "Analyze: {{ outputs['fetch-data'] }}"
    depends_on: [fetch-data]
  report:
    type: command
    exec: 'echo "Done: {{ outputs[''analyze''] | truncate: 100 }}"'
    depends_on: [analyze]
```
Transform output into an array, run dependents once per item:
```yaml
steps:
  list-services:
    type: command
    exec: 'echo ''["auth","payments","notifications"]'''
    forEach: true
  check-service:
    type: command
    exec: 'curl -s https://{{ outputs["list-services"] }}/health'
    depends_on: [list-services]
```
Use outputs_raw in downstream steps to access the aggregated array of all forEach results:
```yaml
summarize:
  type: script
  depends_on: [list-services]
  content: |
    const arr = outputs_raw['list-services'] || [];
    return { total: arr.length };
```
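To see what this aggregation produces, the script body above can be run as plain Node with `outputs_raw` stubbed in (the stub mirrors the `list-services` example; in a real run, Visor's script sandbox injects `outputs_raw` for you):

```javascript
// Stubbed context: Visor's sandbox would provide outputs_raw in a real run.
const outputs_raw = { 'list-services': ['auth', 'payments', 'notifications'] };

// Body of the `summarize` step, unchanged:
const arr = outputs_raw['list-services'] || [];
const result = { total: arr.length };

console.log(JSON.stringify(result)); // {"total":3}
```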
Learn more: docs/foreach-dependency-propagation.md
Steps can retry, run remediation, or jump to other steps on failure:
```yaml
version: "2.0"
routing:
  max_loops: 5
steps:
  build:
    type: command
    exec: make build
    on_fail:
      retry: { max: 2, backoff: { mode: exponential, delay_ms: 500 } }
      goto: setup  # Jump back on exhausted retries
  deploy:
    type: command
    exec: make deploy
    depends_on: [build]
    on_success:
      run: [notify]  # Run extra steps on success
    on_fail:
      goto_js: |
        return attempt <= 2 ? 'build' : null; // Dynamic routing
```
Learn more: docs/failure-routing.md
```yaml
steps:
  security-scan:
    type: command
    exec: npm audit
    if: "!hasMinPermission('MEMBER')"  # Only for external contributors
  auto-approve:
    type: github
    op: labels.add
    values: ["approved"]
    if: "hasMinPermission('COLLABORATOR') && totalIssues === 0"
  protect-secrets:
    type: command
    exec: echo "Checking permissions..."
    fail_if: "!isMember() && files.some(f => f.filename.startsWith('secrets/'))"
```
Available permission functions: hasMinPermission(level), isOwner(), isMember(), isCollaborator(), isContributor(), isFirstTimer().
Learn more: docs/author-permissions.md
```yaml
steps:
  review:
    type: ai
    prompt: "Review this code for security issues"
    ai:
      provider: anthropic  # or: google, openai, bedrock
      model: claude-sonnet-4-20250514
      fallback:
        strategy: any  # Try other providers on failure
```
Supported providers: Google Gemini, Anthropic Claude, OpenAI GPT, AWS Bedrock.
Set one key via environment: GOOGLE_API_KEY, ANTHROPIC_API_KEY, OPENAI_API_KEY, or AWS credentials.
Give AI steps access to MCP tools, or call MCP tools directly:
```yaml
# AI step with MCP tools
steps:
  analyze:
    type: ai
    prompt: "Use the search tool to find security patterns"
    ai:
      mcp_servers:
        - name: code-search
          command: npx
          args: ["-y", "@probe/search"]

  # Direct MCP tool execution
  search:
    type: mcp
    transport: stdio
    command: npx
    args: ["-y", "@probe/search"]
    method: search
    arguments:
      query: "{{ outputs['setup'].pattern }}"
```
Chain AI conversations across steps:
```yaml
steps:
  security:
    type: ai
    prompt: "Find security issues"
  remediation:
    type: ai
    prompt: "Suggest fixes for the issues you found"
    depends_on: [security]
    reuse_ai_session: true  # Carries conversation history
    session_mode: append    # Or: clone (default)
```
Full Claude Code SDK integration with MCP tools and subagents:
```yaml
steps:
  deep-review:
    type: claude-code
    prompt: "Analyze code complexity and suggest refactoring"
    max_turns: 10
    mcp_servers:
      - name: filesystem
        command: npx
        args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
```
Learn more: docs/claude-code.md · docs/mcp-provider.md · docs/advanced-ai.md
Native GitHub operations (labels, comments, checks) without shelling out to gh:
```yaml
steps:
  apply-labels:
    type: github
    op: labels.add
    values:
      - "{{ outputs.overview.tags.label | default: '' | safe_label }}"
    value_js: |
      return values.filter(v => typeof v === 'string' && v.trim().length > 0);
```
Learn more: docs/github-ops.md
Steps can use Liquid templates in prompts, exec commands, HTTP bodies, and more:
```yaml
steps:
  greet:
    type: command
    exec: 'echo "Files changed: {{ files | size }}, branch: {{ branch }}"'
  post-results:
    type: http
    url: https://api.example.com/results
    body: |
      { "issues": {{ outputs["review"] | json }},
        "pr": {{ pr.number }} }
```
Available context: outputs, outputs_raw, inputs, pr, files, env, memory, branch, event, conversation.
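For instance, several of these context objects can be combined in a single template (a sketch using the logger provider from the provider table above):

```yaml
# Sketch: combining pr, branch, and files context in one message
steps:
  context-demo:
    type: logger
    message: "PR #{{ pr.number }} on {{ branch }}: {{ files | size }} files changed"
```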
Transform step output before passing to dependents:
```yaml
steps:
  fetch:
    type: command
    exec: 'node -e "console.log(JSON.stringify({items:[1,2,3]}))"'
    transform_js: |
      return output.items.filter(i => i > 1);
```
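Since `transform_js` bodies are ordinary JavaScript, the transform above can be checked standalone by stubbing `output` (in a real run, Visor sets `output` to the step's parsed JSON result):

```javascript
// Stub: Visor would set `output` to the parsed stdout of the fetch step.
const output = { items: [1, 2, 3] };

// Body of transform_js, unchanged:
const transformed = output.items.filter(i => i > 1);

console.log(JSON.stringify(transformed)); // [2,3]
```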
```yaml
steps:
  check:
    type: command
    exec: npm test
    on_fail:
      goto_js: |
        if (attempt > 3) return null;  // Give up
        return 'fix-and-retry';        // Jump to remediation
```
Prompts can live in external files with full Liquid variable access:
```yaml
steps:
  overview:
    type: ai
    schema: code-review
    prompt: ./prompts/overview.liquid
```
Learn more: docs/liquid-templates.md · docs/schema-templates.md
Suppress a specific issue by adding a nearby visor-disable comment:
```js
const testPassword = "demo123"; // visor-disable
```
Learn more: docs/suppressions.md
Write and run integration tests for your Visor config in YAML:
```yaml
# .visor.tests.yaml
tests:
  - name: "Security check finds issues"
    config: .visor.yaml
    steps:
      security:
        mock_output: '{"issues": [{"severity": "high"}]}'
    assertions:
      - step: security
        called: { exactly: 1 }
      - step: security
        output_contains: "high"
```
```bash
visor test --progress compact   # Run tests
visor test --list               # List test cases
visor test --only "Security*"   # Filter tests
visor test --bail               # Stop on first failure
```
Docs: Getting started · DSL reference · Fixtures & mocks · Assertions · Cookbook
Run Visor programmatically from Node.js:
```js
import { loadConfig, runChecks } from '@probelabs/visor/sdk';

const config = await loadConfig('.visor.yaml');
const result = await runChecks({
  config,
  checks: Object.keys(config.checks || {}),
  output: { format: 'json' },
});
console.log('Issues:', result.reviewSummary.issues?.length ?? 0);
```
Learn more: docs/sdk.md
Config resolution: pass a file with the `--config` flag, or place `.visor.yaml` in the project root. Compose configs with `extends`:

```yaml
extends:
  - default
  - ./team-standards.yaml
  - https://raw.githubusercontent.com/org/policies/main/base.yaml
```
Long-running modes (Slack, Telegram, Email, HTTP) support live config reload:
```bash
visor --slack --config .visor.yaml --watch      # Auto-reload on file change
visor --telegram --config .visor.yaml --watch   # Telegram with hot reload
visor --email --config .visor.yaml --watch      # Email with hot reload
visor config snapshots                          # List config versions
visor config diff 1 2                           # Diff two snapshots
```
```yaml
version: "1.0"
max_parallelism: 3      # Concurrent steps
max_ai_concurrency: 3   # Concurrent AI API calls
routing:
  max_loops: 10         # Loop safety limit
http_server:
  enabled: true
  port: 8080
  auth: { bearer_token: "${WEBHOOK_SECRET}" }
telemetry:
  enabled: true
  sink: otlp            # or: file, console
steps:
  # ... your pipeline
```
A common source of confusion is where to put AI settings. Here's the map:
```yaml
version: "1.0"

# ── Global defaults (top level) ──────────────────────
ai_provider: google          # default AI provider for all steps
ai_model: gemini-2.5-flash   # default model for all steps

steps:
  my-step:
    type: ai
    prompt: "Analyze the code"

    # ── Per-step overrides (step level) ──────────────
    ai_provider: anthropic               # override provider for this step
    ai_model: claude-sonnet-4-20250514   # override model for this step
    ai_system_prompt: "You are..."       # system prompt shorthand

    # ── OR use the ai: block for full config ─────────
    ai:
      provider: anthropic
      model: claude-sonnet-4-20250514
      system_prompt: "You are a senior engineer."
      retry:
        maxRetries: 3
      fallback:
        providers: [{ provider: google, model: gemini-2.5-flash }]
```
Common mistakes:

- `system_prompt` at step level (ignored — use `ai_system_prompt` or put it inside `ai:`).
- Top-level `ai:` block (not supported — use `ai_provider`/`ai_model`).
- `parseJson` on command steps (commands auto-parse JSON).

Run `visor validate` to catch these.
Learn more: docs/ai-configuration.md · docs/configuration.md
```bash
visor --output table   # Terminal-friendly (default)
visor --output json --output-file results.json
visor --output sarif --output-file results.sarif
visor --output markdown
```
```yaml
telemetry:
  enabled: true
  sink: otlp
```

```bash
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4318/v1/traces visor --check all
```
Span hierarchy: visor.run → engine.state.* → visor.check.* → visor.foreach.item
```bash
visor --debug                            # Verbose logging
visor --debug-server --debug-port 3456   # Live web visualizer
```
Quick debugging tips:
Use log() in JavaScript expressions (if, fail_if, transform_js):
```yaml
if: |
  log("Outputs:", outputs);
  outputs["fetch-data"]?.status === "ready"
```
Use the json filter in Liquid to inspect objects:
```yaml
type: logger
message: "Outputs: {{ outputs | json }}"
```
TUI mode (visor --tui): Press Tab to switch between Chat and Logs tabs, q to exit.
Learn more: docs/observability.md · docs/debugging.md · docs/debug-visualizer.md
- hasMinPermission(), isMember(), etc. for role-based logic
- visor --no-remote-extends
- visor --allowed-remote-patterns "https://raw.githubusercontent.com/myorg/"
Learn more: docs/security.md · docs/author-permissions.md
Enterprise Edition. Requires a Visor EE license. Contact hello@probelabs.com.
OPA-based policy enforcement for gating checks, MCP tools, and AI capabilities:
```yaml
policy:
  engine: local
  rules: ./policies/
  fallback: deny
  roles:
    admin: { author_association: [OWNER] }
    developer: { author_association: [MEMBER, COLLABORATOR] }
```
Learn more: docs/enterprise-policy.md
Getting started: Configuration · AI config · CLI commands · GitHub Auth · CI/CLI mode · GitHub Action reference · Migration · FAQ · Glossary
Guides: Tools & Toolkits · Assistant workflows · TDD for assistant workflows · Workflow creation · Workflow style guide · Dependencies · forEach propagation · Failure routing · Router patterns · Lifecycle hooks · Liquid templates · Schema-template system · Fail conditions · Failure conditions schema · Failure conditions impl · Timeouts · Execution limits · Event triggers · Output formats · Output formatting · Default output schema · Output history · Reusable workflows · Criticality modes · Fault management
Providers: A2A · Command · Script · MCP · UTCP · MCP tools for AI · Claude Code · AI custom tools · AI custom tools usage · Custom tools · GitHub ops · Git checkout · HTTP integration · Memory · Human input · Custom providers
Operations: Security · Performance · Observability · Debugging · Debug visualizer · Telemetry setup · Dashboards · Troubleshooting · Suppressions · GitHub checks · Bot integrations · Slack · Telegram · Email · WhatsApp · Teams · Scheduler · Sandbox engines
Testing: Getting started · DSL reference · Flows · Fixtures & mocks · Assertions · Cookbook · TDD for assistants · CLI & reporters · CI integration · Troubleshooting
Enterprise: Licensing · Enterprise policy · Scheduler storage · Database operations · Capacity planning · Production deployment · Deployment
Architecture & RFCs: Architecture · Contributing · Failure routing RFC · Bot transports RFC · Debug visualizer RFC · Debug visualizer progress · Engine state machine plan · Engine pause/resume RFC · Event-driven GitHub RFC · Execution statistics RFC · Telemetry tracing RFC · Test framework RFC · SDK RFC · Goto/forward run plan · Loop routing refactor · Schema next PR · Fact validator gap analysis · Fact validator plan
Recipes & examples: Recipes · Dev playbook · Tag filtering · Author permissions · Session reuse · SDK
Learn more: CONTRIBUTING.md
MIT License — see LICENSE