
opencode-workflows
Workflow automation plugin for OpenCode using the Mastra workflow engine. Define deterministic, multi-step processes that agents can trigger to perform complex tasks reliably.
Add to your OpenCode config:
// opencode.jsonc
{
"plugin": ["opencode-workflows@latest"]
}
Using @latest ensures you always get the newest version automatically when OpenCode starts.
Restart OpenCode. The plugin will automatically load.
The plugin uses sensible defaults but can be configured via environment variables:
| Environment Variable | Default | Description |
|---|---|---|
WORKFLOW_DIRS | .opencode/workflows,~/.opencode/workflows | Comma-separated directories to scan for workflow JSON files |
WORKFLOW_DB_PATH | .opencode/data/workflows.db | SQLite database path for persisting workflow runs |
WORKFLOW_TIMEOUT | 300000 (5 min) | Global timeout for workflow execution in milliseconds |
WORKFLOW_VERBOSE | false | Enable verbose debug logging |
WORKFLOW_MAX_COMPLETED_RUNS | 1000 | Maximum number of completed runs to keep in memory |
WORKFLOW_MAX_RUN_AGE | 30 | Maximum age of runs to keep in database (days). Older runs are automatically deleted. |
WORKFLOW_ENCRYPTION_KEY | - | 32-character encryption key for encrypting secret inputs at rest |
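For example, the defaults can be overridden in the shell that launches OpenCode (the values below are illustrative, not recommendations):

```shell
# Illustrative overrides for the opencode-workflows environment variables
export WORKFLOW_DIRS=".opencode/workflows,~/.opencode/workflows"
export WORKFLOW_TIMEOUT=600000   # raise the global timeout to 10 minutes
export WORKFLOW_MAX_RUN_AGE=7    # keep only a week of run history
export WORKFLOW_VERBOSE=true     # enable debug logging
```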
The plugin includes several optimizations for fast startup, especially with large workflow histories:
Lazy Compilation: Workflows are compiled on-demand when first accessed, not at startup. This dramatically reduces initialization time when you have many workflow definitions.
Background Loading: Historical workflow runs are loaded in the background after initialization, so startup is not blocked by run history.
Automatic Cleanup: Runs older than maxRunAge days are automatically deleted at midnight to prevent unbounded database growth. Only terminal-state runs (completed, failed, cancelled) are deleted; active workflows are preserved.
Memory Management: The maxCompletedRuns setting limits how many completed runs are kept in memory. Older runs are automatically removed but remain accessible via database queries.
Database Optimization: The plugin uses SQLite with WAL mode and composite indexes for optimal query performance.
These optimizations ensure OpenCode starts instantly even with thousands of historical workflow runs.
The plugin supports structured logging for observability and log aggregation systems. Logs include contextual information like workflowId, runId, and stepId.
Programmatic Configuration:
import { createLogger } from "opencode-workflows/loader";
// JSON format for log aggregation (e.g., DataDog, Splunk, CloudWatch)
const logger = createLogger({ format: "json", verbose: true });
// Custom output handler
const logger = createLogger({
output: (entry) => {
// entry: { timestamp, level, message, workflowId?, runId?, stepId?, durationMs?, metadata? }
myLogAggregator.send(entry);
}
});
JSON Log Format:
{"timestamp":"2024-01-15T10:30:00.000Z","level":"info","message":"Workflow deploy-prod completed successfully","workflowId":"deploy-prod","runId":"abc-123","durationMs":45000}
Text Log Format (default):
[workflow] [INFO] [workflow=deploy-prod run=abc-1234 step=build duration=5000ms] Step completed
Workflow runs are automatically persisted to a LibSQL (SQLite) database. This enables:
- Run history (mode=runs, or /workflow runs if you've added a slash alias)
- Resuming suspended workflows (mode=resume, or /workflow resume <runId> if you configured a slash alias)

The database is created automatically at the configured dbPath.
Workflows often need to handle sensitive data like API keys, passwords, and tokens. The plugin provides built-in security features to protect these values:
Add a secrets array to your workflow definition listing which input names contain sensitive data:
{
"id": "deploy-with-credentials",
"description": "Deploy using API credentials",
"inputs": {
"environment": "string",
"apiKey": "string",
"dbPassword": "string"
},
"secrets": ["apiKey", "dbPassword"],
"steps": [
{
"id": "deploy",
"type": "shell",
"command": "deploy.sh --env={{inputs.environment}} --key={{inputs.apiKey}}"
}
]
}
- Log Masking: Any input listed in `secrets` is replaced with `***` in console output; `deploy.sh --env=prod --key=sk-12345` appears as `deploy.sh --env=prod --key=***`
- Environment Variables: All `{{env.*}}` interpolations are automatically treated as secrets without being listed in the `secrets` array; `{{env.API_KEY}}` is always masked in logs
- Storage Encryption: When an encryption key is configured, secret inputs are encrypted at rest in the SQLite database using AES-256-GCM
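Masking is conceptually simple: every occurrence of a secret value is replaced before the text reaches the console. A minimal sketch (not the plugin's actual implementation):

```typescript
// Sketch: replace every occurrence of each secret value with "***".
function maskSecrets(text: string, secrets: Record<string, string>): string {
  let masked = text;
  for (const value of Object.values(secrets)) {
    if (value) masked = masked.split(value).join("***");
  }
  return masked;
}

const cmd = "deploy.sh --env=prod --key=sk-12345";
console.log(maskSecrets(cmd, { apiKey: "sk-12345" }));
// → deploy.sh --env=prod --key=***
```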
To enable encryption for secrets stored in the database, configure an encryption key:
import { createWorkflowPlugin } from "opencode-workflows";
const plugin = createWorkflowPlugin({
storage: {
encryptionKey: process.env.WORKFLOW_ENCRYPTION_KEY
}
});
The encryption key should be a 32-character string, supplied via an environment variable or secret manager rather than committed to source control.
When encryption is enabled, inputs listed in `secrets` are encrypted before they are written to the database:
{
"id": "secure-deploy",
"description": "Deploy with encrypted credentials",
"inputs": {
"version": "string",
"apiToken": "string",
"webhookSecret": "string"
},
"secrets": ["apiToken", "webhookSecret"],
"steps": [
{
"id": "deploy",
"type": "shell",
"command": "deploy --version={{inputs.version}} --token={{inputs.apiToken}}"
},
{
"id": "notify",
"type": "http",
"method": "POST",
"url": "https://api.example.com/webhooks",
"headers": {
"Authorization": "Bearer {{inputs.webhookSecret}}"
},
"body": {
"version": "{{inputs.version}}",
"status": "deployed"
},
"after": ["deploy"]
}
]
}
In the logs, you'll see:
> deploy --version=1.0.0 --token=***
And in the database, the apiToken and webhookSecret values are stored encrypted.
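As a rough illustration of what AES-256-GCM encryption at rest can look like in Node.js (this is a sketch with an assumed key and storage layout, not the plugin's actual storage format):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative key: a 32-character string provides the 32 bytes AES-256 needs.
const key = Buffer.from("0123456789abcdef0123456789abcdef");

function encrypt(plain: string): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  // Store nonce + auth tag + ciphertext together as one opaque value
  return Buffer.concat([iv, cipher.getAuthTag(), data]).toString("base64");
}

function decrypt(stored: string): string {
  const buf = Buffer.from(stored, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString("utf8");
}

console.log(decrypt(encrypt("sk-12345"))); // round-trips the secret
```

GCM provides both confidentiality and integrity: a tampered ciphertext fails the auth-tag check on decrypt instead of silently producing garbage.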
Create workflow definitions in .opencode/workflows/ as JSON, JSONC, YAML, or TypeScript files. JSONC files support comments, while YAML files offer the best experience for multi-line content like prompts and shell scripts.
YAML is the recommended format for workflows that contain LLM prompts or multi-line shell scripts. Key benefits:
- The `|` block scalar operator for clean, readable prompts
- Support for `#` comments

Comparison:
JSON (hard to read):
"prompt": "Review this code.\n\nLook for:\n1. Security bugs\n2. Performance issues"
YAML (much cleaner):
prompt: |
Review this code.
Look for:
1. Security bugs
2. Performance issues
# .opencode/workflows/code-review.yaml
$schema: https://raw.githubusercontent.com/mark-hingston/opencode-workflows/main/schemas/workflow.schema.json
id: code-review
name: Multi-Agent Code Review
description: Parallel expert code review with synthesis
inputs:
file: string
steps:
- id: read_file
type: file
description: Read the source file
action: read
path: "{{inputs.file}}"
- id: security_review
type: agent
description: Security vulnerability analysis
agent: security-reviewer
after: [read_file]
# Multi-line prompts are clean and readable!
prompt: |
Review this code for security issues:
{{steps.read_file.content}}
- id: synthesize
type: agent
description: Combine reviews into a report
agent: tech-lead
after: [security_review]
prompt: |
Combine these reviews into a prioritized report:
## Security Review
{{steps.security_review.response}}
For type safety, code reuse, and dynamic configuration, you can define workflows in TypeScript:
// .opencode/workflows/deploy.ts
import type { WorkflowDefinition } from "opencode-workflows";
const workflow: WorkflowDefinition = {
id: "deploy-prod",
description: "Deploy to production with type safety",
inputs: {
version: "string",
environment: "string",
},
steps: [
{
id: "build",
type: "shell",
command: "npm run build",
},
{
id: "deploy",
type: "shell",
command: "deploy --version={{inputs.version}} --env={{inputs.environment}}",
after: ["build"],
},
],
};
export default workflow;
Dynamic Workflow Generation:
Export a function for workflows that need runtime configuration:
// .opencode/workflows/multi-deploy.ts
import type { WorkflowDefinition } from "opencode-workflows";
export default async function(): Promise<WorkflowDefinition> {
// Could fetch configuration from an API or file
const regions = ["us-east-1", "eu-west-1", "ap-south-1"];
return {
id: "multi-region-deploy",
steps: regions.map((region, index) => ({
id: `deploy-${region}`,
type: "shell" as const,
command: `deploy --region ${region}`,
after: index > 0 ? [`deploy-${regions[index - 1]}`] : undefined,
})),
};
}
Supported File Extensions: .ts, .js, .mts, .mjs
TypeScript files are compiled at runtime using jiti, so no build step is required.
Workflow files support JSON Schema for IDE validation and autocomplete. Add the $schema property to your workflow files:
{
"$schema": "https://raw.githubusercontent.com/mark-hingston/opencode-workflows/main/schemas/workflow.schema.json",
"id": "my-workflow",
...
}
This provides autocomplete, inline validation, and documentation for step types and options in editors that support JSON Schema. A complete annotated example:
{
"$schema": "https://raw.githubusercontent.com/mark-hingston/opencode-workflows/main/schemas/workflow.schema.json",
// Unique workflow identifier
"id": "deploy-prod",
"description": "Deploys the application to production",
"inputs": {
"version": "string"
},
"steps": [
{
"id": "check-git",
"type": "shell",
"command": "git status --porcelain",
"description": "Ensure git is clean"
},
{
"id": "run-tests",
"type": "shell",
"command": "npm test",
"after": ["check-git"]
},
{
"id": "ask-approval",
"type": "suspend",
"description": "Wait for user to approve deployment",
"after": ["run-tests"]
},
{
"id": "deploy-script",
"type": "shell",
"command": "npm run deploy -- --tag {{inputs.version}}",
"after": ["ask-approval"]
}
]
}
When a workflow defines inputs, all inputs are required by default. If you try to run a workflow without providing all required inputs, the plugin will return a helpful error message listing the missing inputs:
$ workflow tool call (mode=run, workflowId=deploy-prod)
Missing required input(s) for workflow **deploy-prod**:
- **version** (string)
Usage: supply `version` via the workflow tool (mode=run workflowId=deploy-prod params.version=<value>) or `/workflow run deploy-prod version=<value>` if you've configured a slash alias.
This validation happens before the workflow starts, ensuring you don't waste time on a run that would fail due to missing inputs.
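The check itself is conceptually simple; a minimal sketch of the assumed behavior (not the plugin's source):

```typescript
// Sketch: find declared inputs that were not supplied as parameters.
type DeclaredInputs = Record<string, string>; // input name -> declared type

function missingInputs(declared: DeclaredInputs, params: Record<string, unknown>): string[] {
  return Object.keys(declared).filter((name) => params[name] === undefined);
}

const declared = { version: "string", environment: "string" };
console.log(missingInputs(declared, { version: "1.2.0" })); // environment is missing
```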
Execute shell commands:
{
"id": "build",
"type": "shell",
"command": "npm run build",
"cwd": "./packages/app",
"env": { "NODE_ENV": "production" },
"failOnError": true,
"timeout": 60000,
"retry": { "attempts": 3, "delay": 1000 }
}
| Option | Type | Default | Description |
|---|---|---|---|
command | string | required | Shell command to execute |
cwd | string | - | Working directory (supports interpolation) |
env | object | - | Environment variables (supports interpolation) |
failOnError | boolean | true | Fail workflow if command exits non-zero |
timeout | number | - | Step-specific timeout in milliseconds |
retry | object | - | Retry configuration: { attempts: number, delay?: number } |
safe | boolean | false | Use safe mode to prevent shell injection |
args | array | - | Command arguments (required when safe: true) |
Safe Mode (Recommended for User Input):
To prevent shell injection attacks when using user-provided input, use safe: true and provide arguments as an array. This bypasses the shell entirely by spawning the command directly:
- id: secure-echo
type: shell
command: echo
safe: true
args: ["Hello", "{{inputs.userInput}}"] # userInput cannot inject commands here
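Under the hood, safe mode can be thought of as spawning the binary directly with an argument vector, as in this illustrative Node.js sketch (not the plugin's actual code):

```typescript
import { spawnSync } from "node:child_process";

// With an args array and no shell, metacharacters are passed through literally.
const userInput = "hello; rm -rf /"; // would be dangerous inside a shell string
const result = spawnSync("echo", ["Safe:", userInput], { encoding: "utf8" });
console.log(result.stdout.trim()); // the ';' is printed as text, never executed
```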
In safe mode:
- Shell metacharacters (`;`, `|`, `&`, etc.) in arguments are treated as literal text
- The `args` array is required, and each argument is passed separately to the process

Invoke OpenCode tools:
{
"id": "send-notification",
"type": "tool",
"tool": "slack_send",
"args": {
"channel": "#releases",
"text": "Deployed {{inputs.version}}"
}
}
Invoke a named OpenCode agent or prompt an LLM directly:
Named Agent (recommended):
{
"id": "security-review",
"type": "agent",
"agent": "security-reviewer",
"prompt": "Review this code for security issues:\n\n{{steps.read_file.result}}",
"maxTokens": 1000
}
This invokes a pre-defined OpenCode agent by name. The agent's system prompt, model, and other settings are configured in OpenCode's agent definitions.
Inline LLM (fallback):
{
"id": "generate-changelog",
"type": "agent",
"prompt": "Generate a changelog for version {{inputs.version}}",
"system": "You are a technical writer.",
"maxTokens": 1000
}
This makes a direct LLM call with an optional system prompt. Note that model selection may not be supported by the plugin system - the configured default model will be used.
| Option | Type | Description |
|---|---|---|
agent | string | Name of a pre-defined OpenCode agent to invoke |
prompt | string | The prompt to send (required, supports interpolation) |
system | string | System prompt for inline LLM calls (ignored if agent is specified) |
maxTokens | number | Maximum tokens for response |
Pause for human input:
{
"id": "approval",
"type": "suspend",
"message": "Ready to deploy. Resume to continue.",
"description": "Wait for deployment approval"
}
Resume Data Schema:
For workflows that need structured input when resuming, define a resumeSchema:
{
"id": "approval",
"type": "suspend",
"message": "Review the changes and provide approval.",
"resumeSchema": {
"approved": "boolean",
"comment": "string"
}
}
When a resumeSchema is defined:
- Resume data is validated against the schema before the workflow continues
- Validated values are available to later steps, e.g. `{{steps.approval.data.approved}}`

Supported schema types: string, number, integer, boolean, array, object
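A sketch of how schema validation of resume data might behave (assumed semantics, simplified to primitive types; not the plugin source):

```typescript
// Sketch: report fields whose runtime type doesn't match the declared type.
type ResumeSchema = Record<string, string>; // field -> type name

function invalidFields(schema: ResumeSchema, data: Record<string, unknown>): string[] {
  return Object.entries(schema)
    .filter(([field, type]) => typeof data[field] !== type)
    .map(([field]) => field);
}

const schema = { approved: "boolean", comment: "string" };
console.log(invalidFields(schema, { approved: true, comment: "LGTM" })); // all valid
```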
Resume a workflow with data using the workflow tool:
{ "mode": "resume", "runId": "<runId>", "resumeData": { "approved": true, "comment": "LGTM" } }
Slash alias (if configured):
/workflow resume <runId> {"approved": true, "comment": "LGTM"}
Pause workflow execution for a specified duration (platform-independent alternative to shell: sleep):
{
"id": "wait-for-deploy",
"type": "wait",
"durationMs": 5000,
"description": "Wait for deployment to propagate"
}
Useful for waiting for external systems (e.g., waiting for a deployed URL to become live, rate limiting API calls, or giving services time to initialize).
| Option | Type | Default | Description |
|---|---|---|---|
durationMs | number | required | Duration to wait in milliseconds |
Wait step output includes:
- completed - Whether the wait completed successfully (always true unless skipped)
- durationMs - The duration that was waited

Make HTTP requests:
{
"id": "notify-slack",
"type": "http",
"method": "POST",
"url": "https://hooks.slack.com/services/xxx",
"headers": {
"Content-Type": "application/json"
},
"body": {
"text": "Deployed {{inputs.version}}"
},
"failOnError": true
}
HTTP step output includes:
- body - Parsed JSON response, or null if the response is not valid JSON
- text - Raw response text (useful for non-JSON responses or debugging)
- status - HTTP status code
- headers - Response headers

Read, write, or delete files:
{
"id": "write-version",
"type": "file",
"action": "write",
"path": "./version.txt",
"content": "{{inputs.version}}"
}
{
"id": "read-config",
"type": "file",
"action": "read",
"path": "./config.json"
}
Iterate over an array and execute a step for each item (batch processing):
{
"id": "lint-files",
"type": "iterator",
"items": "{{steps.find-files.result}}",
"runStep": {
"type": "shell",
"command": "eslint {{inputs.item}}"
}
}
The iterator provides special context variables for each iteration:
- `{{inputs.item}}` - The current item being processed
- `{{inputs.index}}` - The zero-based index of the current item

For objects in the array, access nested properties:
{
"id": "deploy-services",
"type": "iterator",
"items": "{{inputs.services}}",
"runStep": {
"type": "shell",
"command": "deploy {{inputs.item.name}} --region {{inputs.item.region}}"
}
}
The iterator step collects results from all iterations:
{
"id": "use-results",
"type": "shell",
"command": "echo 'Processed {{steps.lint-files.count}} files'",
"after": ["lint-files"]
}
Iterator step output includes:
- results - Array of outputs from each iteration
- count - Number of items processed

| Option | Type | Description |
|---|---|---|
items | string | Interpolation expression resolving to an array (required) |
runStep | object | Step definition to execute for each item. Supports shell, tool, agent, http, and file step types. |
runSteps | array | Array of step definitions to execute sequentially for each item (alternative to runStep). |
Sequential Processing:
To run multiple steps for each item, use runSteps instead of runStep. Inner steps can access results from previous steps in the sequence:
- id: process-repos
type: iterator
items: "{{inputs.repos}}"
runSteps:
- id: clone
type: shell
command: git clone {{inputs.item.url}}
- id: test
type: shell
command: npm test
cwd: "./{{inputs.item.name}}"
When using runSteps, the output for each iteration contains results from all steps in the sequence:
{
"results": [
{ "clone": { "stdout": "...", "exitCode": 0 }, "test": { "stdout": "...", "exitCode": 0 } },
{ "clone": { "stdout": "...", "exitCode": 0 }, "test": { "stdout": "...", "exitCode": 0 } }
],
"count": 2
}
Limitations:
- Either runStep or runSteps must be provided, but not both

Execute JavaScript code in a sandboxed environment for dynamic logic and workflow generation:
{
"id": "calculate-shards",
"type": "eval",
"script": "return inputs.items.filter(x => x.enabled).map(x => x.id);",
"scriptTimeout": 5000
}
The script has access to:
- `inputs` - Workflow input parameters
- `steps` - Previous step outputs
- `env` - Environment variables (read-only)
- `console` - log, warn, error methods (output routed to plugin logger)
- Standard built-ins: JSON, Math, Date, Array, Object, String, Number, Boolean, RegExp, Map, Set, Promise

| Option | Type | Default | Description |
|---|---|---|---|
script | string | required | JavaScript code to execute |
scriptTimeout | number | 30000 | Script execution timeout in milliseconds |
Dynamic Workflow Generation (Agentic Planning):
Eval steps can generate workflows dynamically at runtime, enabling "agentic planning" where an agent decides how to solve a problem:
{
"id": "plan-deployment",
"type": "eval",
"script": "return { workflow: { id: 'dynamic-deploy', steps: inputs.services.map(s => ({ id: `deploy-${s}`, type: 'shell', command: `deploy ${s}` })) } };"
}
When the script returns { workflow: WorkflowDefinition }, the generated workflow is validated and executed as a sub-workflow.
Accessing Dynamic Results:
When a dynamic sub-workflow executes, the parent step returns the sub-workflow's Run ID. Currently, the parent workflow does not automatically inherit the outputs of the child workflow. To use results from a dynamic workflow, the child workflow should write to a shared resource (like a file or database) that subsequent steps in the parent workflow can read.
Security:
The eval sandbox blocks access to:
- `require`, `process`, `global`, `Buffer` (Node.js internals)
- `fetch`, `setTimeout`, `setInterval` (async operations; use http/wait steps instead)

Inputs and steps are frozen (immutable) within the script.
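The fallback sandbox can be pictured as running the script in a bare `node:vm` context that exposes only the whitelisted objects; a minimal sketch under those assumptions (not the plugin's actual sandbox):

```typescript
import { createContext, runInContext } from "node:vm";

// Sketch: evaluate a user script with frozen inputs and no Node globals.
// Anything not placed in the context (require, process, fetch, ...) is undefined.
function runEval(script: string, inputs: object): unknown {
  const context = createContext({ inputs: Object.freeze({ ...inputs }) });
  return runInContext(`(() => { ${script} })()`, context, { timeout: 1000 });
}

console.log(runEval("return inputs.items.filter((x) => x > 2);", { items: [1, 2, 3, 4] }));
```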
Enhanced Security with isolated-vm:
For production environments handling untrusted code, install the optional isolated-vm package for true V8 isolate sandboxing:
npm install isolated-vm
When isolated-vm is installed, eval steps run in a completely isolated V8 context, giving stronger isolation guarantees than the default sandbox.
If isolated-vm is not installed, the plugin falls back to Node.js vm module with a warning logged.
Limitations:
- Scripts cannot use `import` statements

Use the workflow tool. If you've registered slash aliases, the equivalent /workflow ... forms are shown in parentheses:
- `mode=list` (/workflow list) - List available workflows
- `mode=show workflowId=<id>` (/workflow show <id>) - Show workflow details
- `mode=graph workflowId=<id>` (/workflow graph <id>) - Show workflow DAG as Mermaid diagram
- `mode=run workflowId=<id> params...` (/workflow run <id> [param=value ...]) - Run a workflow
- `mode=status runId=<runId>` (/workflow status <runId>) - Check run status
- `mode=resume runId=<runId> resumeData...` (/workflow resume <runId> [data]) - Resume a suspended workflow
- `mode=cancel runId=<runId>` (/workflow cancel <runId>) - Cancel a running workflow
- `mode=runs [workflowId]` (/workflow runs [workflowId]) - List recent runs

The graph command generates a Mermaid diagram showing the workflow's step dependencies:
Tool call:
{ "mode": "graph", "workflowId": "deploy-prod" }
Slash alias (if configured):
/workflow graph deploy-prod
Output:
graph TD
check-git["check-git (shell)"]
run-tests["run-tests (shell)"]
ask-approval(["ask-approval (suspend)"])
deploy-script["deploy-script (shell)"]
check-git --> run-tests
run-tests --> ask-approval
ask-approval --> deploy-script
Different step types are shown with distinct shapes:
- `["..."]` - shell, tool, http, file steps
- `([...])` - suspend steps (human-in-the-loop)
- `{{...}}` - agent steps (LLM calls)

You can predefine shortcuts in your opencode.jsonc so you don't have to remember full prompts:
// opencode.jsonc or ~/.config/opencode/opencode.jsonc
{
"plugin": ["opencode-workflows@latest"],
"command": {
"workflow-list": {
"template": "Use the workflow tool with mode=list",
"description": "List all workflows"
},
"workflow-graph": {
"template": "Use the workflow tool with mode=graph and workflowId=$ARGUMENTS",
"description": "Show a workflow DAG as Mermaid"
},
"workflow-run": {
"template": "Use the workflow tool with mode=run and workflowId=$ARGUMENTS",
"description": "Start a workflow run"
},
"workflow-status": {
"template": "Use the workflow tool with mode=status and runId=$ARGUMENTS",
"description": "Check workflow run status"
}
}
}
Then you can invoke /workflow-graph deploy-prod or /workflow-status 123 directly in OpenCode.
When passing parameters via the workflow tool run mode (or /workflow run if you've set a slash alias), values are automatically converted to their appropriate types:
| Input | Parsed As |
|---|---|
count=5 | number (5) |
ratio=3.14 | number (3.14) |
enabled=true | boolean (true) |
debug=false | boolean (false) |
name=hello | string ("hello") |
url=http://example.com?foo=bar | string (preserved) |
This ensures workflow inputs match their expected schema types without manual conversion.
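A minimal sketch of these coercion rules (assumed implementation, matching the table above):

```typescript
// Sketch: coerce string parameters to booleans/numbers where unambiguous.
function coerceParam(value: string): string | number | boolean {
  if (value === "true") return true;
  if (value === "false") return false;
  // Only coerce strings that are entirely numeric; URLs etc. stay strings.
  if (/^-?\d+(\.\d+)?$/.test(value)) return Number(value);
  return value;
}

console.log(coerceParam("5"));                      // → 5
console.log(coerceParam("true"));                   // → true
console.log(coerceParam("http://example.com?x=1")); // unchanged string
```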
Agents can trigger workflows using the workflow tool:
// List workflows
workflow({ mode: "list" })
// Run a workflow
workflow({
mode: "run",
workflowId: "deploy-prod",
params: { version: "1.2.0" }
})
// Check status
workflow({ mode: "status", runId: "abc-123" })
// Resume suspended workflow
workflow({
mode: "resume",
runId: "abc-123",
resumeData: { approved: true }
})
Use {{expression}} syntax to reference:
- `{{inputs.paramName}}` - Workflow input parameters
- `{{steps.stepId.stdout}}` - Shell step stdout
- `{{steps.stepId.response}}` - Agent step response
- `{{steps.stepId.result}}` - Tool step result
- `{{steps.stepId.body}}` - HTTP step response body (parsed JSON or null)
- `{{steps.stepId.text}}` - HTTP step raw response text
- `{{steps.stepId.content}}` - File step content (read action)
- `{{env.VAR_NAME}}` - Environment variables
- `{{run.id}}` - Current workflow run ID
- `{{run.workflowId}}` - Workflow definition ID
- `{{run.startedAt}}` - ISO timestamp when run started

You can access deeply nested properties using dot notation:
{
"id": "use-api-data",
"type": "shell",
"command": "echo 'User ID: {{steps.api-call.body.data.user.id}}'"
}
This works for:
- HTTP response bodies: `{{steps.http.body.users[0].name}}`
- Tool results: `{{steps.tool.result.metadata.version}}`
- Workflow inputs: `{{inputs.config.database.host}}`

When a template contains only a single variable reference (e.g., "{{inputs.count}}"), the original type is preserved. This means:
- `"{{inputs.count}}"` with count=42 returns the number 42, not the string "42"
- `"Count: {{inputs.count}}"` returns "Count: 42" (string interpolation)

Steps can include a condition to control execution:
{
"id": "deploy-prod",
"type": "shell",
"command": "deploy.sh",
"condition": "{{inputs.environment}}"
}
The step is skipped if the condition evaluates to "false", "0", or "".
Workflows support cleanup and failure handling blocks. These steps run outside the main dependency graph.
The onFailure block runs only if the workflow fails (throws an error). The error details are available in {{inputs.error}}:
onFailure:
- id: alert-slack
type: http
method: POST
url: "{{env.SLACK_WEBHOOK}}"
body:
text: "Workflow failed at step {{inputs.error.stepId}}: {{inputs.error.message}}"
The error context includes:
- `{{inputs.error.message}}` - The error message
- `{{inputs.error.stepId}}` - The ID of the step that failed (if applicable)
- `{{inputs.error.stack}}` - The error stack trace (if available)

The finally block runs after the workflow finishes, regardless of success or failure. Useful for resource cleanup:
finally:
- id: cleanup-temp
type: shell
command: rm -rf ./temp-build-artifacts
- id: release-lock
type: http
method: DELETE
url: "{{env.LOCK_SERVICE}}/locks/{{run.id}}"
Notes:
- onFailure steps run before finally steps
- onFailure and finally blocks cannot contain suspend or iterator steps

Steps can declare dependencies using after:
{
"id": "deploy",
"type": "shell",
"command": "deploy.sh",
"after": ["build", "test"]
}
Steps at the same dependency level run in parallel.
Workflow state is persisted to SQLite after each step completes, providing automatic crash recovery: the full run state (including all step results) is saved after every step, and on restart interrupted runs are recovered from their last persisted state.
This means you can safely restart OpenCode without losing workflow progress.
Workflows can be automatically triggered by cron schedules or file change events.
Use cron expressions to run workflows on a schedule:
{
"id": "nightly-backup",
"trigger": {
"schedule": "0 2 * * *"
},
"steps": [
{
"id": "backup",
"type": "shell",
"command": "backup.sh"
}
]
}
Common cron patterns:
| Pattern | Description |
|---|---|
* * * * * | Every minute |
0 * * * * | Every hour |
0 0 * * * | Daily at midnight |
0 2 * * * | Daily at 2am |
0 0 * * 0 | Weekly on Sunday |
*/5 * * * * | Every 5 minutes |
0 9-17 * * 1-5 | Every hour 9-5 Mon-Fri |
Trigger workflows when files matching a glob pattern change:
{
"id": "test-on-save",
"trigger": {
"event": "file.change",
"pattern": "src/**/*.ts"
},
"steps": [
{
"id": "run-tests",
"type": "shell",
"command": "npm test"
}
]
}
The changedFile input is automatically passed to the workflow, containing the path of the file that triggered the workflow:
{
"id": "lint-on-save",
"trigger": {
"event": "file.change",
"pattern": "**/*.{ts,tsx}"
},
"steps": [
{
"id": "lint",
"type": "shell",
"command": "eslint {{inputs.changedFile}}"
}
]
}
Glob pattern examples:
| Pattern | Matches |
|---|---|
**/*.ts | All TypeScript files |
src/**/*.{ts,tsx} | TypeScript files in src/ |
*.json | JSON files in root |
src/components/**/* | Everything in components/ |
File change triggers are debounced (300ms) to prevent rapid repeated executions when multiple file system events fire for a single save operation.
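Debouncing itself is a small pattern; an illustrative TypeScript version (not the plugin's actual code):

```typescript
// Sketch: collapse a burst of calls into a single invocation after a quiet period.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer); // a new event resets the window
    timer = setTimeout(() => fn(...args), ms);
  };
}

let runs = 0;
const trigger = debounce(() => { runs += 1; }, 300);
trigger(); trigger(); trigger();            // three rapid "file saved" events...
setTimeout(() => console.log(runs), 400);   // ...logs 1 once the window has passed
```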
| Option | Type | Description |
|---|---|---|
schedule | string | Cron expression for scheduled execution |
event | string | Event type (currently only file.change is supported) |
pattern | string | Glob pattern for file matching (used with file.change event) |
One of the most powerful use cases is orchestrating multiple AI agents in a deterministic pipeline. This lets you build reliable, repeatable AI workflows where specialized agents collaborate on complex tasks.
This example chains multiple specialized agents to review code from different perspectives, then synthesizes their findings:
{
"id": "code-review",
"name": "Multi-Agent Code Review",
"description": "Parallel expert review with synthesis",
"inputs": {
"file": "string"
},
"steps": [
{
"id": "read_file",
"type": "tool",
"tool": "read",
"args": { "filePath": "{{inputs.file}}" }
},
{
"id": "security_review",
"type": "agent",
"agent": "security-reviewer",
"prompt": "Review this code for security issues:\n\n{{steps.read_file.result}}",
"after": ["read_file"]
},
{
"id": "perf_review",
"type": "agent",
"agent": "performance-reviewer",
"prompt": "Review this code for performance issues:\n\n{{steps.read_file.result}}",
"after": ["read_file"]
},
{
"id": "quality_review",
"type": "agent",
"agent": "quality-reviewer",
"prompt": "Review this code for quality issues:\n\n{{steps.read_file.result}}",
"after": ["read_file"]
},
{
"id": "synthesize",
"type": "agent",
"agent": "tech-lead",
"prompt": "Combine these reviews into a single report:\n\n## Security\n{{steps.security_review.response}}\n\n## Performance\n{{steps.perf_review.response}}\n\n## Quality\n{{steps.quality_review.response}}",
"after": ["security_review", "perf_review", "quality_review"]
},
{
"id": "approve_fixes",
"type": "suspend",
"message": "Review complete:\n\n{{steps.synthesize.response}}\n\nResume to generate fixes.",
"after": ["synthesize"]
},
{
"id": "generate_fixes",
"type": "agent",
"agent": "code-fixer",
"prompt": "Fix the critical and high severity issues:\n\nOriginal:\n{{steps.read_file.result}}\n\nIssues:\n{{steps.synthesize.response}}",
"after": ["approve_fixes"]
}
]
}
> Note: This example assumes you have agents named `security-reviewer`, `performance-reviewer`, `quality-reviewer`, `tech-lead`, and `code-fixer` configured in OpenCode. Alternatively, you can use inline LLM calls with `system` prompts instead of named agents.
Run it with the workflow tool:
{ "mode": "run", "workflowId": "code-review", "params": { "file": "src/api/auth.ts" } }
Or with a slash alias (if configured):
/workflow run code-review file=src/api/auth.ts
| Pattern | Description | Example |
|---|---|---|
| Sequential Chain | Each agent uses the previous agent's output | Planner → Executor → Reviewer |
| Parallel Experts | Multiple agents analyze independently, then synthesize | Security + Performance + Quality → Summary |
| Tool-Augmented | Agents use tools to read files, search code, make API calls | Read file → Analyze → Write fix |
| Human-in-the-Loop | suspend steps for approval between agent actions | Generate → Approve → Apply |
| Conditional Routing | Use condition to skip agents based on results | Skip deploy agent if tests failed |
MIT