
@blahai/cli
Because these docs are mashed together and mostly AI generated, I also asked OpenAI 4.5 Deep Research to summarise everything done so far; the summary can be found here: https://gist.github.com/thomasdavis/3827a1647533e488c107e64aefa54831
A comprehensive CLI tool for building, testing, and deploying AI tools across multiple protocols. This package provides a complete implementation with support for both Model Context Protocol (MCP) and Simple Language and Object Protocol (SLOP), enabling seamless integration with various AI systems and tools.
# Install from npm
npm install @blahai/cli --global
# Or install from the repo
git clone https://github.com/thomasdavis/blah.git
cd blah
pnpm install
pnpm run build
Ideas TODO
blah slop start is called on an inline blah.config that loads the jsonresume MCP server over SLOP.

Create a .env file in your project directory or in the packages/cli directory with the following variables:
# Required for OpenAI model access in simulations
OPENAI_API_KEY=your_openai_api_key_here
# Default BLAH manifest host (optional)
BLAH_HOST=https://example.com/blah-config.json
# Other API keys for tools
BRAVE_API_KEY=your_brave_api_key
GITHUB_TOKEN=your_github_token
You can also specify these environment variables directly in your blah.json configuration file under the env property.
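How the two sources combine is not specified here, so the sketch below assumes real environment variables take precedence over values from the manifest's env property; the helper name is hypothetical and the actual CLI's precedence may differ.

```typescript
// Hypothetical sketch: resolve each key declared in the manifest's env
// property, preferring a real environment variable when one is set.
// Assumption: process.env wins over blah.json values.
function resolveEnv(
  manifestEnv: Record<string, string>,
  processEnv: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const resolved: Record<string, string> = {};
  for (const [key, fallback] of Object.entries(manifestEnv)) {
    resolved[key] = processEnv[key] ?? fallback;
  }
  return resolved;
}

// Usage: an exported OPENAI_API_KEY overrides the manifest value.
const env = resolveEnv({ OPENAI_API_KEY: "from-config" }, { OPENAI_API_KEY: "from-env" });
```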
If installed globally:
# Initialize a new blah.json configuration file
blah init
# Start the MCP server
blah mcp start
# List available tools from your configuration
blah mcp tools
# Run a simulation of an MCP client interacting with the server
blah mcp simulate
# Start a SLOP server to expose all tools via HTTP
blah slop start
# Validate a BLAH manifest file
blah validate path/to/blah.json
# Launch the Flow Editor
blah flows
Or if using from the repo:
# Start the MCP server
npx tsx src/index.ts mcp start
# Run a simulation with options
npx tsx src/index.ts mcp simulate --model gpt-4o-mini --prompt "create a tool that writes poetry"
# Initialize a new configuration file
npx tsx src/index.ts init
These options are available across multiple commands:
-c, --config <path> - Path to a blah.json configuration file (local path or URL)

blah mcp
The mcp command group provides access to Model Context Protocol functionality.
blah mcp start
Starts an MCP server that connects to a BLAH manifest and exposes tools via the Model Context Protocol.
Options:
-c, --config <path> - Path to a blah.json configuration file (local path or URL)
--sse - Start the server in SSE (Server-Sent Events) mode
--port <number> - Port to run the SSE server on (default: 4200)
When started in SSE mode, the server exposes the following endpoints:
MCP Standard Endpoints:
/sse - Official MCP SSE connection endpoint
/messages - Message handling for MCP SSE communication
Custom Endpoints:
/events - Custom SSE event stream for real-time updates
/tools - List all available tools (including both regular and SLOP tools)
/config - Access the current configuration
/health - Health check endpoint
For detailed examples of how to test and interact with the SSE server, see the MCP SSE Server Testing Guide section below.
blah mcp tools
Lists all available tools from your configuration.
Options:
-c, --config <path> - Path to a blah.json configuration file (local path or URL)

blah mcp simulate
Runs a simulated interaction between an AI model and the MCP server to test tool selection and execution.
Options:
-m, --model <model> - OpenAI model to use (default: gpt-4o-mini)
-s, --system-prompt <prompt> - System prompt for the simulation
-p, --prompt <prompt> - User prompt to send
-c, --config <path> - Path to a blah.json configuration file (local path or URL)

blah slop
The slop command group provides access to Simple Language and Object Protocol (SLOP) functionality.

blah slop start
Starts a SLOP server that exposes all tools from your configuration via HTTP endpoints.
Options:
-c, --config <path> - Path to a blah.json configuration file (local path or URL)
--port <number> - Port to run the SLOP server on (default: 5000)
The server exposes the following endpoints:
GET /tools - Lists all available tools in SLOP format
GET /models - Lists available models (default model)
POST /tools/:toolName - Execute a specific tool
GET /health - Health check endpoint
GET /config - Server configuration information

blah slop tools
Lists all SLOP tools from the manifest or a specific SLOP server.
Options:
-c, --config <path> - Path to a blah.json configuration file (local path or URL)
-u, --url <url> - Directly query a SLOP server URL for tools
-m, --manifest-only - Only show tools defined in the manifest without fetching from endpoints

blah slop models
Lists all models available from SLOP servers defined in the manifest or from a specific URL.
Options:
-c, --config <path> - Path to a blah.json configuration file (local path or URL)
-u, --url <url> - Directly query a SLOP server URL for models

blah validate
Validates a BLAH manifest file against the schema.
Options:
[file] - Path to the BLAH manifest file (defaults to ./blah.json)
-c, --config <path> - Path to a blah.json configuration file (local path or URL)

blah flows
Launches a visual editor for creating and editing agent workflows.
Options:
-p, --port <number> - Port to run the server on (default: 3333)
-c, --config <path> - Path to a blah.json configuration file (local path or URL)

blah init
Initializes a new blah.json configuration file with a default template.
Options:
[file] - Path to create the blah.json file (defaults to ./blah.json)

The flow editor automatically reads from and writes to your blah.json file. If the file doesn't exist, it will be created when you save a flow.
@blahai/cli serves as a bridge between different AI tool protocols, allowing tools built for one protocol to be used with systems that support another. The core of the project is a flexible architecture that supports multiple protocol implementations:
Model Context Protocol (MCP)
Simple Language and Object Protocol (SLOP)
Remote Tool Execution via ValTown
Local Tool Execution
URI-based Tool Execution
SLOP Tool Execution
Fallback Mechanism
A BLAH manifest is a JSON file that defines the tools available through your MCP server. You can create one locally or host it on ValTown:
Create a blah.json file in your project directory:
{
"name": "my-blah-manifest",
"version": "1.0.0",
"description": "My BLAH manifest with custom tools",
"env": {
"OPENAI_API_KEY": "your_openai_api_key",
"VALTOWN_USERNAME": "your_valtown_username"
},
"tools": [
{
"name": "hello_world",
"description": "Says hello to the world",
"inputSchema": {
"type": "object",
"properties": {},
"required": []
}
},
{
"name": "brave_search",
"command": "npx -y @modelcontextprotocol/server-brave-search",
"description": "Search the web using Brave Search API",
"inputSchema": {}
}
]
}
Or host the manifest on ValTown as an HTTP endpoint that returns your tools:
export default async function server(request: Request): Promise<Response> {
const tools = [
{
name: "hello_name",
description: `Says hello to the name`,
inputSchema: {
type: "object",
properties: {
name: {
type: "string",
description: `Name to say hello to`,
},
},
},
},
];
return new Response(JSON.stringify(tools), {
headers: {
"Content-Type": "application/json",
},
status: 200,
});
}
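The CLI fetches such a hosted manifest over HTTP. A minimal consumer sketch follows; the URL in the usage comment is the ajax-blah example used later in the simulation config, and listToolNames is a hypothetical helper, not part of the CLI.

```typescript
// Shape of a tool entry as returned by the ValTown endpoint above.
interface ToolEntry {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
}

// Extract tool names from the JSON body returned by such an endpoint.
function listToolNames(body: string): string[] {
  const tools = JSON.parse(body) as ToolEntry[];
  return tools.map((tool) => tool.name);
}

// Usage against a hosted manifest (network call, not run here):
// const res = await fetch("https://ajax-blah.web.val.run");
// console.log(listToolNames(await res.text()));
```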
The BLAH manifest (blah.json) follows this schema:
{
"name": "string", // Required: Name of your BLAH manifest
"version": "string", // Required: Version number
"description": "string", // Optional: Description of your manifest
"extends": {
// Optional: Extend from other BLAH manifests
"extension-name": "./path/to/local/config.json",
"remote-name": "https://example.com/remote-config.json"
},
"env": {
// Optional: Environment variables
"OPENAI_API_KEY": "string",
"VALTOWN_USERNAME": "string",
"BRAVE_API_KEY": "string"
},
"tools": [
// Required: Array of tool definitions
{
"name": "string", // Required: Tool name (used for invocation)
"description": "string", // Required: Tool description
"command": "string", // Optional: Command to execute for local tools
"originalName": "string", // Optional: Original name for MCP server tools
"slop": "string", // Optional: URL to a SLOP endpoint for this tool
"slopUrl": "string", // Optional: Alternative to slop property
"inputSchema": {
// Required: JSON Schema for tool inputs
"type": "object",
"properties": {}
}
}
],
"flows": [] // Optional: Array of flow definitions
}
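The schema above can be mirrored as a TypeScript type with a small guard over the required fields. This is a sketch of the kind of check blah validate performs, not the CLI's actual validator; field names mirror the schema, and only required fields are checked.

```typescript
// Field names mirror the schema above; optional fields use "?".
interface BlahTool {
  name: string;
  description: string;
  command?: string;
  originalName?: string;
  slop?: string;
  slopUrl?: string;
  inputSchema: Record<string, unknown>;
}

interface BlahManifest {
  name: string;
  version: string;
  description?: string;
  extends?: Record<string, string>;
  env?: Record<string, string>;
  tools: BlahTool[];
  flows?: unknown[];
}

// Check only the fields the schema marks as required.
function hasRequiredFields(value: unknown): value is BlahManifest {
  const m = value as BlahManifest;
  return (
    typeof m?.name === "string" &&
    typeof m?.version === "string" &&
    Array.isArray(m?.tools) &&
    m.tools.every(
      (t) =>
        typeof t.name === "string" &&
        typeof t.description === "string" &&
        typeof t.inputSchema === "object" &&
        t.inputSchema !== null,
    )
  );
}
```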
The CLI supports several types of tools:
Local tools - have a command property that executes a local command
HTTP tools - have a source property that executes via an HTTP endpoint
SLOP tools - have a slop or slopUrl property that connects to a SLOP endpoint

Create a blah-simulation.json file for default simulation settings:
{
"model": "gpt-4o-mini",
"systemPrompt": "You are a coding assistant that when given a list of tools, you will call a tool from that list based off the conversation. Once you have enough information to respond to the user based off tool results, just give them a nice answer. If someone asks to create a tool, and then it does, the next time it should invoke the tool. Don't create tools if they already exist.",
"blah": "https://ajax-blah.web.val.run",
"prompt": "say hello to julie"
}
Flows are stored in the flows array of your blah.json file. Each flow has the following structure:
{
"flows": [
{
"id": "flow_1",
"name": "image_workflow",
"description": "A workflow for image generation",
"nodes": [
{
"id": "start1",
"type": "start",
"position": { "x": 250, "y": 50 },
"data": {},
"retry": { "maxAttempts": 0, "delay": 0 },
"errorHandling": { "onError": "log" }
},
{
"id": "agent1",
"type": "ai_agent",
"position": { "x": 250, "y": 150 },
"data": {
"name": "ImageGenerator",
"configuration": {
"prompt": "Generate image based on description"
}
},
"retry": { "maxAttempts": 3, "delay": 5 },
"errorHandling": { "onError": "log" }
},
{
"id": "end1",
"type": "end",
"position": { "x": 250, "y": 250 },
"data": {},
"retry": { "maxAttempts": 0, "delay": 0 },
"errorHandling": { "onError": "log" }
}
],
"edges": [
{ "source": "start1", "target": "agent1" },
{ "source": "agent1", "target": "end1" }
]
}
]
}
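For a linear flow like the one above, execution order can be recovered by walking the edges from the start node. A sketch (the helper name is hypothetical, and it assumes at most one outgoing edge per node, i.e. no decision branching or cycles):

```typescript
interface FlowEdge {
  source: string;
  target: string;
}

// Walk a linear flow from its start node, following one outgoing edge at a
// time, and return node ids in execution order. Assumes no branches/cycles.
function linearOrder(startId: string, edges: FlowEdge[]): string[] {
  const order = [startId];
  let current = startId;
  while (true) {
    const next = edges.find((e) => e.source === current);
    if (!next) break;
    order.push(next.target);
    current = next.target;
  }
  return order;
}

// Using the edges from the flow above:
const order = linearOrder("start1", [
  { source: "start1", target: "agent1" },
  { source: "agent1", target: "end1" },
]);
// order is ["start1", "agent1", "end1"]
```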
The flow editor supports the following node types:
start - Entry point for the flow
end - Exit point for the flow
ai_agent - AI agent node that can process information
decision - Decision node that routes the flow based on conditions
action - Action node that performs a specific task
input - Node that collects input from users
output - Node that provides output to users

BLAH works as a bridge between different AI tool protocols, enabling interoperability between systems:
Works with any system that supports the Model Context Protocol:
Connects with systems that implement the Simple Language and Object Protocol:
The CLI supports extending configurations from other sources:
To use this feature, add an extends property to your blah.json:
{
"name": "my-config",
"version": "1.0.0",
"extends": {
"shared-tools": "./shared-tools.json",
"remote-tools": "https://example.com/shared-blah.json"
},
"tools": [
// Your local tools here...
]
}
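The precise merge semantics are covered in the Configuration Extensions section; as a rough sketch, tools from extended configs can be combined with local tools, with local definitions winning on name collisions. Both that precedence rule and the helper name below are assumptions, not the CLI's documented behavior.

```typescript
interface Tool {
  name: string;
  description: string;
}

// Merge tools from extended configs with local tools. Assumption: local
// tools override extended tools sharing the same name; the CLI's real
// merge rules may differ.
function mergeTools(extended: Tool[], local: Tool[]): Tool[] {
  const byName = new Map<string, Tool>();
  for (const tool of [...extended, ...local]) {
    byName.set(tool.name, tool); // later (local) entries overwrite earlier ones
  }
  return [...byName.values()];
}

const merged = mergeTools(
  [{ name: "shared", description: "from extension" }],
  [
    { name: "shared", description: "local" },
    { name: "mine", description: "local only" },
  ],
);
```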
See the Configuration Extensions section for more details.
The CLI is built on a flexible architecture that supports multiple protocols:
The CLI provides seamless integration with HTTP endpoints, using the source property in tool configuration to specify them.
The CLI supports executing tools locally.
The CLI supports nesting protocol servers.
The CLI provides comprehensive support for SLOP tools.
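A SLOP tool call is a plain HTTP POST. The sketch below builds such a request against the endpoint shape from the blah slop start section (POST /tools/:toolName, default port 5000); the helper name and the hello_world tool are illustrative, not part of the CLI.

```typescript
// Minimal SLOP client sketch against a server started with `blah slop start`.
interface SlopRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

// Build the HTTP request for executing a SLOP tool with JSON arguments.
function buildSlopToolCall(
  baseUrl: string,
  toolName: string,
  args: Record<string, unknown>,
): SlopRequest {
  return {
    url: `${baseUrl}/tools/${encodeURIComponent(toolName)}`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(args),
    },
  };
}

// Usage (network call commented out, not run here):
const call = buildSlopToolCall("http://localhost:5000", "hello_world", {});
// fetch(call.url, call.init).then((r) => r.json()).then(console.log);
```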
When publishing the @blahai/cli package to npm, workspace dependencies need to be replaced with actual version numbers: replace workspace:* references in package.json with specific versions.

MCP SSE Server Testing Guide
The MCP server can run in two modes: standard (stdio) mode and SSE mode. SSE mode enables web-based access to tools via HTTP endpoints.
# Start with default settings (port 4200)
blah mcp start --sse
# Start on a specific port
blah mcp start --sse --port 4444
# Start with a specific configuration
blah mcp start --sse --config path/to/blah.json
When running in SSE mode, the server exposes:
MCP Standard Endpoints:
/sse - SSE connection endpoint for MCP clients
/messages - JSON-RPC message endpoint for MCP communication
Custom Endpoints:
/events - Custom SSE event stream for real-time updates
/tools - List available tools with metadata
/config - Access the current configuration
/health - Server health check

# Health check
curl http://localhost:4200/health
# Get available tools
curl http://localhost:4200/tools
# Execute a tool via JSON-RPC
curl -X POST http://localhost:4200/messages \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "translate_to_leet",
"arguments": {
"text": "hello world"
}
},
"id": 1
}'
// Connect to custom event stream
const eventSource = new EventSource("http://localhost:4200/events");
eventSource.addEventListener("connected", (event) => {
console.log("Connected:", event.data);
});
eventSource.addEventListener("tools-updated", (event) => {
console.log("Tools updated:", JSON.parse(event.data));
});
// Get tools via custom endpoint
fetch("http://localhost:4200/tools")
.then((response) => response.json())
.then((data) => console.log("Tools:", data.tools));
The CLI includes a built-in playground client for testing:
# Test SSE mode
tsx src/playground/client.ts --sse
# Test SSE mode with specific port
tsx src/playground/client.ts --sse --port 4444
# Test standard stdio mode
tsx src/playground/client.ts
For more detailed examples in JavaScript, Python, and with the MCP SDK, see the full documentation in CLAUDE.md.
BLAH (Barely Logical Agent Host) is released under the MIT License.