
experimental-ash
ASH stands for Agentic Serverless Harness. Ash is a filesystem-first framework for durable backend agents on Vercel.
Its core idea is simple: an agent should feel like a product surface on disk, not a prompt string trapped inside application code. In Ash, the core system prompt lives in system.md, additive always-on prompt layers can live in system/, reusable procedures live in skills/, shared authored code lives in lib/, named downstream dependencies live in connections/, isolated command environments live in sandboxes/, and typed executable integrations live in tools/. Simple agents stay mostly markdown. When behavior needs code, you add TypeScript without leaving the model.
Most agent frameworks start by asking you to assemble abstractions. Ash starts with the agent itself.
You author a directory. That directory is the contract. The markdown is readable by engineering, product, and operations. The TypeScript is reserved for the parts that benefit from real code. The result is an agent architecture that stays legible as it grows.
- `system.md` defines the core system prompt for the agent.
- `system/` adds ordered always-on prompt layers after the root system prompt.
- `skills/` package reusable capabilities and procedures that can be loaded on demand.
- `lib/` is the canonical place for shared authored source code imported by tools and other module-backed files.
- `connections/` declare named downstream dependencies with runtime-owned auth, policy, and optional MCP tool lowering.
- `sandboxes/` define named backend-managed bash-style environments with lifecycle hooks.
- `tools/` turn capabilities into typed executable integrations.
- `schedules/` let the same agent run recurring jobs.
- `subagents/` extend the model toward specialist delegation.

This is the shift Ash is designed around: agents should be easy to read before they are clever to execute.
A useful Ash agent can be understood in one glance:
```
weather-agent/
├── package.json
└── agent/
    ├── agent.ts
    ├── system.md
    ├── system/
    │   ├── forecast-guidelines.md
    │   └── my-location.md
    ├── lib/
    │   └── weather/
    │       └── client.ts
    ├── sandboxes/
    │   └── repo-shell.ts
    ├── skills/
    │   └── get-weather.md
    └── tools/
        └── get-weather.ts
```
agent/system.md
You are a weather-focused assistant. Be concise, accurate, and explicit about when you are using the local weather tool.
agent/system/my-location.md
The default user location is Brooklyn, New York, unless the user specifies a different city.
agent/skills/get-weather.md
```markdown
---
description: Use the weather tool before answering forecast or temperature questions.
---

When the user asks about weather, temperature, or forecast conditions, call the `get_weather` tool before answering.
```
agent/tools/get-weather.ts
```ts
import { createWeatherClient } from "../lib/weather/client";
import { defineTool } from "ash";
import { z } from "zod";

const weatherClient = createWeatherClient();

export default defineTool({
  name: "get_weather",
  description: "Get the current weather for a city.",
  inputSchema: z.object({
    city: z.string(),
  }),
  async execute(input) {
    return weatherClient.getForecast(input.city);
  },
});
```
agent/sandboxes/repo-shell.ts
```ts
import { defineSandbox } from "ash";

export default defineSandbox({
  description: "Use this isolated repo shell for command-line maintenance tasks.",
  async bootstrap({ sandbox }) {
    await sandbox.runCommand("mkdir -p repo");
  },
  async onSession({ sandbox }) {
    await sandbox.runCommand("touch .session-ready");
  },
});
```
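The two hooks above run at different times: `bootstrap()` once when the reusable template is prepared, and `onSession()` once per durable Ash session. A hypothetical stand-in (not the Ash runtime; `simulateSandboxLifecycle` is invented for illustration) makes the ordering concrete:

```typescript
// Hypothetical stand-in for the sandbox lifecycle ordering described above.
// bootstrap() runs once for the template; onSession() runs once per session.
type LifecycleLog = string[];

interface SandboxHooks {
  bootstrap(log: LifecycleLog): void;
  onSession(log: LifecycleLog): void;
}

function simulateSandboxLifecycle(hooks: SandboxHooks, sessions: number): LifecycleLog {
  const log: LifecycleLog = [];
  hooks.bootstrap(log); // template preparation, once
  for (let i = 0; i < sessions; i++) {
    hooks.onSession(log); // once per durable session
  }
  return log;
}

const log = simulateSandboxLifecycle(
  {
    bootstrap: (l) => l.push("bootstrap"),
    onSession: (l) => l.push("onSession"),
  },
  2
);
// log: ["bootstrap", "onSession", "onSession"]
```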
agent/lib/weather/client.ts
```ts
export function createWeatherClient() {
  return {
    async getForecast(city: string) {
      return {
        city,
        temperatureF: 72,
        condition: "Sunny",
        summary: `Sunny in ${city} with a light breeze.`,
      };
    },
  };
}
```
agent/agent.ts
```ts
import { defineAgent } from "ash";
import { openai } from "@ai-sdk/openai";

export default defineAgent({
  model: openai("gpt-5.4-mini"),
});
```
`defineAgent` also accepts provider model ids such as `"openai/gpt-5.4-mini"` when you prefer the gateway-style string form.
That is the point of Ash. The parts that should be prose stay as prose. The parts that should be code stay as code.
From those files, Ash will:
- compile authoring output into `.ash/` build artifacts
- layer `system.md` plus ordered `system/` layers into the base prompt
- mount `skills/` into the default runtime workspace root

The experience is intentionally simple at the surface, but it does real runtime work underneath.
At runtime, that work includes:

- on-demand skill loading through `activate_skill`
- session context through `getSession()`, including caller auth and parent lineage
- lazy connection binding through `await getConnection(...)`
- seeding the default runtime workspace root from `skills/` files
- serving the `health`, `message`, and `stream` routes

Ash's governing internal model is channel-harness-runtime:
- The channel accepts inbound messages, returns the `continuationToken`, and owns delivery policy.
- The harness runs the model loop over `{ session, next }`.
- The runtime persists `next`, streams events, and owns workflow APIs.

That split is why Ash exposes two identifiers:

- `continuationToken` for the next user message
- `runId` for streaming and inspection

Only the runtime layer should talk to workflow primitives. Channel code should call runtime contracts, and harness code should focus on model/tool behavior.
Ash now supports authored `sandboxes/` for cases where the model should get a command environment, but a typed JSON tool is the wrong shape.
Important rules:
- Each `sandboxes/*.ts` file keeps its internal sandbox name from the file path, but the model-visible tool name is lowered to lower_snake_case with a `_sandbox` suffix.
- The `sandboxes/` slot is supported on the root agent and inside local subagent packages.
- Each sandbox is addressed by its `name`.
- `bootstrap()` prepares reusable template state.
- `onSession()` runs once per durable Ash session for that sandbox.
- The sandbox provider is `vercel` on Vercel and `local` everywhere else.

Vercel-specific behavior:
- When `ash build` runs inside a hosted Vercel build and both `VERCEL` and `VERCEL_DEPLOYMENT_ID` are present, Ash now prewarms authored Vercel sandbox templates during build.
- Prewarming runs `bootstrap()` for reusable template state only.
- `onSession()` still runs later, inside the runtime turn path, once per durable Ash session.

This is intentionally different from the default workspace:
- the default workspace backs the shared `bash` tool for the run, while authored sandboxes are provisioned as their own named environments

Prerequisites: Node.js 24.x and pnpm.

```sh
# 1) Scaffold a new agent
npx experimental-ash@latest init my-agent
cd my-agent
pnpm install

# 2) Start local dev (REPL is enabled by default)
pnpm dev
```
This command scaffolds a project from the built-in template and starts your local runtime with the interactive REPL.
```sh
# 3) Deploy to Vercel
npx vercel deploy
```
Copy the deployment URL from Vercel output (for example https://my-agent-abc123.vercel.app).
```sh
# 4) Point your local REPL at the deployed URL
pnpm dev https://my-agent-abc123.vercel.app
```
That command keeps your local REPL but sends messages to the deployed server.
Use these if you want to inspect what Ash is doing:
- `ash info` shows resolved authoring details for the current project.
- `ash build` compiles `.ash/` artifacts and host output.
- `ash dev --url <deployment-url>` is the same as `pnpm dev <deployment-url>`.

If your Vercel preview is protected, set any required local auth env vars before step 4 (for example `VERCEL_AUTOMATION_BYPASS_SECRET`).
Ash exposes one default message route:
```
POST /.well-known/ash/v1/message
```
Start a run:
```sh
curl -X POST http://127.0.0.1:3000/.well-known/ash/v1/message \
  -H 'content-type: application/json' \
  -d '{"message":"What is the weather in Brooklyn?"}'
```
The response returns a `continuationToken` in the body and a `runId` in the response headers.
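A minimal client sketch for this route (the helper is illustrative, not an Ash export; in particular, echoing `continuationToken` back in the request body is an assumption about the message contract):

```typescript
// Hypothetical helper: builds the fetch arguments for the Ash message route.
// Pass a continuationToken from a previous response to continue the session
// (assumed body shape -- verify against your Ash version).
function buildMessageRequest(
  baseUrl: string,
  message: string,
  continuationToken?: string
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${baseUrl}/.well-known/ash/v1/message`,
    init: {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(
        continuationToken ? { message, continuationToken } : { message }
      ),
    },
  };
}

const req = buildMessageRequest(
  "http://127.0.0.1:3000",
  "What is the weather in Brooklyn?"
);
// req.url === "http://127.0.0.1:3000/.well-known/ash/v1/message"
```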
Stream the run with:
```sh
curl http://127.0.0.1:3000/.well-known/ash/v1/runs/<runId>/stream
```
The stream is newline-delimited JSON and emits runtime lifecycle events such as:
- `run.started`
- `turn.started`
- `message.received`
- `actions.requested`
- `subagent.started`
- `subagent.event`
- `subagent.completed`
- `action.result`
- `thinking.completed`
- `message.completed` (zero or more per turn, includes `finishReason`)
- `turn.completed`
- `session.waiting`
- `run.failed`
- `run.completed`

This is an important part of the Ash model. The runtime is not a black box. You can watch a durable agent run as a structured sequence of state transitions.
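Because the stream is newline-delimited JSON, a client can parse it incrementally with a small buffer. A sketch (`createNdjsonParser` is illustrative, not an Ash export; the `type` field is assumed from the event names above):

```typescript
// Hypothetical NDJSON parser for the run stream: feed it raw chunks, it
// returns complete JSON events and buffers any trailing partial line.
function createNdjsonParser() {
  let buffer = "";
  return function push(chunk: string): Array<{ type: string }> {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the unterminated tail for the next chunk
    return lines
      .filter((line) => line.trim().length > 0)
      .map((line) => JSON.parse(line));
  };
}

const push = createNdjsonParser();
const events = [
  ...push('{"type":"run.started"}\n{"type":"turn.sta'),
  ...push('rted"}\n{"type":"run.completed"}\n'),
];
// events: run.started, turn.started, run.completed
```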
When a parent turn delegates to a local subagent, the parent stream emits inline subagent lifecycle events on the same run. Clients watch `subagent.started`, `subagent.event`, and `subagent.completed` on the parent stream rather than attaching to a separate child-run stream.
Ash can protect its own HTTP surfaces from agent.ts:
```ts
import { defineAgent } from "ash";

export default defineAgent({
  model: "openai/gpt-5.4-mini",
  network: {
    ipAllowList: ["127.0.0.1", "10.0.0.0/8"],
  },
  auth: {
    strategies: [
      {
        kind: "http-basic",
        username: "ops",
        password: process.env.ASH_BASIC_PASSWORD,
      },
      {
        kind: "jwt-hmac",
        issuer: "https://internal.example",
        audiences: ["weather-agent"],
        subjects: ["worker:*"],
        algorithm: "HS256",
        secret: process.env.ASH_HMAC_SECRET,
      },
    ],
  },
});
```
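For the `jwt-hmac` strategy, callers present an HS256-signed bearer token over the shared secret. A minting sketch with `node:crypto` (claim names mirror the config fields above; real tokens should also carry `exp`, and exactly which claims Ash validates is an assumption here):

```typescript
import { createHmac } from "node:crypto";

// Hypothetical token-minting sketch for a jwt-hmac (HS256) strategy.
function base64url(data: string): string {
  return Buffer.from(data)
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

function mintHs256Jwt(claims: Record<string, unknown>, secret: string): string {
  const header = base64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = base64url(JSON.stringify(claims));
  // Signature is HMAC-SHA256 over "header.payload", base64url-encoded.
  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
  return `${header}.${payload}.${signature}`;
}

const token = mintHs256Jwt(
  { iss: "https://internal.example", aud: "weather-agent", sub: "worker:reports" },
  "dev-only-secret"
);
// token has three dot-separated base64url segments
```

The resulting token would then be sent as `Authorization: Bearer <token>`.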
Ash currently supports four inbound strategy kinds:
- `http-basic`
- `jwt-hmac`
- `jwt-ecdsa`
- `oidc`

Behavior:
- When `auth` and `network` are omitted, `/.well-known/ash/v1/health`, `/.well-known/ash/v1/message`, and `/.well-known/ash/v1/runs/:runId/stream` are open by default.
- When `network` is configured, Ash enforces the IP allow list but still treats the request as unauthenticated.
- When `auth` is configured, the protected routes require `Authorization`. `http-basic` uses `Basic ...`; token-backed strategies use `Bearer ...`.
- `schedules/` runs do not come from HTTP at all. They always execute with a framework-owned runtime principal, so authored code still sees a caller in `getSession().auth`.

Quick examples:
```sh
curl -u ops:top-secret http://127.0.0.1:3000/.well-known/ash/v1/health

curl -X POST http://127.0.0.1:3000/.well-known/ash/v1/message \
  -H 'authorization: Basic b3BzOnRvcC1zZWNyZXQ=' \
  -H 'content-type: application/json' \
  -d '{"message":"Hello"}'
```
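The `Basic` credential in the second example is just base64 of `user:password`, here `ops:top-secret` from the first example. A quick derivation sketch (Node):

```typescript
// Derive an HTTP Basic Authorization header value from user and password.
function basicAuthHeader(username: string, password: string): string {
  return "Basic " + Buffer.from(`${username}:${password}`).toString("base64");
}

const header = basicAuthHeader("ops", "top-secret");
// header === "Basic b3BzOnRvcC1zZWNyZXQ="
```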
Ash supports both a nested agent/ layout and a flat project-root layout. The nested layout is the recommended default.
| Surface | Purpose | Typical Format |
|---|---|---|
| `system` | Base system prompt and behavior | `system.md`, `system.ts` |
| `system/` | Additive always-on prompt layers | markdown or modules |
| `agent.ts` | Additive configuration such as model selection and metadata | `agent.ts` |
| `skills/` | Reusable capability packs and procedures | flat markdown, modules, or packaged skills |
| `lib/` | Package-local helper modules imported by authored entrypoints | TypeScript or JavaScript modules |
| `connections/` | Named downstream dependencies with runtime-owned auth, policy, and optional MCP tool lowering | TypeScript or JavaScript modules |
| `sandboxes/` | Named isolated bash-style environments with lifecycle hooks | TypeScript or JavaScript modules |
| `tools/` | Executable integrations | TypeScript or JavaScript modules |
| `schedules/` | Recurring jobs such as digests, syncs, and maintenance | markdown or modules |
| `subagents/` | Specialist local subagents | subagent packages |
The design rule behind all of this is straightforward: filesystem authoring and programmatic authoring should compile to the same internal agent model.
Each local subagent package can also define its own package-local `lib/`, `tools/`, `sandboxes/`, and nested `subagents/` tree. `schedules/` remain root-only.
When the default harness workspace is created, shipped authored files appear at the workspace root:
- `skills/**/*` -> `skills/**/*`

The live shell root is adapter-specific:

- local: `/workspace`
- Vercel: `/vercel/sandbox/workspace`

Ash includes the active live root in the runtime prompt so the model can inspect the correct path instead of guessing.
Authored lib/**/* modules are not mounted into the harness workspace. They stay package-local
implementation code that entrypoint modules import through normal ESM resolution.
Authored sandboxes are also not mounted into the shared workspace. They are provisioned as separate named execution environments and exposed as their own tools.
Those files are not injected wholesale into the always-on prompt. The base prompt only gets a short workspace-awareness section that points the model at the relevant root entries, and deeper inspection happens through the runtime workspace tools.
Ash is not trying to avoid code. It is trying to make code earn its place.
The framework exports typed public definitions such as `defineAgent`, `defineSystem`, `defineSkill`, `defineConnection`, `defineSandbox`, `defineTool`, `defineSchedule`, and `defineSubagent`. That gives you a clean path from markdown-first authoring into more dynamic behavior without abandoning the original mental model.
In practice, that means:
- markdown stays sufficient for simple agents
- TypeScript covers `lib/`, connections, sandboxes, tools, model configuration, dynamic authored modules, and advanced composition

Authored runtime functions can read the active durable Ash session with `getSession()`.
```ts
import { defineTool, getSession } from "ash";
import { z } from "zod";

export default defineTool({
  name: "get_weather",
  description: "Get the current weather for a city.",
  inputSchema: z.object({
    city: z.string(),
  }),
  async execute(input) {
    const session = getSession();
    return {
      city: input.city,
      currentCallerId: session.auth.current?.principalId,
      initiatorCallerId: session.auth.initiator?.principalId,
      runId: session.runId,
      sessionId: session.sessionId,
      turnId: session.turn.id,
      parentRunId: session.parent?.runId,
    };
  },
});
```
Today the public Session shape includes:
```ts
interface SessionTurn {
  id: string;
  sequence: number;
}

interface SessionAuthContext {
  attributes: Readonly<Record<string, string | readonly string[]>>;
  authenticator: "http-basic" | "jwt-hmac" | "jwt-ecdsa" | "oidc" | "schedule";
  issuer?: string;
  principalId: string;
  principalType: "service" | "user" | "runtime" | "unknown";
  subject?: string;
}

interface Session {
  auth: {
    current: SessionAuthContext | null;
    initiator: SessionAuthContext | null;
  };
  sessionId: string;
  runId: string;
  turn: SessionTurn;
  parent?: {
    runId: string;
    sessionId: string;
    turn: SessionTurn;
  };
}
```
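Two small helpers show how authored code might branch on this shape (the helper names and the sample principal id are invented for illustration; the field reads follow the `Session` interface):

```typescript
// Minimal structural types mirroring the documented Session shape.
interface AuthCtx {
  authenticator: string;
  principalId: string;
}
interface SessionLike {
  auth: { current: AuthCtx | null; initiator: AuthCtx | null };
  parent?: { runId: string };
}

// True when the active turn was triggered by a schedules/ run, not HTTP.
function isScheduleTurn(session: SessionLike): boolean {
  return session.auth.current?.authenticator === "schedule";
}

// True when this execution is a child subagent run delegated by a parent.
function isSubagentRun(session: SessionLike): boolean {
  return session.parent !== undefined;
}

// Sample session for a scheduled run (principalId value is hypothetical).
const scheduled: SessionLike = {
  auth: {
    current: { authenticator: "schedule", principalId: "runtime:schedule" },
    initiator: { authenticator: "schedule", principalId: "runtime:schedule" },
  },
};
// isScheduleTurn(scheduled) === true, isSubagentRun(scheduled) === false
```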
Notes:
- `auth.current` is the caller for the active inbound turn.
- `auth.initiator` is the caller that started the durable session.
- Either can be `null`.
- Across turns, `auth.current` may change while `auth.initiator` stays stable.
- Scheduled runs execute as a framework-owned `schedule` principal.
- `runId` and `sessionId` identify the current durable execution.
- `turn` identifies the current authored turn fragment and is always present.
- `parent` is present only when the current execution is a child subagent run.
- `parent.turn` identifies the delegating parent turn when `parent` is present.
- `getSession()` is backed by async local storage and only works inside authored runtime execution such as tools and other Ash-invoked function bodies.
- Calling `getSession()` during top-level module evaluation throws because no authored runtime session is active yet.

Authored connections live under `connections/*.ts` and let Ash own downstream auth, retry and timeout policy, and optional MCP tool lowering.
agent/connections/snowflake.ts
```ts
import { defineConnection } from "ash";

export default defineConnection({
  kind: "mcp",
  transport: {
    type: "streamable-http",
    url: process.env.SNOWFLAKE_MCP_URL,
  },
  auth: {
    kind: "bearer",
    token: process.env.SNOWFLAKE_MCP_TOKEN,
  },
  policy: {
    timeoutMs: 30_000,
    retryAttempts: 1,
  },
  tools: {
    mode: "allow",
    allow: ["query", "explore"],
    namespace: "snowflake",
  },
});
```
Authored runtime code can then bind the live handle lazily with `await getConnection(name)`:
```ts
import { defineTool, getConnection } from "ash";
import { z } from "zod";

export default defineTool({
  name: "execute_sql",
  description: "Execute a read-only Snowflake query.",
  inputSchema: z.object({
    sql: z.string(),
  }),
  async execute(input) {
    const snowflake = await getConnection("snowflake");
    if (snowflake.kind !== "mcp") {
      throw new Error('Expected connection "snowflake" to be an MCP connection.');
    }
    return await snowflake.callTool("query", {
      sql: input.sql,
    });
  },
});
```
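The `policy` block on the connection is runtime-owned, but the `retryAttempts` semantics can be sketched standalone. This stand-in is illustrative, not the Ash implementation, and it assumes `retryAttempts` means extra tries after the first failure:

```typescript
// Hypothetical stand-in for connection retry policy: run fn, and on failure
// retry up to retryAttempts additional times before rethrowing.
function withRetries<T>(fn: () => T, retryAttempts: number): T {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retryAttempts; attempt++) {
    try {
      return fn();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}

let calls = 0;
const result = withRetries(() => {
  calls++;
  if (calls < 2) throw new Error("transient failure");
  return "ok";
}, 1);
// result === "ok" after exactly 2 calls
```

With `retryAttempts: 1`, as in the Snowflake connection above, one transient failure is tolerated per call.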
Current boundary:
- `connections/` is a root-agent slot today.
- Lowered MCP tools are namespaced, for example `snowflake.query`.
- Handles bind lazily through `await getConnection(...)`.
- HTTP connections expose `request(...)`.
- HTTP connection auth supports `none`, `api-key`, `basic`, `bearer`, and `service-account` flows.
- `service-account` auth uses Google's OAuth 2 service-account token flow, which fits Google APIs such as Sheets.
- `service-account` auth for MCP and user-passthrough remain later-phase work.

The current implementation is already useful, but it is intentionally opinionated about what is finished and what is still maturing.
That boundary is important because Ash is optimizing for correctness and a coherent long-term model, not for piling on loosely connected features.
If you are evaluating Ash as a framework, the internal architecture is intentionally split into clear phases:
- The build phase compiles authoring into `.ash/`.
- The harness phase runs the model loop over `{ session, next }`.
- The runtime phase resumes from `next`, persists state, emits stream events, and wraps the loop in workflows where needed.

That separation is a large part of why the top-level authoring model can stay simple without the runtime becoming opaque.
If you want to go deeper:
```
.
├── apps/
│   └── weather-agent/   # minimal end-to-end example
├── docs/
│   ├── public/          # end-user framework docs
│   └── internals/       # framework architecture notes
├── packages/
│   └── ash/             # framework package + CLI
└── README.md
```
Ash is built to make agent systems easier to author, easier to inspect, and easier to trust. The goal is not only to make agents more capable. It is to make them much easier to think about.