Ash

ASH stands for Agentic Serverless Harness. Ash is a filesystem-first framework for durable backend agents on Vercel.

Its core idea is simple: an agent should feel like a product surface on disk, not a prompt string trapped inside application code. In Ash, the core system prompt lives in system.md, additive always-on prompt layers can live in system/, reusable procedures live in skills/, shared authored code lives in lib/, named downstream dependencies live in connections/, isolated command environments live in sandboxes/, and typed executable integrations live in tools/. Simple agents stay mostly markdown. When behavior needs code, you add TypeScript without leaving the model.

  • Markdown-first authoring that reads like a living spec
  • TypeScript where types, schemas, and execution actually matter
  • Durable message runs by default, not as an afterthought
  • Authored sandboxes for isolated command environments
  • Inspectable compiled artifacts under .ash/
  • A single message endpoint with structured run streaming
  • A Vercel-native runtime model built on Nitro and Workflows

Why Ash Feels Different

Most agent frameworks start by asking you to assemble abstractions. Ash starts with the agent itself.

You author a directory. That directory is the contract. The markdown is readable by engineering, product, and operations. The TypeScript is reserved for the parts that benefit from real code. The result is an agent architecture that stays legible as it grows.

  • system.md defines the core system prompt for the agent.
  • system/ adds ordered always-on prompt layers after the root system prompt.
  • skills/ packages reusable capabilities and procedures that can be loaded on demand.
  • lib/ is the canonical place for shared authored source code imported by tools and other module-backed files.
  • connections/ declares named downstream dependencies with runtime-owned auth, policy, and optional MCP tool lowering.
  • sandboxes/ defines named backend-managed bash-style environments with lifecycle hooks.
  • tools/ turns capabilities into typed executable integrations.
  • schedules/ lets the same agent run recurring jobs.
  • subagents/ extends the model toward specialist delegation.

This is the shift Ash is designed around: agents should be easy to read before they are clever to execute.

The Magic: Markdown Simplicity, TypeScript Power

A useful Ash agent can be understood in one glance:

weather-agent/
├── package.json
└── agent/
    ├── agent.ts
    ├── system.md
    ├── system/
    │   ├── forecast-guidelines.md
    │   └── my-location.md
    ├── lib/
    │   └── weather/
    │       └── client.ts
    ├── sandboxes/
    │   └── repo-shell.ts
    ├── skills/
    │   └── get-weather.md
    └── tools/
        └── get-weather.ts

agent/system.md

You are a weather-focused assistant. Be concise, accurate, and explicit about when you are using the local weather tool.

agent/system/my-location.md

The default user location is Brooklyn, New York, unless the user specifies a different city.

agent/skills/get-weather.md

---
description: Use the weather tool before answering forecast or temperature questions.
---

When the user asks about weather, temperature, or forecast conditions, call the `get_weather` tool before answering.

agent/tools/get-weather.ts

import { createWeatherClient } from "../lib/weather/client";
import { defineTool } from "ash";
import { z } from "zod";

const weatherClient = createWeatherClient();

export default defineTool({
  name: "get_weather",
  description: "Get the current weather for a city.",
  inputSchema: z.object({
    city: z.string(),
  }),
  async execute(input) {
    return weatherClient.getForecast(input.city);
  },
});

agent/sandboxes/repo-shell.ts

import { defineSandbox } from "ash";

export default defineSandbox({
  description: "Use this isolated repo shell for command-line maintenance tasks.",
  async bootstrap({ sandbox }) {
    await sandbox.runCommand("mkdir -p repo");
  },
  async onSession({ sandbox }) {
    await sandbox.runCommand("touch .session-ready");
  },
});

agent/lib/weather/client.ts

export function createWeatherClient() {
  return {
    async getForecast(city: string) {
      return {
        city,
        temperatureF: 72,
        condition: "Sunny",
        summary: `Sunny in ${city} with a light breeze.`,
      };
    },
  };
}

agent/agent.ts

import { defineAgent } from "ash";
import { openai } from "@ai-sdk/openai";

export default defineAgent({
  model: openai("gpt-5.4-mini"),
});

defineAgent also accepts provider model ids such as "openai/gpt-5.4-mini" when you prefer the gateway-style string form.

That is the point of Ash. The parts that should be prose stay as prose. The parts that should be code stay as code.

From those files, Ash will:

  • discover and validate the authored agent surface
  • lower markdown into the same typed public definitions used by code
  • compile inspectable artifacts into .ash/
  • compile system.md plus ordered system/ layers into the base prompt
  • seed skills/ into the default runtime workspace root
  • when present, materialize named authored connections and optionally lower allowed MCP tools into namespaced model-visible tools
  • provision named authored sandboxes as isolated bash-style tool surfaces
  • expose a stable message endpoint
  • execute message requests as durable conversation runs and schedules/subagents as durable task runs
  • stream structured lifecycle events while the run is happening

The experience is intentionally simple at the surface, but it does real runtime work underneath.

What You Get Out Of The Box

  • A pure-markdown agent is valid. You only need TypeScript when you want dynamic behavior.
  • Markdown and TypeScript compile to the same underlying model instead of splitting the framework into two competing APIs.
  • Skills are discoverable capabilities, and runtime skill loading is framework-owned through activate_skill.
  • Authored sandboxes create named bash-style execution environments with template and per-session lifecycle hooks.
  • Tools are strongly typed and schema-validated, but do not force the rest of the agent into code.
  • Authored connections let Ash own downstream auth, policy, and optional MCP tool lowering instead of hiding them inside app-local clients.
  • Sessions are durable across turns, with runtime-owned state carried forward by the workflow layer.
  • Authored runtime code can read the active durable session with getSession(), including caller auth and parent lineage.
  • Authored runtime code can resolve a named downstream handle with await getConnection(...).
  • Authored skills/ files seed the default runtime workspace root.
  • Schedules compile into runtime-owned scheduled executions.
  • Optional route auth and IP allow lists protect Ash-owned health, message, and stream routes.
  • Compiled output stays inspectable, which makes debugging and deployment behavior easier to reason about.

The Runtime Shape

Ash's governing internal model is channel-harness-runtime:

  • the channel normalizes transport input, defines the continuationToken, and owns delivery policy
  • the harness does one atomic unit of AI work and returns { session, next }
  • the runtime persists session state, follows next, streams events, and owns workflow APIs

That split is why Ash exposes two identifiers:

  • continuationToken for the next user message
  • runId for streaming and inspection

Only the runtime layer should talk to workflow primitives. Channel code should call runtime contracts, and harness code should focus on model/tool behavior.
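The `{ session, next }` contract above can be sketched as a small driver loop. This is an illustrative reconstruction, not Ash's actual internal API: the type names, the `next` variants, and `runLoop` are all hypothetical.

```typescript
// Hypothetical sketch of the channel-harness-runtime split. All names here
// (SessionState, HarnessResult, runLoop) are assumptions for illustration.
type SessionState = { turns: number };

interface HarnessResult {
  session: SessionState;
  // `next` tells the runtime whether to run another atomic harness step
  // or settle into waiting for the next user message.
  next: { kind: "continue" } | { kind: "wait" };
}

type Harness = (session: SessionState) => Promise<HarnessResult>;

// The runtime owns the loop: it persists session state after each harness
// step and follows `next` until the run reaches a wait state.
async function runLoop(harness: Harness, session: SessionState): Promise<SessionState> {
  for (;;) {
    const result = await harness(session);
    session = result.session; // persist point in a real runtime
    if (result.next.kind === "wait") return session;
  }
}
```

The point of the split is visible even in this toy: the harness body stays a pure unit of work, while persistence and control flow live entirely in the loop.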

Sandbox Environments

Ash now supports authored sandboxes/ for cases where the model should get a command environment but a typed JSON tool is the wrong shape.

Important rules:

  • each sandboxes/*.ts file keeps its internal sandbox name from the file path, but the model-visible tool name is lowered to lower_snake_case with a _sandbox suffix
  • the same sandboxes/ slot is supported on the root agent and inside local subagent packages
  • sandbox definitions export lifecycle hooks, not an explicit name
  • bootstrap() prepares reusable template state
  • onSession() runs once per durable Ash session for that sandbox
  • the default backend is vercel on Vercel and local everywhere else
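The name-lowering rule in the first bullet can be sketched as a small function. The exact normalization Ash applies is an assumption here; only the lower_snake_case form and the `_sandbox` suffix come from the rule above.

```typescript
// Hypothetical sketch of the sandbox tool-name lowering rule: an authored
// file such as sandboxes/repo-shell.ts becomes the model-visible tool name
// repo_shell_sandbox. Ash's actual normalization may handle more cases.
function sandboxToolName(fileName: string): string {
  const base = fileName.replace(/\.(ts|js)$/, "");
  const snake = base
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2") // camelCase -> camel_Case
    .replace(/[-\s]+/g, "_")                // kebab-case / spaces -> snake
    .toLowerCase();
  return `${snake}_sandbox`;
}
```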

Vercel-specific behavior:

  • when ash build runs inside a hosted Vercel build and both VERCEL and VERCEL_DEPLOYMENT_ID are present, Ash now prewarms authored Vercel sandbox templates during build
  • that build-time prewarm runs bootstrap() for reusable template state only
  • onSession() still runs later, inside the runtime turn path, once per durable Ash session
  • if that hosted Vercel build-time prewarm fails, the build now fails too
  • runtime still falls back to lazy template creation only when build-time prewarm was not attempted in the first place

This is intentionally different from the default workspace:

  • the workspace is one shared bash tool for the run
  • sandboxes are additional isolated named tools with their own backend-managed state

Quick Start

Prerequisites

  • Node 24.x
  • pnpm
  • A Vercel account (for deploy step)

Create and run a new agent

# 1) Scaffold a new agent
npx experimental-ash@latest init my-agent
cd my-agent
pnpm install

# 2) Start local dev (REPL is enabled by default)
pnpm dev

This command scaffolds a project from the built-in template and starts your local runtime with the interactive REPL.

Run a remote check against a deployed agent

# 3) Deploy to Vercel
npx vercel deploy

Copy the deployment URL from Vercel output (for example https://my-agent-abc123.vercel.app).

# 4) Point your local REPL at the deployed URL
pnpm dev https://my-agent-abc123.vercel.app

That command keeps your local REPL but sends messages to the deployed server.

Use these if you want to inspect what Ash is doing:

  • ash info shows resolved authoring details for the current project.
  • ash build compiles .ash/ artifacts and host output.
  • ash dev --url <deployment-url> is the same as pnpm dev <deployment-url>.

If your Vercel preview is protected, set any required local auth env vars before step 4 (for example VERCEL_AUTOMATION_BYPASS_SECRET).

Talk To The Agent

Ash exposes one default message route:

POST /.well-known/ash/v1/message

Start a run:

curl -X POST http://127.0.0.1:3000/.well-known/ash/v1/message \
  -H 'content-type: application/json' \
  -d '{"message":"What is the weather in Brooklyn?"}'

The response returns a continuationToken in the body and a runId in the response headers. Stream the run with:

curl http://127.0.0.1:3000/.well-known/ash/v1/runs/<runId>/stream

The stream is newline-delimited JSON and emits runtime lifecycle events such as:

  • run.started
  • turn.started
  • message.received
  • actions.requested
  • subagent.started
  • subagent.event
  • subagent.completed
  • action.result
  • thinking.completed
  • message.completed (zero or more per turn, includes finishReason)
  • turn.completed
  • session.waiting
  • run.failed
  • run.completed

This is an important part of the Ash model. The runtime is not a black box. You can watch a durable agent run as a structured sequence of state transitions.

When a parent turn delegates to a local subagent, the parent stream emits inline subagent lifecycle events on the same run. Clients watch subagent.started, subagent.event, and subagent.completed on the parent stream rather than attaching to a separate child-run stream.
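Because the stream is newline-delimited JSON, client-side parsing is one split away. This is a minimal sketch: only the event names come from the list above, and any payload fields beyond `type` are assumed. A real client would read the HTTP body incrementally and buffer partial lines between chunks.

```typescript
// Parse a chunk of the NDJSON run stream into events. The payload shape
// beyond `type` is an assumption for illustration.
interface RunEvent {
  type: string;
  [key: string]: unknown;
}

function parseRunEvents(chunk: string): RunEvent[] {
  return chunk
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0) // skip blank keep-alive lines
    .map((line) => JSON.parse(line) as RunEvent);
}
```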

Protect Ash Routes

Ash can protect its own HTTP surfaces from agent.ts:

import { defineAgent } from "ash";

export default defineAgent({
  model: "openai/gpt-5.4-mini",
  network: {
    ipAllowList: ["127.0.0.1", "10.0.0.0/8"],
  },
  auth: {
    strategies: [
      {
        kind: "http-basic",
        username: "ops",
        password: process.env.ASH_BASIC_PASSWORD,
      },
      {
        kind: "jwt-hmac",
        issuer: "https://internal.example",
        audiences: ["weather-agent"],
        subjects: ["worker:*"],
        algorithm: "HS256",
        secret: process.env.ASH_HMAC_SECRET,
      },
    ],
  },
});

Ash currently supports four inbound strategy kinds:

  • http-basic
  • jwt-hmac
  • jwt-ecdsa
  • oidc
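For the jwt-hmac strategy, a caller mints a standard HS256 JWT and sends it as a Bearer token. The sketch below is plain JWT construction with node:crypto; which claims Ash actually validates is assumed from the strategy fields shown earlier (issuer, audiences, subjects), and the claim values here are examples.

```typescript
import { createHmac } from "node:crypto";

// Mint an HS256 JWT matching the jwt-hmac strategy sketched above.
// Standard JWT construction; the validated claim set is assumed from the
// strategy config (issuer, audiences, subjects), not confirmed by Ash docs.
const b64url = (input: string | Buffer): string =>
  Buffer.from(input).toString("base64url");

function mintHs256Jwt(claims: Record<string, unknown>, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(JSON.stringify(claims));
  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}

const token = mintHs256Jwt(
  {
    iss: "https://internal.example",
    aud: "weather-agent",
    sub: "worker:reporter", // must match a configured subjects pattern
    exp: Math.floor(Date.now() / 1000) + 300,
  },
  process.env.ASH_HMAC_SECRET ?? "dev-secret",
);
// Send as: Authorization: Bearer <token>
```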

Behavior:

  • If both auth and network are omitted, /.well-known/ash/v1/health, /.well-known/ash/v1/message, and /.well-known/ash/v1/runs/:runId/stream are open by default.
  • If only network is configured, Ash enforces the IP allow list but still treats the request as unauthenticated.
  • If auth is configured, the protected routes require Authorization. http-basic uses Basic .... Token-backed strategies use Bearer ....
  • Ash does not enforce a second per-run ownership layer after route auth. Any caller that passes route auth may start, resume, or stream any run for that agent.
  • Runs triggered by schedules/ do not arrive over HTTP at all. They always execute with a framework-owned runtime principal, so authored code still sees a caller in getSession().auth.

Quick examples:

curl -u ops:top-secret http://127.0.0.1:3000/.well-known/ash/v1/health

curl -X POST http://127.0.0.1:3000/.well-known/ash/v1/message \
  -H 'authorization: Basic b3BzOnRvcC1zZWNyZXQ=' \
  -H 'content-type: application/json' \
  -d '{"message":"Hello"}'
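The Basic credential in the second curl example is just base64 of `username:password`. A one-liner reproduces it; the function name here is mine, not an Ash API.

```typescript
// Build the HTTP Basic Authorization header value. base64("ops:top-secret")
// is the credential used in the curl example above.
function basicAuthHeader(username: string, password: string): string {
  const encoded = Buffer.from(`${username}:${password}`).toString("base64");
  return `Basic ${encoded}`;
}

// basicAuthHeader("ops", "top-secret") -> "Basic b3BzOnRvcC1zZWNyZXQ="
```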

Authoring Surface

Ash supports both a nested agent/ layout and a flat project-root layout. The nested layout is the recommended default.

| Surface | Purpose | Typical Format |
| --- | --- | --- |
| system | Base system prompt and behavior | system.md, system.ts |
| system/ | Additive always-on prompt layers | markdown or modules |
| agent.ts | Additive configuration such as model selection and metadata | agent.ts |
| skills/ | Reusable capability packs and procedures | flat markdown, modules, or packaged skills |
| lib/ | Package-local helper modules imported by authored entrypoints | TypeScript or JavaScript modules |
| connections/ | Named downstream dependencies with runtime-owned auth, policy, and optional MCP tool lowering | TypeScript or JavaScript modules |
| sandboxes/ | Named isolated bash-style environments with lifecycle hooks | TypeScript or JavaScript modules |
| tools/ | Executable integrations | TypeScript or JavaScript modules |
| schedules/ | Recurring jobs such as digests, syncs, and maintenance | markdown or modules |
| subagents/ | Specialist local subagents | subagent packages |

The design rule behind all of this is straightforward: filesystem authoring and programmatic authoring should compile to the same internal agent model.

Each local subagent package can also define its own package-local lib/, tools/, sandboxes/, and nested subagents/ tree. schedules/ remain root-only.

When the default harness workspace is created, shipped authored files appear at the workspace root:

  • skills/**/* -> skills/**/*

The live shell root is adapter-specific:

  • local workspaces use /workspace
  • Vercel workspaces use /vercel/sandbox/workspace

Ash includes the active live root in the runtime prompt so the model can inspect the correct path instead of guessing.

Authored lib/**/* modules are not mounted into the harness workspace. They stay package-local implementation code that entrypoint modules import through normal ESM resolution.

Authored sandboxes are also not mounted into the shared workspace. They are provisioned as separate named execution environments and exposed as their own tools.

The seeded skills/ files are not injected wholesale into the always-on prompt. The base prompt only gets a short workspace-awareness section that points the model at the relevant root entries, and deeper inspection happens through the runtime workspace tools.

TypeScript Without Losing The Plot

Ash is not trying to avoid code. It is trying to make code earn its place.

The framework exports typed public definitions such as defineAgent, defineSystem, defineSkill, defineConnection, defineSandbox, defineTool, defineSchedule, and defineSubagent. That gives you a clean path from markdown-first authoring into more dynamic behavior without abandoning the original mental model.

In practice, that means:

  • use markdown for instruction layers, reusable procedures, and scheduled task bodies
  • use TypeScript for lib/, connections, sandboxes, tools, model configuration, dynamic authored modules, and advanced composition
  • keep the authored surface understandable even as runtime behavior becomes more capable

Runtime Session Context

Authored runtime functions can read the active durable Ash session with getSession().

import { defineTool, getSession } from "ash";
import { z } from "zod";

export default defineTool({
  name: "get_weather",
  description: "Get the current weather for a city.",
  inputSchema: z.object({
    city: z.string(),
  }),
  async execute(input) {
    const session = getSession();

    return {
      city: input.city,
      currentCallerId: session.auth.current?.principalId,
      initiatorCallerId: session.auth.initiator?.principalId,
      runId: session.runId,
      sessionId: session.sessionId,
      turnId: session.turn.id,
      parentRunId: session.parent?.runId,
    };
  },
});

Today the public Session shape includes:

interface SessionTurn {
  id: string;
  sequence: number;
}

interface SessionAuthContext {
  attributes: Readonly<Record<string, string | readonly string[]>>;
  authenticator: "http-basic" | "jwt-hmac" | "jwt-ecdsa" | "oidc" | "schedule";
  issuer?: string;
  principalId: string;
  principalType: "service" | "user" | "runtime" | "unknown";
  subject?: string;
}

interface Session {
  auth: {
    current: SessionAuthContext | null;
    initiator: SessionAuthContext | null;
  };
  sessionId: string;
  runId: string;
  turn: SessionTurn;
  parent?: {
    runId: string;
    sessionId: string;
    turn: SessionTurn;
  };
}

Notes:

  • auth.current is the caller for the active inbound turn.
  • auth.initiator is the caller that started the durable session.
  • For unprotected agents, both auth fields are null.
  • For authenticated follow-up messages, auth.current may change while auth.initiator stays stable.
  • For top-level schedule runs, both auth fields point at a framework-owned schedule principal.
  • runId and sessionId identify the current durable execution.
  • turn identifies the current authored turn fragment and is always present.
  • parent is present only when the current execution is a child subagent run.
  • parent.turn identifies the delegating parent turn when parent is present.
  • getSession() is backed by async local storage and only works inside authored runtime execution such as tools and other Ash-invoked function bodies.
  • Calling getSession() during top-level module evaluation throws because no authored runtime session is active yet.
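The last two notes describe classic async-local-storage behavior. The sketch below is a hypothetical reconstruction of the mechanism using Node's AsyncLocalStorage; `getSessionSketch` and `runWithSession` are invented names, not Ash exports.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical reconstruction of getSession(): the runtime enters a session
// scope before invoking authored code, and the getter reads that scope.
// Outside any scope (e.g. module top level) there is no store, so it throws.
interface SessionLike {
  sessionId: string;
}

const sessionStore = new AsyncLocalStorage<SessionLike>();

function getSessionSketch(): SessionLike {
  const session = sessionStore.getStore();
  if (!session) {
    throw new Error("getSession() called outside an authored runtime session.");
  }
  return session;
}

// Runtime side: wrap each authored execution in a session scope.
async function runWithSession<T>(session: SessionLike, fn: () => Promise<T>): Promise<T> {
  return sessionStore.run(session, fn);
}
```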

Runtime Connections

Authored connections live under connections/*.ts and let Ash own downstream auth, retry and timeout policy, and optional MCP tool lowering.

agent/connections/snowflake.ts

import { defineConnection } from "ash";

export default defineConnection({
  kind: "mcp",
  transport: {
    type: "streamable-http",
    url: process.env.SNOWFLAKE_MCP_URL,
  },
  auth: {
    kind: "bearer",
    token: process.env.SNOWFLAKE_MCP_TOKEN,
  },
  policy: {
    timeoutMs: 30_000,
    retryAttempts: 1,
  },
  tools: {
    mode: "allow",
    allow: ["query", "explore"],
    namespace: "snowflake",
  },
});

Authored runtime code can then bind the live handle lazily with await getConnection(name):

import { defineTool, getConnection } from "ash";
import { z } from "zod";

export default defineTool({
  name: "execute_sql",
  description: "Execute a read-only Snowflake query.",
  inputSchema: z.object({
    sql: z.string(),
  }),
  async execute(input) {
    const snowflake = await getConnection("snowflake");

    if (snowflake.kind !== "mcp") {
      throw new Error('Expected connection "snowflake" to be an MCP connection.');
    }

    return await snowflake.callTool("query", {
      sql: input.sql,
    });
  },
});

Current boundary:

  • connections/ is a root-agent slot today.
  • MCP over streamable HTTP is implemented end to end, including optional namespaced tool lowering such as snowflake.query.
  • HTTP connections are implemented end to end through await getConnection(...).request(...).
  • HTTP auth currently supports service-owned none, api-key, basic, bearer, and service-account flows.
  • Current service-account auth uses Google's OAuth 2 service-account token flow, which fits Google APIs such as Sheets.
  • service-account auth for MCP and user-passthrough remain later-phase work.

Current Scope

The current implementation is already useful, but it is intentionally opinionated about what is finished and what is still maturing.

  • Discovery, compilation, prompt layering, tools, skills, schedules, durable message runs, and stream events are implemented today.
  • Authored connections are part of discovery, compilation, runtime resolution, async-local authored execution, MCP tool lowering, and HTTP request execution today.
  • Authored sandboxes are part of discovery, compilation, runtime provisioning, and harness injection today.
  • Local subagents are part of the authored surface, compiler output, and runtime delegation flow.
  • Local subagent activity is emitted inline on the parent run stream rather than through a separate public child-run stream.
  • Local and Vercel sandbox backends are implemented today. Docker is intentionally deferred.

That boundary is important because Ash is optimizing for correctness and a coherent long-term model, not for piling on loosely connected features.

Framework Internals

If you are evaluating Ash as a framework, the internal architecture is intentionally split into clear phases:

  • Discovery walks the filesystem and emits a manifest plus diagnostics without executing authored modules.
  • The compiler writes framework-owned artifacts under .ash/.
  • Runtime loaders hydrate compiled inputs into runtime-owned models and channel/runtime contracts.
  • Channels normalize inbound input, define continuation tokens, and provide delivery handlers.
  • The harness executes one unit of model and tool work and returns { session, next }.
  • The runtime follows next, persists state, emits stream events, and wraps the loop in workflows where needed.

That separation is a large part of why the top-level authoring model can stay simple without the runtime becoming opaque.

If you want to go deeper:

  • Start with docs/public/README.md for end-user framework docs.
  • Start with apps/weather-agent for the smallest complete example.
  • Read packages/ash/src/public/index.ts for the public framework surface.
  • Read docs/internals/README.md for the implementation architecture.
  • Explore packages/ash for the framework and CLI itself.

Repository Layout

.
├── apps/
│   └── weather-agent/   # minimal end-to-end example
├── docs/
│   ├── public/          # end-user framework docs
│   └── internals/       # framework architecture notes
├── packages/
│   └── ash/            # framework package + CLI
└── README.md

Ash is built to make agent systems easier to author, easier to inspect, and easier to trust. The goal is not only to make agents more capable. It is to make them much easier to think about.

Package last updated on 04 Apr 2026