Agentry 🤖 🏗️

Compose and reuse your AI agents like React components.


What is Agentry?

Agentry adapts React’s component model for AI agents. Define behavior declaratively, compose agents like you would components, and let the framework manage the flow and execution.

[!WARNING] This library is in active development.

[!NOTE] Supports OpenAI and Anthropic models.

Quick Start

Installation

bun add agentry react zod

# for anthropic
bun add @anthropic-ai/sdk
export ANTHROPIC_API_KEY="sk-ant-***"

# for openai
bun add openai
export OPENAI_API_KEY="sk-***"

Next, in your tsconfig.json:

{
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "react",
    "module": "ESNext",
    "target": "ESNext",
    "moduleResolution": "bundler"
  }
}

Creating an Agent

In agent.tsx:

import Anthropic from '@anthropic-ai/sdk'
import { run, Agent, System, Tools, Tool, Message } from 'agentry'
import { z } from 'zod'

const result = await run(
  <Agent provider="anthropic" model="claude-haiku-4-5" maxTokens={1024}>
    <System>You are a helpful math assistant</System>
    <Tools>
      <Tool
        name="calculator"
        description="Perform calculations"
        parameters={z.object({
          operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
          a: z.number(),
          b: z.number(),
        })}
        handler={async ({ operation, a, b }) => {
          const ops = {
            add: a + b,
            subtract: a - b,
            multiply: a * b,
            divide: a / b,
          }
          return String(ops[operation])
        }}
      />
    </Tools>
    <Message role="user">What is 42 + 17?</Message>
  </Agent>,
  {
    providers: { anthropic: { client: new Anthropic() } },
  },
)

console.log(result.content)

Run it:

bun run agent.tsx

Features

  • Dynamic tools via React state - Add/remove tools during execution with useState
  • React hooks - useExecutionState(), useMessages() for reactive state
  • Declarative subagents - Use <AgentTool> to create subagents with type-safe parameters
  • Type-safe tools - Handler params inferred from Zod schemas
  • Streaming support - Stream model output as events while the agent runs
  • WebSocket mode (OpenAI) - Persistent connection via websocket prop reduces per-turn latency in multi-tool loops
  • Programmatic agent spawning - Spawn and execute agents on-demand from tool handlers using context.runAgent()
  • Cross-provider subagents - Mix providers across parent/subagent boundaries
  • Compaction control - Automatic message compaction for long conversations to manage context window usage
  • Conditional rendering - Use <Condition> to conditionally render agent components based on state or natural language intent
  • Structured outputs - Use strict on tools
  • Prompt caching - Supports Anthropic's prompt caching

Providers

Agentry supports multiple providers with a single declarative API.

  • Root agentry exports the provider-agnostic core (run, Agent, hooks, custom tools).
  • Provider modules export provider clients and built-ins:
    • agentry/anthropic
    • agentry/openai
  • Built-ins are treated as regular tools in execution (no per-provider capability matrix to maintain).
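
As a sketch, the import split described above looks like this (built-in names taken from the Components section; the aliasing is only to show both provider modules side by side):

```tsx
// Provider-agnostic core from the root module
import { run, createAI, Agent, System, Tools, Tool, Message } from 'agentry'

// Provider-owned built-ins
import { WebSearch, CodeExecution, Memory } from 'agentry/anthropic'
import { WebSearch as OpenAIWebSearch, MCP } from 'agentry/openai'
```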

Reusable instance

import Anthropic from '@anthropic-ai/sdk'
import { createAI, Agent, Message, Tools } from 'agentry'
import { WebSearch } from 'agentry/anthropic'

const ai = createAI({
  providers: { anthropic: { client: new Anthropic() } },
})

const result = await ai.run(
  <Agent provider="anthropic" model="claude-sonnet-4-5" maxTokens={1024}>
    <Tools>
      <WebSearch maxUses={3} />
    </Tools>
    <Message role="user">Find the latest React release notes</Message>
  </Agent>,
)

Examples

Want to see code? See examples/

| Example | Description |
| --- | --- |
| demo.tsx | Company research with web search |
| basic.tsx | Simple calculator tool |
| interactive.tsx | Multi-turn conversations with streaming |
| subagents.tsx | Manager delegating to specialists |
| hooks.tsx | Hooks, composition, and dynamic tools |
| web-search.tsx | Web search workflows |
| mcp.tsx | MCP server integration |
| chatbot.tsx | Terminal-based chatbot |
| create-subagent.tsx | Dynamic subagent creation |
| anthropic/cache-ephemeral.tsx | Prompt caching with ephemeral content |
| conditions.tsx | State-based and NL condition rendering |
| anthropic/thinking.tsx | Extended thinking with interleaved support |
| workflow.tsx | Interactive authentication workflow |
| conversation-persistence.tsx | Conversation save/load |
| openai/basic.tsx | OpenAI Responses API basic usage |
| cross-provider/subagents.tsx | OpenAI parent + Anthropic subagents |
| openai/codex-subagent.tsx | OpenAI Codex subagent |
| openai/built-ins.tsx | OpenAI built-ins: WebSearch, CodeExecution, and MCP |
| openai/websocket.tsx | OpenAI WebSocket mode for lower-latency multi-tool loops |
| compaction.tsx | Context compaction demo (works with Anthropic and OpenAI) |

Run an example:

echo "ANTHROPIC_API_KEY=sk-ant-***" > .env
# echo "OPENAI_API_KEY=sk-***" >> .env
bun run example:basic
# OpenAI examples:
bun run example:openai:basic
# provider-agnostic examples (set EXAMPLE_PROVIDER=openai if needed):
bun run example:chatbot
# Anthropic-specific examples:
bun run example:anthropic:thinking

Core Concepts

Batch vs Interactive Mode

Batch mode (default) - Runs to completion:

const result = await run(<Agent provider="anthropic">...</Agent>, {
  providers: { anthropic: { client: new Anthropic() } },
})

Interactive mode - Returns a handle for ongoing interaction:

const agent = await run(<Agent provider="anthropic">...</Agent>, {
  mode: 'interactive',
  providers: { anthropic: { client: new Anthropic() } },
})
await agent.sendMessage('Hello')
for await (const event of agent.stream('Tell me more')) {
  if (event.type === 'text') process.stdout.write(event.text)
}
agent.close()

Subagents

Create subagents using <AgentTool> with type-safe parameters:

<Agent name="manager" provider="anthropic" model="claude-haiku-4-5">
  <Tools>
    <AgentTool
      name="researcher"
      description="Research specialist"
      parameters={z.object({
        topic: z.string().describe('The topic to research'),
      })}
      agent={(input) => (
        <Agent name="researcher">
          <System>You are a research expert.</System>
          <Message role="user">Research: {input.topic}</Message>
        </Agent>
      )}
    />
  </Tools>
</Agent>

The manager can call researcher(topic="...") and the framework spawns and runs the subagent with the provided parameters.

Programmatic Agent Spawning

Spawn agents programmatically from within tool handlers using context.runAgent(). This allows for conditional agent creation, parallel execution, and dynamic agent selection based on runtime data:

<Agent provider="anthropic" model="claude-haiku-4-5">
  <Tools>
    <Tool
      name="analyze_code"
      description="Analyze code by spawning a specialist agent"
      parameters={z.object({
        code: z.string(),
        language: z.enum(['python', 'typescript', 'rust']),
      })}
      handler={async (input, context) => {
        // Spawn different agents based on language
        const result = await context.runAgent(
          input.language === 'python' ? (
            <Agent name="python-expert">
              <System>You are a Python expert</System>
              <Message role="user">Analyze: {input.code}</Message>
            </Agent>
          ) : (
            <Agent name="typescript-expert">
              <System>You are a TypeScript expert</System>
              <Message role="user">Analyze: {input.code}</Message>
            </Agent>
          ),
        )
        return result.content
      }}
    />
  </Tools>
</Agent>

You can also spawn multiple agents in parallel:

handler={async (input, context) => {
  const [techResult, bizResult] = await Promise.all([
    context.runAgent(<TechnicalAnalyst content={input.content} />),
    context.runAgent(<BusinessAnalyst content={input.content} />),
  ])
  return `Tech: ${techResult.content}\nBiz: ${bizResult.content}`
}}

Cross-provider subagents

<AgentTool> and context.runAgent(...) can run subagents on a different provider than the parent:

import Anthropic from '@anthropic-ai/sdk'
import OpenAI from 'openai'
import { createAI, Agent, AgentTool, Message, Tools } from 'agentry'
import { z } from 'zod'

const ai = createAI({
  providers: {
    openai: { client: new OpenAI() },
    anthropic: { client: new Anthropic() },
  },
})

await ai.run(
  <Agent provider="openai" model="gpt-5-mini">
    <Tools>
      <AgentTool
        name="claude_researcher"
        description="Research with Anthropic"
        parameters={z.object({ topic: z.string() })}
        agent={({ topic }) => (
          <Agent provider="anthropic" model="claude-sonnet-4-5">
            <Message role="user">Research: {topic}</Message>
          </Agent>
        )}
      />
    </Tools>
    <Message role="user">Use claude_researcher for React 19 updates.</Message>
  </Agent>,
)

State-Driven Tools

Tools can be added/removed during execution using React state:

function DynamicAgent({ input }: { input: { content: string } }) {
  const [hasAdvanced, setHasAdvanced] = useState(false)
  return (
    <Agent provider="anthropic" model="claude-haiku-4-5">
      <System>
        You are a helpful assistant that can analyze technical and business content.
        You can unlock advanced analysis tools by calling the unlock_advanced tool.
      </System>
      <Tools>
        <Tool
          name="unlock_advanced"
          parameters={z.object({})}
          handler={async () => {
            setHasAdvanced(true) // Adds new tool on next render
            return 'Unlocked!'
          }}
        />
        {hasAdvanced && <Tool name="advanced_analysis" ... />}
      </Tools>
      <Message role="user">Analyze the following content: {input.content}</Message>
    </Agent>
  )
}

Conditions

⚠️ Experimental: <Condition /> is experimental and might change in future versions.

Use <Condition> to conditionally render agent components based on state or natural language intent. Conditions support both boolean and natural language evaluation:

function AuthAgent() {
  const [isAuthenticated, setIsAuthenticated] = useState(false)
  const [isPremium, setIsPremium] = useState(false)

  return (
    <Agent provider="anthropic" model="claude-haiku-4-5">
      {/* Boolean condition */}
      <Condition when={!isAuthenticated}>
        <System>Please authenticate first</System>
        <Tools>
          <Tool
            name="authenticate"
            handler={async () => {
              setIsAuthenticated(true)
              return 'Authenticated!'
            }}
          />
        </Tools>
      </Condition>

      <Condition when={isAuthenticated}>
        <System>You are authenticated</System>
        <Tools>
          <Tool name="protected_action" ... />
        </Tools>

        {/* Nested condition - only accessible when authenticated AND premium */}
        <Condition when={isPremium}>
          <System>Premium features enabled</System>
          <Tools>
            <Tool name="premium_feature" ... />
          </Tools>
        </Condition>
      </Condition>

      {/* Natural language condition - evaluated via LLM */}
      <Condition when="user wants to do math or calculations">
        <Tools>
          <Tool name="calculate" ... />
        </Tools>
      </Condition>
    </Agent>
  )
}

Conditions are evaluated before each API call:

  • Boolean conditions (when={boolean}) are checked first
  • Natural language conditions (when="...") are evaluated via LLM

Prompt Caching

Use cache="ephemeral" on <System> or <Context> components to mark dynamic content that shouldn't be cached.

<Agent provider="anthropic" model="claude-sonnet-4-5">
  {/* Stable instructions - will be cached */}
  <System>You are a helpful assistant. Always be concise and accurate.</System>

  {/* Dynamic context - NOT cached (ephemeral) */}
  <Context cache="ephemeral">
    Current user: {user.name}
    Current time: {new Date().toISOString()}
  </Context>

  <Message role="user">What's my name?</Message>
</Agent>

Compaction Control

For long-running conversations, you can enable automatic message compaction to manage context window usage. When the token threshold is exceeded, the framework automatically summarizes previous messages:

<Agent
  provider="anthropic"
  model="claude-haiku-4-5"
  compactionControl={{
    enabled: true,
    contextTokenThreshold: 100000, // Compact when total tokens exceed this
    model: 'claude-haiku-4-5', // Optional: model to use for summarization
    summaryPrompt: 'Summarize the conversation so far', // Optional: custom prompt
  }}
>
  <System>You are a helpful assistant</System>
  <Message role="user">Start a long conversation...</Message>
</Agent>

CompactionControl options:

  • enabled: boolean - Enable/disable compaction
  • contextTokenThreshold?: number - Token threshold to trigger compaction (default: 100000)
  • model?: Model - Model to use for summarization (defaults to agent's model)
  • summaryPrompt?: string - Custom prompt for summarization (optional)

API Reference

run(element, options?)

Runs an agent and returns a result or handle.

// Batch mode
const result: AgentResult = await run(<Agent provider="anthropic">...</Agent>, {
  providers: { anthropic: { client: new Anthropic() } },
})

// Interactive mode
const handle: AgentHandle = await run(<Agent provider="anthropic">...</Agent>, {
  mode: 'interactive',
  providers: { anthropic: { client: new Anthropic() } },
})

Options:

  • mode?: 'batch' | 'interactive' - Execution mode (default: 'batch')
  • providers?: { anthropic?: { client?: Anthropic }; openai?: { client?: OpenAI } } - Provider client map
    • provider is chosen from <Agent provider="...">
    • if omitted, provider clients are created from environment variables by default

createAI(defaults)

Create a defaults-bound runner so you can use ai.run(...) and ai.createAgent(...).

import OpenAI from 'openai'
import { createAI, Agent, Message } from 'agentry'

const ai = createAI({
  providers: { openai: { client: new OpenAI() } },
})

const result = await ai.run(
  <Agent provider="openai" model="gpt-5-mini">
    <Message role="user">Hello</Message>
  </Agent>,
)

Components

Built-ins are provider-owned exports:

  • WebSearch, CodeExecution, MCP from agentry/anthropic or agentry/openai
  • Memory from agentry/anthropic

<Agent>

| Prop | Type | Description |
| --- | --- | --- |
| provider? | 'anthropic' \| 'openai' | AI provider for this agent |
| model | string | Provider model id (e.g. claude-sonnet-4-5, gpt-5-mini) |
| name? | string | Agent identifier |
| description? | string | Agent description |
| maxTokens? | number | Max output tokens (default: 4096) |
| maxIterations? | number | Max tool call iterations (default: 20) |
| stopSequences? | string[] | Stop sequences |
| temperature? | number | Sampling temperature (0-1) |
| stream? | boolean | Enable streaming (default: true) |
| betas? | string[] | Additional Anthropic beta features to enable |
| thinking? | ThinkingConfig | Extended thinking config (provider-dependent). Anthropic: { type: 'enabled', budget_tokens: number, interleaved?: boolean }. OpenAI: { type: 'enabled', effort: 'low' \| 'medium' \| 'high', summary: 'auto' \| 'concise' \| 'detailed' }. Disable: { type: 'disabled' }. |
| websocket? | boolean | Enable WebSocket mode for OpenAI (persistent WebSocket connection reduces per-turn latency ~40% in multi-tool loops) |
| compactionControl? | CompactionControl | Context compaction settings (see below) |
| onMessage? | (event: AgentStreamEvent) => void | Stream event callback |
| onComplete? | (result: AgentResult) => void | Completion callback |
| onError? | (error: Error) => void | Error callback |
| onStepFinish? | (result: OnStepFinishResult) => void | Step completion callback |

CompactionControl:

| Field | Type | Description |
| --- | --- | --- |
| enabled | boolean | Enable/disable compaction |
| contextTokenThreshold? | number | Token threshold to trigger (default: 100000) |
| model? | string | Model for summarization (default: agent's model) |
| summaryPrompt? | string | Custom summary prompt |

<System> / <Context>

| Prop | Type | Description |
| --- | --- | --- |
| children | ReactNode | Content |
| cache? | 'ephemeral' | Mark as non-cacheable for prompt caching |

<Message>

| Prop | Type | Description |
| --- | --- | --- |
| role | 'user' \| 'assistant' | Message role |
| children | ReactNode | Message content |

<Tools>

| Prop | Type | Description |
| --- | --- | --- |
| children | ReactNode | Tool components |

<Tool>

| Prop | Type | Description |
| --- | --- | --- |
| name | string | Tool name |
| description | string | Description for the model |
| parameters | ZodSchema | Zod schema for input validation |
| strict? | boolean | Enable structured outputs (auto-enables beta) |
| handler | (input, context: ToolContext) => Promise<ToolResult> | Tool handler |

<AgentTool>

| Prop | Type | Description |
| --- | --- | --- |
| name | string | Tool name |
| description | string | Description for the model |
| parameters | ZodSchema | Zod schema for input validation |
| agent | (input) => ReactElement<Agent> | Function returning the Agent |

<Condition>

| Prop | Type | Description |
| --- | --- | --- |
| when | boolean \| string | Condition (boolean or NL description evaluated by LLM) |
| provider? | 'anthropic' \| 'openai' | Override provider for NL evaluation (first NL condition's override applies to the batch) |
| model? | AnthropicModel \| OpenAIModel | Override model for NL evaluation (defaults to claude-haiku-4-5 / gpt-4.1-mini if not set) |
| children | ReactNode | Content to render when condition is true |

<WebSearch>

| Prop | Type | Description |
| --- | --- | --- |
| maxUses? | number | Max searches allowed |
| allowedDomains? | string[] | Restrict to these domains |
| blockedDomains? | string[] | Block these domains |
| userLocation? | { city?: string, region?: string, country?: string, timezone?: string } | Location for localized results |

<CodeExecution>

No props. Enables sandboxed code execution.

<Memory>

| Prop | Type | Description |
| --- | --- | --- |
| onView? | (input: { path: string, view_range?: [number, number] }) => Promise<string> | View file/directory handler |
| onCreate? | (input: { path: string, file_text: string }) => Promise<string> | Create file handler |
| onStrReplace? | (input: { path: string, old_str: string, new_str: string }) => Promise<string> | Replace text handler |
| onInsert? | (input: { path: string, insert_line: number, insert_text: string }) => Promise<string> | Insert text handler |
| onDelete? | (input: { path: string }) => Promise<string> | Delete file handler |
| onRename? | (input: { old_path: string, new_path: string }) => Promise<string> | Rename/move handler |
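
As a minimal sketch, the handlers can be backed by any store you like. Here they are implemented against an in-memory `Map` (the store and return messages are illustrative; only the handler signatures come from the table above):

```typescript
// Hypothetical in-memory backing store for <Memory> handlers.
const files = new Map<string, string>()

const onCreate = async (input: { path: string; file_text: string }) => {
  files.set(input.path, input.file_text)
  return `Created ${input.path}`
}

const onView = async (input: { path: string; view_range?: [number, number] }) => {
  const text = files.get(input.path)
  if (text === undefined) return `Not found: ${input.path}`
  const lines = text.split('\n')
  // view_range is 1-indexed and inclusive; default to the whole file
  const [start, end] = input.view_range ?? [1, lines.length]
  return lines.slice(start - 1, end).join('\n')
}

const onStrReplace = async (input: { path: string; old_str: string; new_str: string }) => {
  const text = files.get(input.path) ?? ''
  files.set(input.path, text.replace(input.old_str, input.new_str))
  return `Updated ${input.path}`
}
```

These would then be wired up as `<Memory onView={onView} onCreate={onCreate} onStrReplace={onStrReplace} />`.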

<MCP>

| Prop | Type | Description |
| --- | --- | --- |
| name | string | Server name |
| url | string | SSE endpoint URL |
| authorization_token? | string | Auth token |
| tool_configuration? | { enabled?: boolean, allowed_tools?: string[] } | Tool filtering config |

Hooks

| Hook | Returns | Description |
| --- | --- | --- |
| useExecutionState() | AgentState | Current execution state |
| useMessages() | AgentMessageParam[] | Conversation messages |
| useAgentState() | AgentStoreState | Full agent state |
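
As a sketch of how these hooks might be used inside a composed agent component (the component name and the length-based instruction are illustrative, not part of agentry):

```tsx
import { Agent, System, Message, useExecutionState, useMessages } from 'agentry'

function ObservedAgent() {
  const state = useExecutionState() // AgentState for the current run
  const messages = useMessages()    // conversation so far

  return (
    <Agent provider="anthropic" model="claude-haiku-4-5">
      <System>
        You are a helpful assistant.
        {messages.length > 10 && ' Keep answers short; this is a long conversation.'}
      </System>
      <Message role="user">Hello</Message>
    </Agent>
  )
}
```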

AgentHandle (Interactive Mode)

| Method / Property | Type | Description |
| --- | --- | --- |
| sendMessage(content) | (string) => Promise<AgentResult> | Send a message and get response |
| stream(message) | (string) => AsyncGenerator<AgentStreamEvent, AgentResult> | Stream a response |
| run(firstMessage?) | (string?) => Promise<AgentResult> | Run agent to completion |
| abort() | () => void | Abort current execution |
| close() | () => void | Clean up resources |
| state | AgentState | Current execution state |
| messages | AgentMessageParam[] | Conversation history |
| isRunning | boolean | Whether agent is processing |

Utilities

| Function | Description |
| --- | --- |
| defineTool(options) | Define a tool programmatically. Options: name, description, parameters, strict?, handler |
| defineAgentTool(options) | Define a subagent tool. Options: name, description, parameters, agent |
| createAgent(element, options?) | Create an agent handle without running |
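
For example, defineTool takes the same option shape as the <Tool> component; a sketch (the get_weather tool and its handler body are hypothetical):

```tsx
import { defineTool } from 'agentry'
import { z } from 'zod'

// Building a tool outside JSX, e.g. in a shared module
const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get the current weather for a city',
  parameters: z.object({ city: z.string() }),
  handler: async ({ city }) => {
    // Hypothetical lookup - replace with a real data source
    return `Sunny in ${city}`
  },
})
```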

ToolContext

Tool handlers receive a context object:

| Property | Type | Description |
| --- | --- | --- |
| agentName | string | Name of the current agent |
| provider | 'anthropic' \| 'openai' | Current provider |
| client? | Anthropic \| OpenAI | Current provider client |
| clients? | { anthropic?: Anthropic; openai?: OpenAI } | Provider client map |
| model? | string | Current agent's model |
| signal? | AbortSignal | Abort signal for cancellation |
| metadata? | JsonObject | Custom JSON-like metadata |
| runAgent | (agent: ReactElement, options?: RunAgentOptions) => Promise<AgentResult> | Run an agent programmatically |

RunAgentOptions:

| Field | Type | Description |
| --- | --- | --- |
| provider? | 'anthropic' \| 'openai' | Override provider |
| clients? | { anthropic?: Anthropic; openai?: OpenAI } | Override client map |
| model? | string | Override parent's model |
| maxTokens? | number | Override max tokens |
| temperature? | number | Override temperature |
| signal? | AbortSignal | Custom abort signal |

Requirements

  • Node.js 18+ or Bun
  • React 19+
  • TypeScript 5+
  • Provider API key(s): Anthropic and/or OpenAI

Development

bun install
bun run typecheck
bun test

FAQ

Why call it "Agentry"?

Agent 🤖 + Gantry 🏗️

Why make this?

I wanted to build an AI Agent and was exploring different ways to represent one. I started sketching it out in React and realized the component model made composition and structure really intuitive. React's concepts like hooks, lifecycles, and state made developing the functionality straightforward. Since I wanted it to feel just like writing React, it was the perfect excuse to dig into how the React Reconciler works under the hood and how I could use it for this project.

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Keywords

ai

Package last updated on 02 Mar 2026
