
# Agentry

Compose and reuse your AI agents like React components.

Agentry adapts React's component model for AI agents. Define behavior declaratively, compose agents the way you compose components, and let the framework manage the flow and execution.
> [!WARNING]
> This library is in active development.

> [!NOTE]
> Supports OpenAI and Anthropic models.
## Installation

```sh
bun add agentry react zod

# for Anthropic
bun add @anthropic-ai/sdk
export ANTHROPIC_API_KEY="sk-ant-***"

# for OpenAI
bun add openai
export OPENAI_API_KEY="sk-***"
```
Next, in your `tsconfig.json`:

```json
{
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "react",
    "module": "ESNext",
    "target": "ESNext",
    "moduleResolution": "bundler"
  }
}
```
## Usage

In `agent.tsx`:

```tsx
import Anthropic from '@anthropic-ai/sdk'
import { run, Agent, System, Tools, Tool, Message } from 'agentry'
import { z } from 'zod'

const result = await run(
  <Agent provider="anthropic" model="claude-haiku-4-5" maxTokens={1024}>
    <System>You are a helpful math assistant</System>
    <Tools>
      <Tool
        name="calculator"
        description="Perform calculations"
        parameters={z.object({
          operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
          a: z.number(),
          b: z.number(),
        })}
        handler={async ({ operation, a, b }) => {
          const ops = {
            add: a + b,
            subtract: a - b,
            multiply: a * b,
            divide: a / b,
          }
          return String(ops[operation])
        }}
      />
    </Tools>
    <Message role="user">What is 42 + 17?</Message>
  </Agent>,
  {
    providers: { anthropic: { client: new Anthropic() } },
  },
)

console.log(result.content)
```
Run it:

```sh
bun run agent.tsx
```
## Features

- React state (`useState`) to drive agent behavior dynamically
- `useExecutionState()`, `useMessages()` for reactive state
- `<AgentTool>` to create subagents with type-safe parameters
- `websocket` prop reduces per-turn latency in multi-tool loops
- `context.runAgent()` to spawn agents programmatically from tool handlers
- `<Condition>` to conditionally render agent components based on state or natural language intent
- `strict` on tools for structured outputs

## Providers

Agentry supports multiple providers with a single declarative API.
- `agentry` exports the provider-agnostic core (`run`, `Agent`, hooks, custom tools).
- `agentry/anthropic` and `agentry/openai` export provider-specific built-ins.

```tsx
import Anthropic from '@anthropic-ai/sdk'
import { createAI, Agent, Message, Tools } from 'agentry'
import { WebSearch } from 'agentry/anthropic'

const ai = createAI({
  providers: { anthropic: { client: new Anthropic() } },
})

const result = await ai.run(
  <Agent provider="anthropic" model="claude-sonnet-4-5" maxTokens={1024}>
    <Tools>
      <WebSearch maxUses={3} />
    </Tools>
    <Message role="user">Find the latest React release notes</Message>
  </Agent>,
)
```
## Examples

Want to see code? See `examples/`:

| Example | Description |
|---|---|
| `demo.tsx` | Company research with web search |
| `basic.tsx` | Simple calculator tool |
| `interactive.tsx` | Multi-turn conversations with streaming |
| `subagents.tsx` | Manager delegating to specialists |
| `hooks.tsx` | Hooks, composition, and dynamic tools |
| `web-search.tsx` | Web search workflows |
| `mcp.tsx` | MCP server integration |
| `chatbot.tsx` | Terminal-based chatbot |
| `create-subagent.tsx` | Dynamic subagent creation |
| `anthropic/cache-ephemeral.tsx` | Prompt caching with ephemeral content |
| `conditions.tsx` | State-based and NL condition rendering |
| `anthropic/thinking.tsx` | Extended thinking with interleaved support |
| `workflow.tsx` | Interactive authentication workflow |
| `conversation-persistence.tsx` | Conversation save/load |
| `openai/basic.tsx` | OpenAI Responses API basic usage |
| `cross-provider/subagents.tsx` | OpenAI parent + Anthropic subagents |
| `openai/codex-subagent.tsx` | OpenAI Codex subagent |
| `openai/built-ins.tsx` | OpenAI built-ins: WebSearch, CodeExecution, and MCP |
| `openai/websocket.tsx` | OpenAI WebSocket mode for lower-latency multi-tool loops |
| `compaction.tsx` | Context compaction demo (works with Anthropic and OpenAI) |
Run an example:

```sh
echo "ANTHROPIC_API_KEY=sk-ant-***" > .env
# echo "OPENAI_API_KEY=sk-***" >> .env

bun run example:basic

# OpenAI examples:
bun run example:openai:basic

# provider-agnostic examples (set EXAMPLE_PROVIDER=openai if needed):
bun run example:chatbot

# Anthropic-specific examples:
bun run example:anthropic:thinking
```
## Execution modes

**Batch mode** (default) runs to completion:

```tsx
const result = await run(<Agent provider="anthropic">...</Agent>, {
  providers: { anthropic: { client: new Anthropic() } },
})
```

**Interactive mode** returns a handle for ongoing interaction:

```tsx
const agent = await run(<Agent provider="anthropic">...</Agent>, {
  mode: 'interactive',
  providers: { anthropic: { client: new Anthropic() } },
})

await agent.sendMessage('Hello')

for await (const event of agent.stream('Tell me more')) {
  if (event.type === 'text') process.stdout.write(event.text)
}

agent.close()
```
## Subagents

Create subagents using `<AgentTool>` with type-safe parameters:

```tsx
<Agent name="manager" provider="anthropic" model="claude-haiku-4-5">
  <Tools>
    <AgentTool
      name="researcher"
      description="Research specialist"
      parameters={z.object({
        topic: z.string().describe('The topic to research'),
      })}
      agent={(input) => (
        <Agent name="researcher">
          <System>You are a research expert.</System>
          <Message role="user">Research: {input.topic}</Message>
        </Agent>
      )}
    />
  </Tools>
</Agent>
```
The manager can call `researcher(topic="...")`, and the framework spawns and runs the subagent with the provided parameters.
Spawn agents programmatically from within tool handlers using `context.runAgent()`. This allows for conditional agent creation, parallel execution, and dynamic agent selection based on runtime data:

```tsx
<Agent provider="anthropic" model="claude-haiku-4-5">
  <Tools>
    <Tool
      name="analyze_code"
      description="Analyze code by spawning a specialist agent"
      parameters={z.object({
        code: z.string(),
        language: z.enum(['python', 'typescript', 'rust']),
      })}
      handler={async (input, context) => {
        // Spawn different agents based on language
        const result = await context.runAgent(
          input.language === 'python' ? (
            <Agent name="python-expert">
              <System>You are a Python expert</System>
              <Message role="user">Analyze: {input.code}</Message>
            </Agent>
          ) : (
            <Agent name="typescript-expert">
              <System>You are a TypeScript expert</System>
              <Message role="user">Analyze: {input.code}</Message>
            </Agent>
          ),
        )
        return result.content
      }}
    />
  </Tools>
</Agent>
```
You can also spawn multiple agents in parallel:

```tsx
handler={async (input, context) => {
  const [techResult, bizResult] = await Promise.all([
    context.runAgent(<TechnicalAnalyst content={input.content} />),
    context.runAgent(<BusinessAnalyst content={input.content} />),
  ])
  return `Tech: ${techResult.content}\nBiz: ${bizResult.content}`
}}
```
`<AgentTool>` and `context.runAgent(...)` can run subagents on a different provider than the parent:

```tsx
import Anthropic from '@anthropic-ai/sdk'
import OpenAI from 'openai'
import { createAI, Agent, AgentTool, Message, Tools } from 'agentry'
import { z } from 'zod'

const ai = createAI({
  providers: {
    openai: { client: new OpenAI() },
    anthropic: { client: new Anthropic() },
  },
})

await ai.run(
  <Agent provider="openai" model="gpt-5-mini">
    <Tools>
      <AgentTool
        name="claude_researcher"
        description="Research with Anthropic"
        parameters={z.object({ topic: z.string() })}
        agent={({ topic }) => (
          <Agent provider="anthropic" model="claude-sonnet-4-5">
            <Message role="user">Research: {topic}</Message>
          </Agent>
        )}
      />
    </Tools>
    <Message role="user">Use claude_researcher for React 19 updates.</Message>
  </Agent>,
)
```
## Dynamic tools

Tools can be added or removed during execution using React state:

```tsx
function DynamicAgent() {
  const [hasAdvanced, setHasAdvanced] = useState(false)
  return (
    <Agent provider="anthropic" model="claude-haiku-4-5">
      <System>
        You are a helpful assistant that can analyze technical and business content.
        You can unlock advanced analysis tools by calling the unlock_advanced tool.
      </System>
      <Tools>
        <Tool
          name="unlock_advanced"
          parameters={z.object({})}
          handler={async () => {
            setHasAdvanced(true) // Adds the new tool on the next render
            return 'Unlocked!'
          }}
        />
        {hasAdvanced && <Tool name="advanced_analysis" ... />}
      </Tools>
      <Message role="user">Analyze the following content: {input.content}</Message>
    </Agent>
  )
}
```
> [!WARNING]
> `<Condition />` is experimental and might change in future versions.
## Conditions

Use `<Condition>` to conditionally render agent components based on state or natural language intent. Conditions support both boolean and natural language evaluation:

```tsx
function AuthAgent() {
  const [isAuthenticated, setIsAuthenticated] = useState(false)
  const [isPremium, setIsPremium] = useState(false)
  return (
    <Agent provider="anthropic" model="claude-haiku-4-5">
      {/* Boolean condition */}
      <Condition when={!isAuthenticated}>
        <System>Please authenticate first</System>
        <Tools>
          <Tool
            name="authenticate"
            handler={async () => {
              setIsAuthenticated(true)
              return 'Authenticated!'
            }}
          />
        </Tools>
      </Condition>
      <Condition when={isAuthenticated}>
        <System>You are authenticated</System>
        <Tools>
          <Tool name="protected_action" ... />
        </Tools>
        {/* Nested condition - only accessible when authenticated AND premium */}
        <Condition when={isPremium}>
          <System>Premium features enabled</System>
          <Tools>
            <Tool name="premium_feature" ... />
          </Tools>
        </Condition>
      </Condition>
      {/* Natural language condition - evaluated via LLM */}
      <Condition when="user wants to do math or calculations">
        <Tools>
          <Tool name="calculate" ... />
        </Tools>
      </Condition>
    </Agent>
  )
}
```
Conditions are evaluated before each API call:

- Boolean conditions (`when={boolean}`) are checked first
- Natural language conditions (`when="..."`) are evaluated via LLM

## Prompt caching

Use `cache="ephemeral"` on `<System>` or `<Context>` components to mark dynamic content that shouldn't be cached.
```tsx
<Agent provider="anthropic" model="claude-sonnet-4-5">
  {/* Stable instructions - will be cached */}
  <System>You are a helpful assistant. Always be concise and accurate.</System>

  {/* Dynamic context - NOT cached (ephemeral) */}
  <Context cache="ephemeral">
    Current user: {user.name}
    Current time: {new Date().toISOString()}
  </Context>

  <Message role="user">What's my name?</Message>
</Agent>
```
## Context compaction

For long-running conversations, you can enable automatic message compaction to manage context window usage. When the token threshold is exceeded, the framework automatically summarizes previous messages:

```tsx
<Agent
  provider="anthropic"
  model="claude-haiku-4-5"
  compactionControl={{
    enabled: true,
    contextTokenThreshold: 100000, // Compact when total tokens exceed this
    model: 'claude-haiku-4-5', // Optional: model to use for summarization
    summaryPrompt: 'Summarize the conversation so far', // Optional: custom prompt
  }}
>
  <System>You are a helpful assistant</System>
  <Message role="user">Start a long conversation...</Message>
</Agent>
```
`CompactionControl` options:

- `enabled: boolean` - Enable/disable compaction
- `contextTokenThreshold?: number` - Token threshold to trigger compaction (default: 100000)
- `model?: Model` - Model to use for summarization (defaults to the agent's model)
- `summaryPrompt?: string` - Custom prompt for summarization (optional)

## API

### run(element, options?)

Runs an agent and returns a result or handle.
```tsx
// Batch mode
const result: AgentResult = await run(<Agent provider="anthropic">...</Agent>, {
  providers: { anthropic: { client: new Anthropic() } },
})

// Interactive mode
const handle: AgentHandle = await run(<Agent provider="anthropic">...</Agent>, {
  mode: 'interactive',
  providers: { anthropic: { client: new Anthropic() } },
})
```
Options:

- `mode?: 'batch' | 'interactive'` - Execution mode (default: 'batch')
- `providers?: { anthropic?: { client?: Anthropic }; openai?: { client?: OpenAI } }` - Provider client map; each agent selects its provider via `<Agent provider="...">`

### createAI(defaults)

Create a defaults-bound runner so you can use `ai.run(...)` and `ai.createAgent(...)`.
```tsx
import OpenAI from 'openai'
import { createAI, Agent, Message } from 'agentry'

const ai = createAI({
  providers: { openai: { client: new OpenAI() } },
})

const result = await ai.run(
  <Agent provider="openai" model="gpt-5-mini">
    <Message role="user">Hello</Message>
  </Agent>,
)
```
Built-ins are provider-owned exports:

- `WebSearch`, `CodeExecution`, `MCP` from `agentry/anthropic` or `agentry/openai`
- `Memory` from `agentry/anthropic`

### &lt;Agent&gt;

| Prop | Type | Description |
|---|---|---|
| `provider?` | `'anthropic' \| 'openai'` | AI provider for this agent |
| `model` | `string` | Provider model id (e.g. `claude-sonnet-4-5`, `gpt-5-mini`) |
| `name?` | `string` | Agent identifier |
| `description?` | `string` | Agent description |
| `maxTokens?` | `number` | Max output tokens (default: 4096) |
| `maxIterations?` | `number` | Max tool-call iterations (default: 20) |
| `stopSequences?` | `string[]` | Stop sequences |
| `temperature?` | `number` | Sampling temperature (0-1) |
| `stream?` | `boolean` | Enable streaming (default: true) |
| `betas?` | `string[]` | Additional Anthropic beta features to enable |
| `thinking?` | `ThinkingConfig` | Extended thinking config (provider-dependent). Anthropic: `{ type: 'enabled', budget_tokens: number, interleaved?: boolean }`. OpenAI: `{ type: 'enabled', effort: 'low' \| 'medium' \| 'high', summary: 'auto' \| 'concise' \| 'detailed' }`. Disable: `{ type: 'disabled' }`. |
| `websocket?` | `boolean` | Enable WebSocket mode for OpenAI (reduces per-turn latency ~40% in multi-tool loops via a persistent WebSocket connection) |
| `compactionControl?` | `CompactionControl` | Context compaction settings (see below) |
| `onMessage?` | `(event: AgentStreamEvent) => void` | Stream event callback |
| `onComplete?` | `(result: AgentResult) => void` | Completion callback |
| `onError?` | `(error: Error) => void` | Error callback |
| `onStepFinish?` | `(result: OnStepFinishResult) => void` | Step completion callback |
`CompactionControl`:

| Field | Type | Description |
|---|---|---|
| `enabled` | `boolean` | Enable/disable compaction |
| `contextTokenThreshold?` | `number` | Token threshold to trigger (default: 100000) |
| `model?` | `string` | Model for summarization (default: agent's model) |
| `summaryPrompt?` | `string` | Custom summary prompt |
### &lt;System&gt; / &lt;Context&gt;

| Prop | Type | Description |
|---|---|---|
| `children` | `ReactNode` | Content |
| `cache?` | `'ephemeral'` | Mark as non-cacheable for prompt caching |
### &lt;Message&gt;

| Prop | Type | Description |
|---|---|---|
| `role` | `'user' \| 'assistant'` | Message role |
| `children` | `ReactNode` | Message content |
### &lt;Tools&gt;

| Prop | Type | Description |
|---|---|---|
| `children` | `ReactNode` | Tool components |
### &lt;Tool&gt;

| Prop | Type | Description |
|---|---|---|
| `name` | `string` | Tool name |
| `description` | `string` | Description for the model |
| `parameters` | `ZodSchema` | Zod schema for input validation |
| `strict?` | `boolean` | Enable structured outputs (auto-enables the beta) |
| `handler` | `(input, context: ToolContext) => Promise<ToolResult>` | Tool handler |
### &lt;AgentTool&gt;

| Prop | Type | Description |
|---|---|---|
| `name` | `string` | Tool name |
| `description` | `string` | Description for the model |
| `parameters` | `ZodSchema` | Zod schema for input validation |
| `agent` | `(input) => ReactElement<Agent>` | Function returning the Agent |
### &lt;Condition&gt;

| Prop | Type | Description |
|---|---|---|
| `when` | `boolean \| string` | Condition (boolean, or NL description evaluated by LLM) |
| `provider?` | `'anthropic' \| 'openai'` | Override provider for NL evaluation (first NL condition's override applies to the batch) |
| `model?` | `AnthropicModel \| OpenAIModel` | Override model for NL evaluation (defaults to `claude-haiku-4-5` / `gpt-4.1-mini` if not set) |
| `children` | `ReactNode` | Content to render when the condition is true |
### &lt;WebSearch&gt;

| Prop | Type | Description |
|---|---|---|
| `maxUses?` | `number` | Max searches allowed |
| `allowedDomains?` | `string[]` | Restrict to these domains |
| `blockedDomains?` | `string[]` | Block these domains |
| `userLocation?` | `{ city?: string, region?: string, country?: string, timezone?: string }` | Location for localized results |
### &lt;CodeExecution&gt;

No props. Enables sandboxed code execution.
### &lt;Memory&gt;

| Prop | Type | Description |
|---|---|---|
| `onView?` | `(input: { path: string, view_range?: [number, number] }) => Promise<string>` | View file/directory handler |
| `onCreate?` | `(input: { path: string, file_text: string }) => Promise<string>` | Create file handler |
| `onStrReplace?` | `(input: { path: string, old_str: string, new_str: string }) => Promise<string>` | Replace text handler |
| `onInsert?` | `(input: { path: string, insert_line: number, insert_text: string }) => Promise<string>` | Insert text handler |
| `onDelete?` | `(input: { path: string }) => Promise<string>` | Delete file handler |
| `onRename?` | `(input: { old_path: string, new_path: string }) => Promise<string>` | Rename/move handler |
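The handlers above are just callbacks, so they can be backed by anything. A minimal sketch using an in-memory `Map` (the `store` variable and return strings are illustrative; only the handler signatures come from the table):

```typescript
// In-memory backing store for <Memory> handlers.
const store = new Map<string, string>()

export async function onCreate(input: { path: string; file_text: string }): Promise<string> {
  store.set(input.path, input.file_text)
  return `Created ${input.path}`
}

export async function onView(input: { path: string; view_range?: [number, number] }): Promise<string> {
  const text = store.get(input.path)
  if (text === undefined) throw new Error(`No such file: ${input.path}`)
  if (!input.view_range) return text
  // view_range assumed to be 1-based and inclusive
  const [start, end] = input.view_range
  return text.split('\n').slice(start - 1, end).join('\n')
}

export async function onStrReplace(input: { path: string; old_str: string; new_str: string }): Promise<string> {
  const text = store.get(input.path)
  if (text === undefined) throw new Error(`No such file: ${input.path}`)
  store.set(input.path, text.replace(input.old_str, input.new_str))
  return `Replaced in ${input.path}`
}

export async function onDelete(input: { path: string }): Promise<string> {
  store.delete(input.path)
  return `Deleted ${input.path}`
}
```

Wire them up as `<Memory onView={onView} onCreate={onCreate} onStrReplace={onStrReplace} onDelete={onDelete} />`.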
### &lt;MCP&gt;

| Prop | Type | Description |
|---|---|---|
| `name` | `string` | Server name |
| `url` | `string` | SSE endpoint URL |
| `authorization_token?` | `string` | Auth token |
| `tool_configuration?` | `{ enabled?: boolean, allowed_tools?: string[] }` | Tool filtering config |
### Hooks

| Hook | Returns | Description |
|---|---|---|
| `useExecutionState()` | `AgentState` | Current execution state |
| `useMessages()` | `AgentMessageParam[]` | Conversation messages |
| `useAgentState()` | `AgentStoreState` | Full agent state |
### AgentHandle (interactive mode)

| Method / Property | Type | Description |
|---|---|---|
| `sendMessage(content)` | `(string) => Promise<AgentResult>` | Send a message and get the response |
| `stream(message)` | `(string) => AsyncGenerator<AgentStreamEvent, AgentResult>` | Stream a response |
| `run(firstMessage?)` | `(string?) => Promise<AgentResult>` | Run the agent to completion |
| `abort()` | `() => void` | Abort the current execution |
| `close()` | `() => void` | Clean up resources |
| `state` | `AgentState` | Current execution state |
| `messages` | `AgentMessageParam[]` | Conversation history |
| `isRunning` | `boolean` | Whether the agent is processing |
### Functions

| Function | Description |
|---|---|
| `defineTool(options)` | Define a tool programmatically. Options: `name`, `description`, `parameters`, `strict?`, `handler` |
| `defineAgentTool(options)` | Define a subagent tool. Options: `name`, `description`, `parameters`, `agent` |
| `createAgent(element, options?)` | Create an agent handle without running it |
### ToolContext

Tool handlers receive a context object:

| Property | Type | Description |
|---|---|---|
| `agentName` | `string` | Name of the current agent |
| `provider` | `'anthropic' \| 'openai'` | Current provider |
| `client?` | `Anthropic \| OpenAI` | Current provider client |
| `clients?` | `{ anthropic?: Anthropic; openai?: OpenAI }` | Provider client map |
| `model?` | `string` | Current agent's model |
| `signal?` | `AbortSignal` | Abort signal for cancellation |
| `metadata?` | `JsonObject` | Custom JSON-like metadata |
| `runAgent` | `(agent: ReactElement, options?: RunAgentOptions) => Promise<AgentResult>` | Run an agent programmatically |
`RunAgentOptions`:

| Field | Type | Description |
|---|---|---|
| `provider?` | `'anthropic' \| 'openai'` | Override provider |
| `clients?` | `{ anthropic?: Anthropic; openai?: OpenAI }` | Override client map |
| `model?` | `string` | Override the parent's model |
| `maxTokens?` | `number` | Override max tokens |
| `temperature?` | `number` | Override temperature |
| `signal?` | `AbortSignal` | Custom abort signal |
## Development

```sh
bun install
bun run typecheck
bun test
```
## Why "Agentry"?

Agent 🤖 + Gantry 🏗️

I wanted to build an AI agent and was exploring different ways to represent one. I started sketching it out in React and realized the component model made composition and structure really intuitive. React concepts like hooks, lifecycles, and state made developing the functionality straightforward. Since I wanted it to feel just like writing React, it was the perfect excuse to dig into how the React Reconciler works under the hood and how I could use it for this project.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

MIT