
@flatfile/improv
A powerful TypeScript library for building AI agents with multi-threaded conversations, tool execution, and event handling capabilities
A powerful TypeScript library for building AI-powered applications with three complementary APIs: Solo for simple structured outputs, Agent for complex tool-enabled workflows, and Gig for orchestrating multi-step AI operations. Features type-safe outputs, built-in retry/fallback mechanisms, and support for multiple LLM providers.
import { Agent, Tool, BedrockThreadDriver } from '@flatfile/improv';
import { z } from 'zod';
// Create a custom tool
const calculatorTool = new Tool({
  name: 'calculator',
  description: 'Performs basic arithmetic operations',
  parameters: z.object({
    operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
    a: z.number(),
    b: z.number(),
  }),
  executeFn: async (args) => {
    const { operation, a, b } = args;
    switch (operation) {
      case 'add': return a + b;
      case 'subtract': return a - b;
      case 'multiply': return a * b;
      case 'divide': return a / b;
    }
  }
});
// Initialize the Bedrock driver
const driver = new BedrockThreadDriver({
  model: 'anthropic.claude-3-haiku-20240307-v1:0',
  temperature: 0.7,
});

// Create an agent
const agent = new Agent({
  knowledge: [
    { fact: 'The agent can perform basic arithmetic operations.' }
  ],
  instructions: [
    { instruction: 'Use the calculator tool for arithmetic operations.', priority: 1 }
  ],
  tools: [calculatorTool],
  driver,
});

// Create and use a thread
const thread = agent.createThread({
  prompt: 'What is 25 multiplied by 4?',
  onResponse: async (message) => {
    console.log('Agent response:', message.content);
  }
});

// Send the thread
await thread.send();

// Alternatively, stream the response
const stream = await thread.stream();
for await (const text of stream) {
  process.stdout.write(text); // Print each chunk as it arrives
}
Improv now provides simplified APIs for different AI use cases:
For simple, one-off LLM calls with structured output:
import { Solo } from '@flatfile/improv';
import { z } from 'zod';
const solo = new Solo({
  driver,
  outputSchema: z.object({
    sentiment: z.enum(["positive", "negative", "neutral"]),
    confidence: z.number().min(0).max(1)
  })
});
const result = await solo.ask("Analyze the sentiment: 'This product is amazing!'");
// result.output is fully typed: { sentiment: "positive", confidence: 0.95 }
For complex, multi-step workflows that require tool usage:
import { Agent, Tool } from '@flatfile/improv';
import { z } from 'zod';

const searchTool = new Tool({
  name: "search",
  description: "Search the knowledge base",
  parameters: z.object({ query: z.string() }),
  executeFn: async ({ query }) => {
    // Your search implementation
    return { results: ["Result 1", "Result 2"] };
  }
});

const agent = new Agent({
  driver,
  tools: [searchTool],
  instructions: [
    { instruction: "Always search before answering", priority: 1 }
  ]
});

const thread = agent.createThread({
  prompt: "What are the best practices for error handling?"
});

await thread.send();
Orchestrate multiple AI operations with dependencies and control flow:
import { Gig } from '@flatfile/improv';
import { z } from 'zod';

const gig = new Gig({
  label: "Customer Support Workflow",
  driver
});

// Add pieces sequentially (default behavior)
gig
  .add("classify", groove =>
    `Classify this support request: "${groove.feelVibe("request")}"`, {
      outputSchema: z.enum(["technical", "billing", "general"])
    })
  .add("sentiment", groove =>
    `Analyze sentiment: "${groove.feelVibe("request")}"`, {
      outputSchema: z.enum(["positive", "negative", "neutral"])
    })
  // Pieces can access previous results
  .add("research", groove => {
    const category = groove.recall("classify");
    return `Research solutions for ${category} issue`;
  }, {
    tools: [searchTool],
    driver: cheaperModel // Override driver for cost optimization
  })
  // Explicit parallel execution when needed
  .parallel([
    ["check_status", "Check system status"],
    ["find_similar", "Find similar resolved issues"]
  ])
  .add("respond", groove => {
    const sentiment = groove.recall("sentiment");
    const research = groove.recall("research");
    return `Generate ${sentiment} response using: ${research}`;
  });

// Execute the workflow
const results = await gig.perform({
  request: "My invoice is wrong and I'm very frustrated!"
});
console.log(results.recordings.get("classify")); // "billing"
console.log(results.recordings.get("sentiment")); // "negative"
Key features:
- parallel() for concurrent operations
- groove.recall() for accessing previous results
Create reusable workflow components that can be shared across projects:
import { PieceDefinition } from '@flatfile/improv';
import { z } from 'zod';
// Define a reusable piece
export const sentimentAnalysis: PieceDefinition<"positive" | "negative" | "neutral"> = {
  name: "sentiment",
  play: (groove) => {
    const text = groove.feelVibe("text");
    return `Analyze sentiment of: "${text}"`;
  },
  config: {
    outputSchema: z.enum(["positive", "negative", "neutral"]),
    temperature: 0.1
  },
  meta: {
    version: "1.0.0",
    description: "Analyzes emotional tone of text"
  }
};
// Use in any Gig
gig.add(sentimentAnalysis);
Organizing Pieces with Evaluations:
src/
  pieces/
    sentiment/
      index.ts   # Piece definition (production)
      eval.ts    # Evaluation data (dev only)
This separation ensures evaluation datasets don't get bundled in production builds.
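The evaluation-file format itself isn't documented here, so the following is only a hypothetical sketch (the `EvalCase` shape and `sentimentEvalCases` name are illustrative, not part of the library) of how eval data might live in a separate eval.ts module that production code never imports:

```typescript
// Hypothetical eval.ts — the EvalCase shape and fixture names are
// illustrative only; the library does not prescribe this format.
interface EvalCase {
  input: string;    // text the piece will be asked to analyze
  expected: string; // label the piece should produce
}

export const sentimentEvalCases: EvalCase[] = [
  { input: "I love this product!", expected: "positive" },
  { input: "This is broken again.", expected: "negative" },
  { input: "The package arrived on Tuesday.", expected: "neutral" },
];
```

Because the piece's index.ts never imports this module, bundlers can drop it from production builds.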
Improv supports advanced reasoning capabilities through the reasoning_config option on the thread driver. This allows the AI to perform step-by-step reasoning before providing a final answer.
import { Agent, BedrockThreadDriver } from '@flatfile/improv';
const driver = new BedrockThreadDriver({
  model: 'anthropic.claude-3-7-sonnet-20250219-v1:0',
  temperature: 1,
  reasoning_config: {
    budget_tokens: 1024,
    type: 'enabled',
  },
});

const agent = new Agent({
  driver,
});

const thread = agent.createThread({
  systemPrompt: 'You are a helpful assistant that can answer questions about the world.',
  prompt: 'How many people will live in the world in 2040?',
});
const result = await thread.send();
console.log(result.last());
This example enables the AI to work through its reasoning process with a token budget of 1024 tokens before providing a final answer about population projections.
The main agent class that manages knowledge, instructions, tools, and conversation threads.
const agent = new Agent({
  knowledge?: AgentKnowledge[],      // Array of facts with optional source and timestamp
  instructions?: AgentInstruction[], // Array of prioritized instructions
  memory?: AgentMemory[],            // Array of stored thread histories
  systemPrompt?: string,             // Base system prompt
  tools?: Tool[],                    // Array of available tools
  driver: ThreadDriver,              // Thread driver implementation
  evaluators?: Evaluator[]           // Array of evaluators for response processing
});
Manages a single conversation thread with message history and tool execution.
const thread = new Thread({
  messages?: Message[],        // Array of conversation messages
  tools?: Tool[],              // Array of available tools
  driver: ThreadDriver,        // Thread driver implementation
  toolChoice?: 'auto' | 'any', // Tool selection mode
  maxSteps?: number            // Maximum number of tool execution steps
});
Define custom tools that the agent can use during conversations.
const tool = new Tool({
  name: string,              // Tool name
  description: string,       // Tool description
  parameters: z.ZodTypeAny,  // Zod schema for parameter validation
  followUpMessage?: string,  // Optional message to guide response evaluation
  executeFn: (args: Record<string, any>, toolCall: ToolCall) => Promise<any> // Tool execution function
});
Represents a single message in a conversation thread.
const message = new Message({
  content?: string,                               // Message content
  role: 'system' | 'user' | 'assistant' | 'tool', // Message role
  toolCalls?: ToolCall[],                         // Array of tool calls
  toolResults?: ToolResult[],                     // Array of tool results
  attachments?: Attachment[],                     // Array of attachments
  cache?: boolean                                 // Whether to cache the message
});
The library uses an event-driven architecture. All major components extend EventSource, allowing you to listen for various events:
// Agent events
agent.on('agent.thread-added', ({ agent, thread }) => {});
agent.on('agent.thread-removed', ({ agent, thread }) => {});
agent.on('agent.knowledge-added', ({ agent, knowledge }) => {});
agent.on('agent.instruction-added', ({ agent, instruction }) => {});
// Thread events
thread.on('thread.response', ({ thread, message }) => {});
thread.on('thread.max_steps_reached', ({ thread, steps }) => {});
// Tool events
tool.on('tool.execution.started', ({ tool, name, args }) => {});
tool.on('tool.execution.completed', ({ tool, name, args, result }) => {});
tool.on('tool.execution.failed', ({ tool, name, args, error }) => {});
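The EventSource base class itself isn't shown in this README; as a rough mental model (not the library's actual implementation), the on/emit pattern behind these events can be sketched as:

```typescript
// Minimal on/emit sketch — a stand-in mental model, not the
// library's actual EventSource implementation.
type Listener<T> = (payload: T) => void;

class MiniEmitter {
  private listeners = new Map<string, Listener<any>[]>();

  // Register a handler for a named event
  on<T>(event: string, fn: Listener<T>): void {
    const list = this.listeners.get(event) ?? [];
    list.push(fn);
    this.listeners.set(event, list);
  }

  // Invoke every handler registered for the event
  emit<T>(event: string, payload: T): void {
    for (const fn of this.listeners.get(event) ?? []) fn(payload);
  }
}
```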
Best practices:
- Choosing the Right API: use Solo for simple structured outputs, Agent for complex tool-enabled workflows, and Gig for orchestrating multi-step operations.
- Piece Design: use groove.recall() to access previous results.
- Tool Design
- Workflow Organization: use parallel() only when operations are truly independent; keep evaluation data in eval.ts files.
- Error Handling & Resilience: use onError handlers in Gig pieces.
- Type Safety: use PieceDefinition<T> for type-safe pieces and typed groove.recall() results.
License: MIT
Contributions are welcome! Please read our contributing guidelines for details.
Tools can include follow-up messages that guide the AI's evaluation of tool responses:
const tool = new Tool({
  name: 'dataAnalyzer',
  description: 'Analyzes data and returns insights',
  parameters: z.object({
    data: z.array(z.any()),
    metrics: z.array(z.string())
  }),
  followUpMessage: `Review the analysis results:
1. What are the key insights from the data?
2. Are there any concerning patterns?
3. What actions should be taken based on these results?`,
  executeFn: async (args) => {
    // Tool implementation
  }
});
The library provides several mechanisms for managing state. Chief among them are evaluators, which process and validate agent responses:
const evaluator: Evaluator = async ({ thread, agent }, complete) => {
  // Process the thread response
  const lastMessage = thread.last();
  if (lastMessage?.content?.includes('done')) {
    complete(); // Signal completion
  } else {
    // Continue processing
    thread.send(new Message({
      content: 'Please continue with the task...'
    }));
  }
};

const agent = new Agent({
  // ... other options ...
  evaluators: [evaluator]
});
Evaluators can signal completion by calling complete(), or keep the conversation going by sending follow-up messages on the thread.
The three-keyed lock pattern is a state management pattern that ensures controlled flow through tool execution, evaluation, and completion phases. It's implemented as a reusable evaluator:
import { threeKeyedLockEvaluator } from '@flatfile/improv';
const agent = new Agent({
  // ... other options ...
  evaluators: [
    threeKeyedLockEvaluator({
      evalPrompt: "Are there other items to process? If not, say 'done'",
      exitPrompt: "Please provide a final summary of all actions taken."
    })
  ]
});
The pattern works through three distinct states:
stateDiagram-v2
  [*] --> ToolExecution

  state "Tool Execution" as ToolExecution {
    [*] --> Running
    Running --> Complete
    Complete --> [*]
  }

  state "Evaluation" as Evaluation {
    [*] --> CheckMore
    CheckMore --> [*]
  }

  state "Summary" as Summary {
    [*] --> Summarize
    Summarize --> [*]
  }

  ToolExecution --> Evaluation: Non-tool response
  Evaluation --> ToolExecution: Tool called
  Evaluation --> Summary: No more items
  Summary --> [*]: Complete

  note right of ToolExecution
    isEvaluatingTools = true
    Handles tool execution
  end note

  note right of Evaluation
    isEvaluatingTools = false
    nextMessageIsSummary = false
    Checks for more work
  end note

  note right of Summary
    nextMessageIsSummary = true
    Gets final summary
  end note
The evaluator manages these states through:
- Tool Execution State
- Evaluation State
- Summary State
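The transitions above can be condensed into a small stand-alone state function. This is only a sketch of the described flow, not the evaluator's actual code; the state and event names are illustrative:

```typescript
// Stand-alone sketch of the three-keyed lock transitions described
// above — illustrative names, not the library's implementation.
type LockState = "tool-execution" | "evaluation" | "summary" | "done";
type LockEvent = "non-tool-response" | "tool-called" | "no-more-items" | "summary-received";

function nextState(state: LockState, event: LockEvent): LockState {
  switch (state) {
    case "tool-execution":
      // A non-tool response hands control to the evaluation phase
      return event === "non-tool-response" ? "evaluation" : "tool-execution";
    case "evaluation":
      // More tool work loops back; no more items triggers the summary
      if (event === "tool-called") return "tool-execution";
      if (event === "no-more-items") return "summary";
      return "evaluation";
    case "summary":
      // The final summary completes the evaluation
      return event === "summary-received" ? "done" : "summary";
    default:
      return "done";
  }
}
```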
Example usage with custom prompts:
const workflowAgent = new Agent({
  // ... agent configuration ...
  evaluators: [
    threeKeyedLockEvaluator({
      evalPrompt: "Review the results. Should we process more items?",
      exitPrompt: "Provide a detailed summary of all processed items."
    })
  ]
});
// The evaluator will automatically:
// 1. Let tools execute freely
// 2. After each tool completion, check if more processing is needed
// 3. When no more items need processing, request a final summary
// 4. Complete the evaluation after receiving the summary
This pattern is particularly useful for:
The library uses AWS Bedrock (Claude) as its default LLM provider. Configure your AWS credentials:
// Required environment variables
process.env.AWS_ACCESS_KEY_ID = 'your-access-key';
process.env.AWS_SECRET_ACCESS_KEY = 'your-secret-key';
process.env.AWS_REGION = 'your-region';

// Initialize the driver
const driver = new BedrockThreadDriver({
  model: 'anthropic.claude-3-haiku-20240307-v1:0', // Default model
  temperature?: number, // Default: 0.7
  maxTokens?: number,   // Default: 4096
  cache?: boolean       // Default: false
});
Improv supports multiple LLM providers through dedicated thread drivers:
| Driver | Provider | Documentation |
|---|---|---|
| BedrockThreadDriver | AWS Bedrock (Claude) | Bedrock Driver Documentation |
| OpenAIThreadDriver | OpenAI | OpenAI Driver Documentation |
| CohereThreadDriver | Cohere | Cohere Driver Documentation |
| GeminiThreadDriver | Google Gemini | Gemini Driver Documentation |
| CerebrasThreadDriver | Cerebras | Cerebras Driver Documentation |
Each driver provides a consistent interface while supporting model-specific features:
// OpenAI example
import { OpenAIThreadDriver } from '@flatfile/improv';

const driver = new OpenAIThreadDriver({
  model: 'gpt-4o',
  apiKey: process.env.OPENAI_API_KEY,
  temperature: 0.7
});

// Cohere example
import { CohereThreadDriver } from '@flatfile/improv';

const driver = new CohereThreadDriver({
  model: 'command-r-plus',
  apiKey: process.env.COHERE_API_KEY
});

// Gemini example
import { GeminiThreadDriver } from '@flatfile/improv';

const driver = new GeminiThreadDriver({
  model: 'gemini-1.5-pro',
  apiKey: process.env.GOOGLE_API_KEY
});

// Cerebras example
import { CerebrasThreadDriver } from '@flatfile/improv';

const driver = new CerebrasThreadDriver({
  model: 'llama-4-scout-17b-16e-instruct',
  apiKey: process.env.CEREBRAS_API_KEY
});
Refer to each driver's documentation for available models and specific configuration options.
The library provides decorators for creating tools directly on agent classes:
class CustomAgent extends Agent {
  @ToolName("sampleData")
  @ToolDescription("Sample the original data with the mapping program")
  private async sampleData(
    @ToolParam("count", "Number of records to sample", z.number())
    count: number,
    @ToolParam("seed", "Random seed", z.number().optional())
    seed?: number
  ): Promise<any> {
    return { count, seed };
  }
}
This provides a declarative way to define tools directly on agent classes, with parameter validation driven by the Zod schemas supplied to each @ToolParam decorator.
The library provides built-in support for streaming responses from the AI model:
const thread = agent.createThread({
  prompt: 'What is 25 multiplied by 4?',
});

const stream = await thread.stream();
for await (const text of stream) {
  process.stdout.write(text);
}
// The final response is also available in the thread
console.log(thread.last()?.content);
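The stream is consumed with ordinary for await iteration, so any async-iterable source behaves the same way. A self-contained illustration (mockStream stands in for thread.stream(), which requires a live model; it is not part of the library):

```typescript
// mockStream stands in for thread.stream(); it is not part of the library.
async function* mockStream(): AsyncGenerator<string> {
  for (const chunk of ["25 ", "times ", "4 ", "is ", "100"]) {
    yield chunk;
  }
}

// Accumulate streamed chunks into the full response text
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}
```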
We found that @flatfile/improv demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 15 open source maintainers collaborating on the project.