# @openassistant/utils

Utility functions for OpenAssistant tools.

## Installation

```bash
npm install @openassistant/utils
```

## Usage
```typescript
import { z } from 'zod';
import { tool as aiToolFactory } from 'ai';
import { tool as lcToolFactory } from '@langchain/core/tools';
import {
  type OpenAssistantTool,
  convertToAiTool,
  convertToLangchainTool,
} from '@openassistant/utils';

const myTool: OpenAssistantTool = {
  name: 'my-tool',
  description: 'My tool description',
  parameters: z.object({}),
  context: {},
  execute: async (args) => {
    return { llmResult: 'result' };
  },
};

// Convert to a Vercel AI SDK tool
const aiTool = convertToAiTool(myTool, aiToolFactory);

// Convert to a LangChain tool
const lcTool = convertToLangchainTool(myTool, lcToolFactory);
```
## ConversationCache

The `ConversationCache` class provides conversation-scoped caching for `ToolOutputManager` instances. Tool outputs persist across multiple requests within the same conversation, while different conversations remain isolated from each other.
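Conceptually, the cache behaves like a per-conversation map: the same conversation ID always resolves to the same manager instance, while different IDs stay isolated. The following is a minimal sketch of that behavior using a stand-in manager class, not the library's actual implementation:

```typescript
// Stand-in for the real ToolOutputManager, just to illustrate the caching behavior.
class FakeToolOutputManager {
  outputs: string[] = [];
}

// Minimal conversation-scoped cache: one manager per conversation ID.
class MiniConversationCache {
  private managers = new Map<string, FakeToolOutputManager>();

  getToolOutputManager(conversationId: string): FakeToolOutputManager {
    let manager = this.managers.get(conversationId);
    if (!manager) {
      manager = new FakeToolOutputManager();
      this.managers.set(conversationId, manager);
    }
    return manager;
  }
}

const cache = new MiniConversationCache();
cache.getToolOutputManager('conv-a').outputs.push('result from request 1');

// A later request in the same conversation sees the earlier output...
const sameConv = cache.getToolOutputManager('conv-a');

// ...while a different conversation gets a fresh, isolated manager.
const otherConv = cache.getToolOutputManager('conv-b');
```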
### Conversation ID Generation

The cache tries several strategies, in priority order, to generate a unique conversation ID:

1. **Message ID (recommended):** uses `message.id` when it is available.
2. **Conversation/thread/session IDs:** looks for a `conversationId`, `threadId`, or `sessionId` property.
3. **Enhanced message content hash:** combines the first 3 messages with available metadata (timestamps, user IDs, message positions, and total message count). Once 3+ messages exist, the ID remains stable across requests, and the added metadata helps prevent collisions between different conversations.
4. **Random ID (last resort):** generates a random ID; the cache will not persist across requests.
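The fallback order above can be sketched as follows. This is an illustrative sketch only, not the library's actual implementation; the function name, prefixes, and hashing details are assumptions:

```typescript
import { createHash, randomUUID } from 'node:crypto';

interface Message {
  id?: string;
  role: string;
  content: string;
  conversationId?: string;
  threadId?: string;
  sessionId?: string;
  timestamp?: number;
  userId?: string;
}

// Sketch of the multi-strategy fallback, in priority order.
function generateConversationId(messages: Message[]): string {
  const first = messages[0];

  // 1. Message ID (recommended).
  if (first?.id) return `msg-${first.id}`;

  // 2. Explicit conversation/thread/session identifiers.
  const explicit = first?.conversationId ?? first?.threadId ?? first?.sessionId;
  if (explicit) return `conv-${explicit}`;

  // 3. Content hash over the first 3 messages and their metadata.
  //    Once 3+ messages exist, the hash input no longer changes,
  //    so the ID stays stable across subsequent requests.
  if (messages.length > 0) {
    const basis = messages
      .slice(0, 3)
      .map((m) => `${m.role}:${m.content}:${m.timestamp ?? ''}:${m.userId ?? ''}`)
      .join('|');
    return `hash-${createHash('sha256').update(basis).digest('hex').slice(0, 16)}`;
  }

  // 4. Last resort: random ID — the cache will not persist across requests.
  return `rand-${randomUUID()}`;
}
```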
**Recommended message structure:**

```typescript
interface RecommendedMessage {
  id: string;
  role: string;
  content: string;
  conversationId?: string;
  threadId?: string;
  sessionId?: string;
  timestamp?: number;
  createdAt?: string;
  userId?: string;
}
```
### Basic Usage

```typescript
import { ConversationCache } from '@openassistant/utils';

// Module-level instance so the cache is shared across requests.
const conversationCache = new ConversationCache();

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Returns the cached ToolOutputManager for this conversation,
  // creating a new one on the first request.
  const { toolOutputManager, conversationId } =
    await conversationCache.getToolOutputManagerForMessages(messages);

  // ... run your model and tools using toolOutputManager ...
}
```
### Configuration Options

```typescript
const conversationCache = new ConversationCache({
  maxConversations: 100,      // maximum number of cached conversations
  ttlMs: 1000 * 60 * 60 * 2,  // expire conversations after 2 hours
  cleanupProbability: 0.1,    // run cleanup on ~10% of accesses
  enableLogging: true,        // log cache activity
});
```
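The cleanup behavior these options control can be sketched as follows. Rather than scanning for stale entries on every access, cleanup runs only with probability `cleanupProbability`, amortizing its cost. This is an illustrative sketch; the library's actual eviction logic may differ:

```typescript
interface Entry<T> {
  value: T;
  lastAccess: number;
}

// Sketch of TTL-based eviction with probabilistic triggering.
class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(
    private maxEntries: number,
    private ttlMs: number,
    private cleanupProbability: number,
  ) {}

  set(key: string, value: T, now = Date.now()): void {
    this.entries.set(key, { value, lastAccess: now });
    // Probabilistic trigger: avoids paying the cleanup cost on every call.
    if (Math.random() < this.cleanupProbability) this.cleanup(now);
  }

  get(key: string, now = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || now - entry.lastAccess > this.ttlMs) return undefined;
    entry.lastAccess = now;
    return entry.value;
  }

  cleanup(now = Date.now()): void {
    // Drop expired entries first.
    for (const [key, entry] of this.entries) {
      if (now - entry.lastAccess > this.ttlMs) this.entries.delete(key);
    }
    // Then evict the least recently used entries until under the max count.
    while (this.entries.size > this.maxEntries) {
      const oldest = [...this.entries.entries()]
        .sort((a, b) => a[1].lastAccess - b[1].lastAccess)[0][0];
      this.entries.delete(oldest);
    }
  }

  get size(): number {
    return this.entries.size;
  }
}
```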
### Advanced Usage

```typescript
// Generate a conversation ID without fetching a manager.
const conversationId = conversationCache.generateConversationId(messages);

// Fetch (or create) the ToolOutputManager for a known conversation ID.
const toolOutputManager = await conversationCache.getToolOutputManager(conversationId);

// Inspect the cache status.
const status = await conversationCache.getStatus();
console.log(status);

// Drop all cached conversations.
conversationCache.clearAll();
```
## Features

- **Data isolation:** prevents cache sharing between different users and conversations
- **Flexible ID generation:** multiple strategies for conversation identification
- **Automatic cleanup:** old conversations are removed based on TTL and max count
- **Memory efficient:** probabilistic cleanup avoids a performance hit on every request
- **Type-safe:** full TypeScript support with proper interfaces
- **Configurable:** all settings can be customized
- **Persistent:** tool outputs persist across requests within the same conversation (when stable IDs are used)