# @dainprotocol/llm
LLM adapters for the Dain Protocol Agent SDK.
## Overview
This package provides unified adapters for multiple LLM providers, so you can switch between models while keeping a consistent interface.
## Features

- **Provider-agnostic interface**: Write code once, use with any LLM
- **Direct SDK adapters**: Native support for Anthropic, OpenAI, and Vercel AI SDK
- **Streaming support**: Built-in token streaming for all adapters
- **Tool calling**: Unified tool/function calling across providers
- **TypeScript-first**: Full type safety and IntelliSense support
## Installation

```bash
pnpm add @dainprotocol/llm
```
## Supported Providers

### Anthropic (Claude)
Direct integration with Anthropic's Claude models.
```typescript
import { createAnthropicAdapter } from '@dainprotocol/llm';

const llm = createAnthropicAdapter({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  defaultModel: 'claude-3-5-sonnet-20241022',
});

const response = await llm.generate(
  [{ role: 'user', content: 'Hello!' }],
  { model: 'claude-3-5-sonnet-20241022' }
);
```
### OpenAI (GPT)
Direct integration with OpenAI's GPT models.
```typescript
import { createOpenAIAdapter } from '@dainprotocol/llm';

const llm = createOpenAIAdapter({
  apiKey: process.env.OPENAI_API_KEY!,
  defaultModel: 'gpt-4-turbo-preview',
});

const response = await llm.generate(
  [{ role: 'user', content: 'Hello!' }],
  { model: 'gpt-4-turbo-preview' }
);
```
### Vercel AI SDK

Universal adapter that supports multiple providers through the Vercel AI SDK.
```typescript
import { createVercelAdapter } from '@dainprotocol/llm';
import { anthropic } from '@ai-sdk/anthropic';
import { openai } from '@ai-sdk/openai';

const claudeLLM = createVercelAdapter(anthropic('claude-3-5-sonnet-20241022'));
const gptLLM = createVercelAdapter(openai('gpt-4-turbo'));

const response = await claudeLLM.generate(
  [{ role: 'user', content: 'Hello!' }],
  { model: 'claude-3-5-sonnet-20241022' }
);
```
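Because every adapter implements the same `LLMAdapter` interface (documented below), application code doesn't need to know which provider it is talking to. A minimal sketch, assuming the package exports the `LLMAdapter` and `LLMMessage` types shown below and that `LLMResponse` exposes the generated text as `content` (this README doesn't pin that down); the `ask` helper is illustrative, not part of the package:

```typescript
import type { LLMAdapter, LLMMessage } from '@dainprotocol/llm';

// Hypothetical helper: provider-agnostic, works with any adapter.
async function ask(llm: LLMAdapter, prompt: string, model: string): Promise<string> {
  const messages: LLMMessage[] = [{ role: 'user', content: prompt }];
  const response = await llm.generate(messages, { model });
  return response.content; // assumed LLMResponse field
}

// Same call site, different providers.
console.log(await ask(claudeLLM, 'Hello!', 'claude-3-5-sonnet-20241022'));
console.log(await ask(gptLLM, 'Hello!', 'gpt-4-turbo'));
```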
## API Reference

### LLMAdapter Interface

All adapters implement the `LLMAdapter` interface:
```typescript
interface LLMAdapter {
  provider: string;

  generate(
    messages: LLMMessage[],
    config: LLMConfig,
    signal?: AbortSignal
  ): Promise<LLMResponse>;

  stream(
    messages: LLMMessage[],
    config: LLMConfig,
    signal?: AbortSignal
  ): AsyncGenerator<LLMStreamChunk>;

  normalizeMessage(message: any): LLMMessage;
  denormalizeMessage(message: LLMMessage): any;
}
```
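The same interface can be implemented by hand to wrap a provider the package doesn't ship with. A rough sketch; the exact `LLMResponse` and `LLMStreamChunk` shapes are assumed from the examples in this README, and a real adapter would call its provider's API inside `generate` and `stream`:

```typescript
import type { LLMAdapter, LLMMessage } from '@dainprotocol/llm';

// Minimal custom adapter sketch: echoes the last message back.
// Response and chunk shapes here are assumptions, not the package's
// actual types.
const echoAdapter: LLMAdapter = {
  provider: 'echo',

  async generate(messages) {
    const last = messages[messages.length - 1];
    return { content: `echo: ${last.content}`, finishReason: 'stop' };
  },

  async *stream(messages) {
    const last = messages[messages.length - 1];
    yield { type: 'content', content: `echo: ${last.content}` };
    yield { type: 'done', finishReason: 'stop' };
  },

  // This toy adapter's "native" format is already the unified one.
  normalizeMessage: (message) => message as LLMMessage,
  denormalizeMessage: (message) => message,
};
```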
### Streaming
All adapters support streaming:
```typescript
for await (const chunk of llm.stream(messages, config)) {
  if (chunk.type === 'content') {
    process.stdout.write(chunk.content);
  } else if (chunk.type === 'tool_call') {
    console.log('Tool call:', chunk.toolCall);
  } else if (chunk.type === 'done') {
    console.log('Finish reason:', chunk.finishReason);
  }
}
```
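Both `generate` and `stream` accept an optional `AbortSignal`, so generation can be cancelled mid-stream. A sketch; how cancellation surfaces (a thrown abort error versus an early `done` chunk) depends on the underlying SDK:

```typescript
const controller = new AbortController();

// Cancel generation if it runs longer than 10 seconds.
const timer = setTimeout(() => controller.abort(), 10_000);

try {
  for await (const chunk of llm.stream(messages, config, controller.signal)) {
    if (chunk.type === 'content') {
      process.stdout.write(chunk.content);
    }
  }
} finally {
  clearTimeout(timer);
}
```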
### Tool Calling
Define tools and let the LLM use them:
```typescript
const response = await llm.generate(
  [{ role: 'user', content: 'What is 25 * 4?' }],
  {
    model: 'claude-3-5-sonnet-20241022',
    tools: [
      {
        name: 'calculator',
        description: 'Perform mathematical calculations',
        parameters: {
          type: 'object',
          properties: {
            expression: { type: 'string' }
          },
          required: ['expression']
        }
      }
    ]
  }
);

if (response.toolCalls) {
  for (const toolCall of response.toolCalls) {
    console.log(`Tool: ${toolCall.name}`, toolCall.arguments);
  }
}
```
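To complete the loop, the tool result is typically sent back as a `tool` role message (see Message Format below) so the model can answer in plain text. A sketch; the `id` field on the tool call and the `content` field on the response are assumptions, since this README doesn't show the `ToolCall` or `LLMResponse` shapes:

```typescript
if (response.toolCalls) {
  const call = response.toolCalls[0];
  const result = '100'; // pretend we evaluated call.arguments.expression

  const final = await llm.generate(
    [
      { role: 'user', content: 'What is 25 * 4?' },
      // Echo the assistant turn that requested the tool.
      { role: 'assistant', content: '', toolCalls: response.toolCalls },
      // toolCallId links the result to the request; `call.id` is assumed.
      { role: 'tool', name: call.name, toolCallId: call.id, content: result },
    ],
    { model: 'claude-3-5-sonnet-20241022' }
  );

  console.log(final.content); // assumed LLMResponse field
}
```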
### Configuration Options
```typescript
interface LLMConfig {
  model: string;
  temperature?: number;
  maxTokens?: number;
  topP?: number;
  frequencyPenalty?: number;
  presencePenalty?: number;
  stopSequences?: string[];
  tools?: LLMTool[];
  toolChoice?: 'auto' | 'required' | 'none' | { type: 'tool'; name: string };
}
```
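The structured `toolChoice` variant forces a specific tool rather than letting the model decide. A sketch reusing the calculator definition from the Tool Calling section, referenced here as `calculatorTool` for brevity:

```typescript
const response = await llm.generate(
  [{ role: 'user', content: 'Compute 7 * 6.' }],
  {
    model: 'claude-3-5-sonnet-20241022',
    temperature: 0,          // keep tool arguments deterministic-ish
    maxTokens: 256,
    tools: [calculatorTool], // the tool object shown under Tool Calling
    toolChoice: { type: 'tool', name: 'calculator' }, // must call this tool
  }
);
```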
### Message Format
All adapters use a unified message format:
```typescript
interface LLMMessage {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
  name?: string;
  toolCallId?: string;
  toolCalls?: ToolCall[];
}
```
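Each adapter converts between this unified format and its provider's native format via the `normalizeMessage` and `denormalizeMessage` methods from the `LLMAdapter` interface. A sketch; the native shape is provider-specific:

```typescript
const unified: LLMMessage = { role: 'user', content: 'Hello!' };

// Unified -> provider-native payload (shape depends on the adapter).
const native = llm.denormalizeMessage(unified);

// Provider-native -> unified, e.g. for logging or persistence.
const roundTripped = llm.normalizeMessage(native);
```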
## License
MIT