# ai.matey.react.hooks

Additional specialized React hooks for AI applications. Part of the ai.matey monorepo.

## Installation

```bash
npm install ai.matey.react.hooks
```

## Quick Start
```tsx
import { useAssistant } from 'ai.matey.react.hooks';

function AssistantChat() {
  const { messages, input, handleInputChange, handleSubmit, status } = useAssistant({
    api: '/api/assistant',
    assistantId: 'asst_xxx',
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit" disabled={status === 'in_progress'}>
          Send
        </button>
      </form>
      <p>Status: {status}</p>
    </div>
  );
}
```
## Exports

### Hooks

- `useAssistant` - OpenAI Assistants API integration with thread management
- `useTokenCount` - Token counting and context window tracking
- `useStream` - Low-level stream consumption hook

### Types

- `AssistantMessage`, `Annotation`, `AssistantStatus` - assistant types
- `UseAssistantOptions`, `UseAssistantReturn` - `useAssistant` types
- `UseTokenCountOptions`, `UseTokenCountReturn` - `useTokenCount` types
- `UseStreamOptions`, `UseStreamReturn` - `useStream` types
## API Reference

### useAssistant

React hook for the OpenAI Assistants API with thread and run management.
```ts
const {
  messages,
  input,
  setInput,
  handleInputChange,
  handleSubmit,
  append,
  threadId,
  status,
  stop,
  setMessages,
  error,
} = useAssistant({
  api: '/api/assistant',
  assistantId: 'asst_xxx',
  threadId: 'thread_xxx',
  headers: {},
  body: {},
  onStatus: (status) => {},
  onError: (error) => {},
});
```
`AssistantStatus` values:

- `awaiting_message` - Ready for input
- `in_progress` - Processing the request
- `requires_action` - Tool call pending
- `completed` - Run finished
- `failed` - Run failed
- `cancelled` - Run cancelled
- `expired` - Run expired
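The Quick Start example disables the submit button only during `in_progress`, but a run is also still active while it waits on a tool call. A minimal sketch of a guard covering both cases (the status union mirrors the list above; the helper name `isRunActive` is hypothetical, not part of the package):

```typescript
// Statuses reported by useAssistant, per the list above.
type AssistantStatus =
  | 'awaiting_message'
  | 'in_progress'
  | 'requires_action'
  | 'completed'
  | 'failed'
  | 'cancelled'
  | 'expired';

// Hypothetical helper: treat the run as active while it is either
// processing the request or waiting on a pending tool call.
function isRunActive(status: AssistantStatus): boolean {
  return status === 'in_progress' || status === 'requires_action';
}
```

A component could then write `disabled={isRunActive(status)}` instead of comparing against a single status string.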
### useTokenCount

Track token usage and context window limits.
```ts
const {
  tokenCount,
  maxTokens,
  remainingTokens,
  isNearLimit,
  isOverLimit,
  updateText,
} = useTokenCount({
  model: 'gpt-4',
  text: '',
  warningThreshold: 0.9,
});
```
Supported models (context window sizes):

- `gpt-4`, `gpt-4-turbo`: 128,000 tokens
- `gpt-3.5-turbo`: 16,385 tokens
- `claude-3-opus`, `claude-3-sonnet`: 200,000 tokens
- And more...
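The derived values (`remainingTokens`, `isNearLimit`, `isOverLimit`) follow from simple budget arithmetic against the model's limit. A minimal sketch of that logic, assuming a crude four-characters-per-token estimate (the hook may well use a real tokenizer, and `estimateBudget` is a hypothetical name for illustration only):

```typescript
interface TokenBudget {
  tokenCount: number;
  remainingTokens: number;
  isNearLimit: boolean;
  isOverLimit: boolean;
}

// Hypothetical sketch of the budget math behind useTokenCount.
// The ~4 characters per token figure is a rough heuristic, not
// what the package necessarily uses internally.
function estimateBudget(
  text: string,
  maxTokens: number,
  warningThreshold = 0.9,
): TokenBudget {
  const tokenCount = Math.ceil(text.length / 4);
  return {
    tokenCount,
    remainingTokens: Math.max(0, maxTokens - tokenCount),
    isNearLimit: tokenCount >= maxTokens * warningThreshold,
    isOverLimit: tokenCount > maxTokens,
  };
}
```

With `warningThreshold: 0.9`, `isNearLimit` trips once the text consumes 90% of the context window, letting the UI warn before `isOverLimit` becomes true.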
### useStream

Low-level hook for consuming async iterables/streams.
```ts
const {
  data,
  isStreaming,
  error,
  start,
  stop,
  reset,
} = useStream<ChunkType>({
  onChunk: (chunk) => {},
  onComplete: (data) => {},
  onError: (error) => {},
});
```
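At its core, a hook like this amounts to draining an async iterable while accumulating chunks and forwarding each one to a callback. A self-contained sketch of that loop, outside React (`consumeStream` and `demoStream` are illustrative names, not the hook's internals):

```typescript
// Illustrative core of stream consumption: drain an async iterable,
// collect every chunk, and invoke an optional per-chunk callback.
async function consumeStream<T>(
  source: AsyncIterable<T>,
  onChunk?: (chunk: T) => void,
): Promise<T[]> {
  const data: T[] = [];
  for await (const chunk of source) {
    data.push(chunk);
    onChunk?.(chunk);
  }
  return data;
}

// Example: an async generator standing in for a model's token stream.
async function* demoStream(): AsyncGenerator<string> {
  yield 'Hello';
  yield ', ';
  yield 'world';
}
```

The hook layers React state on top of this loop: `data` and `isStreaming` update as chunks arrive, and `stop` aborts the iteration early.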
## License

MIT - see LICENSE for details.