# @tanstack/ai-react
React hooks for building AI chat interfaces with TanStack AI.
## Installation

```bash
npm install @tanstack/ai-react @tanstack/ai-client
```
## useChat Hook

The `useChat` hook manages chat state, handles streaming responses, and provides a complete chat interface in a single hook.
**Design Philosophy (v5 API):**

- You control input state
- Just call `sendMessage()` when ready
- No form-centric API: use buttons, keyboard events, or any trigger
- More flexible and less opinionated
### Basic Usage

```tsx
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";
import { useState } from "react";

function ChatComponent() {
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });
  const [input, setInput] = useState("");

  const handleSend = () => {
    sendMessage(input);
    setInput("");
  };

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === "Enter" && handleSend()}
        disabled={isLoading}
      />
      <button onClick={handleSend} disabled={isLoading || !input.trim()}>
        Send
      </button>
    </div>
  );
}
```
## API

### Options

```ts
interface UseChatOptions {
  connection: ConnectionAdapter
  initialMessages?: UIMessage[]
  id?: string
  body?: Record<string, any>
  onResponse?: (response?: Response) => void
  onChunk?: (chunk: StreamChunk) => void
  onFinish?: (message: UIMessage) => void
  onError?: (error: Error) => void
}
```
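A minimal sketch wiring several of these options together; the `id` and initial message are illustrative, and the message object assumes the `id`/`role`/`content` shape used elsewhere in this README:

```ts
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat'),
  id: 'support-chat', // illustrative chat id
  initialMessages: [
    { id: 'greeting', role: 'assistant', content: 'Hi! How can I help?' },
  ],
  body: { userId: '123' }, // extra data sent with each request
  onError: (error) => console.error('Chat failed:', error),
})
```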
### Return Value

```ts
interface UseChatReturn {
  messages: UIMessage[]
  sendMessage: (content: string) => Promise<void>
  append: (message) => Promise<void>
  reload: () => Promise<void>
  stop: () => void
  isLoading: boolean
  error: Error | undefined
  setMessages: (messages) => void
  clear: () => void
}
```
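The `error` and `reload` values pair naturally for a retry UI. A minimal sketch (the component and markup here are illustrative):

```tsx
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

function ChatStatus() {
  const { isLoading, error, reload } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
  })

  if (error) {
    return (
      <div role="alert">
        Something went wrong: {error.message}
        <button onClick={() => reload()}>Retry</button>
      </div>
    )
  }
  return isLoading ? <div>Thinking…</div> : null
}
```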
## Connection Adapters

Connection adapters provide flexible streaming for different scenarios; see the complete guides for details.

### Quick Examples
**SSE (Most Common):**

```ts
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})
```
**Server Functions:**

```ts
import { useChat, stream } from '@tanstack/ai-react'

const chat = useChat({
  connection: stream((messages) => serverChatFunction({ messages })),
})
```
**Custom (e.g., WebSockets):**

```ts
import { useChat } from '@tanstack/ai-react'
import type { ConnectionAdapter } from '@tanstack/ai-client'

const wsAdapter: ConnectionAdapter = {
  async *connect(messages) {
    // Open your transport here and yield StreamChunk objects as they arrive.
  },
}

const chat = useChat({ connection: wsAdapter })
```
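For a fuller picture, here is a hedged sketch of what a WebSocket adapter could look like. The URL and wire format are assumptions: the server here accepts a JSON-encoded message list and replies with one JSON `StreamChunk` per frame, closing the socket when the stream is done.

```ts
import type { ConnectionAdapter } from '@tanstack/ai-client'

const wsAdapter: ConnectionAdapter = {
  async *connect(messages) {
    // Illustrative endpoint; replace with your own.
    const ws = new WebSocket('wss://example.com/api/chat')

    // Buffer incoming frames so the generator can await them one at a time.
    const queue: string[] = []
    let notify: (() => void) | undefined
    let closed = false

    ws.onmessage = (event) => {
      queue.push(event.data)
      notify?.()
    }
    ws.onclose = () => {
      closed = true
      notify?.()
    }
    ws.onopen = () => ws.send(JSON.stringify({ messages }))

    while (true) {
      if (queue.length > 0) {
        // Each frame is assumed to be a JSON-encoded StreamChunk.
        yield JSON.parse(queue.shift()!)
      } else if (closed) {
        return
      } else {
        await new Promise<void>((resolve) => (notify = resolve))
      }
    }
  },
}
```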
## Backend Endpoint

Your backend should use the `chat()` method, which automatically handles tool execution in a loop:

- Receive POST requests with this body:

  ```ts
  {
    messages: Message[];
    data?: Record<string, any>;
  }
  ```

- Use `chat()` to stream responses (with automatic tool execution):
```ts
import { chat, toStreamResponse, maxIterations } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

export async function POST(request: Request) {
  const { messages } = await request.json()

  const stream = chat({
    adapter: openai(),
    model: 'gpt-4o',
    messages,
    tools: [weatherTool], // your tool definition, declared elsewhere
    agentLoopStrategy: maxIterations(5),
  })

  return toStreamResponse(stream)
}
```
The response streams `StreamChunk` objects as Server-Sent Events:

```text
data: {"type":"content","delta":"Hello","content":"Hello",...}
data: {"type":"tool_call","toolCall":{...},...}
data: {"type":"tool_result","toolCallId":"...","content":"...",...}
data: {"type":"content","delta":" world","content":"Hello world",...}
data: {"type":"done","finishReason":"stop","usage":{...}}
```
**Note:** The `chat()` method automatically executes tools and emits `tool_result` chunks, so you don't need to handle tool execution manually!
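If you want to surface this tool activity in the UI, the `onChunk` callback can watch for those chunk types as they stream in. A minimal sketch, assuming the chunk fields shown above:

```ts
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat'),
  onChunk: (chunk) => {
    if (chunk.type === 'tool_call') {
      // The model requested a tool; `toolCall` matches the SSE shape above.
      console.log('Tool requested:', chunk.toolCall)
    } else if (chunk.type === 'tool_result') {
      console.log('Tool finished:', chunk.toolCallId, chunk.content)
    }
  },
})
```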
## Advanced Usage

### With Callbacks
```ts
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const { messages, sendMessage } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
  onChunk: (chunk) => {
    if (chunk.type === 'content') {
      console.log('New token:', chunk.delta)
    }
  },
  onFinish: (message) => {
    console.log('Final message:', message)
  },
  onError: (error) => {
    console.error('Chat error:', error)
  },
})

// Later, e.g. in an event handler:
await sendMessage('Tell me a joke')
```
### Flexible Triggering

```tsx
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";

const { sendMessage, isLoading } = useChat({
  connection: fetchServerSentEvents("/api/chat"),
});
const [input, setInput] = useState("");

// Button click
<button onClick={() => sendMessage(input)}>Send</button>

// Enter key
<input onKeyDown={(e) => e.key === "Enter" && sendMessage(input)} />

// Voice input
<button onClick={async () => {
  const transcript = await voiceToText();
  sendMessage(transcript);
}}>🎤 Speak</button>

// Canned prompt
<button onClick={() => sendMessage("Explain quantum computing")}>
  Ask about quantum computing
</button>
```
### Custom Headers and Body Data

```ts
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat', {
    headers: {
      Authorization: `Bearer ${token}`,
      'X-Custom-Header': 'value',
    },
  }),
  body: {
    userId: '123',
    sessionId: 'abc',
  },
})
```
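On the server, these extra fields arrive alongside `messages`. This sketch assumes the `body` option maps onto the `data` field of the request shape described in Backend Endpoint above:

```ts
import { chat, toStreamResponse } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

export async function POST(request: Request) {
  // `data` is assumed to carry the client's `body` fields.
  const { messages, data } = await request.json()
  console.log('User:', data?.userId, 'Session:', data?.sessionId)

  return toStreamResponse(
    chat({ adapter: openai(), model: 'gpt-4o', messages }),
  )
}
```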
### Programmatic Control

```ts
const { messages, sendMessage, append, reload, stop, clear } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})

await sendMessage('Hello!') // send a user message
await append({ role: 'user', content: 'Hello!', id: 'custom-id' }) // full control over fields
await reload() // regenerate the last assistant response
stop() // abort the in-flight generation
clear() // reset the conversation
```
### Multiple Chats

```tsx
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

function App() {
  // Each id keeps an independent message history.
  const chat1 = useChat({
    id: 'chat-1',
    connection: fetchServerSentEvents('/api/chat'),
  })
  const chat2 = useChat({
    id: 'chat-2',
    connection: fetchServerSentEvents('/api/chat'),
  })
  // ... render both chats
}
```
## Example Backend (Node.js/Express)

```ts
import express from 'express'
import { AI, toStreamResponse } from '@tanstack/ai'
import { OpenAIAdapter } from '@tanstack/ai-openai'

const app = express()
app.use(express.json())

const ai = new AI(new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }))

app.post('/api/chat', async (req, res) => {
  const { messages } = req.body

  const stream = ai.streamChat({
    model: 'gpt-3.5-turbo',
    messages,
  })

  // Copy the SSE headers onto the Express response, then pipe the body.
  const response = toStreamResponse(stream)
  response.headers.forEach((value, key) => {
    res.setHeader(key, value)
  })

  const reader = response.body?.getReader()
  if (reader) {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      res.write(value)
    }
  }
  res.end()
})

app.listen(3000)
```
## Example Backend (Next.js App Router)

```ts
// e.g. app/api/chat/route.ts
import { AI, toStreamResponse } from '@tanstack/ai'
import { OpenAIAdapter } from '@tanstack/ai-openai'

export const runtime = 'edge'

const ai = new AI(new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }))

export async function POST(req: Request) {
  const { messages } = await req.json()

  return toStreamResponse(
    ai.streamChat({
      model: 'gpt-3.5-turbo',
      messages,
    }),
  )
}
```
## Example Backend (TanStack Start)

```ts
import { createFileRoute } from '@tanstack/react-router'
import { AI, toStreamResponse } from '@tanstack/ai'
import { AnthropicAdapter } from '@tanstack/ai-anthropic'

const ai = new AI(
  new AnthropicAdapter({ apiKey: process.env.ANTHROPIC_API_KEY }),
)

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages } = await request.json()

        return toStreamResponse(
          ai.streamChat({
            model: 'claude-3-5-sonnet-20241022',
            messages,
            tools, // your tool definitions, declared elsewhere
          }),
        )
      },
    },
  },
})
```
## TypeScript Types

All types are fully exported:

```ts
import type {
  UIMessage,
  UseChatOptions,
  UseChatReturn,
  ChatRequestBody,
} from '@tanstack/ai-react'
```
## Features

- ✅ Automatic message state management
- ✅ Streaming response handling
- ✅ Loading and error states
- ✅ Simple `sendMessage()` API (v5 style)
- ✅ You control input state (flexible)
- ✅ Abort/stop generation
- ✅ Reload last response
- ✅ Clear conversation
- ✅ Custom headers and body data (via connection adapter options)
- ✅ Callback hooks for lifecycle events
- ✅ Multiple concurrent chats
- ✅ Full TypeScript support
## License
MIT