
@tanstack/ai-solid
React hooks for building AI chat interfaces with TanStack AI.
npm install @tanstack/ai-react @tanstack/ai-client
The useChat hook manages chat state, handles streaming responses, and provides a complete chat interface in a single hook.
Design Philosophy (v5 API): you own the input state and call sendMessage() when ready.

import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";
import { useState } from "react";

function ChatComponent() {
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });
  const [input, setInput] = useState("");

  const handleSend = () => {
    sendMessage(input);
    setInput("");
  };

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === "Enter" && handleSend()}
        disabled={isLoading}
      />
      <button onClick={handleSend} disabled={isLoading || !input.trim()}>
        Send
      </button>
    </div>
  );
}
interface UseChatOptions {
  // Connection adapter (required)
  connection: ConnectionAdapter

  // Configuration
  initialMessages?: UIMessage[] // Starting messages
  id?: string // Unique chat ID
  body?: Record<string, any> // Extra data to send

  // Callbacks
  onResponse?: (response?: Response) => void
  onChunk?: (chunk: StreamChunk) => void
  onFinish?: (message: UIMessage) => void
  onError?: (error: Error) => void
}
interface UseChatReturn {
  messages: UIMessage[] // Current conversation
  sendMessage: (content: string) => Promise<void> // Send a message
  append: (message: UIMessage) => Promise<void> // Add a message programmatically
  reload: () => Promise<void> // Reload the last response
  stop: () => void // Stop the current generation
  isLoading: boolean // Whether a response is being generated
  error: Error | undefined // Current error, if any
  setMessages: (messages: UIMessage[]) => void // Set messages manually
  clear: () => void // Clear all messages
}
Connection adapters provide flexible streaming for different scenarios. See the complete guides:
SSE (Most Common):
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
const chat = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})
Server Functions:
import { useChat, stream } from '@tanstack/ai-react'
const chat = useChat({
  connection: stream((messages) => serverChatFunction({ messages })),
})
Custom (e.g., WebSockets):
import { useChat } from '@tanstack/ai-react'
import type { ConnectionAdapter } from '@tanstack/ai-client'
const wsAdapter: ConnectionAdapter = {
  async *connect(messages) {
    // Your WebSocket logic
  },
}
const chat = useChat({ connection: wsAdapter })
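The custom-adapter example above only stubs the generator body. As a concrete (hypothetical) illustration of the same contract, here is a mock adapter that replays canned chunks — useful in tests or demos with no backend. The local `StreamChunk` and `ConnectionAdapter` types are simplified stand-ins for the real imports from `@tanstack/ai-client`:

```typescript
// Simplified stand-ins for the library types (assumption: the real
// ConnectionAdapter exposes an async-generator `connect(messages)`).
interface StreamChunk {
  type: "content" | "done";
  delta?: string;
  content?: string;
}

interface ConnectionAdapter {
  connect(messages: unknown[]): AsyncGenerator<StreamChunk>;
}

// Replays a fixed script of chunks instead of talking to a server.
function mockAdapter(script: StreamChunk[]): ConnectionAdapter {
  return {
    async *connect(_messages: unknown[]) {
      for (const chunk of script) {
        yield chunk;
      }
    },
  };
}

// Drains an adapter and concatenates the content deltas.
async function drain(adapter: ConnectionAdapter): Promise<string> {
  let text = "";
  for await (const chunk of adapter.connect([])) {
    if (chunk.type === "content" && chunk.delta) text += chunk.delta;
  }
  return text;
}
```

Because the adapter is just an object with an async generator, anything that can yield chunks (WebSockets, polling, local models) fits the same slot.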
Your backend should use the chat() method, which automatically handles tool execution in a loop. The request body sent to your endpoint has this shape:

{
  messages: Message[];
  data?: Record<string, any>;
}
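Before handing that body to the model, a backend may want to validate it. A minimal runtime guard for the shape above might look like this (`isChatRequestBody` is a hypothetical helper, not a library export):

```typescript
// Local mirror of the request body shape documented above.
interface ChatRequestBody {
  messages: Array<{ role: string; content: string }>;
  data?: Record<string, unknown>;
}

// Hypothetical guard: checks the minimum structure before use.
function isChatRequestBody(value: unknown): value is ChatRequestBody {
  if (typeof value !== "object" || value === null) return false;
  const body = value as { messages?: unknown };
  if (!Array.isArray(body.messages)) return false;
  return body.messages.every(
    (m) =>
      typeof m === "object" &&
      m !== null &&
      typeof (m as { role?: unknown }).role === "string" &&
      typeof (m as { content?: unknown }).content === "string",
  );
}
```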
Use chat() to stream responses (with automatic tool execution):

import { chat, toStreamResponse } from '@tanstack/ai'
import { openai } from '@tanstack/ai-openai'

export async function POST(request: Request) {
  const { messages } = await request.json()

  const stream = chat({
    adapter: openai(),
    model: 'gpt-4o',
    messages,
    tools: [weatherTool], // Optional: auto-executed in a loop
    agentLoopStrategy: maxIterations(5), // Optional: control the loop
  })

  // Convert to an HTTP streaming response with SSE headers
  return toStreamResponse(stream)
}
The response streams StreamChunk objects as Server-Sent Events:
data: {"type":"content","delta":"Hello","content":"Hello",...}
data: {"type":"tool_call","toolCall":{...},...}
data: {"type":"tool_result","toolCallId":"...","content":"...",...}
data: {"type":"content","delta":" world","content":"Hello world",...}
data: {"type":"done","finishReason":"stop","usage":{...}}
Note: The chat() method automatically executes tools and emits tool_result chunks; you don't need to handle tool execution manually.
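To make the wire format concrete, here is a small sketch that parses `data:` lines like those above and folds the content deltas back into the final message text. The chunk field names follow the sample SSE lines; this is an illustration, not the library's actual parser:

```typescript
// Chunk shapes inferred from the sample SSE lines above (assumption).
type Chunk =
  | { type: "content"; delta: string; content: string }
  | { type: "tool_call" | "tool_result" | "done"; [k: string]: unknown };

// Parse one `data: {...}` SSE line into a chunk object.
function parseSseLine(line: string): Chunk | null {
  if (!line.startsWith("data:")) return null;
  return JSON.parse(line.slice(5).trim()) as Chunk;
}

// Accumulate content deltas into the final assistant text,
// skipping tool_call / tool_result / done chunks.
function foldContent(lines: string[]): string {
  let text = "";
  for (const line of lines) {
    const chunk = parseSseLine(line);
    if (chunk && chunk.type === "content") text += chunk.delta;
  }
  return text;
}
```

This is essentially what the client does for you: each `content` chunk's `delta` is appended, while the running `content` field always carries the full text so far.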
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
const { messages, sendMessage } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
  onChunk: (chunk) => {
    if (chunk.type === 'content') {
      console.log('New token:', chunk.delta)
    }
  },
  onFinish: (message) => {
    console.log('Final message:', message)
    // Save to database, log analytics, etc.
  },
  onError: (error) => {
    console.error('Chat error:', error)
    // Show a toast notification, log the error, etc.
  },
})
// Send messages programmatically
await sendMessage('Tell me a joke')
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";
import { useState } from "react";

const { sendMessage, isLoading } = useChat({
  connection: fetchServerSentEvents("/api/chat"),
});
const [input, setInput] = useState("");
// Button click
<button onClick={() => sendMessage(input)}>Send</button>
// Enter key
<input onKeyDown={(e) => e.key === "Enter" && sendMessage(input)} />
// Voice input
<button onClick={async () => {
  const transcript = await voiceToText();
  sendMessage(transcript);
}}>🎤 Speak</button>

// Predefined prompts
<button onClick={() => sendMessage("Explain quantum computing")}>
  Ask about quantum computing
</button>
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
const chat = useChat({
  connection: fetchServerSentEvents('/api/chat', {
    headers: {
      Authorization: `Bearer ${token}`,
      'X-Custom-Header': 'value',
    },
  }),
  body: {
    userId: '123',
    sessionId: 'abc',
  },
})
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const { messages, sendMessage, append, reload, stop, clear } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})
// Send a simple message
await sendMessage('Hello!')
// Add a message with more control
await append({
  role: 'user',
  content: 'Hello!',
  id: 'custom-id',
})
// Reload the last AI response
await reload()
// Stop the current generation
stop()
// Clear all messages
clear()
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
function App() {
  const chat1 = useChat({
    id: 'chat-1',
    connection: fetchServerSentEvents('/api/chat'),
  })
  const chat2 = useChat({
    id: 'chat-2',
    connection: fetchServerSentEvents('/api/chat'),
  })
  // Each hook manages independent state
}
import express from 'express'
import { AI, toStreamResponse } from '@tanstack/ai'
import { OpenAIAdapter } from '@tanstack/ai-openai'
const app = express()
app.use(express.json())

const ai = new AI(new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }))

app.post('/api/chat', async (req, res) => {
  const { messages } = req.body

  // One line to create a streaming response
  const stream = ai.streamChat({
    model: 'gpt-3.5-turbo',
    messages,
  })
  const response = toStreamResponse(stream)

  // Copy headers and stream to the Express response
  response.headers.forEach((value, key) => {
    res.setHeader(key, value)
  })
  const reader = response.body?.getReader()
  if (reader) {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      res.write(value)
    }
  }
  res.end()
})

app.listen(3000)
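The header-copy-and-pipe loop in the Express handler can be extracted into a reusable helper. The sketch below does exactly that; `ExpressLikeResponse` is a minimal stand-in interface (an assumption, not an Express type) so the helper stays framework-agnostic and testable:

```typescript
// Minimal stand-in for the parts of Express's `res` that we use.
interface ExpressLikeResponse {
  setHeader(key: string, value: string): void;
  write(chunk: Uint8Array): void;
  end(): void;
}

// Copy headers from a Web `Response` and stream its body into a
// Node-style writable response, then end it.
async function pipeWebResponse(
  response: Response,
  res: ExpressLikeResponse,
): Promise<void> {
  response.headers.forEach((value, key) => res.setHeader(key, value));
  const reader = response.body?.getReader();
  if (reader) {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      res.write(value);
    }
  }
  res.end();
}
```

With this helper, the route body shrinks to `await pipeWebResponse(toStreamResponse(stream), res)`.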
// app/api/chat/route.ts
import { AI, toStreamResponse } from '@tanstack/ai'
import { OpenAIAdapter } from '@tanstack/ai-openai'
export const runtime = 'edge'
const ai = new AI(new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }))

export async function POST(req: Request) {
  const { messages } = await req.json()

  // One line!
  return toStreamResponse(
    ai.streamChat({
      model: 'gpt-3.5-turbo',
      messages,
    }),
  )
}
import { createFileRoute } from '@tanstack/react-router'
import { AI, toStreamResponse } from '@tanstack/ai'
import { AnthropicAdapter } from '@tanstack/ai-anthropic'
const ai = new AI(
  new AnthropicAdapter({ apiKey: process.env.ANTHROPIC_API_KEY }),
)

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages } = await request.json()

        // One line with automatic tool execution
        return toStreamResponse(
          ai.streamChat({
            model: 'claude-3-5-sonnet-20241022',
            messages,
            tools, // Tools with execute functions
          }),
        )
      },
    },
  },
})
All types are fully exported:
import type {
  UIMessage,
  UseChatOptions,
  UseChatReturn,
  ChatRequestBody,
} from '@tanstack/ai-react'
sendMessage() API (v5 style)

License: MIT
FAQs
Solid hooks for TanStack AI
The npm package @tanstack/ai-solid receives a total of 106 weekly downloads; as such, its popularity is classified as not popular.
We found that @tanstack/ai-solid demonstrates a healthy release cadence and project activity: the last version was released less than a year ago, and 4 open-source maintainers collaborate on the project.