
@tanstack/ai-solid
React hooks for building AI chat interfaces with TanStack AI.
npm install @tanstack/ai-react @tanstack/ai-client
The useChat hook manages chat state, handles streaming responses, and provides a complete chat interface in a single hook.
Design Philosophy (v5 API): you keep control of the input state and call sendMessage() when ready.
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";
import { useState } from "react";
function ChatComponent() {
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });
  const [input, setInput] = useState("");

  const handleSend = () => {
    sendMessage(input);
    setInput("");
  };

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === "Enter" && handleSend()}
        disabled={isLoading}
      />
      <button onClick={handleSend} disabled={isLoading || !input.trim()}>
        Send
      </button>
    </div>
  );
}
interface UseChatOptions {
  // Connection adapter (required)
  connection: ConnectionAdapter

  // Configuration
  initialMessages?: UIMessage[] // Starting messages
  id?: string // Unique chat ID
  body?: Record<string, any> // Extra data to send

  // Callbacks
  onResponse?: (response?: Response) => void
  onChunk?: (chunk: StreamChunk) => void
  onFinish?: (message: UIMessage) => void
  onError?: (error: Error) => void
}

interface UseChatReturn {
  messages: UIMessage[] // Current conversation
  sendMessage: (content: string) => Promise<void> // Send a message
  append: (message) => Promise<void> // Add message programmatically
  reload: () => Promise<void> // Reload last response
  stop: () => void // Stop current generation
  isLoading: boolean // Is generating a response
  error: Error | undefined // Current error
  setMessages: (messages) => void // Set messages manually
  clear: () => void // Clear all messages
}
Connection adapters provide flexible streaming for different scenarios:
SSE (Most Common):
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})
Server Functions:
import { useChat, stream } from '@tanstack/ai-react'

const chat = useChat({
  connection: stream((messages) => serverChatFunction({ messages })),
})
Custom (e.g., WebSockets):
import { useChat } from '@tanstack/ai-react'
import type { ConnectionAdapter } from '@tanstack/ai-client'

const wsAdapter: ConnectionAdapter = {
  async *connect(messages) {
    // Your WebSocket logic
  },
}
const chat = useChat({ connection: wsAdapter })
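To make the custom-adapter pattern concrete, here is a hedged, self-contained sketch of an adapter built over a socket-like object. The ConnectionAdapter shape (an object with an async-generator connect method) follows the snippet above; the Message, StreamChunk, and SocketLike shapes are simplified assumptions for illustration, not the library's actual types.

```typescript
// Simplified stand-ins for the library types (assumptions for this sketch).
type Message = { role: string; content: string };
type StreamChunk =
  | { type: "content"; delta: string; content: string }
  | { type: "done"; finishReason: string };

// Minimal socket abstraction: send a payload, pull the next server frame
// (null when the stream is closed). A real WebSocket would be wrapped
// to expose this pull-based API.
interface SocketLike {
  send(data: string): void;
  next(): Promise<string | null>;
}

interface ConnectionAdapter {
  connect(messages: Message[]): AsyncGenerator<StreamChunk>;
}

function makeSocketAdapter(socket: SocketLike): ConnectionAdapter {
  return {
    async *connect(messages) {
      // Send the conversation, then relay each server frame as a chunk.
      socket.send(JSON.stringify({ messages }));
      let content = "";
      for (let frame = await socket.next(); frame !== null; frame = await socket.next()) {
        content += frame;
        yield { type: "content", delta: frame, content };
      }
      yield { type: "done", finishReason: "stop" };
    },
  };
}
```

The key design point is that connect is a pull-based async generator, so useChat can consume chunks with for await and stop pulling when the user calls stop().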
Your backend should use the chat() method, which automatically handles tool execution in a loop. The expected request body is:
{
  messages: Message[];
  data?: Record<string, any>;
}
Use chat() to stream responses (with automatic tool execution):
import { chat, toStreamResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'
export async function POST(request: Request) {
  const { messages } = await request.json()

  const stream = chat({
    adapter: openaiText(),
    model: 'gpt-4o',
    messages,
    tools: [weatherTool], // Optional: auto-executed in loop
    agentLoopStrategy: maxIterations(5), // Optional: control loop
  })

  // Convert to HTTP streaming response with SSE headers
  return toStreamResponse(stream)
}
The response streams StreamChunk objects as Server-Sent Events:
data: {"type":"content","delta":"Hello","content":"Hello",...}
data: {"type":"tool_call","toolCall":{...},...}
data: {"type":"tool_result","toolCallId":"...","content":"...",...}
data: {"type":"content","delta":" world","content":"Hello world",...}
data: {"type":"done","finishReason":"stop","usage":{...}}
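The data: lines above could be decoded like this. This is only a hedged illustration of the wire format (one JSON payload per data: line); fetchServerSentEvents does this decoding for you, and the ParsedChunk shape here is an assumption, not the library's StreamChunk type.

```typescript
// Loosely-typed chunk shape for illustration only.
type ParsedChunk = { type: string; [key: string]: unknown };

// Decode an SSE payload: keep only `data: ` lines and parse each as JSON.
function parseSseLines(payload: string): ParsedChunk[] {
  return payload
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)) as ParsedChunk);
}
```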
Note: The chat() method automatically executes tools and emits tool_result chunks; you don't need to handle tool execution manually.
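To illustrate what "automatic tool execution in a loop" means, here is a hedged, self-contained sketch of such an agent loop: call the model, run any requested tool, feed the result back, and stop on a final answer or after a maximum number of iterations (mirroring agentLoopStrategy: maxIterations(5) above). The Model and Tool shapes are invented for this sketch and are not the library's internals.

```typescript
// A model turn either requests a tool or produces a final answer
// (assumed shapes for illustration).
type ModelTurn =
  | { kind: "tool_call"; tool: string; args: unknown }
  | { kind: "answer"; text: string };

type Model = (transcript: string[]) => ModelTurn;
type Tool = (args: unknown) => string;

function runAgentLoop(
  model: Model,
  tools: Record<string, Tool>,
  prompt: string,
  maxIterations = 5,
): string {
  const transcript = [prompt];
  for (let i = 0; i < maxIterations; i++) {
    const turn = model(transcript);
    // A final answer ends the loop (analogous to finishReason: "stop").
    if (turn.kind === "answer") return turn.text;
    // Execute the requested tool and append its result for the next turn,
    // mirroring the tool_call / tool_result chunks in the stream above.
    transcript.push(tools[turn.tool](turn.args));
  }
  return "max iterations reached";
}
```

The iteration cap is what an agent-loop strategy controls: without it, a model that keeps requesting tools would loop forever.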
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
const { messages, sendMessage } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
  onChunk: (chunk) => {
    if (chunk.type === 'content') {
      console.log('New token:', chunk.delta)
    }
  },
  onFinish: (message) => {
    console.log('Final message:', message)
    // Save to database, log analytics, etc.
  },
  onError: (error) => {
    console.error('Chat error:', error)
    // Show toast notification, log error, etc.
  },
})
// Send messages programmatically
await sendMessage('Tell me a joke')
import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";
const { sendMessage, isLoading } = useChat({
  connection: fetchServerSentEvents("/api/chat"),
});
const [input, setInput] = useState("");
// Button click
<button onClick={() => sendMessage(input)}>Send</button>
// Enter key
<input onKeyDown={(e) => e.key === "Enter" && sendMessage(input)} />
// Voice input
<button onClick={async () => {
  const transcript = await voiceToText();
  sendMessage(transcript);
}}>🎤 Speak</button>

// Predefined prompts
<button onClick={() => sendMessage("Explain quantum computing")}>
  Ask about quantum computing
</button>
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
const chat = useChat({
  connection: fetchServerSentEvents('/api/chat', {
    headers: {
      Authorization: `Bearer ${token}`,
      'X-Custom-Header': 'value',
    },
  }),
  body: {
    userId: '123',
    sessionId: 'abc',
  },
})
const { messages, sendMessage, append, reload, stop, clear } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})
// Send a simple message
await sendMessage('Hello!')
// Add a message with more control
await append({
  role: 'user',
  content: 'Hello!',
  id: 'custom-id',
})
// Reload the last AI response
await reload()
// Stop the current generation
stop()
// Clear all messages
clear()
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'
function App() {
  const chat1 = useChat({
    id: 'chat-1',
    connection: fetchServerSentEvents('/api/chat'),
  })

  const chat2 = useChat({
    id: 'chat-2',
    connection: fetchServerSentEvents('/api/chat'),
  })

  // Each hook manages independent state
}
import express from 'express'
import { chat, toStreamResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'
const app = express()
app.use(express.json())

app.post('/api/chat', async (req, res) => {
  const { messages } = req.body

  // One line to create a streaming response
  const stream = chat({
    adapter: openaiText(),
    model: 'gpt-4o',
    messages,
  })
  const response = toStreamResponse(stream)

  // Copy headers and stream to the Express response
  response.headers.forEach((value, key) => {
    res.setHeader(key, value)
  })
  const reader = response.body?.getReader()
  if (reader) {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      res.write(value)
    }
  }
  res.end()
})

app.listen(3000)
// app/api/chat/route.ts
import { chat, toStreamResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'
export const runtime = 'edge'
export async function POST(req: Request) {
  const { messages } = await req.json()

  // One line!
  return toStreamResponse(
    chat({
      adapter: openaiText(),
      model: 'gpt-4o',
      messages,
    }),
  )
}
import { createFileRoute } from '@tanstack/react-router'
import { chat, toStreamResponse } from '@tanstack/ai'
import { anthropicText } from '@tanstack/ai-anthropic'
export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages } = await request.json()

        // One line with automatic tool execution!
        return toStreamResponse(
          chat({
            adapter: anthropicText(),
            model: 'claude-sonnet-4-20250514',
            messages,
            tools, // Tools with execute functions
          }),
        )
      },
    },
  },
})
All types are fully exported:
import type {
  UIMessage,
  UseChatOptions,
  UseChatReturn,
  ChatRequestBody,
} from '@tanstack/ai-react'
License: MIT
FAQs
Solid hooks for TanStack AI
The npm package @tanstack/ai-solid receives a total of 12 weekly downloads; based on this, its popularity is classified as not popular.
We found that @tanstack/ai-solid demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 0 open source maintainers collaborating on the project.