
@platformatic/ai-client
A TypeScript client for streaming AI responses from Platformatic AI services. Browser and Node.js compatible.
buildClient and ask are the only functions needed to handle all AI interactions.
Install the package:
npm install @platformatic/ai-client
Quick start in the browser:
import { buildClient } from "@platformatic/ai-client";
const client = buildClient({
url: "https://your-ai-service.com",
headers: {
Authorization: "Bearer your-api-key",
},
});
// Streaming request
const response = await client.ask({
prompt: "List the first 5 prime numbers",
stream: true,
});
for await (const message of response.stream) {
if (message.type === "content") {
console.log(message.content);
} else if (message.type === "error") {
console.error("Stream error:", message.error.message);
break;
}
}
In Node.js the same API applies; you can also set a request timeout:
import { buildClient } from "@platformatic/ai-client";
const client = buildClient({
url: process.env.AI_URL || "http://127.0.0.1:3042",
headers: {
Authorization: "Bearer your-api-key",
},
timeout: 30000
});
// Example usage same as browser
The Platformatic AI service provides two endpoints:
- /api/v1/stream - For streaming responses (Server-Sent Events)
- /api/v1/prompt - For direct responses (JSON)
A complete streaming example:
import { buildClient } from "@platformatic/ai-client";
const client = buildClient({
url: process.env.AI_URL || "http://127.0.0.1:3042",
headers: {
Authorization: "Bearer your-api-key",
},
timeout: 30000,
});
try {
const response = await client.ask({
prompt: "List the first 5 prime numbers",
stream: true,
});
console.log(
"Response headers:",
Object.fromEntries(response.headers.entries()),
);
for await (const message of response.stream) {
if (message.type === "content") {
process.stdout.write(message.content);
} else if (message.type === "done") {
console.log("\n\n*** Stream completed!");
console.log("Final response:", message.response);
} else if (message.type === "error") {
console.error("\n! Stream error:", message.error.message);
break;
}
}
console.log("\n*** Stream ended");
} catch (error) {
console.error("! Error:", error.message);
process.exit(1);
}
A direct (non-streaming) request returns the complete response at once:
import { buildClient } from "@platformatic/ai-client";
const client = buildClient({
url: process.env.AI_URL || "http://127.0.0.1:3042",
headers: {
Authorization: "Bearer your-api-key",
},
});
try {
const response = await client.ask({
prompt: "Please give me the first 10 prime numbers",
models: ["gemini:gemini-2.5-flash"],
stream: false,
});
console.log("Headers:", Object.fromEntries(response.headers.entries()));
console.log("Response:", response.content);
} catch (error) {
console.error("Error:", error.message);
process.exit(1);
}
The client provides multiple ways to handle errors:
try {
const response = await client.ask({
prompt: "Hello AI",
sessionId: "user-123",
});
for await (const message of response.stream) {
if (message.type === "error") {
// Handle AI service errors
console.error("AI service error:", message.error.message);
break; // Stop processing on error
} else if (message.type === "content") {
console.log("Received:", message.content);
} else if (message.type === "done") {
console.log("Final response:", message.response);
}
}
} catch (error) {
// Handle request-level errors (HTTP errors, timeouts, etc.)
console.error("Request failed:", error.message);
}
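Request-level failures also include timeouts. Since the client uses AbortSignal.timeout() internally, a timed-out request can be detected by the error name (a minimal sketch; the exact error name may vary by runtime, so treat the check below as an assumption):
try {
  const response = await client.ask({
    prompt: "Hello AI",
    stream: false,
  });
  console.log(response.content);
} catch (error) {
  // "TimeoutError" is what AbortSignal.timeout() raises in modern runtimes;
  // older runtimes may surface "AbortError" instead (assumption).
  if (error.name === "TimeoutError" || error.name === "AbortError") {
    console.error("Request timed out");
  } else {
    console.error("Request failed:", error.message);
  }
}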
The client supports two model formats:
String format:
const response = await client.ask({
prompt: "Hello AI",
models: ["openai:gpt-4"],
});
Object format:
const response = await client.ask({
prompt: "Hello AI",
models: [
{
provider: "openai",
model: "gpt-4",
},
],
});
You can specify multiple models for fallback scenarios:
const response = await client.ask({
prompt: "Hello AI",
models: [
"openai:gpt-4",
"openai:gpt-3.5-turbo",
"deepseek:deepseek-chat",
"gemini:gemini-2.5-flash",
],
});
// Or using mixed formats
const response = await client.ask({
prompt: "Hello AI",
models: [
"openai:gpt-4",
{ provider: "deepseek", model: "deepseek-chat" },
"gemini:gemini-2.5-flash",
],
});
The AI service will try each model in order until one succeeds. Models must match the ones declared in the ai-warp service.
The client supports conversation continuity through session IDs:
When you make your first request without a sessionId, the AI service creates a new session:
// First request - no sessionId provided
const response = await client.ask({
prompt: "Hello, I'm planning a trip to Italy",
stream: false,
});
// The sessionId is available in both the response content and headers
console.log("New session:", response.content.sessionId);
console.log("Session from header:", response.headers.get("x-session-id"));
Use the returned sessionId in subsequent requests to maintain conversation context:
const sessionId = response.content.sessionId;
// Follow-up request using the same sessionId
const followUp = await client.ask({
prompt: "What's the weather like there in spring?",
sessionId: sessionId, // Continue the conversation
stream: false,
});
// The AI will remember the previous context about Italy
Session management works the same way with streaming responses:
const response = await client.ask({
prompt: "Tell me about Rome",
stream: true,
});
let sessionId;
// The sessionId is also available immediately in the response headers
console.log("Session from header:", response.headers.get("x-session-id"));
for await (const message of response.stream) {
if (message.type === "done" && message.response) {
sessionId = message.response.sessionId;
console.log("Session ID:", sessionId);
}
}
// Use the sessionId for the next request
const nextResponse = await client.ask({
prompt: "What are the best restaurants there?",
sessionId: sessionId,
stream: true,
});
The client includes automatic stream resume functionality for fault-tolerant streaming. When a streaming connection is interrupted, you can seamlessly resume from where you left off.
By default, streaming requests with a sessionId will automatically resume from the last event:
// Start a streaming conversation
const response1 = await client.ask({
prompt: "Write a long story about space exploration",
stream: true,
});
// The sessionId is available in the response headers
const sessionId = response1.headers.get("x-session-id");
// Consume part of the stream, then connection is interrupted
for await (const message of response1.stream) {
if (message.type === "content") {
console.log(message.content);
// Connection interrupted here...
break;
}
}
// Resume the stream automatically - no configuration needed
const response2 = await client.ask({
prompt: "Continue the story", // This will be ignored for resume
sessionId: sessionId, // Triggers automatic resume
stream: true, // + streaming = auto-resume
// resume: true // Default behavior
});
// Continue receiving the remaining content
for await (const message of response2.stream) {
if (message.type === "content") {
console.log(message.content); // Continues from where it left off
} else if (message.type === "done") {
console.log("Story completed!");
}
}
You can explicitly control resume behavior:
// Disable automatic resume for a fresh response
const freshResponse = await client.ask({
prompt: "Start a new conversation",
sessionId: existingSessionId,
stream: true,
resume: false // Force new request instead of resume
});
// Enable resume explicitly (same as default)
const resumeResponse = await client.ask({
prompt: "Continue previous conversation",
sessionId: existingSessionId,
stream: true,
resume: true // Explicit resume (default behavior)
});
Automatic resume applies when sessionId + stream: true + resume: true (the default) are all present.
The client supports custom logging through the logger option. By default, the client uses a silent logger that produces no output.
import { buildClient, consoleLogger, nullLogger } from "@platformatic/ai-client";
// Silent logger (default) - no logging output
const client = buildClient({
url: "http://127.0.0.1:3042",
logger: nullLogger, // This is the default
});
// Console logger - logs to console
const verboseClient = buildClient({
url: "http://127.0.0.1:3042",
logger: consoleLogger,
});
You can provide your own logger implementation:
import { buildClient } from "@platformatic/ai-client";
const customLogger = {
debug: (message: string, data?: any) => {
// Custom debug logging
console.debug(`[DEBUG] ${message}`, data);
},
info: (message: string, data?: any) => {
// Custom info logging
console.info(`[INFO] ${message}`, data);
},
warn: (message: string, data?: any) => {
// Custom warning logging
console.warn(`[WARN] ${message}`, data);
},
error: (message: string, data?: any) => {
// Custom error logging
console.error(`[ERROR] ${message}`, data);
},
};
const client = buildClient({
url: "http://127.0.0.1:3042",
logger: customLogger,
});
interface Logger {
debug(message: string, data?: any): void;
info(message: string, data?: any): void;
warn(message: string, data?: any): void;
error(message: string, data?: any): void;
}
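As a concrete illustration, a thin adapter can bridge an existing structured logger such as pino to this interface (a sketch; pino is not a dependency of the client and its use here is purely illustrative):
import { buildClient } from "@platformatic/ai-client";
import pino from "pino";

const log = pino({ level: "debug" });

// pino takes (mergingObject, message); the client's Logger interface takes
// (message, data), so the adapter swaps the argument order.
const pinoAdapter = {
  debug: (message, data) => log.debug(data ?? {}, message),
  info: (message, data) => log.info(data ?? {}, message),
  warn: (message, data) => log.warn(data ?? {}, message),
  error: (message, data) => log.error(data ?? {}, message),
};

const client = buildClient({
  url: "http://127.0.0.1:3042",
  logger: pinoAdapter,
});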
The client will log various events through this interface.
The package includes working examples:
# Run the streaming example
node examples/stream.js
# Run the direct response example
node examples/prompt.js
# Run the session + streaming example (multi-turn conversation)
node examples/session-stream.js
# Set custom AI service URL
AI_URL=https://your-ai-service.com node examples/stream.js
The client is fully typed and compatible with @platformatic/ai-provider types. Types are duplicated to keep the client dependency-free while maintaining compatibility:
import type {
AiModel,
AiProvider,
AiSessionId,
AiChatHistory,
QueryModel,
} from "@platformatic/ai-client";
// Types are compatible with ai-provider
const models: QueryModel[] = [
"openai:gpt-4",
{ provider: "deepseek", model: "deepseek-chat" },
];
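These exports also make it easy to keep your own wrappers strongly typed. For example, a small helper that always applies the same fallback chain might look like this (a minimal sketch; askWithFallback is a hypothetical helper, not part of the package):
import { buildClient } from "@platformatic/ai-client";
import type { QueryModel, AiSessionId } from "@platformatic/ai-client";

const client = buildClient({
  url: process.env.AI_URL || "http://127.0.0.1:3042",
});

// Hypothetical helper: every call gets the same model fallback chain.
async function askWithFallback(prompt: string, sessionId?: AiSessionId) {
  const models: QueryModel[] = [
    "openai:gpt-4",
    { provider: "deepseek", model: "deepseek-chat" },
  ];
  return client.ask({ prompt, models, sessionId, stream: false });
}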
buildClient(options)
Creates a new AI client instance.
Options:
- url (string): The AI service URL
- headers (object, optional): HTTP headers to include with requests
- timeout (number, optional): Request timeout in milliseconds (default: 60000)
- logger (Logger, optional): Logger instance (default: silent logger - no logging)
- promptPath (string, optional): Custom path for direct requests (default: /api/v1/prompt)
- streamPath (string, optional): Custom path for streaming requests (default: /api/v1/stream)
Returns: An AIClient instance.
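For instance, if your deployment exposes the AI endpoints under a different prefix, promptPath and streamPath can be overridden (a sketch; the /ai/* paths below are hypothetical):
import { buildClient } from "@platformatic/ai-client";

// Hypothetical deployment serving the endpoints under /ai/ instead of /api/v1/.
const client = buildClient({
  url: "https://your-ai-service.com",
  promptPath: "/ai/prompt", // default: /api/v1/prompt
  streamPath: "/ai/stream", // default: /api/v1/stream
});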
client.ask(options)
Makes a request to the AI service, returning either a stream or a complete response.
Options:
- prompt (string): The prompt to send to the AI
- sessionId (string, optional): Session ID for conversation continuity. If not provided, the AI service creates a new session. Use the sessionId returned from previous responses to maintain conversation context across multiple requests. Each session maintains its own conversation history and context.
- context (string | Record<string, any> | any[], optional): Additional context for the request
- temperature (number, optional): AI temperature parameter
- models (array, optional): Array of models in either string format "provider:model" or object format { provider: string, model: string }. Models must match the ones defined in the ai-warp service.
- history (array, optional): Previous conversation history. Note that history and sessionId cannot be provided at the same time.
- stream (boolean, optional): Enable streaming (default: true)
- resume (boolean, optional): Enable automatic stream resume when using sessionId + stream: true (default: true). When enabled, the client automatically resumes from the last event in the session if available. Set to false to force a new request instead of resuming.
Returns:
- stream: true (default): Promise<AskResponseStream> - an object containing the async iterable stream and headers:
{
  stream: AsyncIterableStream<StreamMessage>, // Async iterable stream of StreamMessage objects
  headers: Headers // Response headers from the server
}
- stream: false: Promise<AskResponseContent> - an object containing the content and headers:
{
  content: JSON, // The complete AI response object
  headers: Headers // Response headers from the server
}
AskResponse (Direct Response Content)
{
  text: string, // The AI's response text
  sessionId: string, // Session ID for conversation continuity
  result: AiResponseResult // Result status: 'COMPLETE' | 'INCOMPLETE_MAX_TOKENS' | 'INCOMPLETE_UNKNOWN'
}
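Because history and sessionId are mutually exclusive, prior turns can instead be supplied explicitly (a sketch; the exact shape of the history entries is an assumption, check the exported AiChatHistory type for the precise fields):
// Sketch: passing explicit history instead of a sessionId.
const response = await client.ask({
  prompt: "And what are their squares?",
  // Assumed entry shape; verify against the AiChatHistory type.
  history: [
    { prompt: "List the first 3 prime numbers", response: "2, 3, 5" },
  ],
  stream: false,
});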
StreamMessage (Streaming Response Messages)
The stream yields different types of messages:
Content Message - Contains partial response text:
{
type: 'content',
content?: string // Partial response text chunk
}
Error Message - Contains error information:
{
type: 'error',
error?: Error // Error object with details
}
Done Message - Contains final response metadata:
{
type: 'done',
response?: AskResponse // Final response object with complete metadata
}
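Putting the three message types together, a consumer can switch on message.type (a minimal sketch):
for await (const message of response.stream) {
  switch (message.type) {
    case "content":
      process.stdout.write(message.content ?? "");
      break;
    case "done":
      console.log("\nResult:", message.response?.result);
      break;
    case "error":
      console.error("Stream error:", message.error?.message);
      break;
  }
}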
The client is designed to work in both browser and Node.js environments, using standard APIs:
- ReadableStream, TextDecoderStream, and TransformStream
- fetch for HTTP requests
- AbortSignal.timeout() for request timeouts
Development commands:
# Install dependencies
npm install
# Run tests
npm test
# Run tests with coverage
npm run test:coverage
# Type check
npm run typecheck
# Build
npm run build
# Lint
npm run lint
# Fix linting issues
npm run lint:fix
# Full check (lint + typecheck + test + build)
npm run check
License: Apache-2.0