
@node-llm/core
The production-grade LLM engine for Node.js. Provider-agnostic by design.
@node-llm/core provides a single, unified API for interacting with 540+ models across all major providers. It is built for developers who need stable infrastructure, standardized streaming, and automated tool execution without vendor lock-in.
| Provider | Supported Features |
|---|---|
| OpenAI | Chat, Streaming, Tools, Vision, Audio, Images, Transcription, Reasoning |
| Anthropic | Chat, Streaming, Tools, Vision, PDF, Structured Output, Extended Thinking (Claude 3.7) |
| Google Gemini | Chat, Streaming, Tools, Vision, Audio, Video, Embeddings |
| DeepSeek | Chat (V3), Extended Thinking (R1), Streaming, Tools |
| AWS Bedrock | Chat, Streaming, Tools, Image Gen (Titan/SD), Embeddings, Prompt Caching |
| OpenRouter | 540+ models, Chat, Streaming, Tools, Vision, Embeddings, Reasoning |
| Ollama | Local Inference, Chat, Streaming, Tools, Vision, Embeddings |
| Mistral | Chat, Streaming, Tools, Vision, Embeddings, Transcription, Moderation, Reasoning (Magistral) |
```bash
npm install @node-llm/core
```
NodeLLM automatically reads your API keys from environment variables (e.g., OPENAI_API_KEY).
```typescript
import { createLLM } from "@node-llm/core";

const llm = createLLM({ provider: "openai" });

// 1. Standard request
const res = await llm.chat("gpt-4o").ask("What is the speed of light?");
console.log(res.content);

// 2. Real-time streaming
for await (const chunk of llm.chat().stream("Tell me a long story")) {
  process.stdout.write(chunk.content);
}
```
Stop parsing markdown. Get typed objects directly.
```typescript
import { z } from "@node-llm/core";

const PlayerSchema = z.object({
  name: z.string(),
  powerLevel: z.number(),
  abilities: z.array(z.string())
});

const chat = llm.chat("gpt-4o-mini").withSchema(PlayerSchema);
const response = await chat.ask("Generate a random RPG character");

console.log(response.parsed.name); // Fully typed!
```
NodeLLM protects your production environment with four built-in safety pillars:
```typescript
const llm = createLLM({
  provider: "openai",
  requestTimeout: 15000, // 15s DoS protection
  maxTokens: 4096,       // Cost protection
  maxRetries: 3,         // Retry-storm protection
  maxToolCalls: 5        // Infinite-loop protection
});
```
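Assuming limit violations surface as rejected promises (the error type is not specified here, so this is an assumption), a guarded call might look like:

```typescript
import { createLLM } from "@node-llm/core";

const llm = createLLM({
  provider: "openai",
  requestTimeout: 15000,
  maxRetries: 3
});

try {
  const res = await llm.chat("gpt-4o").ask("Summarize this incident report.");
  console.log(res.content);
} catch (err) {
  // Assumption: timeouts and exhausted retries reject the promise,
  // so callers can fail fast instead of hanging the event loop.
  console.error("LLM request failed:", err);
}
```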
NodeLLM 1.9.0 introduces a powerful lifecycle hook system for audit, security, and observability.
```typescript
import { createLLM, PIIMaskMiddleware, UsageLoggerMiddleware } from "@node-llm/core";

const llm = createLLM({
  provider: "openai",
  middlewares: [
    new PIIMaskMiddleware(),    // Redact emails/phone numbers automatically
    new UsageLoggerMiddleware() // Log structured token usage & costs
  ]
});

// All chats created from this instance inherit these middlewares
const chat = llm.chat("gpt-4o");
```
Middlewares can control the engine's recovery strategy during tool failures.
```typescript
const safetyMiddleware = {
  name: "Audit",
  onToolCallError: async (ctx, tool, error) => {
    if (tool.function.name === "delete_user") return "STOP"; // Kill the loop
    return "RETRY"; // Attempt recovery
  }
};
```
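Assuming plain-object middlewares are accepted alongside the built-in class instances (an assumption; only class instances are shown above), such a hook would be registered through the same `middlewares` array:

```typescript
import { createLLM } from "@node-llm/core";

// Plain-object middleware: stop the tool loop on destructive failures,
// retry everything else.
const auditMiddleware = {
  name: "Audit",
  onToolCallError: async (ctx: unknown, tool: any, error: unknown) => {
    return tool.function.name === "delete_user" ? "STOP" : "RETRY";
  }
};

const llm = createLLM({
  provider: "openai",
  middlewares: [auditMiddleware] // assumed interchangeable with class instances
});

const chat = llm.chat("gpt-4o"); // inherits the audit hook
```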
Define reusable, class-based agents with a declarative DSL:
```typescript
import { Agent, Tool, z } from "@node-llm/core";

class LookupOrderTool extends Tool<{ orderId: string }> {
  name = "lookup_order";
  description = "Look up an order by ID";
  schema = z.object({ orderId: z.string() });

  async execute({ orderId }: { orderId: string }) {
    return { status: "shipped", eta: "Tomorrow" };
  }
}

class SupportAgent extends Agent {
  static model = "gpt-4.1";
  static instructions = "You are a helpful support agent.";
  static tools = [LookupOrderTool];
  static temperature = 0.2;
}

// Use anywhere in your app
const agent = new SupportAgent();
const response = await agent.ask("Where is order #123?");
console.log(response.content);
```
Stop the agentic loop early when a definitive answer is found:
```typescript
class FinalAnswerTool extends Tool<{ answer: string }> {
  name = "final_answer";
  description = "Return the final answer to the user";
  schema = z.object({ answer: z.string() });

  async execute({ answer }: { answer: string }) {
    return this.halt(answer); // Stops the loop, returns this result
  }
}
```
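Combined with the Agent DSL from the previous section, a halting tool might be wired up like this (`ResearchAgent` and its prompt are illustrative, not part of the library):

```typescript
import { Agent, Tool, z } from "@node-llm/core";

class FinalAnswerTool extends Tool<{ answer: string }> {
  name = "final_answer";
  description = "Return the final answer to the user";
  schema = z.object({ answer: z.string() });

  async execute({ answer }: { answer: string }) {
    return this.halt(answer);
  }
}

// Hypothetical agent that ends its loop via the halting tool.
class ResearchAgent extends Agent {
  static model = "gpt-4o";
  static instructions =
    "Research the question, then call final_answer exactly once.";
  static tools = [FinalAnswerTool];
}

const agent = new ResearchAgent();
const response = await agent.ask("Which planet has the most moons?");
// When final_answer executes, halt() ends the agentic loop and its
// payload becomes the response instead of another model turn.
console.log(response.content);
```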
Looking for persistence? Use @node-llm/orm.
Visit nodellm.dev for more.
MIT © NodeLLM Contributors