fastify-lm
fastify-lm is a Fastify plugin that simplifies integration with multiple language model (LM) providers, such as:
| Provider | Description |
|---|---|
| Test | Test provider, always returns "test" and the input parameters |
| OpenAI | GPT models, including GPT-4o and GPT-3.5 |
| Google | Gemini models, such as Gemini 1.5 |
| Claude | Anthropic’s Claude models (Claude 3, etc.) |
| Deepseek | Deepseek AI language models |
| Llama | Llama AI language models |
| Mistral | Mistral AI language models |
It provides a unified interface, allowing you to switch providers without modifying your application code.
Developing applications that interact with language models usually requires direct, provider-specific API integration, which can lead to vendor lock-in, duplicated integration code, and inconsistent handling of each provider's API differences.
With fastify-lm, you can:
✅ Define multiple providers in a single configuration
✅ Switch models just by changing environment variables
✅ Use a consistent query system without worrying about API differences
✅ Easily run A/B tests with different models to find the best fit for your use case
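Conceptually, this unified interface works like an adapter layer: every provider exposes the same `chat` method, and the active one is chosen by configuration. A minimal sketch of that idea in plain JavaScript (the adapter objects and `getProvider` helper below are illustrative, not part of the fastify-lm API):

```javascript
// Hypothetical adapters: each provider implements the same chat() signature.
const providers = {
  openai: { chat: async ({ messages }) => `openai:${messages[0].content}` },
  google: { chat: async ({ messages }) => `google:${messages[0].content}` },
};

// The active provider is selected by configuration (e.g. an env variable),
// so application code never references a vendor-specific API directly.
function getProvider(name = process.env.LM_PROVIDER || "openai") {
  const provider = providers[name];
  if (!provider) throw new Error(`Unknown provider: ${name}`);
  return provider;
}

// Application code stays identical regardless of which provider is behind it.
getProvider("openai")
  .chat({ messages: [{ role: "user", content: "hi" }] })
  .then((reply) => console.log(reply)); // logs "openai:hi"
```

Swapping `"openai"` for `"google"` changes the backend without touching the calling code, which is the same property fastify-lm provides at the plugin level.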
🚀 Ready to get started? Continue with the installation guide and start using fastify-lm in just a few minutes.
To install the plugin in an existing Fastify project, run:
npm install fastify-lm
| fastify-lm (plugin) | Fastify |
|---|---|
| ^1.x | ^3.x, ^4.x, ^5.x |
Please note that if a Fastify version is out of support, then so are the corresponding versions of this plugin in the table above. See Fastify's LTS policy for more details.
Start by creating a Fastify instance and registering the plugin.
npm i fastify fastify-lm
Create a file src/server.js and add the following code:
// Import the framework and instantiate it
import Fastify from "fastify";
import LmPlugin from "fastify-lm";
const fastify = Fastify({
logger: true,
});
// Register the lm-plugin
fastify.register(LmPlugin, {
models: [
{
name: "lm", // the name of the model instance on your app
provider: "openai", // openai, google, claude, deepseek or any available provider
model: "gpt-4o-mini",
apiKey: "your-api-key",
},
],
});
// Declare a route / that returns the models
fastify.get("/", async function handler(request, reply) {
const models = await fastify.lm.models();
return { models };
});
// Run the server!
try {
await fastify.listen({ port: 3000 });
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
Remember to replace `your-api-key` with your actual API key.
Finally, launch the server with:
node src/server.js
and test it with:
curl http://localhost:3000/
Register the plugin in your Fastify instance by specifying the models and providers to use.
import Fastify from "fastify";
import lmPlugin from "fastify-lm";
// Create a Fastify instance and register the plugin
const app = Fastify();
app.register(lmPlugin, {
models: [
{
name: "lm",
provider: process.env.LM_PROVIDER,
model: process.env.LM_MODEL,
apiKey: process.env.LM_API_KEY,
},
],
});
const response = await app.lm.chat({
messages: [{ role: "user", content: "How are you?" }],
});
💡 Change the environment variables to switch the provider.
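Because every provider shares the same call shape, patterns like failover between providers become straightforward to build on top. A hedged sketch of the idea (the `chatWithFallback` helper and the mock clients below are illustrative; fastify-lm does not ship this helper):

```javascript
// Mock clients standing in for fastify-lm decorators; each exposes chat().
const flaky = { chat: async () => { throw new Error("rate limited"); } };
const stable = { chat: async () => "ok from backup" };

// Try each client in order and return the first successful response.
async function chatWithFallback(clients, payload) {
  let lastError;
  for (const client of clients) {
    try {
      return await client.chat(payload);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError;
}

chatWithFallback([flaky, stable], {
  messages: [{ role: "user", content: "How are you?" }],
}).then((answer) => console.log(answer)); // logs "ok from backup"
```

In a real app the array would hold the plugin's decorated clients (e.g. `app.openai`, `app.claude`) rather than mocks.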
import Fastify, { FastifyRequest, FastifyReply } from "fastify";
import lmPlugin from "fastify-lm";
// Create a Fastify instance and register the plugin
const app = Fastify();
app.register(lmPlugin, {
models: [
{
name: "openai",
provider: "openai",
model: "gpt-3.5-turbo",
apiKey: process.env.OPENAI_API_KEY,
},
{
name: "google",
provider: "google",
model: "gemini-2.0-flash-lite",
apiKey: process.env.GOOGLE_API_KEY,
},
{
name: "claude",
provider: "claude",
model: "claude-3-5-sonnet-20240620",
apiKey: process.env.CLAUDE_API_KEY,
},
{
name: "deepseek",
provider: "deepseek",
model: "deepseek-chat",
apiKey: process.env.DEEPSEEK_API_KEY,
},
{
name: "mistral",
provider: "mistral",
model: "mistral-medium",
apiKey: process.env.MISTRAL_API_KEY,
},
],
});
// Route that receives the query and optional model parameter
app.get<{ Querystring: QueryParams }>(
"/chat",
{
schema: {
querystring: {
type: 'object',
required: ['query'],
properties: {
query: { type: 'string' },
model: {
type: 'string',
enum: ['openai', 'google', 'claude', 'deepseek', 'mistral'],
default: 'openai'
}
}
}
}
},
async (
request: FastifyRequest<{ Querystring: QueryParams }>,
reply: FastifyReply
) => {
const { query, model = "openai" } = request.query;
try {
const response = await app[model].chat({
messages: [{ role: "user", content: query }],
});
return { response };
} catch (error: any) {
reply.status(500).send({ error: error.message });
}
}
);
// Start the server
app.listen({ port: 3000 }, (err, address) => {
if (err) {
console.error(err);
process.exit(1);
}
console.log(`Server running at ${address}`);
});
interface QueryParams {
query: string;
model?: "openai" | "google" | "claude" | "deepseek" | "mistral"; // Optional, defaults to "openai"
}
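The `app[model]` lookup in the route works because each entry in `models` is registered as a decorator under its `name`. A self-contained sketch of that dispatch pattern using plain objects (the mock clients and `resolveClient` helper are illustrative, not part of fastify-lm):

```javascript
// Mock decorators keyed by the same names used in the route's enum.
const app = {
  openai: { chat: async () => "answer from openai" },
  google: { chat: async () => "answer from google" },
};

const ALLOWED = ["openai", "google", "claude", "deepseek", "mistral"];

// Resolve the client for a requested model name, defaulting to "openai"
// just like the schema default in the route above.
function resolveClient(model = "openai") {
  if (!ALLOWED.includes(model) || !app[model]) {
    throw new Error(`Unsupported model: ${model}`);
  }
  return app[model];
}

resolveClient("google").chat().then((r) => console.log(r)); // logs "answer from google"
```

Validating the name against an allow-list (as the route's `enum` schema does) is important here: dispatching on an unchecked user-supplied string against the Fastify instance could reach unintended properties.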
Beyond simple model queries, you can leverage fastify-lm for more advanced functionalities:
Use AI to generate instant answers for common support queries.
📖 Read the full guide →
Automatically classify and prioritize support tickets based on urgency and sentiment.
📖 Read the full guide →
Analyze user feedback, reviews, or messages to determine sentiment trends.
📖 Read the full guide →
Detect and block inappropriate messages before processing them.
📖 Read the full guide →
Improve search relevance by understanding intent and expanding queries intelligently.
📖 Read the full guide →
Enhance user input by automatically generating text suggestions.
📖 Read the full guide →
Summarize long text passages using AI models.
📖 Read the full guide →
Translate user input dynamically with multi-provider support.
📖 Read the full guide →
Extract structured information from unstructured text, such as invoices, legal documents, or reports.
📖 Read the full guide →
🚀 Check out more examples in the /docs/ folder!
We need more hands to implement additional providers. You can help by submitting a pull request.
MIT