
llm-router-core
Intelligent language model selection with hybrid AI + deterministic scoring
LLM Router Core is an npm package that selects the best-fitting language model for a given task. It combines Google Gemini's AI analysis with deterministic scoring across 35+ models sourced from live leaderboards, so you get a model matched to your specific needs.
npm install llm-router-core
const { LLMRouter } = require('llm-router-core');

// Initialize the router (the Gemini API key is optional)
const router = new LLMRouter({
  geminiApiKey: 'your-gemini-api-key' // Optional: enables AI analysis
});

// Get an intelligent model recommendation
const result = await router.selectModel(
  "Write a Python function to reverse a binary tree",
  { performance: 0.6, cost: 0.2, speed: 0.2 }
);

console.log('Recommended:', result.selectedModel.name);
console.log('Score:', result.score);
console.log('Reasoning:', result.reasoning);
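The deterministic half of that ranking can be pictured as a weighted sum over each model's per-axis scores. A minimal sketch, using the score fields from the response format documented in this README (the exact formula the package uses internally is an assumption):

```javascript
// Deterministic weighted scoring sketch: each model carries 0-10 scores per
// axis, and the overall score is the priority-weighted sum. The package's
// exact formula may differ.
function scoreModel(model, priorities) {
  return (
    model.performanceScore * priorities.performance +
    model.costScore * priorities.cost +
    model.speedScore * priorities.speed
  );
}

// Rank a model list from best to worst under the given priorities.
function rankModels(models, priorities) {
  return models
    .map((m) => ({ ...m, score: scoreModel(m, priorities) }))
    .sort((a, b) => b.score - a.score);
}

const ranked = rankModels(
  [
    { name: 'model-a', performanceScore: 9, costScore: 4, speedScore: 6 },
    { name: 'model-b', performanceScore: 7, costScore: 9, speedScore: 8 },
  ],
  { performance: 0.7, cost: 0.15, speed: 0.15 }
);
console.log(ranked[0].name); // model-a
```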
new LLMRouter(config)
Create a new router instance.
const router = new LLMRouter({
  geminiApiKey,   // string, optional: your Gemini API key for AI analysis
  leaderboardUrl, // string, optional: custom leaderboard URL
  cacheTimeout,   // number, optional: cache timeout in ms (default: 5 min)
  enableLogging,  // boolean, optional: enable debug logging
  version         // string, optional: version string for metadata
});
selectModel(prompt, priorities, options?)
Select the best model for a specific task.
const result = await router.selectModel(
  "Your task prompt",
  { performance: 0.5, cost: 0.3, speed: 0.2 }, // Weights must sum to 1.0
  {
    includeReasoning: true,                  // Include AI reasoning in the response
    maxModels: 20,                           // Limit the number of models considered
    filterProviders: ['OpenAI', 'Anthropic'] // Restrict to specific providers
  }
);
Response Format:
{
  selectedModel: {
    name: string;
    provider: string;
    score: number;
    performanceScore: number;
    costScore: number;
    speedScore: number;
    benchmarks: { /* benchmark scores */ }
  };
  score: number;                   // Overall match score (0-10)
  reasoning?: string;              // AI explanation (if requested)
  taskAnalysis: {
    taskType: 'CODING' | 'MATH' | 'CREATIVE' | 'RESEARCH' | 'BUSINESS' | 'REASONING';
    complexity: 'SIMPLE' | 'MEDIUM' | 'COMPLEX';
    reasoning: string;
  };
  alternatives: LLMModel[];        // Top 3 alternatives
  prioritiesUsed: PriorityWeights; // Final priorities after AI adjustment
  metadata: {
    totalModelsConsidered: number;
    timestamp: string;
    version: string;
  };
}
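To pull the most useful fields out of that response in one place, a small formatter works. The `result` object below is a hand-built mock shaped like the schema above, not live router output, and the helper name is ours:

```javascript
// Summarize a selectModel() result into a short report string; `result`
// here is a mock shaped like the documented response format.
function summarizeResult(result) {
  const alts = result.alternatives.map((m) => m.name).join(', ');
  return [
    `Selected: ${result.selectedModel.name} (${result.selectedModel.provider})`,
    `Score: ${result.score}/10 | Task: ${result.taskAnalysis.taskType} (${result.taskAnalysis.complexity})`,
    `Alternatives: ${alts}`,
  ].join('\n');
}

const summary = summarizeResult({
  selectedModel: { name: 'gpt-4o', provider: 'OpenAI', score: 8.7 },
  score: 8.7,
  taskAnalysis: { taskType: 'CODING', complexity: 'MEDIUM', reasoning: '...' },
  alternatives: [{ name: 'claude-3.5-sonnet' }, { name: 'gemini-1.5-pro' }],
});
console.log(summary);
```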
batchSelect(requests, options?)
Process multiple prompts efficiently.
const results = await router.batchSelect([
  {
    id: 'task-1',
    prompt: "First task",
    priorities: { performance: 0.7, cost: 0.2, speed: 0.1 }
  },
  {
    id: 'task-2',
    prompt: "Second task",
    priorities: { performance: 0.3, cost: 0.5, speed: 0.2 }
  }
], {
  concurrency: 3,        // Process 3 at a time (default: 3, max: 10)
  includeReasoning: true // Include AI reasoning
});
getRecommendationsForDomain(domain, options?)
Get pre-optimized recommendations for common domains.
// Get best models for coding tasks
const codingModels = await router.getRecommendationsForDomain('coding', {
  budget: 'medium', // 'low' | 'medium' | 'high'
  count: 5          // Number of models to return
});
// Available domains: 'coding', 'math', 'creative', 'research', 'business'
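One plausible way to read the budget tiers is as a shift in the priority weights toward or away from cost. The mapping below is purely illustrative; the package's actual tier handling is not documented here:

```javascript
// Hypothetical mapping from a budget tier to priority weights; this is a
// sketch of the idea, not the package's internal implementation.
function budgetToPriorities(budget) {
  switch (budget) {
    case 'low':
      return { performance: 0.2, cost: 0.6, speed: 0.2 };
    case 'high':
      return { performance: 0.7, cost: 0.1, speed: 0.2 };
    case 'medium':
    default:
      return { performance: 0.4, cost: 0.3, speed: 0.3 };
  }
}

const p = budgetToPriorities('low');
console.log(p.cost); // 0.6
```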
getAvailableModels(options?)
Get detailed information about available models with filtering.
const models = await router.getAvailableModels({
  provider: 'OpenAI',  // Filter by provider
  minPerformance: 8.0, // Minimum performance score
  maxCost: 7.0         // Maximum cost score (higher = more cost-effective)
});
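The same filters are easy to mirror client-side over any model list. The sketch below runs them over hand-made sample data using the score fields from the documented response format:

```javascript
// Client-side mirror of the getAvailableModels() filters, run over sample
// data; the score fields follow the documented response format.
function filterModels(models, { provider, minPerformance = 0, maxCost = 10 } = {}) {
  return models.filter(
    (m) =>
      (!provider || m.provider === provider) &&
      m.performanceScore >= minPerformance &&
      m.costScore <= maxCost
  );
}

const sample = [
  { name: 'a', provider: 'OpenAI', performanceScore: 8.5, costScore: 6.0 },
  { name: 'b', provider: 'Anthropic', performanceScore: 9.0, costScore: 5.0 },
  { name: 'c', provider: 'OpenAI', performanceScore: 7.0, costScore: 9.0 },
];
const hits = filterModels(sample, { provider: 'OpenAI', minPerformance: 8.0, maxCost: 7.0 });
console.log(hits.map((m) => m.name)); // [ 'a' ]
```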
Control what matters most for your use case:
const priorities = {
  performance: 0.6, // Benchmark scores (0-1)
  cost: 0.2,        // Cost efficiency (0-1)
  speed: 0.2        // Response speed (0-1)
};
// Must sum to 1.0
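Because the weights must sum to 1.0, it can help to normalize arbitrary non-negative weights before passing them to the router. This helper is ours, not part of the package:

```javascript
// Normalize arbitrary non-negative weights so they sum to 1.0 before
// handing them to selectModel(); not part of llm-router-core itself.
function normalizePriorities({ performance = 0, cost = 0, speed = 0 }) {
  const total = performance + cost + speed;
  if (total <= 0) throw new Error('At least one priority must be positive');
  return {
    performance: performance / total,
    cost: cost / total,
    speed: speed / total,
  };
}

const priorities = normalizePriorities({ performance: 3, cost: 1, speed: 1 });
console.log(priorities); // { performance: 0.6, cost: 0.2, speed: 0.2 }
```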
The AI automatically classifies tasks into:
CODING    - Programming, debugging, algorithms
MATH      - Mathematical problem solving
CREATIVE  - Writing, brainstorming, content
RESEARCH  - Analysis, information gathering
BUSINESS  - Professional tasks, emails
REASONING - Logic, complex analysis

const { LLMRouter } = require('llm-router-core');
const router = new LLMRouter({
  geminiApiKey: process.env.GEMINI_API_KEY // Optional
});

const result = await router.selectModel(
  "Optimize this SQL query for better performance",
  { performance: 0.7, cost: 0.2, speed: 0.1 },
  { includeReasoning: true }
);

console.log(`Best model: ${result.selectedModel.name}`);
console.log(`Task type: ${result.taskAnalysis.taskType}`);
console.log(`AI reasoning: ${result.reasoning}`);
const requests = [
  {
    id: 'email-task',
    prompt: "Write a professional follow-up email",
    priorities: { performance: 0.3, cost: 0.5, speed: 0.2 }
  },
  {
    id: 'code-review',
    prompt: "Review this React component for best practices",
    priorities: { performance: 0.8, cost: 0.1, speed: 0.1 }
  }
];

const results = await router.batchSelect(requests, {
  concurrency: 2,
  includeReasoning: true
});

results.forEach(result => {
  console.log(`Task ${result.requestId}: ${result.selectedModel.name}`);
});
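The concurrency option caps how many prompts are in flight at once. A generic promise pool with the same behavior looks like this; it is a sketch of the pattern, not the package's internal implementation:

```javascript
// Minimal promise pool with a concurrency cap, mirroring the behavior the
// `concurrency` option describes; generic sketch, not package internals.
async function mapWithConcurrency(items, limit, worker) {
  const results = new Array(items.length);
  let next = 0;
  // Each "lane" pulls the next unclaimed index until the queue is drained.
  async function run() {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i], i);
    }
  }
  const lanes = Array.from({ length: Math.min(limit, items.length) }, run);
  await Promise.all(lanes);
  return results;
}

mapWithConcurrency([1, 2, 3, 4], 2, async (n) => n * n).then((out) => {
  console.log(out); // [ 1, 4, 9, 16 ]
});
```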
// Get cost-effective models for creative writing
const creativeModels = await router.getRecommendationsForDomain('creative', {
  budget: 'low',
  count: 3
});

// Get high-performance models for math problems
const mathModels = await router.getRecommendationsForDomain('math', {
  budget: 'high',
  count: 5
});
For enhanced AI analysis, get your API key from Google AI Studio:
# .env file
GEMINI_API_KEY=your_gemini_api_key_here
const router = new LLMRouter({
  geminiApiKey: process.env.GEMINI_API_KEY
});
Note: The package works without an API key using keyword-based analysis, but Gemini provides much more intelligent task classification and model recommendations.
The package includes curated data for 35+ models including:
Top Providers:
Benchmark Scores:
We welcome contributions! Please see our Contributing Guide for details.
1. Create your feature branch (git checkout -b feature/amazing-feature)
2. Commit your changes (git commit -m 'Add amazing feature')
3. Push to the branch (git push origin feature/amazing-feature)

MIT License - see LICENSE file for details.
Made with ❤️ for the AI community
FAQs
The npm package llm-router-core receives a total of 5 weekly downloads, which classifies it as not popular.
We found that llm-router-core demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has one open source maintainer collaborating on the project.