
Unified SDK for 20+ AI tasks using HuggingFace Inference Providers
Call open-source AI models with a consistent API. Supports text generation, embeddings, image generation, audio transcription, object detection, and 15 more tasks through HuggingFace's Inference Providers API with automatic provider routing.
Supports model:provider syntax for explicit provider routing. See the Migration Guide for upgrading from v0.3.x.
npm install openmodels
import { client } from 'openmodels';

const openmodels = client({
  apiKey: 'om_your_api_key_here', // Get from tryscout.dev
  hfToken: 'hf_...', // Your HuggingFace token
});
// Chat completion
const response = await openmodels.chat({
  model: 'openai/gpt-oss-120b',
  messages: [
    { role: 'user', content: 'Explain quantum computing' }
  ],
});

console.log(response.choices[0].message.content);
Conversational AI with LLMs:
const response = await openmodels.chat({
  model: 'openai/gpt-oss-120b:cerebras', // Provider selection
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is machine learning?' }
  ],
  temperature: 0.7,
  max_tokens: 500,
  frequency_penalty: 0.5,
  presence_penalty: 0.3,
  response_format: { type: 'json_object' }, // Force JSON output
  seed: 42, // Reproducible outputs
});
Raw text completion:
const response = await openmodels.textGeneration({
  model: 'meta-llama/Llama-3.1-8B-Instruct',
  inputs: 'Once upon a time',
  parameters: {
    max_new_tokens: 200,
    temperature: 0.8,
    top_p: 0.95,
    repetition_penalty: 1.2,
  },
  stream: true, // Enable streaming
});
Generate embeddings for semantic search:
const response = await openmodels.featureExtraction({
  model: 'intfloat/multilingual-e5-large',
  inputs: ['Hello world', 'Another text'],
  parameters: {
    normalize: true,
    truncate: true,
  },
});

console.log('Embeddings:', response.embeddings);
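The embeddings above can back a simple semantic search. A minimal cosine-similarity sketch in plain TypeScript, independent of the SDK (the helper names are illustrative, not part of openmodels):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return candidate indices sorted by similarity to the query embedding.
function rankBySimilarity(query: number[], candidates: number[][]): number[] {
  return candidates
    .map((emb, i) => ({ i, score: cosineSimilarity(query, emb) }))
    .sort((x, y) => y.score - x.score)
    .map((r) => r.i);
}
```

Note that with normalize: true the returned vectors are unit-length, so a plain dot product gives the same ranking.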
Answer questions from context:
const response = await openmodels.questionAnswering({
  model: 'deepset/roberta-base-squad2',
  inputs: {
    question: 'What is the capital of France?',
    context: 'Paris is the capital and largest city of France.',
  },
  parameters: {
    top_k: 3,
    max_answer_len: 50,
  },
});
Summarize long text:
const response = await openmodels.summarization({
  model: 'facebook/bart-large-cnn',
  inputs: 'Long article text here...',
  parameters: {
    generate_parameters: {
      max_length: 150,
      min_length: 30,
    },
  },
});

console.log('Summary:', response.summary_text);
Translate between languages:
const response = await openmodels.translation({
  model: 'Helsinki-NLP/opus-mt-en-de',
  inputs: 'Hello, how are you?',
  parameters: {
    src_lang: 'en',
    tgt_lang: 'de',
  },
});

console.log('Translation:', response.translation_text);
Classify text into categories:
const response = await openmodels.textClassification({
  model: 'ProsusAI/finbert',
  inputs: 'The stock market is performing well today.',
  parameters: {
    top_k: 3,
  },
});

console.log('Classifications:', response.classifications);
Extract named entities:
const response = await openmodels.tokenClassification({
  model: 'dslim/bert-base-NER',
  inputs: 'Apple Inc. was founded by Steve Jobs in Cupertino.',
  parameters: {
    aggregation_strategy: 'simple',
  },
});

console.log('Entities:', response.entities);
Classify with custom labels:
const response = await openmodels.zeroShotClassification({
  model: 'facebook/bart-large-mnli',
  inputs: 'This is a great product!',
  parameters: {
    candidate_labels: ['positive', 'negative', 'neutral'],
    multi_label: false,
  },
});

console.log('Labels:', response.labels);
console.log('Scores:', response.scores);
Predict masked tokens:
const response = await openmodels.fillMask({
  model: 'google-bert/bert-base-uncased',
  inputs: 'Paris is the [MASK] of France.',
  parameters: {
    top_k: 5,
  },
});

console.log('Predictions:', response.predictions);
Answer questions about tables:
const response = await openmodels.tableQuestionAnswering({
  model: 'google/tapas-base-finetuned-wtq',
  inputs: {
    query: 'What is the population of France?',
    table: {
      'Country': ['France', 'Germany', 'Spain'],
      'Population': ['67M', '83M', '47M'],
    },
  },
});

console.log('Answer:', response.answer);
Generate images from text:
import fs from 'node:fs';

const response = await openmodels.textToImage({
  model: 'black-forest-labs/FLUX.1-schnell',
  inputs: 'A beautiful sunset over mountains',
  parameters: {
    negative_prompt: 'blurry, low quality',
    width: 1024,
    height: 1024,
    num_inference_steps: 50,
    guidance_scale: 7.5,
    seed: 42,
  },
});

// Save image
const buffer = await response.image.arrayBuffer();
fs.writeFileSync('image.png', Buffer.from(buffer));
Classify image content:
const response = await openmodels.imageClassification({
  model: 'google/vit-base-patch16-224',
  inputs: imageBase64,
  parameters: {
    top_k: 5,
  },
});

console.log('Classifications:', response.classifications);
Detect objects in images:
const response = await openmodels.objectDetection({
  model: 'facebook/detr-resnet-50',
  inputs: imageBase64,
  parameters: {
    threshold: 0.5,
  },
});

console.log('Detections:', response.detections);
// Each detection has: label, score, box (xmin, ymin, xmax, ymax)
Segment regions in images:
const response = await openmodels.imageSegmentation({
  model: 'mattmdjaga/segformer_b2_clothes',
  inputs: imageBase64,
  parameters: {
    threshold: 0.5,
  },
});

console.log('Segments:', response.segments);
Ask questions about images:
const response = await openmodels.imageTextToText({
  model: 'zai-org/GLM-4.5V',
  inputs: {
    image: imageBase64,
    text: 'What is in this image?',
  },
  parameters: {
    max_new_tokens: 512,
    temperature: 0.7,
  },
});

console.log('Answer:', response.generated_text);
Transform images:
const response = await openmodels.imageToImage({
  model: 'timbrooks/instruct-pix2pix',
  inputs: {
    image: imageBase64,
    prompt: 'Make it look like a watercolor painting',
  },
  parameters: {
    strength: 0.8,
    guidance_scale: 7.5,
  },
});
Generate videos from text:
import fs from 'node:fs';

const response = await openmodels.textToVideo({
  model: 'ali-vilab/text-to-video-ms-1.7b',
  inputs: 'A cat playing with a ball',
  parameters: {
    num_frames: 16,
    fps: 8,
    guidance_scale: 7.5,
  },
});

// Save video
const buffer = await response.video.arrayBuffer();
fs.writeFileSync('video.mp4', Buffer.from(buffer));
Transcribe speech to text:
const response = await openmodels.automaticSpeechRecognition({
  model: 'openai/whisper-large-v3',
  inputs: audioBase64,
  parameters: {
    return_timestamps: true,
  },
});

console.log('Transcription:', response.text);
if (response.chunks) {
  console.log('Timestamps:', response.chunks);
}
Classify audio content:
const response = await openmodels.audioClassification({
  model: 'MIT/ast-finetuned-audioset-10-10-0.4593',
  inputs: audioBase64,
  parameters: {
    top_k: 5,
  },
});

console.log('Classifications:', response.classifications);
Choose specific providers for better performance or availability:
// Automatic selection (default)
model: 'openai/gpt-oss-120b'
// Manual provider selection
model: 'openai/gpt-oss-120b:cerebras'
model: 'openai/gpt-oss-120b:fireworks'
model: 'openai/gpt-oss-120b:together'
model: 'Qwen/Qwen2.5-7B-Instruct:replicate'
Available providers: cerebras, cohere, fal-ai, featherless, fireworks, groq, hyperbolic, hf-inference, nebius, novita, nscale, public-ai, replicate, sambanova, scaleway, together, zai
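Because the model:provider syntax is just a string, falling back across providers when one is unavailable is straightforward. A hedged sketch — the helper and its error handling are illustrative, not part of the SDK:

```typescript
// Try the same model across several providers, returning the first success.
async function withProviderFallback<T>(
  model: string,
  providers: string[],
  call: (modelWithProvider: string) => Promise<T>,
): Promise<T> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await call(`${model}:${provider}`);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError;
}
```

Usage: `withProviderFallback('openai/gpt-oss-120b', ['cerebras', 'together'], (m) => openmodels.chat({ model: m, messages }))`.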
Real-time token generation:
const stream = await openmodels.chat({
  model: 'openai/gpt-oss-120b',
  messages: [{ role: 'user', content: 'Write a poem' }],
  stream: true,
}) as AsyncGenerator<string, void, unknown>;

for await (const token of stream) {
  process.stdout.write(token);
}
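When you want the full text rather than incremental output, the token stream can be drained into a string. A small sketch assuming the stream yields plain strings, as in the example above:

```typescript
// Collect an async token stream into a single string.
async function collectStream(
  stream: AsyncGenerator<string, void, unknown>,
): Promise<string> {
  let text = '';
  for await (const token of stream) {
    text += token;
  }
  return text;
}
```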
Get default models for tasks:
import { getDefaultModel, getModelsForTask, getAllTasks } from 'openmodels';
// Get default model for a task
const model = getDefaultModel('text-to-image');
// Returns: 'black-forest-labs/FLUX.1-schnell'
// Get all models for a task
const models = getModelsForTask('chat-completion');
// Returns: ['openai/gpt-oss-120b', 'Qwen/Qwen2.5-7B-Instruct', ...]
// Get all available tasks
const tasks = getAllTasks();
// Returns array of all 20 task names
import { OpenModelsError } from 'openmodels';

try {
  const response = await openmodels.chat({...});
} catch (error) {
  if (error instanceof OpenModelsError) {
    console.error('OpenModels error:', error.message);
  }
}
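Transient failures (rate limits, provider hiccups) are often worth retrying. A hedged retry sketch — the backoff policy is illustrative, and whether a given error is actually transient is an assumption to check against the error details:

```typescript
// Retry an async call with exponential backoff between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i >= attempts - 1) throw err; // out of retries
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
}
```

Usage: `withRetry(() => openmodels.chat({ model, messages }))`.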
Full TypeScript definitions for all 20 tasks:
import {
  // Chat
  ChatCompletionRequest,
  ChatCompletionResponse,
  // Text
  TextGenerationRequest,
  FeatureExtractionRequest,
  QuestionAnsweringRequest,
  SummarizationRequest,
  TranslationRequest,
  TextClassificationRequest,
  TokenClassificationRequest,
  ZeroShotClassificationRequest,
  FillMaskRequest,
  TableQuestionAnsweringRequest,
  // Image
  TextToImageRequest,
  ImageClassificationRequest,
  ObjectDetectionRequest,
  ImageSegmentationRequest,
  ImageTextToTextRequest,
  ImageToImageRequest,
  // Video
  TextToVideoRequest,
  // Audio
  AutomaticSpeechRecognitionRequest,
  AudioClassificationRequest,
} from 'openmodels';
# Install globally
npm install -g openmodels
# Chat with models
openmodels chat "Explain quantum computing"
# Generate images
openmodels image "A beautiful sunset" --model black-forest-labs/FLUX.1-schnell
# Generate embeddings
openmodels embed "Hello world" --model intfloat/multilingual-e5-large
# List models
openmodels models
const openmodels = client({
  apiKey: 'om_...', // Your OpenModels API key
  hfToken: 'hf_...', // Your HuggingFace token (optional)
});
See /examples directory for complete working examples of all tasks.
import { OpenModelsLLM, OpenModelsEmbeddings } from 'openmodels/integrations/langchain';

const llm = new OpenModelsLLM(
  { apiKey: 'om_...' },
  { model: 'openai/gpt-oss-120b' }
);
from openmodels_llamaindex import OpenModelsLLM

llm = OpenModelsLLM(
    api_key="om_...",
    model="openai/gpt-oss-120b"
)
License: MIT