
@embedapi/core
🔥 ONE API KEY TO RULE THEM ALL! Access ANY AI model instantly through our game-changing unified API. Build AI apps in minutes, not months! The ultimate all-in-one AI agent solution you've been waiting for! 🚀
EmbedAPI is a powerful Node.js client library that gives you instant access to ANY AI model through a single, unified API. Stop juggling multiple API keys and implementations - with EmbedAPI you can build sophisticated AI applications in minutes, not months.
Installation

npm install @embedapi/client
const EmbedAPIClient = require('@embedapi/client');
// Regular API client
const client = new EmbedAPIClient('your-api-key');
// Agent mode client
const agentClient = new EmbedAPIClient('your-agent-id', { isAgent: true });
// Debug mode client
const debugClient = new EmbedAPIClient('your-api-key', { debug: true });
// Agent and debug mode client
const debugAgentClient = new EmbedAPIClient('your-agent-id', { isAgent: true, debug: true });
Constructor parameters:

- apiKey (string): Your API key for regular mode, or your agent ID for agent mode
- options (object, optional): Configuration options
  - isAgent (boolean, optional): Set to true to use agent mode. Defaults to false
  - debug (boolean, optional): Set to true to enable debug logging. Defaults to false

generate({ service, model, messages, ...options })

Generates text using the specified AI service and model.

- service (string): The name of the AI service (e.g., 'openai')
- model (string): The model to use (e.g., 'gpt-4o')
- messages (array): An array of message objects containing the conversation history
- maxTokens (number, optional): Maximum number of tokens to generate
- temperature (number, optional): Sampling temperature
- topP (number, optional): Top-p sampling parameter
- frequencyPenalty (number, optional): Frequency penalty parameter
- presencePenalty (number, optional): Presence penalty parameter
- stopSequences (array, optional): Stop sequences for controlling response generation
- tools (array, optional): Array of function definitions for tool use
- toolChoice (string|object, optional): Tool selection preference
- enabledTools (array, optional): List of enabled tool names
- userId (string, optional): Optional user identifier for agent mode

// Regular mode
const response = await client.generate({
service: 'openai',
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello' }]
});
// Agent mode
const agentResponse = await agentClient.generate({
service: 'openai',
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello' }]
});
stream({ service, model, messages, ...options })

Streams text generation using the specified AI service and model.

Accepts the same parameters as generate(), plus:

- streamOptions (object, optional): Stream-specific configuration options

The stream emits Server-Sent Events (SSE) with two types of messages. Content chunks:
{
"content": "Generated text chunk",
"role": "assistant"
}
and a final "done" message with token usage and cost:
{
"type": "done",
"tokenUsage": 17,
"cost": 0.000612
}
// Regular mode
const streamResponse = await client.stream({
service: 'openai',
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello' }]
});
// Agent mode
const agentStreamResponse = await agentClient.stream({
service: 'openai',
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello' }]
});
// Process the stream
const reader = streamResponse.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = JSON.parse(line.slice(6));
if (data.type === 'done') {
console.log('Stream stats:', {
tokenUsage: data.tokenUsage,
cost: data.cost
});
} else {
console.log('Content:', data.content);
}
}
}
}
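One caveat about the loop above: it assumes every chunk ends on a line boundary, but an SSE "data:" line can be split across two reads, which would make JSON.parse throw. A small stateful helper (a sketch, not part of the EmbedAPI client) can buffer the partial trailing line between reads:

```javascript
// Returns a parse(chunk) function that buffers partial SSE lines across
// chunks and returns the fully parsed "data:" events in each chunk.
function createSSEParser() {
  let buffer = '';
  return function parse(chunk) {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the (possibly partial) last line for the next chunk
    const events = [];
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        events.push(JSON.parse(line.slice(6)));
      }
    }
    return events;
  };
}
```

Inside the read loop you would call it as `parse(decoder.decode(value, { stream: true }))` and iterate over the returned events instead of splitting the raw chunk directly.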
listModels()

Lists all available models.
const models = await client.listModels();
testAPIConnection()

Tests the connection to the API.
const isConnected = await client.testAPIConnection();
Error Handling

All methods throw an error if the API request fails:
try {
const response = await client.generate({
service: 'openai',
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello' }]
});
} catch (error) {
console.error('Error:', error.message);
}
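Since the client simply throws on failure, transient errors (rate limits, network blips) can be handled with a generic retry wrapper. EmbedAPI does not ship one; the helper below is a sketch that works with any async call, e.g. `() => client.generate({ ... })`:

```javascript
// Retry an async function with exponential backoff.
// Delays are baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // all attempts failed
}
```

Usage: `const response = await withRetry(() => client.generate({ service: 'openai', model: 'gpt-4o', messages }));`. A production version might retry only on retryable status codes, but the client's error shape is not documented here, so this sketch retries on any rejection.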
The client supports two authentication modes:
Regular Mode (default)

new EmbedAPIClient('your-api-key')

Agent Mode

new EmbedAPIClient('your-agent-id', { isAgent: true })

In agent mode, the userId parameter is available for request tracking.

License

MIT