
@embedapi/core
🔥 ONE API KEY TO RULE THEM ALL! Access ANY AI model instantly through our game-changing unified API. Build AI apps in minutes, not months! The ultimate all-in-one AI agent solution you've been waiting for! 🚀
A Node.js client for interacting with the EmbedAPI service.
Visit embedapi.com to get your API key and start building!
Using npm:
npm install @embedapi/core
Using yarn:
yarn add @embedapi/core
Using pnpm:
pnpm add @embedapi/core
const EmbedAPIClient = require('@embedapi/core');

// Regular API client
const client = new EmbedAPIClient('your-api-key');

// Agent mode client
const agentClient = new EmbedAPIClient('your-agent-id', { isAgent: true });

// Debug mode client
const debugClient = new EmbedAPIClient('your-api-key', { debug: true });

// Agent and debug mode client
const debugAgentClient = new EmbedAPIClient('your-agent-id', {
  isAgent: true,
  debug: true
});
Constructor parameters:
- apiKey (string): Your API key for regular mode, or agent ID for agent mode
- options (object, optional): Configuration options
  - isAgent (boolean, optional): Set to true to use agent mode. Defaults to false
  - debug (boolean, optional): Set to true to enable debug logging. Defaults to false

generate(options)
Generates text using AI models.

Parameters:
- service (string): AI service provider (openai, anthropic, vertexai, etc.)
- model (string): Model name
- messages (array): Array of message objects
- maxTokens (number, optional): Maximum tokens to generate
- temperature (number, optional): Temperature (0-1)
- topP (number, optional): Top P sampling
- frequencyPenalty (number, optional): Frequency penalty
- presencePenalty (number, optional): Presence penalty
- stopSequences (array, optional): Stop sequences
- tools (array, optional): Tools to use
- toolChoice (string, optional): Tool choice
- enabledTools (array, optional): Enabled tools
- userId (string, optional): User ID (for agent mode)

// Regular mode
const response = await client.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentResponse = await agentClient.generate({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});
stream({ service, model, messages, ...options })
Streams text generation using the specified AI service and model.
Same as generate(), plus:
- streamOptions (object, optional): Stream-specific configuration options

The stream emits Server-Sent Events (SSE) with two types of messages. A content chunk:
{
  "content": "Generated text chunk",
  "role": "assistant"
}
A final event signals completion with usage stats:
{
  "type": "done",
  "tokenUsage": 17,
  "cost": 0.000612
}
// Regular mode
const streamResponse = await client.stream({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentStreamResponse = await agentClient.stream({
  service: 'openai',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }]
});
// Process the stream; buffer partial lines, since an SSE event
// may be split across two chunks read from the stream
const reader = streamResponse.body.getReader();
const decoder = new TextDecoder();
let buffered = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffered += decoder.decode(value, { stream: true });
  const lines = buffered.split('\n');
  buffered = lines.pop(); // keep any incomplete trailing line for the next chunk
  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));
      if (data.type === 'done') {
        console.log('Stream stats:', {
          tokenUsage: data.tokenUsage,
          cost: data.cost
        });
      } else {
        console.log('Content:', data.content);
      }
    }
  }
}
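The `data:`-line handling above can be factored into a small helper. This is an illustrative sketch, not an API exposed by @embedapi/core:

```javascript
// Parse a single SSE line into an event object, or return null
// for anything that is not a "data: " line (comments, blank lines, etc.).
function parseSSELine(line) {
  if (!line.startsWith('data: ')) return null;
  return JSON.parse(line.slice('data: '.length));
}
```

With this helper, the loop body reduces to checking `parseSSELine(line)` for null and then branching on `data.type`.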
listModels()
Lists all available models.
const models = await client.listModels();
testAPIConnection()
Tests the connection to the API.
const isConnected = await client.testAPIConnection();
genImage(options)
Generates images using AI models.

Parameters:
- prompt (string): Image description
- width (number, optional): Image width
- height (number, optional): Image height
- maxTokens (number, optional): Maximum tokens
- temperature (number, optional): Temperature (0-1)
- steps (number, optional): Generation steps
- guidance (number, optional): Guidance scale
- seed (number, optional): Random seed
- image_count (number, optional): Number of images to generate
- image_quality (string, optional): Image quality
- image_format (string, optional): Image format
- model (string, optional): Model name ('stability.stable-image-ultra-v1:1' or 'imagen')

// Stability AI example
const stabilityResponse = await client.genImage({
  prompt: 'A beautiful sunset over mountains',
  width: 512,
  height: 512,
  model: 'stability.stable-image-ultra-v1:1',
  steps: 30,
  guidance: 7.5
});

// Imagen example
const imagenResponse = await client.genImage({
  prompt: 'A futuristic cityscape at night',
  width: 1024,
  height: 1024,
  model: 'imagen',
  steps: 50,
  guidance: 8.5,
  image_quality: 'high',
  image_format: 'png'
});
isNSFW(image)
Checks if an image is NSFW (Not Safe For Work).

Parameters:
- image (object): Image data object
  - data (string): Base64 encoded image data
  - mimeType (string): MIME type of the image (e.g., 'image/png', 'image/jpeg')

const nsfwResult = await client.isNSFW({
  data: base64ImageData,
  mimeType: 'image/png'
});
textToSpeech(text)
Converts text to speech.

Parameters:
- text (string): Text to convert to speech

const audioBlob = await client.textToSpeech('Hello, world!');
speechToText(audioBase64)
Converts speech to text.

Parameters:
- audioBase64 (string): Base64 encoded audio file

const transcription = await client.speechToText(base64AudioData);
processImages(options)
Processes images using Vision AI.

Parameters:
- prompt (string): Description of what to analyze
- images (string[]): Array of base64 encoded images

const analysis = await client.processImages({
  prompt: 'Describe what you see in this image',
  images: [base64ImageData]
});
Generated images are temporarily stored on the server and are automatically deleted after a period of time, so download and persist any images you want to keep as soon as they are returned.

Generated audio files from Text-to-Speech are also temporarily stored, so save them locally before they expire.
All methods throw errors if the API request fails:
try {
  const response = await client.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
  });
} catch (error) {
  console.error('Error:', error.message);
}
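Because every method rejects on failure, transient network errors can be smoothed over with a generic retry wrapper. This is a sketch with arbitrary defaults, not a feature of the client:

```javascript
// Retry an async operation with simple exponential backoff.
// retries and baseDelayMs are illustrative defaults, tune for your workload.
async function withRetry(fn, retries = 3, baseDelayMs = 250) {
  let lastError;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 250ms, 500ms, 1000ms, ... between attempts.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage:
// const response = await withRetry(() => client.generate({ /* ... */ }));
```

Note that retrying non-idempotent or billable calls repeats their cost, so keep the retry count low.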
The client supports two authentication modes:
Regular Mode (default)
new EmbedAPIClient('your-api-key')

Agent Mode
new EmbedAPIClient('your-agent-id', { isAgent: true })
The userId parameter is available for request tracking in agent mode.

License
MIT