
@helicone/async
A Node.js wrapper that uses OpenLLMetry to log LLM traces directly to Helicone, bypassing the Helicone proxy. This package enables you to monitor and analyze your OpenAI API usage without running a proxy server.
npm install @helicone/async
Create a Helicone account and get your API key from helicone.ai/developer
Set up your environment variables:
export HELICONE_API_KEY=<your API key>
export OPENAI_API_KEY=<your OpenAI API key>
const { HeliconeAsyncOpenAI } = require("@helicone/async");

const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
  },
});

// Run inside an async function (or an ES module with top-level await)
const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello world" }],
});

console.log(chatCompletion.choices[0].message);
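If your project uses ES modules rather than CommonJS, the same client can be constructed with an import statement. This is a minimal sketch that assumes the package's named export matches the CommonJS require shown above:

import { HeliconeAsyncOpenAI } from "@helicone/async";

// Same configuration as the CommonJS example above
const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
  },
});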
The heliconeMeta object supports several configuration options:
interface HeliconeMeta {
  apiKey?: string;                          // Your Helicone API key
  custom_properties?: Record<string, any>;  // Custom properties to track
  cache?: boolean;                          // Enable/disable caching
  retry?: boolean;                          // Enable/disable retries
  user_id?: string;                         // Track requests by user
}
const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    custom_properties: {
      project: "my-project",
      environment: "production",
    },
    user_id: "user-123",
  },
});
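The cache and retry flags can be set the same way. The snippet below is a sketch that assumes both options take simple boolean values, as declared in the interface above; consult the Helicone documentation for the exact caching and retry semantics.

const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    cache: true,  // reuse responses for identical requests (assumed behavior)
    retry: true,  // retry failed requests (assumed behavior)
  },
});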
async function generateResponse() {
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" },
      ],
      max_tokens: 150,
    });
    return response.choices[0].message;
  } catch (error) {
    console.error("Error:", error);
  }
}
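For reference, a caller might use the helper above like this (purely illustrative):

generateResponse().then((message) => {
  if (message) {
    console.log(message.content);
  }
});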
try {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error.response) {
    // The API responded with an error status
    console.error(error.response.status);
    console.error(error.response.data);
  } else {
    // The request failed before a response was received
    console.error(error.message);
  }
}
We welcome contributions! Please see our contributing guidelines for details.
License: Apache-2.0