@helicone/async
A Node.js wrapper for logging LLM traces directly to Helicone, bypassing the proxy, with OpenLLMetry. This package enables you to monitor and analyze your OpenAI API usage without requiring a proxy server.
npm install @helicone/async
Create a Helicone account and get your API key from helicone.ai/developer
Set up your environment variables:
export HELICONE_API_KEY=<your API key>
export OPENAI_API_KEY=<your OpenAI API key>
const { HeliconeAsyncOpenAI } = require("@helicone/async");

const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
  },
});

const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello world" }],
});

console.log(chatCompletion.data.choices[0].message);
The heliconeMeta object supports several configuration options:
interface HeliconeMeta {
  apiKey?: string; // Your Helicone API key
  custom_properties?: Record<string, any>; // Custom properties to track
  cache?: boolean; // Enable/disable caching
  retry?: boolean; // Enable/disable retries
  user_id?: string; // Track requests by user
}
const openai = new HeliconeAsyncOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    custom_properties: {
      project: "my-project",
      environment: "production",
    },
    user_id: "user-123",
  },
});
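The cache and retry flags from the interface above can be combined with custom properties in the same way. A minimal sketch (the buildHeliconeMeta helper and the "feature" property are illustrative, not part of the package):

```javascript
// Illustrative helper (not part of @helicone/async): assemble a
// HeliconeMeta object that enables caching and retries alongside
// custom properties, matching the interface shown above.
function buildHeliconeMeta(heliconeApiKey) {
  return {
    apiKey: heliconeApiKey,
    cache: true,  // serve repeated identical requests from Helicone's cache
    retry: true,  // retry failed requests
    custom_properties: { feature: "faq-bot" }, // "feature" is a hypothetical key
    user_id: "user-123",
  };
}

const meta = buildHeliconeMeta(process.env.HELICONE_API_KEY);
console.log(meta.cache, meta.retry);
```

The resulting object is passed as the heliconeMeta option when constructing HeliconeAsyncOpenAI, as in the example above.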
async function generateResponse() {
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" },
      ],
      max_tokens: 150,
    });
    return response.data.choices[0].message;
  } catch (error) {
    console.error("Error:", error);
  }
}
try {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error.response) {
    // The API responded with a non-2xx status
    console.error(error.response.status);
    console.error(error.response.data);
  } else {
    // The request never reached the API (network error, bad config, etc.)
    console.error(error.message);
  }
}
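Beyond the retry flag in heliconeMeta, transient failures can also be retried client-side. A sketch of a generic backoff wrapper (withRetry is a hypothetical helper, not a package API):

```javascript
// Hypothetical retry helper with exponential backoff; not part of
// @helicone/async, just a common pattern around calls like the ones above.
async function withRetry(fn, attempts = 3, baseDelayMs = 200) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait baseDelayMs, then 2x, 4x, ... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Usage: wrap the call site, e.g. `withRetry(() => openai.chat.completions.create({ ... }))`, so that only the final failure propagates to your error handler.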
We welcome contributions! Please see our contributing guidelines for details.
Apache-2.0
FAQs
What is @helicone/async?
A Node.js wrapper for logging LLM traces directly to Helicone, bypassing the proxy, with OpenLLMetry.
Is @helicone/async well maintained?
We found that @helicone/async demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 4 open source maintainers collaborating on the project.