@helicone/helicone


A Node.js wrapper for the OpenAI API that logs all requests to Helicone.

Version: 3.1.2 (npm)
Weekly downloads: 1.6K
Maintainers: 2

Helicone OpenAI v4+ Node.js Library

This package is a simple and convenient way to log all requests made through the OpenAI API with Helicone. You can easily track and manage your OpenAI API usage and monitor your GPT models' cost, latency, and performance on the Helicone platform.

Proxy Setup

Installation and Setup

  • To get started, install the @helicone/helicone package:

    npm install @helicone/helicone
    
  • Set HELICONE_API_KEY as an environment variable:

    export HELICONE_API_KEY=<your Helicone API key>
    

    ℹ️ You can also set the Helicone API key in your code (see below).

  • Replace:

    const { ClientOptions, OpenAI } = require("openai");
    

    with:

    const {
      HeliconeProxyOpenAI: OpenAI,
      IHeliconeProxyClientOptions: ClientOptions,
    } = require("@helicone/helicone");
    
  • Make a request. Chat, completion, embedding, etc. usage is equivalent to the OpenAI package.

    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
      heliconeMeta: {
        apiKey: process.env.HELICONE_API_KEY, // Can be set as env variable
        // ... additional helicone meta fields
      },
    });
    
    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello world" }],
    });
    
    console.log(chatCompletion.choices[0].message);
    

Send Feedback

Ensure you store the helicone-id header returned in the original response.

const { data, response } = await openai.chat.completions
  .create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello world" }],
  })
  .withResponse();

const heliconeId = response.headers.get("helicone-id");

await openai.helicone.logFeedback(heliconeId, HeliconeFeedbackRating.Positive); // or Negative

HeliconeMeta options

interface IHeliconeMeta {
  apiKey?: string;
  properties?: { [key: string]: any };
  cache?: boolean;
  retry?: boolean | { [key: string]: any };
  rateLimitPolicy?: string | { [key: string]: any };
  user?: string;
  baseUrl?: string;
  onFeedback?: OnHeliconeFeedback; // Callback after feedback was processed
}

type OnHeliconeLog = (response: Response) => Promise<void>;
type OnHeliconeFeedback = (result: Response) => Promise<void>;
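
As a hedged illustration of the interface above, a heliconeMeta object might look like the following (every field value is a hypothetical example, not a required setting):

```javascript
// Hypothetical heliconeMeta conforming to IHeliconeMeta; all values are illustrative.
const heliconeMeta = {
  apiKey: process.env.HELICONE_API_KEY,
  user: "user-1234",                      // attributes requests to a user in Helicone
  properties: { Environment: "staging" }, // custom tags attached to each logged request
  cache: true,                            // serve repeated identical requests from cache
  onFeedback: async (result) => {
    // result is the fetch-style Response from the feedback endpoint
    console.log("feedback response status:", result.status);
  },
};
```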

Advanced Features Example

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    cache: true,
    retry: true,
    properties: {
      Session: "24",
      Conversation: "support_issue_2",
    },
    rateLimitPolicy: {
      quota: 10,
      time_window: 60,
      segment: "Session",
    },
  },
});
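
For intuition, the rateLimitPolicy above caps requests at quota per time_window seconds, bucketed by the custom property named in segment. Enforcement happens on Helicone's proxy, not in your process; the sliding-window sketch below only illustrates the semantics (the function and names are ours, not part of the package):

```javascript
// Illustrative sliding-window limiter mirroring { quota: 10, time_window: 60, segment: "Session" }.
// NOT how Helicone enforces limits (that happens server-side); this only demonstrates the semantics.
function makeRateLimiter({ quota, time_window }) {
  const hits = new Map(); // segment value -> request timestamps (seconds) inside the window
  return function allow(segmentValue, nowSeconds) {
    const windowStart = nowSeconds - time_window;
    const recent = (hits.get(segmentValue) || []).filter((t) => t > windowStart);
    if (recent.length >= quota) {
      hits.set(segmentValue, recent);
      return false; // this segment is over quota in the current window
    }
    recent.push(nowSeconds);
    hits.set(segmentValue, recent);
    return true;
  };
}

const allow = makeRateLimiter({ quota: 10, time_window: 60 });
for (let i = 0; i < 10; i++) allow("24", i); // first 10 requests from Session "24" pass
// an 11th request in the same window is rejected; requests pass again once old hits age out
```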

Async Setup

Installation and Setup

  • To get started, install the @helicone/helicone package:

    npm install @helicone/helicone
    
  • Set HELICONE_API_KEY as an environment variable:

    export HELICONE_API_KEY=<your Helicone API key>
    

    ℹ️ You can also set the Helicone API key in your code (see below).

  • Replace:

    const { ClientOptions, OpenAI } = require("openai");
    

    with:

    const {
      HeliconeAsyncOpenAI: OpenAI,
      IHeliconeAsyncClientOptions: ClientOptions,
    } = require("@helicone/helicone");
    
  • Make a request. Chat, completion, embedding, etc. usage is equivalent to the OpenAI package.

    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
      heliconeMeta: {
        apiKey: process.env.HELICONE_API_KEY, // Can be set as env variable
        // ... additional helicone meta fields
      },
    });
    
    const chatCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello world" }],
    });
    
    console.log(chatCompletion.choices[0].message);
    

Send Feedback

With async logging, you must retrieve the helicone-id header from the log response (not the LLM response).

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.HELICONE_API_KEY,
    onLog: async (response: Response) => {
      const heliconeId = response.headers.get("helicone-id");
      await openai.helicone.logFeedback(
        heliconeId,
        HeliconeFeedbackRating.Positive
      );
    },
  },
});
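
Because onLog fires before any user rating exists, one common pattern is to stash each helicone-id and submit feedback once the user actually rates the answer. A minimal sketch using a plain Map (the helper names are ours; response is any fetch-style object with a headers.get method):

```javascript
// Hypothetical buffer: your own request key -> helicone-id captured in onLog.
const pendingIds = new Map();

function rememberHeliconeId(requestKey, response) {
  // helicone-id is set on the log response by Helicone
  const heliconeId = response.headers.get("helicone-id");
  if (heliconeId) pendingIds.set(requestKey, heliconeId);
  return heliconeId;
}

function takeHeliconeId(requestKey) {
  const id = pendingIds.get(requestKey);
  pendingIds.delete(requestKey);
  return id; // hand this to openai.helicone.logFeedback(...) when the user rates
}
```

Inside onLog you would call rememberHeliconeId(yourKey, response); later, takeHeliconeId(yourKey) retrieves the id exactly once.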

HeliconeMeta options

Async logging does not support some proxy-only features, such as caching, rate limiting, and retries.

interface IHeliconeMeta {
  apiKey?: string;
  properties?: { [key: string]: any };
  user?: string;
  baseUrl?: string;
  onLog?: OnHeliconeLog;
  onFeedback?: OnHeliconeFeedback;
}

type OnHeliconeLog = (response: Response) => Promise<void>;
type OnHeliconeFeedback = (result: Response) => Promise<void>;

For more information, see our documentation.

Keywords

openai

Package last updated on 12 Aug 2024
