@revenium/openai — version 1.1.2, published on npm (4 maintainers)

Revenium OpenAI Middleware for Node.js


Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI

A professional-grade Node.js middleware that integrates seamlessly with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. It features native TypeScript support with zero type casting required and supports the Chat Completions, Embeddings, and Responses APIs.

Go-aligned API for consistent cross-language development!

Features

  • Go-Aligned API - Same Initialize()/GetClient() pattern as Go implementation
  • Seamless Integration - Native TypeScript support, no type casting required
  • Optional Metadata - Track users, organizations, and business context (all fields optional)
  • Multiple API Support - Chat Completions, Embeddings, and Responses API
  • Azure OpenAI Support - Full Azure OpenAI integration with automatic detection
  • Type Safety - Complete TypeScript support with IntelliSense
  • Streaming Support - Handles regular and streaming requests seamlessly
  • Fire-and-Forget - Never blocks your application flow
  • Automatic .env Loading - Loads environment variables automatically

Getting Started

1. Create Project Directory

# Create project directory and navigate to it
mkdir my-openai-project
cd my-openai-project

# Initialize npm project
npm init -y

# Install packages
npm install @revenium/openai openai dotenv tsx
npm install --save-dev typescript @types/node

2. Configure Environment Variables

Create a .env file in your project root. See .env.example for all available configuration options.

Minimum required configuration:

REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.ai
OPENAI_API_KEY=sk_your_openai_api_key_here

NOTE: Replace the placeholder values with your actual API keys.

3. Run Your First Example

For complete examples and usage patterns, see examples/README.md.
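
As a quick orientation, a first script might look like the following sketch. It assumes only the Go-aligned Initialize()/GetClient() API described later in this README, follows the call shape shown in the per-request example under Prompt Capture, and uses a placeholder model and prompt; the canonical versions live in examples/README.md.

```typescript
// index.ts — minimal first example (a sketch, not the canonical version)
import { Initialize, GetClient } from '@revenium/openai';

async function main(): Promise<void> {
  // Loads REVENIUM_METERING_API_KEY and OPENAI_API_KEY from .env automatically
  Initialize();

  const client = GetClient();
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini', // any OpenAI chat model works
    messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
  });

  // The original OpenAI response is returned unchanged;
  // usage is reported to Revenium in the background.
  console.log(response.choices[0].message.content);
}

main().catch(console.error);
```

Run it with npx tsx index.ts.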

Requirements

  • Node.js 16+
  • OpenAI package v5.0.0 or later
  • TypeScript 5.0+ (for TypeScript projects)

What Gets Tracked

The middleware automatically captures comprehensive usage data:

Usage Metrics

  • Token Counts - Input tokens, output tokens, total tokens
  • Model Information - Model name, provider (OpenAI/Azure), API version
  • Request Timing - Request duration, response time
  • Cost Calculation - Estimated costs based on current pricing

Business Context (Optional)

  • User Tracking - Subscriber ID, email, credentials
  • Organization Data - Organization ID, subscription ID, product ID
  • Task Classification - Task type, agent identifier, trace ID
  • Quality Metrics - Response quality scores, task identifiers

Technical Details

  • API Endpoints - Chat completions, embeddings, responses API
  • Request Types - Streaming vs non-streaming
  • Error Tracking - Failed requests, error types, retry attempts
  • Environment Info - Development vs production usage

API Overview

The middleware provides a Go-aligned API with the following main functions:

  • Initialize(config?) - Initialize the middleware (from environment or explicit config)
  • GetClient() - Get the global Revenium client instance
  • Configure(config) - Alias for Initialize() for programmatic configuration
  • IsInitialized() - Check if the middleware is initialized
  • Reset() - Reset the global client (useful for testing)

For complete API documentation and usage examples, see examples/README.md.

Tool Metering

Track execution of custom tools and external API calls with automatic timing, error handling, and metadata collection.

Quick Example

import { meterTool, setToolContext } from '@revenium/openai';

setToolContext({
  agent: 'my-agent',
  traceId: 'session-123'
});

const result = await meterTool('weather-api', async () => {
  return await fetch('https://api.example.com/weather');
}, {
  operation: 'get_forecast',
  outputFields: ['temperature', 'humidity']
});

Functions

meterTool(toolId, fn, metadata?)

Wraps a function with automatic metering. Captures duration, success/failure, and errors. Returns function result unchanged.

reportToolCall(toolId, report)

Manually report a tool call that was already executed. Useful when wrapping is not possible.

Context Management

  • setToolContext(ctx) - Set context for all subsequent tool calls
  • getToolContext() - Get current context
  • clearToolContext() - Clear context
  • runWithToolContext(ctx, fn) - Run function with scoped context

Metadata Options

| Field | Description |
| --- | --- |
| operation | Tool operation name (e.g., "search", "scrape") |
| outputFields | Array of field names to auto-extract from the result |
| usageMetadata | Custom metrics (e.g., tokens, results count) |
| agent, traceId, etc. | Context fields (inherited from setToolContext) |

Metadata Fields

The middleware supports the following optional metadata fields for tracking:

| Field | Type | Description |
| --- | --- | --- |
| traceId | string | Unique identifier for session or conversation tracking |
| taskType | string | Type of AI task being performed (e.g., "chat", "embedding") |
| agent | string | AI agent or bot identifier |
| organizationName | string | Organization or company name (used for lookup/auto-creation) |
| productName | string | Your product or feature name (used for lookup/auto-creation) |
| subscriptionId | string | Subscription plan identifier |
| responseQualityScore | number | Custom quality rating (0.0-1.0) |
| subscriber.id | string | Unique user identifier |
| subscriber.email | string | User email address |
| subscriber.credential | object | Authentication credential (name and value fields) |

All metadata fields are optional. For complete metadata documentation and usage examples, see examples/README.md.
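
For illustration, a populated metadata object using the fields above might look like the following sketch. Every value is a hypothetical placeholder, and the object is passed per-request the same way as the capturePrompts example under Prompt Capture.

```typescript
// Sketch: a usageMetadata payload built from the optional fields above.
// Every value here is a hypothetical placeholder.
const usageMetadata = {
  traceId: 'session-abc-123',
  taskType: 'chat',
  agent: 'support-bot',
  organizationName: 'Acme Corp',
  productName: 'Acme Assistant',
  subscriptionId: 'plan-pro',
  responseQualityScore: 0.9, // 0.0-1.0
  subscriber: {
    id: 'user-42',
    email: 'user@example.com',
  },
};

// Because all fields are optional, any subset is valid:
const minimal = { traceId: 'session-abc-123' };

console.log(Object.keys(usageMetadata).length, Object.keys(minimal).length); // 8 1
```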

Trace Visualization Fields

The middleware automatically captures trace visualization fields for distributed tracing and analytics:

| Field | Type | Description | Environment Variable |
| --- | --- | --- | --- |
| environment | string | Deployment environment (production, staging, development) | REVENIUM_ENVIRONMENT, NODE_ENV |
| operationType | string | Operation classification (CHAT, EMBED, etc.) - automatically detected | N/A (auto-detected) |
| operationSubtype | string | Additional detail (function_call, etc.) - automatically detected | N/A (auto-detected) |
| retryNumber | number | Retry attempt number (0 for first attempt, 1+ for retries) | REVENIUM_RETRY_NUMBER |
| parentTransactionId | string | Parent transaction reference for distributed tracing | REVENIUM_PARENT_TRANSACTION_ID |
| transactionName | string | Human-friendly operation label | REVENIUM_TRANSACTION_NAME |
| region | string | Cloud region (us-east-1, etc.) - auto-detected from AWS/Azure/GCP | AWS_REGION, REVENIUM_REGION |
| credentialAlias | string | Human-readable credential name | REVENIUM_CREDENTIAL_ALIAS |
| traceType | string | Categorical identifier (alphanumeric, hyphens, underscores only, max 128 chars) | REVENIUM_TRACE_TYPE |
| traceName | string | Human-readable label for trace instances (max 256 chars) | REVENIUM_TRACE_NAME |

All trace visualization fields are optional. The middleware will automatically detect and populate these fields when possible.
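
The documented constraints on traceType and traceName can be checked locally before setting the variables. The helpers below are a sketch mirroring the limits stated in the table above; the middleware's own validation may differ.

```typescript
// Sketch: local checks mirroring the documented constraints.
// traceType: alphanumeric, hyphens, underscores only, max 128 chars.
// traceName: free-form label, max 256 chars.
function isValidTraceType(value: string): boolean {
  return /^[A-Za-z0-9_-]{1,128}$/.test(value);
}

function isValidTraceName(value: string): boolean {
  return value.length > 0 && value.length <= 256;
}

console.log(isValidTraceType('customer_support'));      // true
console.log(isValidTraceType('has spaces!'));           // false
console.log(isValidTraceName('Support Ticket #12345')); // true
```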

Example Configuration

REVENIUM_ENVIRONMENT=production
REVENIUM_REGION=us-east-1
REVENIUM_CREDENTIAL_ALIAS=OpenAI Production Key
REVENIUM_TRACE_TYPE=customer_support
REVENIUM_TRACE_NAME=Support Ticket #12345
REVENIUM_PARENT_TRANSACTION_ID=parent-txn-123
REVENIUM_TRANSACTION_NAME=Answer Customer Question
REVENIUM_RETRY_NUMBER=0

Terminal Summary Output

The middleware can optionally print a cost/metrics summary to the terminal after each API request. This is useful during development to see token usage and estimated costs without checking the dashboard.

Enabling Terminal Summary

Set the following environment variables:

# Use 'true' or 'human' for human-readable output, 'json' for JSON output
REVENIUM_PRINT_SUMMARY=true
REVENIUM_TEAM_ID=your-team-id-here

Or configure programmatically:

Initialize({
  reveniumApiKey: "hak_your-api-key",
  printSummary: true, // or 'human' or 'json'
  teamId: "your-team-id",
});

Output Formats

Human-Readable Format (default)

Set REVENIUM_PRINT_SUMMARY=true or REVENIUM_PRINT_SUMMARY=human:

============================================================
📊 REVENIUM USAGE SUMMARY
============================================================
🤖 Model: gpt-4o-mini
🏢 Provider: OpenAI
⏱️  Duration: 1.23s

💬 Token Usage:
   📥 Input Tokens:  150
   📤 Output Tokens: 250
   📊 Total Tokens:  400

💰 Cost: $0.000450
============================================================

JSON Format

Set REVENIUM_PRINT_SUMMARY=json for machine-readable output:

{
  "model": "gpt-4o-mini",
  "provider": "OpenAI",
  "durationSeconds": 1.23,
  "inputTokenCount": 150,
  "outputTokenCount": 250,
  "totalTokenCount": 400,
  "cost": 0.00045,
  "traceId": "abc-123"
}

The JSON output includes all the same fields as the human-readable format and is ideal for log parsing, automation, and integration with other tools.

Note: The teamId is required to display cost information. If not provided, the summary will show token usage but the cost field will be null with a costStatus of "unavailable". When teamId is set but the cost hasn't been aggregated yet, the cost field will be null with a costStatus of "pending". You can find your team ID in the Revenium web application.
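
Because the JSON format is one object per line, it is easy to post-process in logs. The helper below is a sketch, not part of the package; the cost/costStatus handling follows the note above.

```typescript
// Sketch: classifying the cost field of a parsed JSON summary line.
interface UsageSummary {
  model: string;
  totalTokenCount: number;
  cost: number | null;
  costStatus?: 'unavailable' | 'pending';
}

function describeCost(summary: UsageSummary): string {
  if (summary.cost !== null) return `$${summary.cost.toFixed(6)}`;
  // cost is null: distinguish "not yet aggregated" from "teamId missing"
  return summary.costStatus === 'pending'
    ? 'cost pending aggregation'
    : 'cost unavailable (set teamId)';
}

const line = '{"model":"gpt-4o-mini","totalTokenCount":400,"cost":0.00045}';
const summary: UsageSummary = JSON.parse(line);
console.log(describeCost(summary)); // $0.000450
```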

Prompt Capture

The middleware can capture prompts and responses for analysis. This feature is disabled by default for privacy and performance.

Configuration

Enable prompt capture globally via environment variable:

REVENIUM_CAPTURE_PROMPTS=true
REVENIUM_MAX_PROMPT_SIZE=50000  # Optional: default is 50000 characters

Or enable per-request via metadata:

const response = await client.chat.completions.create(
  {
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  },
  {
    usageMetadata: { capturePrompts: true },
  },
);

Security

Captured prompts are automatically sanitized to remove sensitive credentials:

  • API keys (OpenAI, Anthropic, Perplexity)
  • AWS access keys
  • GitHub tokens
  • JWT tokens
  • Bearer tokens
  • Passwords and secrets

Prompts exceeding maxPromptSize are truncated and marked with promptsTruncated: true.
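
The package's sanitizer is internal, but purely as an illustration of the idea, a simplified redaction pass over some of the credential shapes listed above might look like this sketch (the patterns are illustrative, not the middleware's actual rules):

```typescript
// Illustration only: redacting credential-shaped substrings before storage.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9_-]{20,}/g,    // OpenAI-style API keys
  /AKIA[0-9A-Z]{16}/g,         // AWS access key IDs
  /ghp_[A-Za-z0-9]{36}/g,      // GitHub personal access tokens
  /Bearer\s+[A-Za-z0-9._-]+/g, // Bearer tokens
];

function redact(text: string): string {
  // Apply each pattern in turn, replacing matches with a marker.
  return SECRET_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, '[REDACTED]'),
    text,
  );
}

console.log(redact('key is sk-abcdefghijklmnopqrstuvwxyz123456'));
// key is [REDACTED]
```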

Configuration Options

Environment Variables

For a complete list of all available environment variables with examples, see .env.example.

Examples

The package includes comprehensive examples in the examples/ directory.

Getting Started

npm run example:getting-started

OpenAI Examples

| Example | Command | Description |
| --- | --- | --- |
| openai/basic.ts | npm run example:openai-basic | Chat completions and embeddings |
| openai/metadata.ts | npm run example:openai-metadata | All metadata fields demonstration |
| openai/streaming.ts | npm run example:openai-stream | Streaming chat completions |
| openai/responses-basic.ts | npm run example:openai-res-basic | Responses API usage |
| openai/responses-embed.ts | npm run example:openai-res-embed | Embeddings with Responses API |
| openai/responses-streaming.ts | npm run example:openai-res-stream | Streaming Responses API |

Azure OpenAI Examples

| Example | Command | Description |
| --- | --- | --- |
| azure/basic.ts | npm run example:azure-basic | Azure chat completions |
| azure/stream.ts | npm run example:azure-stream | Azure streaming |
| azure/responses-basic.ts | npm run example:azure-res-basic | Azure Responses API |
| azure/responses-stream.ts | npm run example:azure-res-stream | Azure Responses API streaming |

For complete example documentation, setup instructions, and usage patterns, see examples/README.md.

How It Works

  • Initialize: Call Initialize() to set up the middleware with your configuration
  • Get Client: Call GetClient() to get a wrapped OpenAI client instance
  • Make Requests: Use the client normally - all requests are automatically tracked
  • Async Tracking: Usage data is sent to Revenium in the background (fire-and-forget)
  • Transparent Response: Original OpenAI responses are returned unchanged

The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.

Supported APIs:

  • Chat Completions API (client.chat().completions().create())
  • Embeddings API (client.embeddings().create())
  • Responses API (client.responses().create() and client.responses().createStreaming())

Troubleshooting

Common Issues

No tracking data appears:

  • Verify environment variables are set correctly in .env
  • Enable debug logging by setting REVENIUM_DEBUG=true in .env
  • Check console for [Revenium] log messages
  • Verify your REVENIUM_METERING_API_KEY is valid

Client not initialized error:

  • Make sure you call Initialize() before GetClient()
  • Check that your .env file is in the project root
  • Verify REVENIUM_METERING_API_KEY is set

Azure OpenAI not working:

  • Verify all Azure environment variables are set (see .env.example)
  • Check that AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are correct
  • Ensure you're using a valid deployment name in the model parameter

Debug Mode

Enable detailed logging by adding to your .env:

REVENIUM_DEBUG=true

Getting Help

If issues persist, see the Support section at the end of this README.

Supported Models

This middleware works with any OpenAI model. For the complete model list, see the OpenAI Models Documentation.

API Support Matrix

The following table shows what has been tested and verified with working examples:

| Feature | Chat Completions | Embeddings | Responses API |
| --- | --- | --- | --- |
| OpenAI Basic | Yes | Yes | Yes |
| OpenAI Streaming | Yes | No | Yes |
| Azure Basic | Yes | No | Yes |
| Azure Streaming | Yes | No | Yes |
| Metadata Tracking | Yes | Yes | Yes |
| Token Counting | Yes | Yes | Yes |

Note: "Yes" = Tested with working examples in examples/ directory

Documentation

For detailed documentation, visit docs.revenium.io

Contributing

See CONTRIBUTING.md

Testing

The middleware includes comprehensive automated tests that fail the build when something is wrong.

Run All Tests

Run unit, integration, and performance tests:

npm test

Run Tests with Coverage

npm run test:coverage

Run Tests in Watch Mode

npm run test:watch

Test Requirements

All tests are designed to:

  • ✅ Fail the build when something is wrong (process.exit(1))
  • ✅ Pass when everything works correctly (process.exit(0))
  • ✅ Provide clear error messages
  • ✅ Test trace field validation, environment detection, and region detection

Code of Conduct

See CODE_OF_CONDUCT.md

Security

See SECURITY.md

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For issues, feature requests, or contributions, see CONTRIBUTING.md.

Built by Revenium

Keywords

openai

Package last updated on 19 Feb 2026