
# @revenium/openai
Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI
A professional-grade Node.js middleware that integrates seamlessly with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. It features native TypeScript support with zero type casting required and supports the Chat Completions, Embeddings, and Responses APIs.
Go-aligned API for consistent cross-language development: the same `Initialize()`/`GetClient()` pattern as the Go implementation.

```bash
# Create project directory and navigate to it
mkdir my-openai-project
cd my-openai-project

# Initialize npm project
npm init -y

# Install packages
npm install @revenium/openai openai dotenv tsx
npm install --save-dev typescript @types/node
```
Create a .env file in your project root. See .env.example for all available configuration options.
Minimum required configuration:
```bash
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.ai
OPENAI_API_KEY=sk_your_openai_api_key_here
```
NOTE: Replace the placeholder values with your actual API keys.
For complete examples and usage patterns, see examples/README.md.
The middleware automatically captures comprehensive usage data.
The middleware provides a Go-aligned API with the following main functions:
- `Initialize(config?)` - Initialize the middleware (from environment or explicit config)
- `GetClient()` - Get the global Revenium client instance
- `Configure(config)` - Alias for `Initialize()` for programmatic configuration
- `IsInitialized()` - Check if the middleware is initialized
- `Reset()` - Reset the global client (useful for testing)

For complete API documentation and usage examples, see examples/README.md.
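To make the lifecycle these functions imply concrete, here is a sketch of the contract only, not the package's actual internals; the `ReveniumConfig` shape is an assumption for illustration:

```typescript
// Sketch of the Initialize()/GetClient() singleton contract (illustrative).
interface ReveniumConfig {
  reveniumApiKey?: string; // falls back to REVENIUM_METERING_API_KEY
}

let globalClient: { config: ReveniumConfig } | null = null;

function Initialize(config?: ReveniumConfig): void {
  const apiKey = config?.reveniumApiKey ?? process.env.REVENIUM_METERING_API_KEY;
  if (!apiKey) throw new Error("Missing Revenium API key");
  globalClient = { config: { reveniumApiKey: apiKey } };
}

function IsInitialized(): boolean {
  return globalClient !== null;
}

function GetClient(): { config: ReveniumConfig } {
  // Mirrors the "Client not initialized" error described in Troubleshooting.
  if (!globalClient) throw new Error("Client not initialized: call Initialize() first");
  return globalClient;
}

function Reset(): void {
  globalClient = null; // useful for testing
}
```

The point of the pattern is that `Initialize()` must run once before any `GetClient()` call, and `Reset()` returns the module to its pristine state.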
Track execution of custom tools and external API calls with automatic timing, error handling, and metadata collection.
```typescript
import { meterTool, setToolContext } from '@revenium/openai';

setToolContext({
  agent: 'my-agent',
  traceId: 'session-123'
});

const result = await meterTool('weather-api', async () => {
  return await fetch('https://api.example.com/weather');
}, {
  operation: 'get_forecast',
  outputFields: ['temperature', 'humidity']
});
```
`meterTool(toolId, fn, metadata?)`

Wraps a function with automatic metering. Captures duration, success/failure, and errors. Returns the function result unchanged.

`reportToolCall(toolId, report)`

Manually report a tool call that was already executed. Useful when wrapping is not possible.
Context Management
- `setToolContext(ctx)` - Set context for all subsequent tool calls
- `getToolContext()` - Get current context
- `clearToolContext()` - Clear context
- `runWithToolContext(ctx, fn)` - Run function with scoped context

| Field | Description |
|---|---|
| operation | Tool operation name (e.g., "search", "scrape") |
| outputFields | Array of field names to auto-extract from result |
| usageMetadata | Custom metrics (e.g., tokens, results count) |
| agent, traceId, etc. | Context fields (inherited from setToolContext) |
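The scoped-context behavior of `runWithToolContext` can be sketched with Node's `AsyncLocalStorage`; this is an illustration of the semantics, not the package's actual implementation:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

interface ToolContext {
  agent?: string;
  traceId?: string;
}

// One global context (setToolContext) plus an async-scoped override.
const scoped = new AsyncLocalStorage<ToolContext>();
let globalCtx: ToolContext = {};

function setToolContext(ctx: ToolContext): void {
  globalCtx = ctx;
}

function getToolContext(): ToolContext {
  // A scoped context (runWithToolContext) wins over the global one.
  return scoped.getStore() ?? globalCtx;
}

function clearToolContext(): void {
  globalCtx = {};
}

function runWithToolContext<T>(ctx: ToolContext, fn: () => T): T {
  return scoped.run(ctx, fn);
}
```

The key property: context set inside `runWithToolContext` is visible only within that call chain, while `setToolContext` affects every subsequent tool call.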
The middleware supports the following optional metadata fields for tracking:
| Field | Type | Description |
|---|---|---|
| traceId | string | Unique identifier for session or conversation tracking |
| taskType | string | Type of AI task being performed (e.g., "chat", "embedding") |
| agent | string | AI agent or bot identifier |
| organizationName | string | Organization or company name (used for lookup/auto-creation) |
| productName | string | Your product or feature name (used for lookup/auto-creation) |
| subscriptionId | string | Subscription plan identifier |
| responseQualityScore | number | Custom quality rating (0.0-1.0) |
| subscriber.id | string | Unique user identifier |
| subscriber.email | string | User email address |
| subscriber.credential | object | Authentication credential (name and value fields) |
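As a hedged illustration, a metadata object can be assembled from the fields in the table above; the `buildMetadata` helper and its range check are hypothetical, added only to show the documented 0.0-1.0 constraint on `responseQualityScore`:

```typescript
// Illustrative metadata payload using field names from the table above.
interface UsageMetadata {
  traceId?: string;
  taskType?: string;
  agent?: string;
  subscriptionId?: string;
  responseQualityScore?: number; // documented range: 0.0-1.0
  subscriber?: {
    id?: string;
    email?: string;
    credential?: { name: string; value: string };
  };
}

// Hypothetical helper: validates the quality score range before use.
function buildMetadata(meta: UsageMetadata): UsageMetadata {
  const score = meta.responseQualityScore;
  if (score !== undefined && (score < 0 || score > 1)) {
    throw new Error("responseQualityScore must be between 0.0 and 1.0");
  }
  return meta;
}

const usageMetadata = buildMetadata({
  traceId: "session-123",
  taskType: "chat",
  agent: "support-bot",
  responseQualityScore: 0.9,
  subscriber: { id: "user-42", email: "user@example.com" },
});
```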
All metadata fields are optional. For complete metadata documentation and usage examples, see:

- examples/README.md - All usage examples

The middleware automatically captures trace visualization fields for distributed tracing and analytics:
| Field | Type | Description | Environment Variable |
|---|---|---|---|
| environment | string | Deployment environment (production, staging, development) | REVENIUM_ENVIRONMENT, NODE_ENV |
| operationType | string | Operation classification (CHAT, EMBED, etc.) - automatically detected | N/A (auto-detected) |
| operationSubtype | string | Additional detail (function_call, etc.) - automatically detected | N/A (auto-detected) |
| retryNumber | number | Retry attempt number (0 for first attempt, 1+ for retries) | REVENIUM_RETRY_NUMBER |
| parentTransactionId | string | Parent transaction reference for distributed tracing | REVENIUM_PARENT_TRANSACTION_ID |
| transactionName | string | Human-friendly operation label | REVENIUM_TRANSACTION_NAME |
| region | string | Cloud region (us-east-1, etc.) - auto-detected from AWS/Azure/GCP | AWS_REGION, REVENIUM_REGION |
| credentialAlias | string | Human-readable credential name | REVENIUM_CREDENTIAL_ALIAS |
| traceType | string | Categorical identifier (alphanumeric, hyphens, underscores only, max 128 chars) | REVENIUM_TRACE_TYPE |
| traceName | string | Human-readable label for trace instances (max 256 chars) | REVENIUM_TRACE_NAME |
All trace visualization fields are optional. The middleware will automatically detect and populate these fields when possible.
```bash
REVENIUM_ENVIRONMENT=production
REVENIUM_REGION=us-east-1
REVENIUM_CREDENTIAL_ALIAS=OpenAI Production Key
REVENIUM_TRACE_TYPE=customer_support
REVENIUM_TRACE_NAME=Support Ticket #12345
REVENIUM_PARENT_TRANSACTION_ID=parent-txn-123
REVENIUM_TRANSACTION_NAME=Answer Customer Question
REVENIUM_RETRY_NUMBER=0
```
The middleware can optionally print a cost/metrics summary to the terminal after each API request. This is useful during development to see token usage and estimated costs without checking the dashboard.
Set the following environment variables:
```bash
# Use 'true' or 'human' for human-readable output, 'json' for JSON output
REVENIUM_PRINT_SUMMARY=true
REVENIUM_TEAM_ID=your-team-id-here
```
Or configure programmatically:
```typescript
Initialize({
  reveniumApiKey: "hak_your-api-key",
  printSummary: true, // or 'human' or 'json'
  teamId: "your-team-id",
});
```
Set REVENIUM_PRINT_SUMMARY=true or REVENIUM_PRINT_SUMMARY=human:
```text
============================================================
📊 REVENIUM USAGE SUMMARY
============================================================
🤖 Model: gpt-4o-mini
🏢 Provider: OpenAI
⏱️ Duration: 1.23s
💬 Token Usage:
   📥 Input Tokens: 150
   📤 Output Tokens: 250
   📊 Total Tokens: 400
💰 Cost: $0.000450
============================================================
```
Set REVENIUM_PRINT_SUMMARY=json for machine-readable output:
```json
{
  "model": "gpt-4o-mini",
  "provider": "OpenAI",
  "durationSeconds": 1.23,
  "inputTokenCount": 150,
  "outputTokenCount": 250,
  "totalTokenCount": 400,
  "cost": 0.00045,
  "traceId": "abc-123"
}
```
The JSON output includes all the same fields as the human-readable format and is ideal for log parsing, automation, and integration with other tools.
Note: The teamId is required to display cost information. If not provided, the summary will show token usage but the cost field will be null with a costStatus of "unavailable". When teamId is set but the cost hasn't been aggregated yet, the cost field will be null with a costStatus of "pending". You can find your team ID in the Revenium web application.
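The rule described in this note can be sketched as a small decision function; this is illustrative only, and the `resolveCost` helper plus the "available" status name are assumptions, not the package's API:

```typescript
// Sketch of the cost/costStatus rule described above:
// no teamId            -> cost null, costStatus "unavailable"
// teamId, no aggregate -> cost null, costStatus "pending"
interface CostResult {
  cost: number | null;
  costStatus: "available" | "pending" | "unavailable"; // "available" is assumed
}

function resolveCost(
  teamId: string | undefined,
  aggregatedCost: number | null,
): CostResult {
  if (!teamId) return { cost: null, costStatus: "unavailable" };
  if (aggregatedCost === null) return { cost: null, costStatus: "pending" };
  return { cost: aggregatedCost, costStatus: "available" };
}
```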
The middleware can capture prompts and responses for analysis. This feature is disabled by default for privacy and performance.
Enable prompt capture globally via environment variable:
```bash
REVENIUM_CAPTURE_PROMPTS=true
REVENIUM_MAX_PROMPT_SIZE=50000 # Optional: default is 50000 characters
```
Or enable per-request via metadata:
```typescript
const response = await client.chat.completions.create(
  {
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  },
  {
    usageMetadata: { capturePrompts: true },
  },
);
```
Captured prompts are automatically sanitized to remove sensitive credentials.
Prompts exceeding maxPromptSize are truncated and marked with promptsTruncated: true.
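That truncation behavior amounts to a simple length check, sketched here under stated assumptions (the `capturePrompt` helper is hypothetical; only the cut-and-flag behavior is from the documentation above):

```typescript
// Illustrative sketch: prompts longer than maxPromptSize are cut to that
// length and flagged with promptsTruncated: true.
function capturePrompt(
  prompt: string,
  maxPromptSize = 50000, // default mirrors REVENIUM_MAX_PROMPT_SIZE
): { prompt: string; promptsTruncated: boolean } {
  if (prompt.length <= maxPromptSize) {
    return { prompt, promptsTruncated: false };
  }
  return { prompt: prompt.slice(0, maxPromptSize), promptsTruncated: true };
}
```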
For a complete list of all available environment variables with examples, see .env.example.
The package includes comprehensive examples in the examples/ directory.
```bash
npm run example:getting-started
```
| Example | Command | Description |
|---|---|---|
| openai/basic.ts | npm run example:openai-basic | Chat completions and embeddings |
| openai/metadata.ts | npm run example:openai-metadata | All metadata fields demonstration |
| openai/streaming.ts | npm run example:openai-stream | Streaming chat completions |
| openai/responses-basic.ts | npm run example:openai-res-basic | Responses API usage |
| openai/responses-embed.ts | npm run example:openai-res-embed | Embeddings with Responses API |
| openai/responses-streaming.ts | npm run example:openai-res-stream | Streaming Responses API |
| Example | Command | Description |
|---|---|---|
| azure/basic.ts | npm run example:azure-basic | Azure chat completions |
| azure/stream.ts | npm run example:azure-stream | Azure streaming |
| azure/responses-basic.ts | npm run example:azure-res-basic | Azure Responses API |
| azure/responses-stream.ts | npm run example:azure-res-stream | Azure Responses API streaming |
For complete example documentation, setup instructions, and usage patterns, see examples/README.md.
The typical workflow is to call:

- `Initialize()` to set up the middleware with your configuration
- `GetClient()` to get a wrapped OpenAI client instance

The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.
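The never-blocks guarantee can be pictured as a fire-and-forget wrapper around the tracking step; this is a sketch of the pattern, not the library's actual code, and `withTracking` is a hypothetical helper:

```typescript
// Illustrative fire-and-forget pattern: the API call always completes,
// even if the tracking step throws or rejects.
async function withTracking<T>(
  request: () => Promise<T>,
  track: (result: T) => Promise<void>,
): Promise<T> {
  const result = await request();
  // Tracking runs in the background; failures are swallowed so they
  // never propagate to the caller.
  track(result).catch(() => {
    /* optionally log, but never rethrow */
  });
  return result;
}
```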
Supported APIs:
- Chat Completions (`client.chat().completions().create()`)
- Embeddings (`client.embeddings().create()`)
- Responses (`client.responses().create()` and `client.responses().createStreaming()`)

No tracking data appears:
- Check your .env
- Set REVENIUM_DEBUG=true in .env
- Look for [Revenium] log messages
- Verify REVENIUM_METERING_API_KEY is valid

Client not initialized error:
- Call Initialize() before GetClient()
- Make sure the .env file is in the project root
- Check that REVENIUM_METERING_API_KEY is set

Azure OpenAI not working:
- Set the required Azure variables (see .env.example)
- Verify AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are correct
- Check the model parameter

Enable detailed logging by adding to your .env:
```bash
REVENIUM_DEBUG=true
```
If issues persist:
- Enable debug logging (REVENIUM_DEBUG=true)
- Check the examples/ directory for working examples
- See examples/README.md for detailed setup instructions

This middleware works with any OpenAI model. For the complete model list, see the OpenAI Models Documentation.
The following table shows what has been tested and verified with working examples:
| Feature | Chat Completions | Embeddings | Responses API |
|---|---|---|---|
| OpenAI Basic | Yes | Yes | Yes |
| OpenAI Streaming | Yes | No | Yes |
| Azure Basic | Yes | No | Yes |
| Azure Streaming | Yes | No | Yes |
| Metadata Tracking | Yes | Yes | Yes |
| Token Counting | Yes | Yes | Yes |
Note: "Yes" = Tested with working examples in examples/ directory
For detailed documentation, visit docs.revenium.io
See CONTRIBUTING.md
The middleware includes comprehensive automated tests that fail the build when something is wrong.
Run unit, integration, and performance tests:
```bash
npm test
npm run test:coverage
npm run test:watch
```
All tests are designed to:

- Exit with a failure code when something is wrong (`process.exit(1)`)
- Exit with a success code when everything passes (`process.exit(0)`)

See SECURITY.md
This project is licensed under the MIT License - see the LICENSE file for details.
For issues, feature requests, or contributions:
Built by Revenium