Revenium Middleware for Node.js

Unified TypeScript middleware for automatic AI usage tracking across multiple providers

A professional-grade Node.js middleware that integrates with OpenAI, Azure OpenAI, Anthropic, Google (GenAI + Vertex AI), Perplexity, LiteLLM, and fal.ai to provide automatic usage tracking, billing analytics, and metadata collection. It features Go-aligned API patterns, sub-path imports for tree shaking, and dual ESM + CJS output.

Features

  • Multi-Provider Support - OpenAI, Azure OpenAI, Anthropic, Google GenAI, Google Vertex AI, Perplexity, LiteLLM, fal.ai
  • Go-Aligned API - Consistent Initialize() / GetClient() pattern across providers
  • Sub-Path Imports - Tree-shakeable @revenium/middleware/openai, /anthropic, etc.
  • Tool Metering - Track custom tool and external API calls with meterTool() and reportToolCall()
  • Fire-and-Forget - Metering never blocks your application flow
  • Streaming Support - Handles regular and streaming requests for all providers
  • ESM + CJS - Dual output with full TypeScript type definitions
  • Automatic .env Loading - Loads environment variables automatically

Supported Providers

| Provider | Sub-Path Import | API Pattern |
| --- | --- | --- |
| OpenAI | `@revenium/middleware/openai` | `Initialize()` / `GetClient()` |
| Azure OpenAI | `@revenium/middleware/openai` | `Initialize()` / `GetClient()` (auto-detected) |
| Anthropic | `@revenium/middleware/anthropic` | `initialize()` / `configure()` / auto-init on import |
| Google GenAI | `@revenium/middleware/google/genai` | `GoogleGenAIController` / `GoogleGenAIService` |
| Google Vertex AI | `@revenium/middleware/google/vertex` | `VertexAIController` / `VertexAIService` |
| Perplexity | `@revenium/middleware/perplexity` | `Initialize()` / `GetClient()` |
| LiteLLM | `@revenium/middleware/litellm` | `initialize()` / `configure()` / `enable()` / `disable()` |
| fal.ai | `@revenium/middleware/fal` | `Initialize()` / `GetClient()` |
| Tool Metering | `@revenium/middleware/tools` | `meterTool()` / `reportToolCall()` |

Getting Started

Installation

```shell
npm install @revenium/middleware
```

Install the provider SDK you need as a peer dependency:

```shell
npm install openai                    # For OpenAI / Azure OpenAI / Perplexity
npm install @anthropic-ai/sdk         # For Anthropic
npm install @google/genai             # For Google GenAI
npm install google-auth-library       # For Google Vertex AI
npm install @fal-ai/client            # For fal.ai
```

Configuration

Create a .env file in your project root. See .env.example for all available options.

Minimum required:

```shell
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.ai
```

Plus the API key for your chosen provider (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.).
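For example, a minimal `.env` for an OpenAI setup might look like this (all key values are placeholders):

```shell
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.ai
OPENAI_API_KEY=sk-your_openai_api_key_here
```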

Quick Start - OpenAI

```typescript
import { Initialize, GetClient } from "@revenium/middleware/openai";

Initialize();
const client = GetClient();

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
```

Quick Start - Anthropic

```typescript
import "@revenium/middleware/anthropic";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});
```

Quick Start - Google GenAI

```typescript
import { GoogleGenAIController } from "@revenium/middleware/google/genai";

const controller = new GoogleGenAIController({
  reveniumApiKey: process.env.REVENIUM_METERING_API_KEY!,
});

const response = await controller.generateContent({
  model: "gemini-2.0-flash",
  contents: "Hello!",
});
```

Quick Start - Azure OpenAI

```typescript
import { Initialize, GetClient } from "@revenium/middleware/openai";

Initialize();
const client = GetClient();

const response = await client.chat.completions.create({
  model: "my-deployment-name",
  messages: [{ role: "user", content: "Hello!" }],
});
```

Azure is auto-detected when AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT are set.
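As a sketch, the extra Azure variables might look like this in `.env` (values are placeholders; the variable names come from the Provider-Specific Variables table below, and `.env.example` has the full list):

```shell
AZURE_OPENAI_API_KEY=your_azure_openai_key_here
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
AZURE_OPENAI_API_VERSION=2024-02-15-preview
```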

Quick Start - Google Vertex AI

```typescript
import { VertexAIController } from "@revenium/middleware/google/vertex";

const controller = new VertexAIController({
  reveniumApiKey: process.env.REVENIUM_METERING_API_KEY!,
});

const response = await controller.generateContent({
  model: "gemini-2.0-flash",
  contents: "Hello!",
});
```

Quick Start - Perplexity

```typescript
import { Initialize, GetClient } from "@revenium/middleware/perplexity";

Initialize();
const client = GetClient();

const response = await client.chat.completions.create({
  model: "sonar",
  messages: [{ role: "user", content: "Hello!" }],
});
```

Quick Start - fal.ai

Ensure FAL_KEY and REVENIUM_METERING_API_KEY are set in your environment before initializing.

```typescript
import { Initialize, GetClient } from "@revenium/middleware/fal";

Initialize();
const fal = GetClient();

// Image generation (with cost attribution metadata)
const image = await fal.subscribe(
  "fal-ai/flux/schnell",
  {
    input: { prompt: "a futuristic cityscape at sunset" },
  },
  { subscriber: { id: "user_123" }, traceId: "req_abc789" },
);
console.log(image.data.images[0].url);

// Video generation
const video = await fal.subscribe("fal-ai/kling-video/v2/master/text-to-video", {
  input: { prompt: "ocean waves crashing on rocks", duration: 5 },
});
console.log(video.data.video.url);

// Audio generation (text-to-speech)
const audio = await fal.subscribe("fal-ai/kokoro/american-english", {
  input: { prompt: "Hello from Revenium!", voice: "af_heart" },
});
console.log(audio.data.audio.url);

// LLM via OpenRouter
const chat = await fal.subscribe("openrouter/router", {
  input: { prompt: "Explain quantum computing", model: "google/gemini-2.5-flash" },
});
console.log(chat.data.output);
```

The middleware automatically detects the media type from the endpoint ID and routes metering data to the correct Revenium endpoint. The optional metadata parameter enables cost attribution per subscriber, organization, or trace.

Quick Start - LiteLLM

```typescript
import { initialize } from "@revenium/middleware/litellm";

initialize();
```

API Reference

OpenAI

Go-aligned client pattern with Azure auto-detection:

| Function | Description |
| --- | --- |
| `Initialize(config?)` | Initialize the middleware from environment variables or an explicit config |
| `GetClient()` | Get the wrapped OpenAI client instance |
| `Configure(config)` | Alias for `Initialize()` for programmatic configuration |
| `IsInitialized()` | Check whether the middleware is initialized |
| `Reset()` | Reset the global client (useful for testing) |

Anthropic

Auto-initializes on import. Manual control available:

| Function | Description |
| --- | --- |
| `initialize()` | Explicitly initialize the middleware |
| `configure(config)` | Set configuration and patch Anthropic |
| `patchAnthropic()` | Enable request interception |
| `unpatchAnthropic()` | Disable request interception |
| `isInitialized()` | Check initialization status |
| `getStatus()` | Get detailed status, including circuit breaker state |
| `reset()` | Reset the middleware and circuit breaker |

Google GenAI / Vertex AI

Controller and service pattern:

| Export | Description |
| --- | --- |
| `GoogleGenAIController` / `VertexAIController` | Main controller for API calls |
| `GoogleGenAIService` / `VertexAIService` | Service implementation |
| `trackGoogleUsageAsync()` | Manual usage tracking |
| `mapGoogleFinishReason()` | Map finish reasons to a standard format |

Perplexity

Same Go-aligned client pattern as OpenAI:

| Function | Description |
| --- | --- |
| `Initialize(config?)` | Initialize the middleware from environment variables or an explicit config |
| `GetClient()` | Get the wrapped Perplexity client instance |
| `Configure(config)` | Alias for `Initialize()` for programmatic configuration |
| `IsInitialized()` | Check whether the middleware is initialized |
| `Reset()` | Reset the global client (useful for testing) |

fal.ai

Enterprise wrapper for fal.ai's multi-modal platform (images, video, audio, LLM) with automatic metering:

| Function | Description |
| --- | --- |
| `Initialize(config?)` | Initialize the middleware from environment variables or an explicit config |
| `GetClient()` | Get the wrapped fal.ai client instance |
| `Configure(config)` | Alias for `Initialize()` for programmatic configuration |
| `IsInitialized()` | Check whether the middleware is initialized |
| `Reset()` | Reset the global client (useful for testing) |

Client Methods:

| Method | Description |
| --- | --- |
| `fal.subscribe(endpointId, options, metadata?)` | Submit to the queue and wait for the result (recommended for most use cases) |
| `fal.run(endpointId, options, metadata?)` | Execute directly and wait for the result (low-latency models) |
| `fal.stream(endpointId, options, metadata?)` | Stream partial results (real-time LLM or progress tracking) |
| `fal.queue` | Access the underlying queue client directly |
| `fal.realtime` | Access the underlying realtime client directly |
| `fal.storage` | Access the underlying storage client directly |
| `fal.getUnderlyingClient()` | Get the raw `FalClient` instance (not metered) |

The metadata parameter is optional on all methods and enables cost attribution (e.g., { subscriber: { id: '...' }, organizationName, traceId }). It does not affect the fal.ai payload. See Metadata Fields for all supported options.

Media Type Routing:

| Media Type | Metering Endpoint | Detection Examples | Billing Metric |
| --- | --- | --- | --- |
| IMAGE | `/ai/images` | flux, stable-diffusion, recraft, sdxl | Per image (+ resolution) |
| VIDEO | `/ai/video` | kling-video, veo, sora, runway, luma | Seconds of video |
| AUDIO | `/ai/audio` | kokoro, chatterbox, whisper, f5-tts | Characters (TTS) / minutes (transcription) / seconds (generation) |
| CHAT | `/ai/completions` | openrouter | Token usage (input/output/total) |

Media type is detected via a two-phase approach: first by regex matching on the endpoint ID, then corrected by inspecting the response structure (e.g., presence of images, video, audio_url, or usage fields).

Fallback: Unknown endpoints default to IMAGE metering. A warning is logged automatically for unrecognized endpoints.
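The package's actual detection logic is internal; purely as an illustration of the endpoint-ID phase, routing like this could be sketched as follows (the patterns mirror the detection examples in the table above, and `detectMediaType` is a hypothetical name, not a package export):

```typescript
type MediaType = "IMAGE" | "VIDEO" | "AUDIO" | "CHAT";

// Ordered pattern list mirroring the detection examples above.
const MEDIA_PATTERNS: Array<[RegExp, MediaType]> = [
  [/kling-video|veo|sora|runway|luma/, "VIDEO"],
  [/kokoro|chatterbox|whisper|f5-tts/, "AUDIO"],
  [/openrouter/, "CHAT"],
  [/flux|stable-diffusion|recraft|sdxl/, "IMAGE"],
];

// Phase one of the two-phase approach: regex match on the endpoint ID.
// Unknown endpoints fall back to IMAGE, as documented above.
export function detectMediaType(endpointId: string): MediaType {
  for (const [pattern, mediaType] of MEDIA_PATTERNS) {
    if (pattern.test(endpointId)) return mediaType;
  }
  return "IMAGE";
}
```

In the real middleware, phase two then corrects this guess from the response structure, which a pure endpoint-ID match cannot do.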

LiteLLM

HTTP client patching for LiteLLM proxy:

| Function | Description |
| --- | --- |
| `initialize()` | Initialize from environment variables |
| `configure(config)` | Set configuration explicitly |
| `enable()` | Enable HTTP client patching |
| `disable()` | Disable HTTP client patching |
| `isMiddlewareInitialized()` | Check initialization status |
| `getStatus()` | Get status, including the proxy URL |
| `reset()` | Reset all state |

Tool Metering

Track custom tool and external API calls. Available from any provider sub-path or directly via @revenium/middleware/tools.

```typescript
import { meterTool, setToolContext } from "@revenium/middleware/tools";

setToolContext({
  agent: "my-agent",
  traceId: "session-123",
});

const result = await meterTool(
  "weather-api",
  async () => {
    return await fetch("https://api.example.com/weather");
  },
  {
    operation: "get_forecast",
    outputFields: ["temperature", "humidity"],
  },
);
```

Functions

| Function | Description |
| --- | --- |
| `meterTool(toolId, fn, metadata?)` | Wrap a function with automatic metering (timing, success/failure, errors) |
| `reportToolCall(toolId, report)` | Manually report an already-executed tool call |
| `setToolContext(ctx)` | Set context for all subsequent tool calls |
| `getToolContext()` | Get the current context |
| `clearToolContext()` | Clear the context |
| `runWithToolContext(ctx, fn)` | Run a function with scoped context (uses `AsyncLocalStorage`) |
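`runWithToolContext()` is described as using `AsyncLocalStorage`; as a rough, self-contained sketch of that scoping pattern (not the package's actual implementation, and `ToolContext` here is a simplified shape):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

interface ToolContext {
  agent?: string;
  traceId?: string;
}

const storage = new AsyncLocalStorage<ToolContext>();

// Every tool call made inside `fn` (including across awaits) sees `ctx`,
// while calls outside the scope see no context at all.
function runWithToolContext<T>(ctx: ToolContext, fn: () => T): T {
  return storage.run(ctx, fn);
}

function getToolContext(): ToolContext | undefined {
  return storage.getStore();
}
```

This is why a scoped context is safer than a global `setToolContext()` in concurrent servers: each request keeps its own trace rather than overwriting a shared global.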

Tool Metadata Options

| Field | Description |
| --- | --- |
| `operation` | Tool operation name (e.g., `"search"`, `"scrape"`) |
| `outputFields` | Array of field names to auto-extract from the result |
| `usageMetadata` | Custom metrics (e.g., tokens, result count) |
| `agent` | Agent identifier (inherited from context) |
| `traceId` | Trace identifier (inherited from context) |
| `organizationName` | Organization name (inherited from context) |
| `productName` | Product name (inherited from context) |
| `subscriberCredential` | Subscriber credential string (inherited from context) |
| `workflowId` | Workflow identifier (inherited from context) |
| `transactionId` | Transaction identifier (inherited from context) |

Metadata Fields

All fields are optional and can be set per-request via usageMetadata:

| Field | Type | Description |
| --- | --- | --- |
| `traceId` | string | Unique identifier for session or conversation tracking |
| `taskType` | string | Type of AI task (e.g., `"chat"`, `"embedding"`) |
| `agent` | string | AI agent or bot identifier |
| `organizationName` | string | Organization or company name |
| `productName` | string | Product or feature name |
| `subscriptionId` | string | Subscription plan identifier |
| `responseQualityScore` | number | Custom quality rating (0.0-1.0) |
| `subscriber.id` | string | Unique user identifier |
| `subscriber.email` | string | User email address |
| `subscriber.credential` | object | Authentication credential (name and value) |
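Put together, a metadata object matching the fields above might look like this (a sketch with illustrative values; the interface name `UsageMetadata` is not necessarily the package's exported type):

```typescript
interface UsageMetadata {
  traceId?: string;
  taskType?: string;
  agent?: string;
  organizationName?: string;
  productName?: string;
  subscriptionId?: string;
  responseQualityScore?: number; // 0.0-1.0
  subscriber?: {
    id?: string;
    email?: string;
    credential?: { name: string; value: string };
  };
}

// Every field is optional, so you can attribute as much or as little as you track.
const usageMetadata: UsageMetadata = {
  traceId: "session-123",
  taskType: "chat",
  organizationName: "Acme Corp",
  responseQualityScore: 0.9,
  subscriber: { id: "user_123", email: "user@example.com" },
};
```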

Trace Visualization Fields

Environment variables for distributed tracing and analytics:

| Environment Variable | Description |
| --- | --- |
| `REVENIUM_ENVIRONMENT` | Deployment environment (`production`, `staging`, `development`) |
| `REVENIUM_REGION` | Cloud region (auto-detected from AWS/Azure/GCP if not set) |
| `REVENIUM_CREDENTIAL_ALIAS` | Human-readable credential name |
| `REVENIUM_TRACE_TYPE` | Categorical identifier (alphanumeric, hyphens, underscores; max 128 chars) |
| `REVENIUM_TRACE_NAME` | Human-readable label for trace instances (max 256 chars) |
| `REVENIUM_PARENT_TRANSACTION_ID` | Parent transaction reference for distributed tracing |
| `REVENIUM_TRANSACTION_NAME` | Human-friendly operation label |
| `REVENIUM_RETRY_NUMBER` | Retry attempt number (0 for the first attempt) |
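The `REVENIUM_TRACE_TYPE` constraint above (alphanumeric, hyphens, underscores, max 128 chars) is easy to check before deploying; a small illustrative validator (`isValidTraceType` is a hypothetical helper, not part of the package):

```typescript
// Alphanumeric characters, hyphens, and underscores only,
// 1-128 characters, per the REVENIUM_TRACE_TYPE constraint above.
function isValidTraceType(value: string): boolean {
  return /^[A-Za-z0-9_-]{1,128}$/.test(value);
}
```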

Configuration Options

Common Environment Variables

| Variable | Required | Description |
| --- | --- | --- |
| `REVENIUM_METERING_API_KEY` | Yes | Revenium API key (starts with `hak_`) |
| `REVENIUM_METERING_BASE_URL` | No | Revenium API endpoint (default: `https://api.revenium.ai`) |
| `REVENIUM_DEBUG` | No | Enable debug logging (`true`/`false`) |
| `REVENIUM_PRINT_SUMMARY` | No | Terminal summary (`true`, `human`, `json`, `false`) |
| `REVENIUM_TEAM_ID` | No | Team ID for cost display in the terminal summary |
| `REVENIUM_CAPTURE_PROMPTS` | No | Enable prompt capture (`true`/`false`) |

Provider-Specific Variables

| Variable | Provider | Description |
| --- | --- | --- |
| `OPENAI_API_KEY` | OpenAI | OpenAI API key |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI | Azure OpenAI API key |
| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI | Azure resource endpoint URL |
| `AZURE_OPENAI_API_VERSION` | Azure OpenAI | API version (default: `2024-02-15-preview`) |
| `ANTHROPIC_API_KEY` | Anthropic | Anthropic API key |
| `GOOGLE_API_KEY` | Google GenAI | Google AI Studio API key |
| `GOOGLE_CLOUD_PROJECT` | Google Vertex | GCP project ID |
| `GOOGLE_APPLICATION_CREDENTIALS` | Google Vertex | Path to service account key file |
| `GOOGLE_CLOUD_LOCATION` | Google Vertex | GCP region (default: `us-central1`) |
| `PERPLEXITY_API_KEY` | Perplexity | Perplexity API key |
| `LITELLM_PROXY_URL` | LiteLLM | LiteLLM proxy URL (e.g., `http://localhost:4000`) |
| `LITELLM_API_KEY` | LiteLLM | LiteLLM proxy API key |
| `FAL_KEY` | fal.ai | fal.ai API key |

See .env.example for the complete list with all optional configuration.

Troubleshooting

No tracking data appears

  • Verify environment variables are set correctly in .env
  • Enable debug logging: REVENIUM_DEBUG=true
  • Check console for [Revenium] log messages
  • Verify your REVENIUM_METERING_API_KEY is valid

Client not initialized error

  • Make sure you call Initialize() before GetClient()
  • Check that your .env file is in the project root
  • Verify REVENIUM_METERING_API_KEY is set

Azure OpenAI not working

  • Verify all Azure environment variables are set (see .env.example)
  • Check that AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are correct
  • Ensure you're using a valid deployment name in the model parameter

Debug Mode

Enable detailed logging:

```shell
REVENIUM_DEBUG=true
```

Testing

```shell
npm test                 # Run all tests
npm run test:core        # Run core module tests
npm run test:openai      # Run OpenAI tests
npm run test:anthropic   # Run Anthropic tests
npm run test:google      # Run Google tests
npm run test:perplexity  # Run Perplexity tests
npm run test:litellm     # Run LiteLLM tests
npm run test:fal         # Run fal.ai tests
npm run test:integration # Run integration tests
npm run test:coverage    # Run tests with coverage
```

Requirements

  • Node.js 18+
  • TypeScript 5.0+ (for TypeScript projects)
  • At least one provider SDK installed as peer dependency

Contributing

See CONTRIBUTING.md

Code of Conduct

See CODE_OF_CONDUCT.md

Security

See SECURITY.md

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

Built by Revenium


Package last updated on 02 Apr 2026