
@revenium/google
Transparent TypeScript middleware for automatic Revenium usage tracking with Google AI (Gemini)
Automatically track and meter your Google AI API usage with Revenium. This middleware provides seamless integration with Google AI v1 (deprecated) and Google AI v2 (latest), requiring minimal code changes.
SDK Comparison:
| Feature | Google v1 | Google v2 |
|---|---|---|
| Status | Deprecated (EOL: Nov 2025) | Latest |
| Chat | Yes | Yes |
| Streaming | Yes | Yes |
| Embeddings | No | Yes |
| Authentication | API Key | API Key |
| Use Case | Legacy projects | New projects |
Important: The model parameter is required when calling any controller method. You must specify the model explicitly in your code.
This middleware supports all models available in Google AI Studio. The middleware does not maintain a hardcoded list of models, ensuring compatibility with new models as Google releases them.
For the latest available models, see:
Example usage:
// Without metadata (clean, simple)
const result = await controller.createChat(
["Your prompt here"],
"gemini-2.0-flash-001" // required model parameter
);
// With metadata (optional)
const metadata = {
subscriberId: "user-123",
subscriberEmail: "user@example.com",
organizationId: "org-456",
productId: "product-789"
};
const resultWithMetadata = await controller.createChat(
["Your prompt here"],
"gemini-2.0-flash-001",
metadata // optional metadata parameter
);
npm install @revenium/google
Note: The middleware automatically loads your .env file when imported. You don't need to install or configure dotenv separately.
For complete setup instructions and usage examples, see the examples linked throughout this guide.
The following guide walks you through setting up Revenium middleware in your project:
Revenium API Key: from your Revenium Dashboard (keys start with hak_)
Google AI API Key: from Google AI Studio
Create a .env file in your project root:
# Create .env file
echo. > .env   # Windows (Command Prompt)
touch .env     # macOS/Linux
# OR, in PowerShell:
New-Item -Path .env -ItemType File
Copy and paste the following into .env:
# Google AI Configuration
GOOGLE_API_KEY=your_google_ai_api_key_here
# Revenium Configuration
REVENIUM_METERING_API_KEY=your_revenium_api_key_here
# Optional: For development/testing (defaults to https://api.revenium.ai)
# REVENIUM_METERING_BASE_URL=https://api.revenium.ai
# Optional: Enable debug logging
REVENIUM_LOG_LEVEL=INFO
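Because the middleware reads these variables from process.env, a quick standalone check (plain Node, no middleware API involved) can confirm they are visible before you run anything:

```typescript
// Variable names required by the middleware, per the .env template above
const required = ["GOOGLE_API_KEY", "REVENIUM_METERING_API_KEY"];

for (const name of required) {
  if (!process.env[name]) {
    console.warn(`Missing environment variable: ${name}`);
  }
}
```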
Create test-google.js with basic usage:
import { GoogleV2Controller } from "@revenium/google";
const controller = new GoogleV2Controller();
const result = await controller.createChat(
["What is artificial intelligence?"],
"gemini-2.0-flash-001"
);
console.log(result); // inspect the tracked response
For a complete working example, see: Google V2 Basic Example
Note: Need the legacy v1 SDK? See the Legacy Support section below.
Use the examples linked above as a reference for a complete implementation of the middleware in your project.
Add test scripts and module type to your package.json:
{
"name": "my-google-ai-project",
"version": "1.0.0",
"type": "module",
"scripts": {
"test-google": "node test-google.js"
},
"dependencies": {
"@revenium/google": "^1.0.0"
}
}
Important: If you get a "Cannot use import statement outside a module" error, make sure your package.json includes "type": "module" as shown above.
For more advanced usage including streaming, embeddings, and custom metadata, see the complete examples:
If you've cloned this repository from GitHub and want to run the included examples to see how the middleware works (without modifying the middleware source code):
# Clone the repository
git clone https://github.com/revenium/revenium-middleware-google-node.git
cd revenium-middleware-google-node
# Install dependencies
npm install
# Build the packages
npm run build
# Configure environment variables
cp .env.example .env
# Edit .env with your API keys
Using npm scripts:
# Google AI v2 examples (recommended)
npm run example:google:v2:basic # Basic chat completion
npm run example:google:v2:streaming # Streaming response
npm run example:google:v2:embedding # Text embeddings
# Google AI v1 examples (legacy)
npm run example:google:v1:basic # Basic chat
npm run example:google:v1:streaming # Streaming
npm run example:google:v1:chat # Multi-turn conversation
Or use npx tsx directly:
npx tsx packages/google/examples/v2/basic.ts
npx tsx packages/google/examples/v2/streaming.ts
npx tsx packages/google/examples/v2/embedding.ts
For detailed information about each example, see the examples directory.
If you're planning to modify the examples or experiment with the code, the setup above is sufficient. However, if you want to modify the middleware source code itself (files in packages/google/src/), you'll need to understand the development workflow.
See Local Development and Contributing below for the complete development guide.
Already have a project? Just install and replace imports:
npm install @revenium/google
Add to your existing .env file:
GOOGLE_API_KEY=your_google_ai_api_key_here
REVENIUM_METERING_API_KEY=your_revenium_api_key_here
# Optional: For development/testing (defaults to https://api.revenium.ai)
# REVENIUM_METERING_BASE_URL=https://api.revenium.ai
# Optional: Enable debug logging
REVENIUM_LOG_LEVEL=INFO
Before:
import { GoogleGenerativeAI } from "@google/generative-ai";
After:
// Google v2 (Recommended)
import { GoogleV2Controller } from "@revenium/google";
// OR Google v1 (Legacy - deprecated)
import { GoogleV1Controller } from "@revenium/google";
Basic usage pattern:
import { GoogleV2Controller } from "@revenium/google";
const controller = new GoogleV2Controller();
const result = await controller.createChat(
["Your prompt here"],
"gemini-2.0-flash-001" // required model parameter
);
For complete working examples, see:
Basic streaming pattern:
const result = await controller.createStreaming(
["Your prompt here"],
"gemini-2.0-flash-001"
);
for await (const chunk of result.stream) {
// Process streaming chunks
}
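Streaming results are consumed with for await. As a generic illustration, independent of this middleware's actual chunk shape, an async iterable can be drained like this (demoStream is a local stand-in, not a middleware API):

```typescript
// Generic async-iterable consumer: gather streamed items into an array
async function collect<T>(stream: AsyncIterable<T>): Promise<T[]> {
  const items: T[] = [];
  for await (const item of stream) {
    items.push(item);
  }
  return items;
}

// Local async generator standing in for a model's streaming response
async function* demoStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world";
}

collect(demoStream()).then((parts) => {
  console.log(parts.join("")); // "Hello, world"
});
```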
For complete streaming examples with performance tracking and metadata, see:
Note: Embeddings are only available in Google v2. Google v1 does not support embeddings.
Basic embedding pattern:
const result = await controller.createEmbedding(
"Text to embed",
"text-embedding-004"
);
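Embedding vectors are typically compared with cosine similarity. A small standalone helper (independent of the middleware API) might look like:

```typescript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical direction)
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```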
For complete embedding examples with metadata and batch processing, see:
Add business context to your AI usage. See the Revenium Metering API Reference for complete header options.
The usageMetadata parameter supports the following fields for detailed tracking:
| Field | Description | Use Case |
|---|---|---|
traceId | Session/conversation tracking identifier | Distributed tracing, debugging |
taskType | AI task categorization | Cost analysis by workload type |
subscriberId | User identifier | Billing, rate limiting |
subscriberEmail | User email address | Support, compliance |
subscriberCredentialName | Auth credential name | Track API keys |
subscriberCredential | Auth credential value | Security auditing |
organizationId | Organization ID | Multi-tenant cost allocation |
subscriptionId | Subscription plan ID | Plan limit tracking |
productId | Product/feature ID | Feature cost attribution |
agent | AI agent identifier | Distinguish workflows |
responseQualityScore | Quality rating 0.0-1.0 | Performance analysis |
modelSource | Routing layer (e.g., DIRECT, GOOGLE) | Integration analytics |
systemFingerprint | Provider-issued model fingerprint | Debugging and attribution |
temperature | Sampling temperature applied | Compare response creativity |
errorReason | Upstream error message | Error monitoring |
mediationLatency | Gateway latency in ms | Diagnose mediation overhead |
Note: The Google middleware accepts these fields in a flat structure. Internally, subscriber fields are transformed to a nested structure (subscriber.id, subscriber.email, subscriber.credential.name, subscriber.credential.value) before being sent to the Revenium API.
Basic pattern with metadata:
const customMetadata = {
subscriberId: "user-123",
subscriberEmail: "user@example.com",
organizationId: "org-456",
productId: "chat-app"
};
const result = await controller.createChat(
["Your prompt here"],
"gemini-2.0-flash-001", // required model parameter
customMetadata // optional metadata parameter
);
For complete metadata examples with all available fields, see:
Common models used in examples:
- Chat: gemini-2.0-flash-001, gemini-1.5-pro, gemini-1.5-flash
- Embeddings: text-embedding-004

Note: Google v1 SDK does not support embeddings. Use v2 for embedding operations.
Example usage:
// Basic pattern - model is required
const result = await controller.createChat(
["Your prompt here"],
"gemini-2.0-flash-001" // required model parameter
);
For complete examples with different models and use cases, see:
Note: Google v1 SDK does not support embeddings due to API limitations.
Required:
- GOOGLE_API_KEY - Your Google AI API key from Google AI Studio
- REVENIUM_METERING_API_KEY - Your Revenium API key from the Revenium Dashboard

Optional:
- REVENIUM_METERING_BASE_URL - Revenium API base URL (defaults to https://api.revenium.ai; only needed for development/testing)
- REVENIUM_LOG_LEVEL - Log level: DEBUG, INFO, WARN, ERROR (defaults to INFO)

Controllers read configuration from environment variables:
export GOOGLE_API_KEY="your-api-key"
export REVENIUM_METERING_API_KEY="your-revenium-key"
# Optional: Only for development/testing
# export REVENIUM_METERING_BASE_URL="https://api.revenium.ai"
Then instantiate the controller:
import { GoogleV2Controller } from "@revenium/google";
const controller = new GoogleV2Controller();
const response = await controller.createChat(
["Hello world"],
"gemini-2.0-flash-001"
);
For complete configuration examples, see:
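The effect of REVENIUM_LOG_LEVEL can be illustrated with a small standalone gating helper. The level names come from the list above; the helper itself is not the middleware's code:

```typescript
// Log levels in ascending severity, matching the REVENIUM_LOG_LEVEL values
const LEVELS = ["DEBUG", "INFO", "WARN", "ERROR"] as const;
type Level = (typeof LEVELS)[number];

// Read the configured level, defaulting to INFO as documented above
const configured = (process.env.REVENIUM_LOG_LEVEL ?? "INFO") as Level;

// A message is emitted only when its severity is at or above the threshold
function shouldLog(message: Level, threshold: Level = configured): boolean {
  return LEVELS.indexOf(message) >= LEVELS.indexOf(threshold);
}
```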
"Missing API Key" Error
export GOOGLE_API_KEY="your-actual-api-key"
echo $GOOGLE_API_KEY # Verify it's set
"Requests not being tracked"
export REVENIUM_METERING_API_KEY="your-actual-revenium-key"
export REVENIUM_LOG_LEVEL="DEBUG" # Enable debug logging
Module Import Errors
{
"type": "module"
}
For detailed documentation, visit docs.revenium.io.
For contribution guidelines, see CONTRIBUTING.md.
For the security policy, see SECURITY.md.
The Google AI v1 SDK (@google/generative-ai) is deprecated and will reach end-of-life in November 2025. We recommend migrating to v2 (@google/genai).
If you need to use the legacy v1 SDK:
import { GoogleV1Controller } from "@revenium/google";
const controller = new GoogleV1Controller();
const result = await controller.createChat(
["What is artificial intelligence?"],
"gemini-2.0-flash-001"
);
Legacy examples:
This project is licensed under the MIT License - see the LICENSE file for details.
For issues, feature requests, or contributions:
Are you planning to modify the middleware source code itself, not just run the examples? If so, follow the complete development workflow in DEVELOPMENT.md, which covers:
- When npm run build is needed
- Running examples with npx tsx (no rebuild needed)
- Why, after changing files in packages/google/src/, you must run npm run build before testing

Quick Start for Contributors:
# 1. Make changes to source code
vim packages/google/src/v2/googleV2.service.ts
# 2. Rebuild the package
npm run build
# 3. Test your changes
npm run example:google:v2:basic
# 4. See DEVELOPMENT.md for complete workflow
Built by Revenium