
# @with-orbit/sdk

Orbit - AI Cost Analytics SDK. Track, monitor, and optimize your AI spend across OpenAI, Anthropic, and other LLM providers.
## Installation

```bash
npm install @with-orbit/sdk
# or
yarn add @with-orbit/sdk
# or
pnpm add @with-orbit/sdk
```
## Quick Start

Sign up at Orbit and create an API key.

```typescript
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({
  apiKey: 'orb_live_xxxxxxxxxxxxxxxxxxxxxxxx',
  defaultFeature: 'my-app', // Optional: default feature for all events
});
```
Wrap your OpenAI or Anthropic client for automatic tracking:

```typescript
import OpenAI from 'openai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const openai = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'chat-assistant', // Attribute all calls to this feature
});

// All API calls are now automatically tracked!
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});
```
Works with Anthropic too:

```typescript
import Anthropic from '@anthropic-ai/sdk';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const anthropic = orbit.wrapAnthropic(new Anthropic(), {
  feature: 'document-analysis',
});

const message = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Analyze this document...' }],
});
```
Works with Google Gemini (new `@google/genai` SDK):

```typescript
import { GoogleGenAI } from '@google/genai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const ai = orbit.wrapGoogle(new GoogleGenAI({ apiKey: 'your-gemini-key' }), {
  feature: 'chat',
});

const response = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Hello, how are you?',
});
```
Works with Google Gemini (legacy `@google/generative-ai` SDK):

```typescript
import { GoogleGenerativeAI } from '@google/generative-ai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const genAI = orbit.wrapGoogleLegacy(new GoogleGenerativeAI('your-gemini-key'), {
  feature: 'chat',
});

const model = genAI.getGenerativeModel({ model: 'gemini-2.0-flash' });
const result = await model.generateContent('Hello, how are you?');
```
For other providers or custom implementations:

```typescript
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

// Track a successful request
await orbit.track({
  model: 'gpt-4o',
  input_tokens: 150,
  output_tokens: 50,
  latency_ms: 1234,
  feature: 'summarization',
  environment: 'production',
});

// Track an error
await orbit.trackError('gpt-4o', 'rate_limit_exceeded', 'Rate limit exceeded', {
  feature: 'chat-assistant',
  input_tokens: 150,
});
```
## Configuration

```typescript
const orbit = new Orbit({
  // Required
  apiKey: 'orb_live_xxx',

  // Optional
  baseUrl: 'https://app.withorbit.io/api/v1', // Custom API endpoint
  defaultFeature: 'my-app', // Default feature name
  defaultEnvironment: 'production', // 'production' | 'staging' | 'development'
  debug: false, // Enable debug logging

  // Batching (for high-volume applications)
  batchEvents: true, // Batch events before sending
  batchSize: 10, // Max events per batch
  batchInterval: 5000, // Max ms before sending batch

  // Reliability
  retry: true, // Retry failed requests
  maxRetries: 3, // Max retry attempts
});
```
### Event Properties

| Property | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model name (e.g., `'gpt-4o'`, `'claude-3-opus'`) |
| `input_tokens` | number | Yes | Number of input tokens |
| `output_tokens` | number | Yes | Number of output tokens |
| `provider` | string | No | Provider name (auto-detected if not provided) |
| `latency_ms` | number | No | Request latency in milliseconds |
| `feature` | string | No | Feature name for attribution |
| `environment` | string | No | Environment (`'production'`, `'staging'`, `'development'`) |
| `status` | string | No | Request status (`'success'`, `'error'`, `'timeout'`) |
| `error_type` | string | No | Error type if status is `'error'` |
| `error_message` | string | No | Error message if status is `'error'` |
| `user_id` | string | No | Your application's user ID |
| `session_id` | string | No | Session ID for grouping requests |
| `request_id` | string | No | Unique request ID for tracing |
| `task_id` | string | No | Task ID for grouping related LLM calls in agentic workflows |
| `customer_id` | string | No | Customer ID for billing attribution |
| `metadata` | object | No | Additional key-value metadata |
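For illustration, a fully populated event built from the fields above. The field names follow the table; the values are made up, and in practice you would pass such an object to `orbit.track()`.

```typescript
// Illustrative only: an event using every field from the table above.
// Values are hypothetical.
const event = {
  // Required
  model: 'gpt-4o',
  input_tokens: 320,
  output_tokens: 180,

  // Optional attribution and tracing
  provider: 'openai',
  latency_ms: 950,
  feature: 'chat-assistant',
  environment: 'production',
  status: 'success',
  user_id: 'user_123',
  session_id: 'sess_456',
  request_id: 'req_789',
  task_id: 'task_abc123',
  customer_id: 'cust_xyz789',
  metadata: { plan: 'pro', region: 'us-east-1' },
};

// await orbit.track(event);
```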
Feature attribution is Orbit's killer feature: it lets you see exactly which parts of your application are consuming AI resources.

```typescript
// Track different features
await orbit.track({
  model: 'gpt-4o',
  input_tokens: 100,
  output_tokens: 50,
  feature: 'chat-assistant', // <-- Attribute to chat feature
});

await orbit.track({
  model: 'gpt-4o',
  input_tokens: 500,
  output_tokens: 200,
  feature: 'document-analysis', // <-- Attribute to doc analysis
});
```
The Orbit dashboard then breaks spend down by feature, so you can compare, say, `chat-assistant` against `document-analysis`.
Track multi-step agentic workflows by grouping related LLM calls under a task:

```typescript
// All calls with the same task_id are grouped together
const openai = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'ai-agent',
  task_id: 'task_abc123', // Group all LLM calls for this task
  customer_id: 'cust_xyz789', // Attribute costs to this customer
});

// Step 1: Plan
await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Plan how to analyze this data...' }],
});

// Step 2: Execute
await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Now execute the analysis...' }],
});

// Both calls are tracked under task_abc123
```
In the Orbit dashboard, you can then see the combined cost of each task, with spend attributed to the right customer.
Track usage across different environments:

```typescript
const orbit = new Orbit({
  apiKey: 'orb_live_xxx',
  defaultEnvironment: process.env.NODE_ENV === 'production' ? 'production' : 'development',
});
```
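If your deployment stages don't map one-to-one onto `NODE_ENV` values, a small helper can normalize them to the three environments Orbit recognizes. The `resolveEnvironment` function below is a hypothetical sketch, not part of the SDK:

```typescript
// Hypothetical helper: normalize arbitrary NODE_ENV values to the three
// environments Orbit recognizes.
type OrbitEnvironment = 'production' | 'staging' | 'development';

function resolveEnvironment(nodeEnv: string | undefined): OrbitEnvironment {
  switch (nodeEnv) {
    case 'production':
      return 'production';
    case 'staging':
      return 'staging';
    default:
      return 'development'; // e.g. 'test', 'dev', or unset
  }
}
```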
For serverless or short-lived processes, flush events before exit:

```typescript
// Before your process exits
await orbit.shutdown();
```
Full TypeScript support with exported types:

```typescript
import { Orbit, OrbitEvent, OrbitConfig } from '@with-orbit/sdk';

const config: OrbitConfig = {
  apiKey: 'orb_live_xxx',
};

const event: OrbitEvent = {
  model: 'gpt-4o',
  input_tokens: 100,
  output_tokens: 50,
};
```
## License

MIT