@with-orbit/sdk

Orbit - AI Cost Analytics SDK. Track, monitor, and optimize your AI spend.

Source: npm · Version: 0.1.5 · Maintainers: 1

Orbit SDK

Track, monitor, and optimize your AI spend across OpenAI, Anthropic, and other LLM providers.

Installation

npm install @with-orbit/sdk
# or
yarn add @with-orbit/sdk
# or
pnpm add @with-orbit/sdk

Quick Start

1. Get your API key

Sign up at Orbit and create an API key.

2. Initialize the SDK

import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({
  apiKey: 'orb_live_xxxxxxxxxxxxxxxxxxxxxxxx',
  defaultFeature: 'my-app', // Optional: default feature for all events
});

3. Track your LLM calls

Option A: Automatic tracking

Wrap your OpenAI, Anthropic, or Google client for automatic tracking:

import OpenAI from 'openai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });
const openai = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'chat-assistant', // Attribute all calls to this feature
});

// All API calls are now automatically tracked!
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});

Works with Anthropic too:

import Anthropic from '@anthropic-ai/sdk';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });
const anthropic = orbit.wrapAnthropic(new Anthropic(), {
  feature: 'document-analysis',
});

const message = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Analyze this document...' }],
});

Works with Google Gemini (new @google/genai SDK):

import { GoogleGenAI } from '@google/genai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });
const ai = orbit.wrapGoogle(new GoogleGenAI({ apiKey: 'your-gemini-key' }), {
  feature: 'chat',
});

const response = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Hello, how are you?',
});

Works with Google Gemini (legacy @google/generative-ai SDK):

import { GoogleGenerativeAI } from '@google/generative-ai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });
const genAI = orbit.wrapGoogleLegacy(new GoogleGenerativeAI('your-gemini-key'), {
  feature: 'chat',
});

const model = genAI.getGenerativeModel({ model: 'gemini-2.0-flash' });
const result = await model.generateContent('Hello, how are you?');

Option B: Manual tracking

For other providers or custom implementations:

import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

// Track a successful request
await orbit.track({
  model: 'gpt-4o',
  input_tokens: 150,
  output_tokens: 50,
  latency_ms: 1234,
  feature: 'summarization',
  environment: 'production',
});

// Track an error
await orbit.trackError('gpt-4o', 'rate_limit_exceeded', 'Rate limit exceeded', {
  feature: 'chat-assistant',
  input_tokens: 150,
});
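In practice, manual tracking is usually wrapped around the provider call so that latency is measured and success or failure is routed to track or trackError automatically. A minimal sketch of that pattern, using a Tracker interface as a stand-in for the Orbit client (withTracking is illustrative, not an SDK export):

```typescript
// Stand-in for the Orbit client's tracking surface (same shapes as the
// track/trackError examples above).
interface TrackEvent {
  model: string;
  input_tokens: number;
  output_tokens: number;
  latency_ms?: number;
  feature?: string;
}

interface Tracker {
  track(event: TrackEvent): Promise<void>;
  trackError(model: string, type: string, message: string, extra?: object): Promise<void>;
}

// Run a provider call, measure latency, and report success or failure.
async function withTracking<T extends { input_tokens: number; output_tokens: number }>(
  tracker: Tracker,
  model: string,
  feature: string,
  call: () => Promise<T>,
): Promise<T> {
  const start = Date.now();
  try {
    const result = await call();
    await tracker.track({
      model,
      input_tokens: result.input_tokens,
      output_tokens: result.output_tokens,
      latency_ms: Date.now() - start,
      feature,
    });
    return result;
  } catch (err) {
    // Report the failure, then rethrow so callers still see the error.
    await tracker.trackError(model, 'provider_error', String(err), { feature });
    throw err;
  }
}
```

Since an Orbit instance exposes track and trackError with these signatures (per the examples above), it can be passed as the tracker directly.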

Configuration

const orbit = new Orbit({
  // Required
  apiKey: 'orb_live_xxx',

  // Optional
  baseUrl: 'https://app.withorbit.io/api/v1', // Custom API endpoint
  defaultFeature: 'my-app',                   // Default feature name
  defaultEnvironment: 'production',            // 'production' | 'staging' | 'development'
  debug: false,                                // Enable debug logging

  // Batching (for high-volume applications)
  batchEvents: true,       // Batch events before sending
  batchSize: 10,           // Max events per batch
  batchInterval: 5000,     // Max ms before sending batch

  // Reliability
  retry: true,             // Retry failed requests
  maxRetries: 3,           // Max retry attempts
});
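The batching options trade a little latency for fewer network calls: events are buffered and flushed when the buffer reaches batchSize or batchInterval milliseconds elapse, whichever comes first. The sketch below illustrates that contract; it is not the SDK's actual implementation:

```typescript
type BatchEvent = Record<string, unknown>;

// Buffer events; flush when batchSize is reached or batchInterval elapses.
class EventBatcher {
  private buffer: BatchEvent[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly batchSize: number,
    private readonly batchInterval: number,
    private readonly send: (batch: BatchEvent[]) => void,
  ) {}

  add(event: BatchEvent): void {
    this.buffer.push(event);
    if (this.buffer.length >= this.batchSize) {
      this.flush(); // size limit hit: send immediately
    } else if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.batchInterval);
    }
  }

  flush(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length > 0) {
      this.send(this.buffer);
      this.buffer = [];
    }
  }
}
```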

Event Properties

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | Model name (e.g., 'gpt-4o', 'claude-3-opus') |
| input_tokens | number | Yes | Number of input tokens |
| output_tokens | number | Yes | Number of output tokens |
| provider | string | No | Provider name (auto-detected if not provided) |
| latency_ms | number | No | Request latency in milliseconds |
| feature | string | No | Feature name for attribution |
| environment | string | No | Environment ('production', 'staging', 'development') |
| status | string | No | Request status ('success', 'error', 'timeout') |
| error_type | string | No | Error type if status is 'error' |
| error_message | string | No | Error message if status is 'error' |
| user_id | string | No | Your application's user ID |
| session_id | string | No | Session ID for grouping requests |
| request_id | string | No | Unique request ID for tracing |
| task_id | string | No | Task ID for grouping related LLM calls in agentic workflows |
| customer_id | string | No | Customer ID for billing attribution |
| metadata | object | No | Additional key-value metadata |
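Orbit derives cost from these token counts server-side. As a back-of-the-envelope illustration of how token counts map to dollars, a per-million-token rate table can be applied client-side (the rates below are placeholders, not real or current provider prices):

```typescript
// Illustrative per-1M-token rates in USD. Placeholders only; real pricing
// changes over time and lives in Orbit's server-side pricing data.
const RATES: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10 },
};

// Estimate the USD cost of a single event from its token counts.
function estimateCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`no rate for model: ${model}`);
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000;
}
```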

Feature Attribution

Feature attribution is Orbit's standout capability: it lets you see exactly which parts of your application are consuming AI resources:

// Track different features
await orbit.track({
  model: 'gpt-4o',
  input_tokens: 100,
  output_tokens: 50,
  feature: 'chat-assistant',  // <-- Attribute to chat feature
});

await orbit.track({
  model: 'gpt-4o',
  input_tokens: 500,
  output_tokens: 200,
  feature: 'document-analysis',  // <-- Attribute to doc analysis
});

Then in the Orbit dashboard, you'll see:

  • Cost breakdown by feature
  • Request volume by feature
  • Error rates by feature
  • And more!

Agentic Task Tracking

Track multi-step agentic workflows by grouping related LLM calls under a task:

// All calls with the same task_id are grouped together
const openai = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'ai-agent',
  task_id: 'task_abc123',      // Group all LLM calls for this task
  customer_id: 'cust_xyz789',  // Attribute costs to this customer
});

// Step 1: Plan
await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Plan how to analyze this data...' }],
});

// Step 2: Execute
await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Now execute the analysis...' }],
});

// Both calls are tracked under task_abc123

In the Orbit dashboard, you can then see:

  • All LLM calls grouped by task
  • Total cost per task
  • Customer-level cost attribution
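A task_id only needs to be unique per workflow run. One simple approach is to mint a random id when the agent starts and reuse it for every call in that run (this helper is illustrative, not part of the SDK):

```typescript
import { randomUUID } from 'node:crypto';

// Mint a unique task id at the start of each agent run, then pass it to
// every wrapped client or track() call in that run. Any unique string
// works as a task_id; the 'task_' prefix just mirrors the examples above.
function newTaskId(): string {
  return `task_${randomUUID()}`;
}
```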

Environments

Track usage across different environments:

const orbit = new Orbit({
  apiKey: 'orb_live_xxx',
  defaultEnvironment: process.env.NODE_ENV === 'production' ? 'production' : 'development',
});

Graceful Shutdown

For serverless or short-lived processes, flush events before exit:

// Before your process exits
await orbit.shutdown();
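For long-running services, the same flush can be hooked to process termination signals. A sketch, assuming Node.js (flushOnExit is an illustrative helper, not an SDK export; it accepts anything with a shutdown() method, such as the Orbit instance above):

```typescript
// Flush buffered events when the process receives a termination signal,
// then exit with a conventional status code.
function flushOnExit(client: { shutdown(): Promise<void> }): void {
  const handler = (signal: 'SIGTERM' | 'SIGINT') => {
    void client.shutdown().finally(() => {
      process.exit(signal === 'SIGINT' ? 130 : 0);
    });
  };
  process.once('SIGTERM', () => handler('SIGTERM'));
  process.once('SIGINT', () => handler('SIGINT'));
}
```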

TypeScript Support

Full TypeScript support with exported types:

import { Orbit, OrbitEvent, OrbitConfig } from '@with-orbit/sdk';

const config: OrbitConfig = {
  apiKey: 'orb_live_xxx',
};

const event: OrbitEvent = {
  model: 'gpt-4o',
  input_tokens: 100,
  output_tokens: 50,
};

License

MIT

Keywords

ai

Package last updated on 02 Feb 2026
