ThinkHive SDK v3.3.0

The official JavaScript/TypeScript SDK for ThinkHive, an AI agent observability platform.

Features

  • OpenTelemetry-Based Tracing: Built on OTLP for seamless integration with existing observability tools
  • Run-Centric Architecture: Atomic unit of work tracking with claims, calibration, and linking
  • Facts vs Inferences: Claims API for separating verified facts from inferences
  • Deterministic Ticket Linking: 7 methods for linking runs to support tickets
  • Calibrated Predictions: Brier scores for prediction accuracy
  • Auto-Instrumentation: Works with LangChain, OpenAI, Anthropic, and more
  • Multi-Format Support: Normalizes traces from 25+ observability platforms

Installation

npm install @thinkhive/sdk

Quick Start

Basic Initialization

import { init, runs, traceLLM, shutdown } from '@thinkhive/sdk';

// Initialize the SDK
init({
  apiKey: 'th_your_api_key',
  serviceName: 'my-ai-agent',
  autoInstrument: true,
  frameworks: ['langchain', 'openai'],
});

// Create a run (atomic unit of work)
const run = await runs.create({
  agentId: 'weather-agent',
  conversation: [
    { role: 'user', content: 'What is the weather in San Francisco?' },
    { role: 'assistant', content: 'The weather in San Francisco is currently 65°F and sunny.' }
  ],
  outcome: 'success',
});

console.log(`Run ID: ${run.id}`);

// Shutdown when done
await shutdown();

Manual Tracing

import { init, traceLLM, traceRetrieval, traceTool, traceChain } from '@thinkhive/sdk';

init({ apiKey: 'th_your_api_key', serviceName: 'my-agent' });

// Trace an LLM call
const response = await traceLLM({
  name: 'generate-response',
  modelName: 'gpt-4',
  provider: 'openai',
  input: { prompt: 'Hello!' }
}, async () => {
  // Your LLM call here
  return await openai.chat.completions.create({...});
});

// Trace a retrieval operation
const docs = await traceRetrieval({
  name: 'search-knowledge-base',
  query: 'refund policy',
  topK: 5
}, async () => {
  return await vectorStore.similaritySearch('refund policy', 5);
});

// Trace a tool call
const result = await traceTool({
  name: 'lookup-order',
  toolName: 'order_lookup',
  parameters: { orderId: '12345' }
}, async () => {
  return await lookupOrder('12345');
});

Analyzer API (User-Selected Analysis)

import { analyzer } from '@thinkhive/sdk';

// Estimate cost before running analysis
const estimate = await analyzer.estimateCost({
  traceIds: ['trace-1', 'trace-2', 'trace-3'],
  tier: 'standard',
});
console.log(`Estimated cost: $${estimate.estimatedCost}`);

// Analyze specific traces
const analysis = await analyzer.analyze({
  traceIds: ['trace-1', 'trace-2'],
  tier: 'standard',
  includeRootCause: true,
  includeLayers: true,
});

// Analyze traces by time window with smart sampling
const windowAnalysis = await analyzer.analyzeWindow({
  agentId: 'support-agent',
  startDate: new Date('2024-01-01'),
  endDate: new Date('2024-01-31'),
  filters: { outcomes: ['failure'], minSeverity: 'medium' },
  sampling: { strategy: 'smart', samplePercent: 10 },
});

// Get aggregated insights
const summary = await analyzer.summarize({
  agentId: 'support-agent',
  startDate: new Date('2024-01-01'),
  endDate: new Date('2024-01-31'),
});

Issues API (Clustered Failure Patterns)

import { issues } from '@thinkhive/sdk';

// List issues for an agent
const issueList = await issues.list('support-agent', {
  status: 'open',
  limit: 10,
});

// Get a specific issue
const issue = await issues.get('issue-123');

// Get fixes for an issue
const fixes = await issues.getFixes('issue-123');

API Key Management

import { apiKeys, hasPermission, canAccessAgent } from '@thinkhive/sdk';

// Create a scoped API key
const result = await apiKeys.create({
  name: 'CI Pipeline Key',
  permissions: {
    read: true,
    write: true,
    delete: false
  },
  scopeType: 'agent', // Restrict to specific agents
  allowedAgentIds: ['agent-prod-001'],
  environment: 'production',
  expiresAt: new Date(Date.now() + 90 * 24 * 60 * 60 * 1000) // 90 days
});

console.log(`Key created: ${result.name} (${result.keyPrefix}...)`);

// Check permissions
if (hasPermission(result, 'write')) {
  // Can write data
}

// Check agent access
if (canAccessAgent(result, 'agent-prod-001')) {
  // Can access this agent
}

Claims API (Facts vs Inferences)

import { claims, isFact, isInference, getHighConfidenceClaims } from '@thinkhive/sdk';

// List claims for a run
const claimList = await claims.list(runId);

// Filter by type
const facts = claimList.filter(isFact);
const inferences = claimList.filter(isInference);

// Get high confidence claims
const confident = getHighConfidenceClaims(claimList, 0.9);

Calibration API (Prediction Accuracy)

import { calibration, calculateBrierScore, isWellCalibrated } from '@thinkhive/sdk';

// Get calibration status
const status = await calibration.getStatus(agentId);

// Calculate Brier score for predictions
const brierScore = calculateBrierScore(predictions, outcomes);

// Check if well calibrated
if (isWellCalibrated(status)) {
  console.log('Agent predictions are well calibrated');
}

Business Metrics API

import {
  businessMetrics,
  isMetricReady,
  needsMoreTraces,
  getStatusMessage
} from '@thinkhive/sdk';

// Get current metric value with status
const metric = await businessMetrics.current('agent-123', 'Deflection Rate');
console.log(`${metric.metricName}: ${metric.valueFormatted}`);

if (metric.status === 'insufficient_data') {
  console.log(`Need ${metric.minTraceThreshold - metric.traceCount} more traces`);
}

// Get historical data for graphing
const history = await businessMetrics.history('agent-123', 'Deflection Rate', {
  startDate: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000),
  endDate: new Date(),
  granularity: 'daily',
});

console.log(`${history.dataPoints.length} data points`);
console.log(`Change: ${history.summary.changePercent}%`);

// Record external metric values (from CRM, surveys, etc.)
await businessMetrics.record('agent-123', {
  metricName: 'CSAT/NPS',
  value: 4.5,
  unit: 'score',
  periodStart: '2024-01-01T00:00:00Z',
  periodEnd: '2024-01-07T23:59:59Z',
  source: 'survey_system',
  sourceDetails: { surveyId: 'survey_456', responseCount: 150 },
});

Metric Status Types

Status             Description
ready              Metric calculated and ready to display
insufficient_data  Need more traces before calculation
awaiting_external  External data source not connected
stale              Data is older than expected
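
A hypothetical helper for branching on these statuses. The `describeMetricStatus` name and messages simply mirror the table above; the SDK's own `getStatusMessage` may differ:

```typescript
// The four documented statuses as a union type (illustrative names only).
type MetricStatus = 'ready' | 'insufficient_data' | 'awaiting_external' | 'stale';

// Map a status to a human-readable message for dashboards or logs.
function describeMetricStatus(status: MetricStatus): string {
  switch (status) {
    case 'ready':
      return 'Metric calculated and ready to display';
    case 'insufficient_data':
      return 'Need more traces before calculation';
    case 'awaiting_external':
      return 'External data source not connected';
    case 'stale':
      return 'Data is older than expected';
  }
}
```

Because the switch exhausts the union, TypeScript guarantees every status maps to a message.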

Ticket Linking (Zendesk Integration)

import {
  linking,
  generateZendeskMarker,
  linkRunToZendeskTicket
} from '@thinkhive/sdk';

// Generate a marker to embed in ticket
const marker = generateZendeskMarker(runId);
// Returns: <!-- thinkhive:run:abc123 -->

// Link a run to a ticket
await linkRunToZendeskTicket(runId, ticketId);

// Get best linking method
import { getBestLinkMethod } from '@thinkhive/sdk';
const method = getBestLinkMethod(runData);
// Returns: 'conversation_id' | 'subject_hash' | 'marker' | etc.
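
Going the other direction, a run ID can be recovered from a ticket body that contains the documented marker. This `extractRunIdFromTicket` helper and its regex are illustrative, not part of the SDK:

```typescript
// The documented marker format is an HTML comment: <!-- thinkhive:run:<id> -->.
// Extract the run ID from a ticket body, or return null if no marker is present.
function extractRunIdFromTicket(body: string): string | null {
  const match = body.match(/<!--\s*thinkhive:run:([\w-]+)\s*-->/);
  return match ? match[1] : null;
}
```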

Auto-Instrumentation

import { init } from '@thinkhive/sdk';

// Initialize with auto-instrumentation
init({
  apiKey: 'th_your_api_key',
  serviceName: 'my-ai-agent',
  autoInstrument: true,
  frameworks: ['langchain', 'openai', 'anthropic']
});

// Now all LangChain, OpenAI, and Anthropic calls are automatically traced!

Analysis Tiers

Tier      Description                   Use Case
fast      Quick pattern-based analysis  High-volume, low-latency needs
standard  LLM-powered analysis          Default for most use cases
deep      Multi-pass with validation    Critical traces, root cause analysis

Environment Variables

Variable                Description
THINKHIVE_API_KEY       Your ThinkHive API key
THINKHIVE_ENDPOINT      Custom API endpoint (default: https://demo.thinkhive.ai)
THINKHIVE_SERVICE_NAME  Service name for traces (optional)
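
A minimal sketch of how these variables could be resolved, assuming explicit init() options take precedence over the environment. The resolution order and fallback service name are assumptions; only the variable names and default endpoint come from the table above:

```typescript
interface ThinkHiveConfig {
  apiKey?: string;
  endpoint?: string;
  serviceName?: string;
}

// Resolve configuration: explicit options win over env vars, which win
// over the documented default endpoint.
function resolveConfig(options: ThinkHiveConfig = {}): Required<ThinkHiveConfig> {
  return {
    apiKey: options.apiKey ?? process.env.THINKHIVE_API_KEY ?? '',
    endpoint: options.endpoint ?? process.env.THINKHIVE_ENDPOINT ?? 'https://demo.thinkhive.ai',
    serviceName: options.serviceName ?? process.env.THINKHIVE_SERVICE_NAME ?? 'unknown-service',
  };
}
```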

V3 Architecture

Key Concepts

Run-Centric Model: The atomic unit of work is a "Run" (not a trace). A run captures:

  • Conversation messages
  • Retrieved contexts
  • Tool calls
  • Outcome and metadata
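
Putting those bullets together, a run might look like the following illustrative TypeScript shape. Field names beyond those shown in the runs.create() example above are assumptions, not the SDK's published types:

```typescript
type Role = 'user' | 'assistant' | 'system';

// Illustrative shape of a run, assembled from the bullets above and the
// runs.create() example; the real SDK types may differ.
interface Run {
  id: string;
  agentId: string;
  conversation: { role: Role; content: string }[];
  contexts?: string[]; // retrieved contexts
  toolCalls?: { toolName: string; parameters: Record<string, unknown> }[];
  outcome: 'success' | 'failure';
  metadata?: Record<string, unknown>;
}
```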

Facts vs Inferences: Claims API separates:

  • Facts: Verified information from retrieval or tool calls
  • Inferences: LLM-generated conclusions
  • Computed: Derived values from rules
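
A minimal sketch of how such claims can be modeled and filtered client-side. The Claim shape and field names here are assumptions based on the helpers shown earlier, not the SDK's exported types:

```typescript
type ClaimType = 'fact' | 'inference' | 'computed';

interface Claim {
  id: string;
  type: ClaimType;
  statement: string;
  confidence: number; // 0..1
}

const isFact = (c: Claim): boolean => c.type === 'fact';
const isInference = (c: Claim): boolean => c.type === 'inference';

// Keep only claims at or above a confidence threshold.
function highConfidenceClaims(claimList: Claim[], minConfidence: number): Claim[] {
  return claimList.filter((c) => c.confidence >= minConfidence);
}
```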

Calibrated Predictions: Track prediction accuracy using:

  • Brier scores for overall calibration
  • ECE (Expected Calibration Error) for bucketed analysis
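
For reference, both scores can be computed by hand. The sketch below is a plain implementation of the standard formulas, not the SDK's internal code: the Brier score is the mean squared difference between predicted probabilities and binary outcomes, and ECE buckets predictions by confidence and compares average confidence to observed accuracy:

```typescript
// Brier score: mean squared error between predicted probabilities and
// binary outcomes (0 = miss, 1 = hit). Lower is better; 0 is perfect.
function brierScore(predictions: number[], outcomes: number[]): number {
  if (predictions.length !== outcomes.length || predictions.length === 0) {
    throw new Error('predictions and outcomes must be equal-length and non-empty');
  }
  const sum = predictions.reduce((acc, p, i) => acc + (p - outcomes[i]) ** 2, 0);
  return sum / predictions.length;
}

// Expected Calibration Error: bucket predictions by confidence, then
// average |mean confidence - observed accuracy| weighted by bucket size.
function expectedCalibrationError(
  predictions: number[],
  outcomes: number[],
  buckets = 10,
): number {
  const conf = new Array(buckets).fill(0);
  const hits = new Array(buckets).fill(0);
  const count = new Array(buckets).fill(0);
  predictions.forEach((p, i) => {
    const b = Math.min(buckets - 1, Math.floor(p * buckets));
    conf[b] += p;
    hits[b] += outcomes[i];
    count[b] += 1;
  });
  let ece = 0;
  for (let b = 0; b < buckets; b++) {
    if (count[b] === 0) continue;
    const avgConf = conf[b] / count[b];
    const accuracy = hits[b] / count[b];
    ece += (count[b] / predictions.length) * Math.abs(avgConf - accuracy);
  }
  return ece;
}
```

For example, an agent that predicts 95% confidence but is right 100% of the time has an ECE of 0.05: slightly underconfident.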

API Structure

API              Description
runs             Create and manage runs (atomic work units)
claims           Manage facts/inferences for runs
calibration      Track prediction accuracy
analyzer         User-selected trace analysis
issues           Clustered failure patterns
linking          Connect runs to support tickets
customerContext  Time-series customer snapshots
apiKeys          API key management
businessMetrics  Industry-driven metrics with historical tracking
roiAnalytics     Business ROI and financial impact analysis
qualityMetrics   RAG evaluation and hallucination detection

New Evaluation APIs (v3.0)

API                   Description
humanReview           Human-in-the-loop review queues
nondeterminism        Multi-sample reliability testing
evalHealth            Evaluation metric health monitoring
deterministicGraders  Rule-based evaluation
conversationEval      Multi-turn conversation evaluation
transcriptPatterns    Pattern detection in transcripts

API Reference

See API Documentation for complete type definitions.

License

MIT License - see LICENSE for details.

Keywords

thinkhive

Package last updated on 26 Jan 2026