
artificial-manager

AI Query Acceleration Package - intelligent caching, request coalescing, rate limiting, and multi-provider support for AI APIs.

Features

  • 68% reduction in API calls via semantic caching
  • Request coalescing - deduplicate identical in-flight requests
  • Smart rate limiting - respects provider headers automatically
  • Multi-provider failover - automatic switching on failures
  • Cost tracking - per-request token counting and cost estimation
  • TypeScript first - full type safety with ESM and CJS support

Installation

npm install artificial-manager

Quick Start

import { AIManager } from 'artificial-manager';

const ai = new AIManager({
  providers: {
    openai: { apiKey: process.env.OPENAI_API_KEY },
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
  },
  cache: { enabled: true, ttl: 3600 },
  rateLimit: { respectHeaders: true },
});

// Simple usage
const response = await ai.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.content);
console.log(`Cost: $${response.cost?.totalCost.toFixed(6)}`);
console.log(`Cached: ${response.cached}`);
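
Request coalescing (listed under Features) deduplicates identical in-flight requests, so concurrent calls with the same parameters are expected to share a single provider call. A minimal sketch using the ai instance above; the deduplication itself happens internally, so this only shows the calling pattern:

// Two identical requests issued concurrently; with coalescing enabled,
// only one of them should actually reach the provider.
const [a, b] = await Promise.all([
  ai.chat({ model: 'gpt-4', messages: [{ role: 'user', content: 'Ping' }] }),
  ai.chat({ model: 'gpt-4', messages: [{ role: 'user', content: 'Ping' }] }),
]);

console.log(a.content === b.content); // expected: true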

Multi-Provider Failover

const response = await ai.chat({
  model: 'gpt-4',
  fallback: ['claude-3-opus', 'gemini-pro'],
  messages: [{ role: 'user', content: 'Hello!' }],
});

If the primary model fails, the request automatically falls back to the next provider.
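
If the primary model and every fallback fail, the returned promise rejects. A minimal handling sketch; the exact error shape is not documented here, so it is treated as a plain value:

try {
  const response = await ai.chat({
    model: 'gpt-4',
    fallback: ['claude-3-opus', 'gemini-pro'],
    messages: [{ role: 'user', content: 'Hello!' }],
  });
  console.log(response.content);
} catch (err) {
  // Reached only after the primary model and all fallbacks have failed
  console.error('All providers failed:', err);
}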

Streaming

for await (const chunk of ai.stream({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
  process.stdout.write(chunk.text);
}

Supported Providers

Provider    Models
OpenAI      gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-3.5-turbo, o1, o1-mini
Anthropic   claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-3-5-sonnet
Google      gemini-pro, gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash
Mistral     mistral-tiny, mistral-small, mistral-medium, mistral-large
Cohere      command, command-light, command-r, command-r-plus
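
Any model from a configured provider can be requested by name; a short sketch, assuming (as the failover example suggests) that the provider is resolved from the model identifier:

const claude = await ai.chat({
  model: 'claude-3-5-sonnet',
  messages: [{ role: 'user', content: 'Summarize this package in one sentence.' }],
});

console.log(claude.content);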

Configuration

const ai = new AIManager({
  providers: {
    openai: {
      apiKey: 'sk-...',
      baseUrl: 'https://api.openai.com/v1', // optional
      timeout: 30000, // optional, ms
    },
    anthropic: {
      apiKey: 'sk-ant-...',
    },
  },

  cache: {
    enabled: true,
    ttl: 3600, // seconds
    maxSize: 1000, // max entries
    semanticEnabled: true, // enable semantic similarity matching
    semanticThreshold: 0.85, // similarity threshold
  },

  rateLimit: {
    respectHeaders: true, // parse Retry-After headers
    defaultRpm: 60, // requests per minute fallback
    preemptiveThrottle: true, // queue before hitting limits
  },

  retry: {
    maxRetries: 3,
    baseDelayMs: 1000,
    maxDelayMs: 30000,
  },

  telemetry: {
    enabled: true, // opt-out with false
  },

  defaultProvider: 'openai',
});

Caching

Exact Match Cache

Hash-based caching using SHA-256 of the request parameters. Identical requests return cached responses instantly.
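
For example, repeating a request with identical parameters should be served from the cache, observable via the cached flag shown in Quick Start:

const first = await ai.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What is a closure?' }],
});
console.log(first.cached); // false on the first call

const second = await ai.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What is a closure?' }],
});
console.log(second.cached); // expected to be true with caching enabled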

Semantic Cache (Optional)

Enable semantic caching to match similar prompts:

const ai = new AIManager({
  // ...
  cache: {
    enabled: true,
    semanticEnabled: true,
    semanticThreshold: 0.85, // 0-1, higher = stricter matching
  },
});
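
With semantic caching on, a prompt that is worded differently but semantically close to one already answered may be served from the cache even though its hash differs; a sketch of the expected behavior:

await ai.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Explain recursion briefly.' }],
});

// Different wording, similar meaning; with a 0.85 threshold this may be
// answered from the semantic cache rather than the provider.
const similar = await ai.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Give me a brief explanation of recursion.' }],
});

console.log(similar.cached);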

Cost Tracking

// Get cost summary
const summary = ai.getCostSummary();
console.log(`Total cost: $${summary.totalCost.toFixed(4)}`);
console.log(`Total tokens: ${summary.totalTokens}`);

// Estimate cost before making a request
const estimate = ai.estimateCost('gpt-4', 1000, 500);
console.log(`Estimated cost: $${estimate.totalCost.toFixed(4)}`);

Cache Statistics

const stats = ai.getCacheStats();
console.log(`Exact cache hit rate: ${((stats.exact?.hitRate ?? 0) * 100).toFixed(1)}%`);
console.log(`Semantic cache hits: ${stats.semantic?.hits ?? 0}`);

Telemetry

This package includes telemetry that is enabled by default (see Opting Out below), powered by Google Analytics 4, to track unique installs and usage patterns. We collect:

  • Unique install ID (anonymous UUID)
  • Package version
  • Node.js version
  • Operating system
  • Provider usage distribution (anonymized)
  • Cache hit rates

We DO NOT collect:

  • API keys
  • Prompts or responses
  • Any personally identifiable information

Viewing Analytics

Telemetry data is sent to Google Analytics 4. To view your package analytics:

  • Go to your GA4 property dashboard
  • Navigate to Reports > Engagement > Events
  • Look for these events:
    • install - Unique package installations
    • ai_usage - Daily usage statistics
    • ai_error - Error occurrences

To see unique users vs download count, check the "Users" metric in GA4.

Custom GA4 Configuration

You can use your own GA4 property:

const ai = new AIManager({
  // ...
  telemetry: {
    enabled: true,
    ga4MeasurementId: 'G-XXXXXXXXXX', // Your Measurement ID
    ga4ApiSecret: 'your-api-secret', // Your API Secret
  },
});

Or via environment variables:

ARTIFICIAL_MANAGER_GA4_MEASUREMENT_ID=G-XXXXXXXXXX
ARTIFICIAL_MANAGER_GA4_API_SECRET=your-api-secret

Opting Out

Set the environment variable:

ARTIFICIAL_MANAGER_TELEMETRY=false

Or in code:

const ai = new AIManager({
  // ...
  telemetry: { enabled: false },
});

// Or disable after initialization
ai.disableTelemetry();

API Reference

AIManager

chat(request: ChatRequest): Promise<ChatResponse>

Send a chat completion request.

stream(request: StreamRequest): AsyncGenerator<StreamChunk>

Stream a chat completion response.

getCacheStats(): { exact: CacheStats | null; semantic: CacheStats | null }

Get cache statistics.

getCostSummary(since?: number): CostSummary

Get cost summary, optionally filtered by timestamp.

clearCache(): void

Clear all caches.

countTokens(model: string, text: string): number

Estimate token count for text.

estimateCost(model: string, promptTokens: number, completionTokens: number): CostEstimate

Estimate cost for a request.

shutdown(): Promise<void>

Gracefully shut down, flushing telemetry and clearing resources.
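
A short sketch exercising these methods together, using only the signatures listed above:

// Estimate tokens for a prompt, then the cost for that usage
const promptTokens = ai.countTokens('gpt-4', 'Summarize the plot of Hamlet.');
const estimate = ai.estimateCost('gpt-4', promptTokens, 200);
console.log(`~${promptTokens} prompt tokens, est. $${estimate.totalCost.toFixed(4)}`);

// Cost accumulated over the last hour
const lastHour = ai.getCostSummary(Date.now() - 60 * 60 * 1000);
console.log(`Last hour: $${lastHour.totalCost.toFixed(4)}`);

// Clear caches and shut down gracefully (flushes telemetry)
ai.clearCache();
await ai.shutdown();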

License

MIT
