
# artificial-manager
AI Query Acceleration Package - intelligent caching, request coalescing, rate limiting, and multi-provider support for AI APIs.
## Installation

```bash
npm install artificial-manager
```
## Quick Start

```ts
import { AIManager } from 'artificial-manager';

const ai = new AIManager({
  providers: {
    openai: { apiKey: process.env.OPENAI_API_KEY },
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
  },
  cache: { enabled: true, ttl: 3600 },
  rateLimit: { respectHeaders: true },
});

// Simple usage
const response = await ai.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.content);
console.log(`Cost: $${response.cost?.totalCost.toFixed(6)}`);
console.log(`Cached: ${response.cached}`);
```
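The package also advertises request coalescing. As a hedged illustration of what that implies (the exact coalescing semantics are an assumption, not documented here), two identical concurrent requests would be expected to share a single upstream API call:

```ts
// Sketch: if identical in-flight requests are coalesced (assumption),
// only one upstream call is made and the duplicate is served from the
// coalesced result or the cache.
const [a, b] = await Promise.all([
  ai.chat({ model: 'gpt-4', messages: [{ role: 'user', content: 'Ping' }] }),
  ai.chat({ model: 'gpt-4', messages: [{ role: 'user', content: 'Ping' }] }),
]);
console.log(a.cached, b.cached); // e.g. false, true
```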
## Fallbacks

```ts
const response = await ai.chat({
  model: 'gpt-4',
  fallback: ['claude-3-opus', 'gemini-pro'],
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

If the primary model fails, the request automatically falls back to the next provider.
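Conceptually, the fallback chain behaves like the loop below. This is a sketch of the idea, not the package's internals; the error-propagation and ordering details are assumptions:

```ts
// Sketch: try each model in order until one succeeds (assumed behavior).
async function chatWithFallback(
  models: string[],
  messages: { role: string; content: string }[],
) {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await ai.chat({ model, messages });
    } catch (err) {
      lastError = err; // assumed: provider failures are thrown, not returned
    }
  }
  throw lastError; // every model in the chain failed
}
```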
## Streaming

```ts
for await (const chunk of ai.stream({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
  process.stdout.write(chunk.text);
}
```
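If you also want the complete text once streaming finishes, the chunks can be accumulated as they arrive; this assumes `chunk.text` carries the incremental text, as in the example above:

```ts
// Accumulate the streamed chunks while still printing them live.
let full = '';
for await (const chunk of ai.stream({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
  full += chunk.text;
  process.stdout.write(chunk.text);
}
console.log(`\nReceived ${full.length} characters`);
```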
## Supported Providers

| Provider | Models |
|---|---|
| OpenAI | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-3.5-turbo, o1, o1-mini |
| Anthropic | claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-3-5-sonnet |
| Google | gemini-pro, gemini-1.5-pro, gemini-1.5-flash, gemini-2.0-flash |
| Mistral | mistral-tiny, mistral-small, mistral-medium, mistral-large |
| Cohere | command, command-light, command-r, command-r-plus |
## Configuration

```ts
const ai = new AIManager({
  providers: {
    openai: {
      apiKey: 'sk-...',
      baseUrl: 'https://api.openai.com/v1', // optional
      timeout: 30000, // optional, ms
    },
    anthropic: {
      apiKey: 'sk-ant-...',
    },
  },
  cache: {
    enabled: true,
    ttl: 3600, // seconds
    maxSize: 1000, // max entries
    semanticEnabled: true, // enable semantic similarity matching
    semanticThreshold: 0.85, // similarity threshold
  },
  rateLimit: {
    respectHeaders: true, // parse Retry-After headers
    defaultRpm: 60, // requests per minute fallback
    preemptiveThrottle: true, // queue before hitting limits
  },
  retry: {
    maxRetries: 3,
    baseDelayMs: 1000,
    maxDelayMs: 30000,
  },
  telemetry: {
    enabled: true, // opt-out with false
  },
  defaultProvider: 'openai',
});
```
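The `retry` options (`baseDelayMs`, `maxDelayMs`) suggest an exponential backoff schedule. A minimal sketch of such a schedule, assuming the delay doubles per attempt and is capped at the maximum (the exact formula and any jitter are assumptions, not documented here):

```ts
// Assumed backoff: delay doubles each attempt, capped at maxDelayMs.
function retryDelay(attempt: number, baseDelayMs = 1000, maxDelayMs = 30000): number {
  return Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
}

// Attempts 0..3 -> 1000, 2000, 4000, 8000 ms
console.log([0, 1, 2, 3].map((n) => retryDelay(n)));
```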
## Caching

Exact caching hashes the request parameters with SHA-256, so identical requests return cached responses instantly.
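As a sketch of what such a SHA-256 request key looks like (which fields the package actually includes in the hash is an assumption):

```ts
import { createHash } from 'node:crypto';

// Assumed: the cache key covers the model and the serialized messages.
function cacheKey(
  model: string,
  messages: { role: string; content: string }[],
): string {
  return createHash('sha256')
    .update(JSON.stringify({ model, messages }))
    .digest('hex');
}

// Identical parameters always yield the identical key, hence a cache hit.
console.log(cacheKey('gpt-4', [{ role: 'user', content: 'Hello!' }]));
```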
### Semantic caching

Enable semantic caching to match similar prompts:

```ts
const ai = new AIManager({
  // ...
  cache: {
    enabled: true,
    semanticEnabled: true,
    semanticThreshold: 0.85, // 0-1, higher = stricter matching
  },
});
```
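The threshold presumably applies to a similarity score between prompt embeddings; a sketch using cosine similarity (both the embedding source and the exact metric are assumptions):

```ts
// Assumed: prompts are embedded and compared by cosine similarity.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Hypothetical embedding vectors for two similar prompts.
const promptEmbedding = [0.1, 0.9, 0.3];
const cachedEmbedding = [0.12, 0.88, 0.31];

// A cached entry is reused when similarity >= semanticThreshold (assumption).
const isHit = cosineSimilarity(promptEmbedding, cachedEmbedding) >= 0.85;
console.log(isHit);
```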
## Cost Tracking

```ts
// Get cost summary
const summary = ai.getCostSummary();
console.log(`Total cost: $${summary.totalCost.toFixed(4)}`);
console.log(`Total tokens: ${summary.totalTokens}`);

// Estimate cost before making a request
const estimate = ai.estimateCost('gpt-4', 1000, 500);
console.log(`Estimated cost: $${estimate.totalCost.toFixed(4)}`);
```
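Under the hood, an estimate like this is just per-token arithmetic. A sketch with hypothetical rates (the actual per-model prices the package uses are not documented here, and real provider rates change over time):

```ts
// Hypothetical rates in USD per 1K tokens, for illustration only.
const PROMPT_RATE = 0.03;
const COMPLETION_RATE = 0.06;

function estimateUsd(promptTokens: number, completionTokens: number): number {
  return (
    (promptTokens / 1000) * PROMPT_RATE +
    (completionTokens / 1000) * COMPLETION_RATE
  );
}

// 1000 prompt + 500 completion tokens -> 0.03 + 0.03 = $0.06
console.log(estimateUsd(1000, 500).toFixed(4));
```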
### Cache statistics

```ts
const stats = ai.getCacheStats();
console.log(`Exact cache hit rate: ${(stats.exact?.hitRate * 100).toFixed(1)}%`);
console.log(`Semantic cache hits: ${stats.semantic?.hits}`);
```
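For reference, a hit rate is conventionally hits divided by total lookups; a sketch, assuming the stats objects expose raw `hits` and `misses` counters:

```ts
// Assumed fields: `hits` and `misses` counters on each cache's stats.
function hitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

console.log(`${(hitRate(85, 15) * 100).toFixed(1)}%`); // 85.0%
```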
## Telemetry

This package includes telemetry that is enabled by default (opt-out), powered by Google Analytics 4, to track unique installs and usage patterns. We collect:

We DO NOT collect:

Telemetry data is sent to Google Analytics 4 as the following events:

- `install` - Unique package installations
- `ai_usage` - Daily usage statistics
- `ai_error` - Error occurrences

To see unique users vs. download count, check the "Users" metric in GA4.
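For context, GA4's Measurement Protocol accepts events as a plain HTTP POST. A sketch of what sending one such event could look like; the endpoint and query parameters are standard GA4, but the event payload here is generic, not necessarily what artificial-manager actually sends:

```ts
// Sketch of a GA4 Measurement Protocol call (generic GA4 payload shape).
await fetch(
  'https://www.google-analytics.com/mp/collect' +
    '?measurement_id=G-XXXXXXXXXX&api_secret=your-api-secret',
  {
    method: 'POST',
    body: JSON.stringify({
      client_id: 'anonymous-install-id', // hypothetical anonymous identifier
      events: [{ name: 'install', params: { package_version: '1.0.0' } }],
    }),
  },
);
```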
### Using your own GA4 property

You can use your own GA4 property:

```ts
const ai = new AIManager({
  // ...
  telemetry: {
    enabled: true,
    ga4MeasurementId: 'G-XXXXXXXXXX', // Your Measurement ID
    ga4ApiSecret: 'your-api-secret', // Your API Secret
  },
});
```
Or via environment variables:

```bash
ARTIFICIAL_MANAGER_GA4_MEASUREMENT_ID=G-XXXXXXXXXX
ARTIFICIAL_MANAGER_GA4_API_SECRET=your-api-secret
```
### Disabling telemetry

To disable telemetry, set the environment variable:

```bash
ARTIFICIAL_MANAGER_TELEMETRY=false
```

Or in code:

```ts
const ai = new AIManager({
  // ...
  telemetry: { enabled: false },
});

// Or disable after initialization
ai.disableTelemetry();
```
## API

- `chat(request: ChatRequest): Promise<ChatResponse>` - Send a chat completion request.
- `stream(request: StreamRequest): AsyncGenerator<StreamChunk>` - Stream a chat completion response.
- `getCacheStats(): { exact: CacheStats | null; semantic: CacheStats | null }` - Get cache statistics.
- `getCostSummary(since?: number): CostSummary` - Get a cost summary, optionally filtered by timestamp.
- `clearCache(): void` - Clear all caches.
- `countTokens(model: string, text: string): number` - Estimate the token count for a piece of text.
- `estimateCost(model: string, promptTokens: number, completionTokens: number): CostEstimate` - Estimate the cost of a request.
- `shutdown(): Promise<void>` - Gracefully shut down, flushing telemetry and clearing resources.
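Putting several of these together, a hedged end-to-end sketch; only methods listed above are used, and the result fields relied on (`totalCost`, `content`) are the ones shown in earlier examples:

```ts
import { AIManager } from 'artificial-manager';

const ai = new AIManager({
  providers: { openai: { apiKey: process.env.OPENAI_API_KEY } },
});

// Budget check before the call, using the documented estimation helpers.
const prompt = 'Summarize the plot of Hamlet in one sentence.';
const promptTokens = ai.countTokens('gpt-4', prompt);
const estimate = ai.estimateCost('gpt-4', promptTokens, 200);
console.log(`Estimated cost: $${estimate.totalCost.toFixed(4)}`);

const response = await ai.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: prompt }],
});
console.log(response.content);

// Flush telemetry and release resources before exiting.
await ai.shutdown();
```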
## License

MIT