
glim-llm: a lightweight multi-provider LLM wrapper for Node/Edge (ESM + TypeScript) with rate limiting, retries, caching, and prompt sanitization.
Features:
- Multiple providers (OpenAI, Groq, Gemini) behind one interface
- Rate limiting (concurrency plus sliding-window request limits)
- Retries with exponential backoff
- In-memory LRU response caching
- Basic prompt sanitization
- Unified generate API

Install:
```bash
npm install glim-llm
# plus the provider SDKs you want (peer deps). Example:
npm install openai @google/generative-ai
```
Usage:
```ts
import { createLLMClient, SUPPORTED_PROVIDERS } from 'glim-llm';

const openaiClient = createLLMClient({
  provider: 'openai',
  config: { apiKey: process.env.OPENAI_KEY!, model: 'gpt-4o-mini' },
  rateLimit: { concurrency: 2, requestsPerInterval: 60, intervalMs: 60_000 },
  cache: { ttlMs: 300_000, max: 1000 },
  retry: { retries: 3 },
  sanitize: true,
});

const result = await openaiClient.generate({ prompt: 'Explain edge computing in 2 sentences.' });
console.log(result.text);

console.log('Providers available:', SUPPORTED_PROVIDERS);
```
API:

createLLMClient(options) returns a client with:
- name: the provider name
- generate(params): Promise
- stream(params): AsyncGenerator (currently one-shot; future: true streaming)

Options:
- provider: 'openai' | 'groq' | 'gemini'
- config: { apiKey, model, maxOutputTokens?, temperature?, extra? }
- rateLimit (optional): { concurrency?, requestsPerInterval?, intervalMs?, throwOnLimit? }
- cache (optional | false): { ttlMs?, max?, namespace? }
- retry (optional | false): { retries?, factor?, minTimeoutMs?, maxTimeoutMs? }
- sanitize (boolean | function): enable basic prompt cleaning

generate(params) accepts:
- prompt (string)
- systemPrompt?
- model? (override)
- temperature?
- maxOutputTokens?
- streaming? (future use)
- signal? AbortSignal (future wiring)
- cacheKey? (custom key, or false to bypass the cache)

Caching: LRU in-memory; suited to a single runtime instance. An external cache (Redis, KV) can be wrapped in by replacing the ResponseCache logic (PRs welcome).
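A per-call override sketch (parameter names come from the list above; it assumes per-call params take precedence over the client-level config):

```ts
import { createLLMClient } from 'glim-llm';

const client = createLLMClient({
  provider: 'openai',
  config: { apiKey: process.env.OPENAI_KEY!, model: 'gpt-4o-mini' },
});

// Override temperature and output length for this request only, and skip the cache.
const res = await client.generate({
  prompt: 'Give one sentence on WebAssembly.',
  systemPrompt: 'You are terse.',
  temperature: 0.2,
  maxOutputTokens: 64,
  cacheKey: false, // bypass the response cache for this call
});
console.log(res.text);
```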
Rate limiting:
Two layers: concurrency (parallel tasks) and requestsPerInterval within a sliding window of intervalMs. Set throwOnLimit: true to throw instead of waiting.
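A sketch of the strict mode (the README doesn't document what is thrown, so the catch block is generic, and the Groq model name is only illustrative):

```ts
import { createLLMClient } from 'glim-llm';

// Allow 2 parallel calls and 10 requests per 10-second window; throw instead of queueing.
const limited = createLLMClient({
  provider: 'groq',
  config: { apiKey: process.env.GROQ_KEY!, model: 'llama-3.1-8b-instant' },
  rateLimit: { concurrency: 2, requestsPerInterval: 10, intervalMs: 10_000, throwOnLimit: true },
});

try {
  const res = await limited.generate({ prompt: 'Summarize the CAP theorem.' });
  console.log(res.text);
} catch (err) {
  // With throwOnLimit: true, exceeding the window rejects rather than waiting for the next interval.
  console.error('Rate limited (or provider error):', err);
}
```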
Retries:
Failed provider calls are retried with exponential backoff. Disable with retry: false.
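Backoff can be tuned with the retry options listed in the API section above (the defaults are not documented here, so the numbers below are just an example):

```ts
import { createLLMClient } from 'glim-llm';

// Retry up to 5 times, doubling the delay from 500 ms up to a cap of 8 s.
const resilient = createLLMClient({
  provider: 'openai',
  config: { apiKey: process.env.OPENAI_KEY!, model: 'gpt-4o-mini' },
  retry: { retries: 5, factor: 2, minTimeoutMs: 500, maxTimeoutMs: 8_000 },
});

// Or opt out entirely:
const noRetries = createLLMClient({
  provider: 'openai',
  config: { apiKey: process.env.OPENAI_KEY!, model: 'gpt-4o-mini' },
  retry: false,
});
```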
Sanitization:
Naive removal of control characters and common prompt-injection phrases; override with a custom function.
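A minimal custom-sanitizer sketch; the (prompt: string) => string signature is an assumption, since the README only says sanitize accepts a boolean or a function:

```ts
import { createLLMClient } from 'glim-llm';

// Hypothetical sanitizer: strip control characters and collapse runs of whitespace.
const sanitizePrompt = (prompt: string): string =>
  prompt.replace(/[\u0000-\u001F\u007F]/g, '').replace(/\s+/g, ' ').trim();

const client = createLLMClient({
  provider: 'gemini',
  config: { apiKey: process.env.GEMINI_KEY!, model: 'gemini-1.5-flash' },
  sanitize: sanitizePrompt, // replaces the built-in naive cleaning
});
```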
Edge compatibility:
All dependencies are ESM-friendly. Avoid Node-specific APIs if targeting strict edge runtimes (replace the crypto hash with Web Crypto; TODO: auto-detect later).
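If you want a portable hash in your own code (for example, to build a custom cacheKey), a Web Crypto sketch that runs on Node 18+ and edge runtimes; this is illustrative only and not part of the library:

```ts
import { createLLMClient } from 'glim-llm';

// Portable SHA-256 hex digest via Web Crypto (no 'node:crypto' import).
async function sha256Hex(input: string): Promise<string> {
  const bytes = new TextEncoder().encode(input);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

const client = createLLMClient({
  provider: 'openai',
  config: { apiKey: process.env.OPENAI_KEY!, model: 'gpt-4o-mini' },
  cache: { ttlMs: 300_000, max: 500 },
});

// Use the digest as a stable custom cache key for a long prompt.
const prompt = 'Explain the actor model briefly.';
const result = await client.generate({ prompt, cacheKey: await sha256Hex(prompt) });
console.log(result.text);
```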
License:
MIT