@bernierllc/cache-manager

Multi-tier caching with TTL support, cache invalidation, and multiple storage backends.

Version: 1.0.7 (npm, latest)

Installation

npm install @bernierllc/cache-manager

For Redis support:

npm install @bernierllc/cache-manager redis

Quick Start

import { CacheManager } from '@bernierllc/cache-manager';

// Create cache with default memory backend
const cache = new CacheManager({
  strategy: 'lru',
  maxSize: 1000,
  defaultTtl: 60 * 1000 // 1 minute
});

// Set cache value
await cache.set('user:123', { name: 'John', email: 'john@example.com' });

// Get cache value
const user = await cache.get('user:123');
console.log(user); // { name: 'John', email: 'john@example.com' }

// Get or set pattern
const userData = await cache.getOrSet('user:456', async () => {
  return await fetchUserFromDatabase('456');
}, 5 * 60 * 1000); // Cache for 5 minutes

Core Features

  • Multiple Backends: Memory, Redis, database, multi-tier
  • Cache Strategies: LRU, LFU, TTL-based eviction policies
  • Intelligent Invalidation: Tag-based, pattern-based cache invalidation
  • Serialization: JSON, binary, custom serializers
  • Compression: Optional compression for large values
  • Stats & Monitoring: Hit rates, memory usage, performance metrics
  • Distributed Support: Redis-backed distributed caching

API Reference

CacheManager

Constructor

const cache = new CacheManager(options: CacheOptions)

Options:

  • backend?: CacheBackend | CacheBackend[] - Storage backend(s)
  • strategy?: 'lru' | 'lfu' | 'ttl' - Eviction strategy (default: 'lru')
  • maxSize?: number - Maximum cache size (default: 1000)
  • defaultTtl?: number - Default TTL in milliseconds
  • keyPrefix?: string - Prefix for all cache keys
  • onEviction?: (key, value) => void - Eviction callback
  • onExpiration?: (key, value) => void - Expiration callback
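Since `defaultTtl` and per-call TTLs are given in milliseconds, the expiry rule can be made concrete. This is a minimal self-contained model of TTL semantics (an illustration only, not the package's internals; the `Entry` shape is hypothetical):

```typescript
// Minimal TTL model: an entry expires once the clock passes storedAt + ttl.
// An entry with no ttl never expires.
interface Entry<T> {
  value: T;
  storedAt: number; // ms timestamp when the entry was written
  ttl?: number;     // lifetime in ms; undefined = no expiry
}

function isExpired<T>(entry: Entry<T>, now: number = Date.now()): boolean {
  return entry.ttl !== undefined && now >= entry.storedAt + entry.ttl;
}

const entry: Entry<string> = { value: 'x', storedAt: 0, ttl: 1000 };
const freshAt500 = isExpired(entry, 500);   // false: still within its 1000ms lifetime
const staleAt1500 = isExpired(entry, 1500); // true: past storedAt + ttl
```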

Core Methods

// Basic operations
await cache.set(key: string, value: any, ttl?: number, tags?: string[]): Promise<void>
await cache.get<T>(key: string): Promise<T | null>
await cache.delete(key: string): Promise<boolean>
await cache.clear(): Promise<void>
await cache.has(key: string): Promise<boolean>

// Batch operations  
await cache.mget<T>(keys: string[]): Promise<(T | null)[]>
await cache.mset<T>(entries: Record<string, T>, ttl?: number): Promise<void>
await cache.getOrSet<T>(key: string, factory: () => Promise<T>, ttl?: number): Promise<T>

// Invalidation
await cache.invalidateByTag(tag: string): Promise<void>
await cache.invalidateByPattern(pattern: string): Promise<void>

// Utilities
await cache.keys(pattern?: string): Promise<string[]>
await cache.size(): Promise<number>
await cache.getStats(): Promise<CacheStats>
await cache.cleanup(): Promise<number>
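The batch helpers follow conventional semantics: `mget` returns one slot per requested key (with `null` for misses, order preserved), `mset` writes every entry, and `getOrSet` computes and stores only on a miss. A minimal synchronous model of those semantics, independent of the package's actual implementation:

```typescript
// Hypothetical in-memory model of the batch-operation semantics above;
// synchronous for brevity, unlike the Promise-based real API.
class MiniCache<T> {
  private store = new Map<string, T>();

  set(key: string, value: T): void {
    this.store.set(key, value);
  }

  get(key: string): T | null {
    return this.store.has(key) ? (this.store.get(key) as T) : null;
  }

  // mget: one result slot per key, null for misses, order preserved.
  mget(keys: string[]): (T | null)[] {
    return keys.map((k) => this.get(k));
  }

  // mset: write every entry in a single call.
  mset(entries: Record<string, T>): void {
    for (const [k, v] of Object.entries(entries)) this.set(k, v);
  }

  // getOrSet: return the cached value, or compute, store, and return it.
  getOrSet(key: string, factory: () => T): T {
    const hit = this.get(key);
    if (hit !== null) return hit;
    const value = factory();
    this.set(key, value);
    return value;
  }
}

const c = new MiniCache<number>();
c.mset({ a: 1, b: 2 });
const results = c.mget(['a', 'missing', 'b']); // [1, null, 2]
const three = c.getOrSet('c', () => 3);        // miss: factory runs, returns 3
```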

Usage Examples

Basic Memory Caching

import { CacheManager, MemoryCacheBackend } from '@bernierllc/cache-manager';

const cache = new CacheManager({
  backend: new MemoryCacheBackend({
    maxSize: 1000,
    strategy: 'lru'
  }),
  defaultTtl: 60 * 60 * 1000 // 1 hour
});

// Cache user data
await cache.set('user:123', { 
  name: 'John Doe', 
  email: 'john@example.com' 
});

const user = await cache.get('user:123');
console.log(user.name); // 'John Doe'

Redis-backed Distributed Cache

import { CacheManager, RedisCacheBackend } from '@bernierllc/cache-manager';

const cache = new CacheManager({
  backend: new RedisCacheBackend({
    host: 'localhost',
    port: 6379,
    keyPrefix: 'myapp:'
  }),
  defaultTtl: 15 * 60 * 1000 // 15 minutes
});

// Works across multiple application instances
await cache.set('global:config', configObject);

Multi-tier Caching

import { 
  CacheManager, 
  MemoryCacheBackend, 
  RedisCacheBackend 
} from '@bernierllc/cache-manager';

const cache = new CacheManager({
  backend: [
    new MemoryCacheBackend({ maxSize: 100 }),    // L1 cache - fast, small
    new RedisCacheBackend({ host: 'localhost' })  // L2 cache - shared, persistent
  ]
});

// Automatically checks L1, then L2, updates L1 on L2 hits
const data = await cache.get('expensive:computation');

Tag-based Invalidation

const cache = new CacheManager({
  backend: new RedisCacheBackend({ host: 'localhost' })
});

// Cache with tags
await cache.set('post:1', postData, 60 * 60 * 1000, ['user:123', 'category:tech']);
await cache.set('post:2', postData2, 60 * 60 * 1000, ['user:123', 'category:news']);

// Invalidate all posts by user
await cache.invalidateByTag('user:123');

// Invalidate all tech posts  
await cache.invalidateByTag('category:tech');

Pattern-based Invalidation

// Cache user-specific data
await cache.set('user:123:profile', profileData);
await cache.set('user:123:settings', settingsData);
await cache.set('user:456:profile', otherProfileData);

// Invalidate all data for user 123
await cache.invalidateByPattern('user:123:*');
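The `*` wildcard used above can be understood as a glob over keys. The sketch below shows one way such matching works, by compiling the glob to a regular expression; it is an illustration only, and the package's exact pattern syntax may differ:

```typescript
// Hypothetical sketch of '*' glob matching for key patterns.
function matchesPattern(key: string, pattern: string): boolean {
  // Escape regex metacharacters (except '*'), then turn '*' into '.*'.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  const regex = new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
  return regex.test(key);
}

const keys = ['user:123:profile', 'user:123:settings', 'user:456:profile'];
const invalidated = keys.filter((k) => matchesPattern(k, 'user:123:*'));
// invalidated → ['user:123:profile', 'user:123:settings']
```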

Cache Warming and Preloading

// Warm cache with frequently accessed data
async function warmCache() {
  const popularUsers = await getPopularUsers();
  
  for (const user of popularUsers) {
    await cache.set(`user:${user.id}`, user, 60 * 60 * 1000);
  }
}

// Background cache refresh
setInterval(async () => {
  const keys = await cache.keys('user:*');
  
  for (const key of keys) {
    const userId = key.split(':')[1];
    const freshData = await fetchUserFromDatabase(userId);
    await cache.set(key, freshData, 60 * 60 * 1000);
  }
}, 30 * 60 * 1000); // Refresh every 30 minutes

Performance Monitoring

// Monitor cache performance
setInterval(async () => {
  const stats = await cache.getStats();
  
  console.log(`Cache Performance:
    Hit Rate: ${(stats.hitRate * 100).toFixed(1)}%
    Size: ${stats.size} entries
    Memory: ${Math.round(stats.memoryUsage / 1024)}KB
    Evictions: ${stats.evictions}
  `);
}, 60000);

// Automatic cleanup
cache.startCleanupInterval(5 * 60 * 1000); // Clean every 5 minutes

Configuration

Cache Backends

MemoryCacheBackend

new MemoryCacheBackend({
  maxSize: 1000,              // Maximum number of entries
  strategy: 'lru',            // Eviction strategy
  onEviction: (key, value) => console.log('Evicted:', key),
  onExpiration: (key, value) => console.log('Expired:', key)
})

RedisCacheBackend

new RedisCacheBackend({
  host: 'localhost',
  port: 6379,
  password: 'secret',         // Optional
  db: 0,                      // Database number
  keyPrefix: 'cache:',        // Key prefix
  connectionString: 'redis://localhost:6379' // Alternative to host/port
})

MultiTierCacheBackend

new MultiTierCacheBackend({
  backends: [
    new MemoryCacheBackend({ maxSize: 100 }),
    new RedisCacheBackend({ host: 'localhost' })
  ]
})

Cache Strategies

  • LRU (Least Recently Used): Evicts the least recently accessed entries
  • LFU (Least Frequently Used): Evicts the least frequently accessed entries
  • TTL (Time To Live): Evicts entries based on expiration time
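To make the LRU policy concrete, here is a self-contained sketch of LRU eviction that exploits a `Map`'s insertion-order iteration (an illustration of the idea, not the package's internals):

```typescript
// Tiny LRU cache: Map iteration order doubles as recency order.
class LruSketch<T> {
  private map = new Map<string, T>();
  constructor(private maxSize: number) {}

  get(key: string): T | null {
    if (!this.map.has(key)) return null;
    const value = this.map.get(key) as T;
    // Re-insert to mark the entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: string, value: T): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // The first key in iteration order is the least recently used.
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }
}

const lru = new LruSketch<number>(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // touch 'a', so 'b' is now least recently used
lru.set('c', 3); // capacity exceeded: evicts 'b'
```

An LFU variant would track an access count per entry and evict the lowest count instead of the oldest access.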

Serializers

import { JSONSerializer, BinarySerializer } from '@bernierllc/cache-manager';

// JSON serialization (default)
const jsonCache = new CacheManager({
  serializer: new JSONSerializer()
});

// Binary serialization for better performance
const binaryCache = new CacheManager({
  serializer: new BinarySerializer()
});

Error Handling

try {
  await cache.set('key', 'value');
  const value = await cache.get('key');
} catch (error) {
  console.error('Cache operation failed:', error);
  // Fallback to original data source
}

Best Practices

  • Choose appropriate TTL: Set TTL based on data freshness requirements
  • Use appropriate cache size: Balance memory usage with hit rates
  • Implement cache warming: Preload frequently accessed data
  • Monitor performance: Track hit rates and adjust configuration
  • Handle failures gracefully: Always have fallback mechanisms
  • Use tags wisely: Group related cache entries for efficient invalidation
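The "handle failures gracefully" advice can be sketched as a wrapper that treats a throwing cache as a miss and always falls through to the origin. This is a generic pattern with hypothetical names (`CacheLike`, `cachedFetch`), not part of the package's API:

```typescript
// Generic fallback wrapper: if the cache throws, go straight to the source.
interface CacheLike<T> {
  get(key: string): Promise<T | null>;
  set(key: string, value: T): Promise<void>;
}

async function cachedFetch<T>(
  cache: CacheLike<T>,
  key: string,
  fetchFromSource: () => Promise<T>
): Promise<T> {
  try {
    const hit = await cache.get(key);
    if (hit !== null) return hit;
  } catch {
    // Cache read failed: treat it as a miss rather than surfacing the error.
  }
  const value = await fetchFromSource();
  try {
    await cache.set(key, value);
  } catch {
    // Cache write failed: the caller still gets fresh data.
  }
  return value;
}
```

With this shape, an outage of the cache layer degrades performance but never correctness.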

Performance

The cache manager is designed for high-performance scenarios:

  • Memory backend: >50,000 operations/second
  • Redis backend: >10,000 operations/second
  • Multi-tier: Combines speed of memory with persistence of Redis
  • Efficient serialization: Minimal overhead for data conversion
  • Smart eviction: Algorithms optimized for real-world usage patterns

Integration Documentation

Logger Integration

The cache manager supports optional logger integration using @bernierllc/logger:

import { CacheManager } from '@bernierllc/cache-manager';
import { detectLogger } from '@bernierllc/logger';

const cache = new CacheManager({
  strategy: 'lru',
  maxSize: 1000
});

// Auto-detect logger if available
const logger = await detectLogger();
if (logger) {
  // Enhanced logging for cache operations
  cache.on('hit', (key) => {
    logger.debug('Cache hit', { key });
  });
  
  cache.on('miss', (key) => {
    logger.debug('Cache miss', { key });
  });
  
  cache.on('evicted', (key, reason) => {
    logger.info('Cache eviction', { key, reason });
  });
}

NeverHub Integration

The cache manager integrates with NeverHub when available for enhanced service discovery and monitoring:

import { CacheManager } from '@bernierllc/cache-manager';
import { detectNeverHub } from '@bernierllc/neverhub-adapter';

async function initializeCacheManager() {
  const cache = new CacheManager({
    strategy: 'lru',
    maxSize: 1000
  });

  // Auto-detect NeverHub
  const neverhub = await detectNeverHub();
  if (neverhub) {
    // Register cache manager as a service
    await neverhub.register({
      type: 'cache-manager',
      name: '@bernierllc/cache-manager',
      version: '1.0.0',
      capabilities: [
        { type: 'cache', name: 'memory', version: '1.0.0' },
        { type: 'cache', name: 'redis', version: '1.0.0' }
      ]
    });

    // Publish cache events
    cache.on('hit', async (key) => {
      await neverhub.publishEvent({
        type: 'cache.hit',
        data: { key, timestamp: Date.now() }
      });
    });

    // Subscribe to cache invalidation events
    await neverhub.subscribe('cache.invalidate', async (event) => {
      if (event.data.pattern) {
        await cache.invalidateByPattern(event.data.pattern);
      } else if (event.data.key) {
        await cache.delete(event.data.key);
      }
    });
  }

  return cache;
}

Graceful Degradation

The cache manager implements graceful degradation patterns:

  • Works without external services: Core functionality operates independently
  • Logger integration: Enhanced monitoring when logger service is available
  • NeverHub integration: Service discovery and events when NeverHub is present
  • Backend flexibility: Falls back to memory storage if Redis is unavailable
  • Error resilience: Cache failures don't break application functionality
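The memory-fallback behavior described above can be wired explicitly at startup: attempt the preferred backend, and degrade on failure. A hedged sketch where `connectPreferred` and `makeFallback` are stand-ins for whatever constructors your setup uses:

```typescript
// Illustrative backend selection with graceful degradation.
interface Backend {
  name: string;
}

async function chooseBackend(
  connectPreferred: () => Promise<Backend>, // e.g. connect to Redis
  makeFallback: () => Backend               // e.g. in-memory storage
): Promise<Backend> {
  try {
    return await connectPreferred();
  } catch {
    // Preferred backend unreachable (e.g. Redis down): degrade to memory.
    return makeFallback();
  }
}
```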

Dependencies

  • Required: None (core functionality)
  • Optional: redis for Redis backend support
  • Optional: lz4 for compression support
  • Optional: @bernierllc/logger for enhanced logging capabilities
  • Optional: @bernierllc/neverhub-adapter for service discovery integration
  • Internal: @bernierllc/connection-parser for connection string parsing

License

Copyright (c) 2025 Bernier LLC. All rights reserved.

Keywords

cache

Package last updated on 09 Mar 2026