@exa-ai/cache-manager
A cache module for Node.js that allows easy wrapping of functions in cache, tiered caches, and a consistent interface.
pnpm install cache-manager
import { caching } from 'cache-manager';
const memoryCache = await caching('memory', {
max: 100,
ttl: 10 * 1000 /*milliseconds*/,
});
const ttl = 5 * 1000; /*milliseconds*/
await memoryCache.set('foo', 'bar', ttl);
console.log(await memoryCache.get('foo'));
// >> "bar"
await memoryCache.del('foo');
console.log(await memoryCache.get('foo'));
// >> undefined
const getUser = (id: number) => Promise.resolve({ id, name: 'Bob' });
const userId = 123;
const key = 'user_' + userId;
console.log(await memoryCache.wrap(key, () => getUser(userId), ttl));
// >> { id: 123, name: 'Bob' }
See unit tests in test/caching.test.ts for more information.
await memoryCache.store.mset(
[
['foo', 'bar'],
['foo2', 'bar2'],
],
ttl,
);
console.log(await memoryCache.store.mget('foo', 'foo2'));
// >> ['bar', 'bar2']
// Delete keys with mdel() passing arguments...
await memoryCache.store.mdel('foo', 'foo2');
You can use your own custom store by creating one with the same API as the built-in memory stores.
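For illustration, here is a minimal sketch of what such a store could look like, backed by a plain Map. The method surface (get/set/del/reset/mset/mget/mdel/keys/ttl) is assumed to mirror the built-in memory store, and TTL handling is omitted; this is a sketch, not a production-ready store.
// Minimal sketch of a Map-backed custom store (illustrative only).
// Assumes the store contract mirrors the built-in memory store's methods.
const mapStore = () => {
const data = new Map<string, unknown>();
return {
async get<T>(key: string) { return data.get(key) as T | undefined; },
async set<T>(key: string, value: T, ttl?: number) { data.set(key, value); /* ttl ignored in this sketch */ },
async del(key: string) { data.delete(key); },
async reset() { data.clear(); },
async mset(args: [string, unknown][], ttl?: number) { for (const [k, v] of args) data.set(k, v); },
async mget(...keys: string[]) { return keys.map((k) => data.get(k)); },
async mdel(...keys: string[]) { for (const k of keys) data.delete(k); },
async keys() { return [...data.keys()]; },
async ttl(key: string) { return -1; /* no expiry tracking in this sketch */ },
};
};
const customCache = await caching(mapStore());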
As caching() requires async functionality to resolve some stores, it is not well suited for use in default function/constructor parameters.
If you need to create a cache store synchronously, you can instead use createCache():
import { createCache, memoryStore } from 'cache-manager';
// Create memory cache synchronously
const memoryCache = createCache(memoryStore(), {
max: 100,
ttl: 10 * 1000 /*milliseconds*/,
});
// Default parameter in function
function myService(cache = createCache(memoryStore())) {}
// Default parameter in class constructor
const DEFAULT_CACHE = createCache(memoryStore(), { ttl: 60 * 1000 });
// ...
class MyService {
constructor(private cache = DEFAULT_CACHE) {}
}
import { multiCaching } from 'cache-manager';
const multiCache = multiCaching([memoryCache, someOtherCache]);
const userId2 = 456;
const key2 = 'user_' + userId2;
const ttl = 5 * 1000; /*milliseconds*/
// Sets in all caches.
await multiCache.set('foo2', 'bar2', ttl);
// Fetches from highest priority cache that has the key.
console.log(await multiCache.get('foo2'));
// >> "bar2"
// Delete from all caches
await multiCache.del('foo2');
// Sets multiple keys in all caches.
// You can pass as many key, value tuples as you want
await multiCache.mset(
[
['foo', 'bar'],
['foo2', 'bar2'],
],
ttl
);
// mget() fetches from highest priority cache.
// If the first cache does not return all the keys,
// the next cache is fetched with the keys that were not found.
// This is done recursively until either:
// - all have been found
// - all caches have been fetched
console.log(await multiCache.mget('foo', 'foo2'));
// >> ['bar', 'bar2']
// Delete keys with mdel() passing arguments...
await multiCache.mdel('foo', 'foo2');
See unit tests in test/multi-caching.test.ts for more information.
The caching and multiCaching functions accept an options object as the second parameter. The following option is available:
shouldCloneBeforeSet: if true, the value will be cloned before being set in the cache. This is true by default.
import { caching } from 'cache-manager';
const memoryCache = await caching('memory', {
max: 100,
ttl: 10 * 1000 /*milliseconds*/,
shouldCloneBeforeSet: false, // this is set true by default (optional)
});
Both the caching and multiCaching modules support a mechanism to refresh expiring cache keys in the background when using the wrap function.
This is done by adding a refreshThreshold attribute when creating the caching store, or by passing it to the wrap function.
If refreshThreshold is set, the remaining TTL is checked after a value is retrieved from the cache.
If the remaining TTL is less than refreshThreshold, the system will update the value asynchronously,
following the same rules as standard fetching. In the meantime, the system will return the old value until expiration.
NOTES:
The background refresh mechanism does not support providing multiple keys to the wrap function.
If no ttl is set for the key, the refresh mechanism will not be triggered. For redis, the ttl is set to -1 by default.
For example, pass refreshThreshold to caching like this:
const memoryCache = await caching('memory', {
max: 100,
ttl: 10 * 1000 /*milliseconds*/,
refreshThreshold: 3 * 1000 /*milliseconds*/,
/* optional, but if not set, background refresh error will be an unhandled
* promise rejection, which might crash your node process */
onBackgroundRefreshError: (error) => { /* log or otherwise handle error */ }
});
With this configuration, when a value is retrieved from the cache with a remaining TTL of less than 3 seconds, it will be refreshed in the background.
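The example above sets refreshThreshold when creating the store; as a sketch of the per-call form mentioned earlier, assuming wrap accepts refreshThreshold as a fourth argument after the ttl:
// Sketch: per-call background refresh (assumes a fourth refreshThreshold argument).
const user = await memoryCache.wrap(
'user_123',
() => getUser(123),
10 * 1000 /* ttl, milliseconds */,
3 * 1000 /* refreshThreshold, milliseconds */,
);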
Cache Manager now does not throw errors by default. Instead, all errors are emitted via the error event. Here is an example of how to use it:
const memoryCache = await caching('memory', {
max: 100,
ttl: 10 * 1000 /*milliseconds*/,
});
memoryCache.on('error', (error) => {
console.error('Cache error:', error);
});
If you would like to contribute to the project, please read CONTRIBUTING.md.
cache-manager is licensed under the MIT license.