Security News
PyPI Introduces Digital Attestations to Strengthen Python Package Security
PyPI now supports digital attestations, enhancing security and trust by allowing package maintainers to verify the authenticity of Python packages.
lru-cache-for-clusters-as-promised
LRU Cache for Clusters as Promised provides a cluster-safe `lru-cache` via Promises. For environments not using `cluster`, the class will provide a Promisified interface to a standard `lru-cache`.
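In non-clustered mode, the value added is essentially a Promise wrapper around the underlying cache. The stand-in below is a minimal sketch of that interface, not the package's actual implementation (it uses a plain `Map` in place of `lru-cache`):

```javascript
// Minimal sketch of a Promise-based LRU (illustrative only; the real
// package wraps the `lru-cache` module and adds IPC support for clusters).
class TinyPromisedLRU {
  constructor({ max = 100 } = {}) {
    this.max = max;
    this.map = new Map(); // a Map preserves insertion order
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // evict the least-recently-used entry (first key in the Map)
      this.map.delete(this.map.keys().next().value);
    }
    return Promise.resolve();
  }
  get(key) {
    if (!this.map.has(key)) return Promise.resolve(undefined);
    const value = this.map.get(key);
    // refresh recency on access
    this.map.delete(key);
    this.map.set(key, value);
    return Promise.resolve(value);
  }
  length() {
    return Promise.resolve(this.map.size);
  }
}
```

Every method returns a `Promise`, so the same calling code works whether the cache is local or lives on another process.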
Each time you call `cluster.fork()`, a new process is spawned to run your application. When using a load balancer, even if a user is assigned a particular IP and port, those values are shared between the workers in your cluster, which means there is no guarantee that the user will hit the same worker between requests. Caching the same objects in multiple processes is not an efficient use of memory.
LRU Cache for Clusters as Promised stores a single `lru-cache` on the master process, which the workers access via IPC messages. The same `lru-cache` is shared between all workers that have a common master, so no memory is wasted.
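The worker-to-master flow can be pictured as a small request/response protocol. The sketch below is illustrative only: the real module uses `process.send()` and `worker.on('message')` with per-request ids, not direct function calls.

```javascript
// Simplified sketch of the worker/master round-trip (illustrative only).
const masterCaches = new Map(); // namespace -> Map standing in for an lru-cache

// what the master does when a worker's message arrives
function handleWorkerMessage({ namespace, func, args }) {
  if (!masterCaches.has(namespace)) masterCaches.set(namespace, new Map());
  const cache = masterCaches.get(namespace);
  switch (func) {
    case 'set': cache.set(args[0], args[1]); return undefined;
    case 'get': return cache.get(args[0]);
    case 'length': return cache.size;
    default: throw new Error(`unsupported func: ${func}`);
  }
}

// what a worker-side method does: send the request, resolve on the reply
// (in the real module this is process.send(...) plus a message listener)
function workerCall(namespace, func, ...args) {
  return Promise.resolve(handleWorkerMessage({ namespace, func, args }));
}
```

Because every worker routes through the same master-side store, each cached object exists exactly once per namespace.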
When creating a new instance and `cluster.isMaster === true`, the shared cache is checked based on the namespace; if the shared cache is already populated it will be used instead, acted on locally rather than via IPC messages. If the shared cache is not populated, a new `LRUCache` instance is returned.
```shell
npm install --save lru-cache-for-clusters-as-promised
```
* `namespace: string`, default `"default"`.
* `timeout: integer`, default `100`. The number of milliseconds a worker waits for a response from the master before the `failsafe` applies to the `Promise`.
* `failsafe: string`, default `resolve`. The `Promise` will return `resolve(undefined)` by default, or with a value of `reject` the return will be `reject(Error)`.
* `max: number`
* `maxAge: milliseconds`
* `stale: true|false`. When `true`, expired items are returned before they are removed rather than `undefined`.
* `prune: false|crontime string`, defaults to `false`. Will run `prune()` on your cache at regular intervals specified by the crontime string, for example `"*/30 * * * * *"` would prune the cache every 30 seconds. Also works in single-threaded environments not using the `cluster` module.

Note that `length` and `dispose` are missing, as it is not possible to pass functions via IPC messages.
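Putting the options together, a hypothetical configuration for a session cache might look like the following (the values are illustrative choices, not defaults):

```javascript
// Hypothetical options for a session cache, to be passed as
// `new LRUCache(sessionCacheOptions)`; values are illustrative, not defaults.
const sessionCacheOptions = {
  namespace: 'sessions',    // isolates this cache from others on the master
  max: 1000,                // at most 1000 entries
  maxAge: 60 * 1000,        // entries expire after one minute
  stale: false,             // expired entries resolve to undefined
  prune: '*/30 * * * * *',  // run prune() every 30 seconds
  timeout: 100,             // wait up to 100ms for the master to respond
  failsafe: 'reject',       // reject the Promise on timeout
};
```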
* `set(key, value, maxAge)` — sets a value for a key. Specifying a `maxAge` will cause the value to expire per the `stale` value or when `prune`d.
* `mSet({ key1: 1, key2: 2, ... }, maxAge)` — sets multiple key/value pairs.
* `mSetObjects({ key1: { obj: 1 }, key2: { obj: 2 }, ... }, maxAge)` — sets multiple key/value pairs where the values are objects.
* `get(key)` — returns the value for a key.
* `mGet([key1, key2, ...])` — returns multiple values as an object like `{ key1: '1', key2: '2' }`.
* `mGetObjects([key1, key2, ...])` — returns multiple values that are objects, as an object like `{ key1: '1', key2: '2' }`.
* `peek(key)` — returns the value for a key without updating its recently-used-ness.
* `del(key)` — removes a value from the cache.
* `mDel([key1, key2, ...])` — removes multiple values from the cache.
* `has(key)` — returns whether a key is present in the cache.
* `incr(key, [amount])` — increments the value for a key by `amount`, which defaults to `1`. More atomic in a clustered environment.
* `decr(key, [amount])` — decrements the value for a key by `amount`, which defaults to `1`. More atomic in a clustered environment.
* `reset()` — removes all items from the cache.
* `keys()` — returns the keys in the cache.
* `values()` — returns the values in the cache.
* `dump()` — returns a serialized copy of the cache contents.
* `prune()` — removes expired items from the cache.
* `length()` — returns the number of items in the cache.
* `itemCount()` — returns the number of items in the cache, the same as `length()`.
* `max([max])` — get or set the `max` value for the cache.
* `maxAge([maxAge])` — get or set the `maxAge` value for the cache.
* `stale([true|false])` — get or set the `stale` value for the cache.

```javascript
// require the module in your master thread that creates workers to initialize
const LRUCache = require('lru-cache-for-clusters-as-promised');
LRUCache.init();
```

```javascript
// worker code
const LRUCache = require('lru-cache-for-clusters-as-promised');
const cache = new LRUCache({
  namespace: 'users',
  max: 50,
  stale: false,
  timeout: 100,
  failsafe: 'resolve',
});

const user = { name: 'user name' };
const key = 'userKey';

// set a user for the key
cache.set(key, user)
  .then(() => {
    console.log('set the user to the cache');
    // get the same user back out of the cache
    return cache.get(key);
  })
  .then((cachedUser) => {
    console.log('got the user from cache', cachedUser);
    // check the number of users in the cache
    return cache.length();
  })
  .then((size) => {
    console.log('user cache size/length', size);
    // remove all the items from the cache
    return cache.reset();
  })
  .then(() => {
    console.log('the user cache is empty');
    // return the user count, the same value as calling length()
    return cache.itemCount();
  })
  .then((size) => {
    console.log('user cache size/itemCount', size);
  });
```
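`incr()` and `decr()` are described as more atomic because the master performs the whole read-modify-write itself, rather than a worker doing a `get()` followed by a `set()` across two IPC round-trips. A stand-in sketch of that idea (not the package's code; a plain `Map` plays the master-side cache):

```javascript
// Why incr() is "more atomic": the master applies the entire
// read-modify-write, so two workers cannot interleave a stale read.
// Stand-in using a plain Map as the master-side cache (illustrative only).
const counters = new Map();

function incr(key, amount = 1) {
  // in the real module this body runs on the master, not the worker
  const next = (counters.get(key) || 0) + amount;
  counters.set(key, next);
  return Promise.resolve(next);
}

function decr(key, amount = 1) {
  return incr(key, -amount);
}
```

With the real package, `cache.incr('hits')` in any worker sends a single IPC message, and the master replies with the updated count.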
* Clustered cache on the master process for clustered environments
* Promisified for non-clustered environments
FAQs
LRU Cache that is safe for clusters, based on `lru-cache`. Save memory by only caching items on the main thread via a promisified interface.
We found that lru-cache-for-clusters-as-promised demonstrated an unhealthy version release cadence and project activity because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.