lru-cache-for-clusters-as-promised
LRU Cache for Clusters as Promised provides a cluster-safe `lru-cache` via Promises. For environments not using `cluster`, the class provides a Promisified interface to a standard `lru-cache`.
Each time you call `cluster.fork()`, a new thread is spawned to run your application. When using a load balancer, even if a user is assigned a particular IP and port, those values are shared between the workers in your cluster, so there is no guarantee that the user will hit the same worker between requests. Caching the same objects in multiple threads is not an efficient use of memory.
LRU Cache for Clusters as Promised stores a single `lru-cache` on the master thread, which is accessed by the workers via IPC messages. The same `lru-cache` is shared between workers having a common master, so no memory is wasted.
When creating a new instance and `cluster.isMaster === true`, the shared cache is checked based on the namespace; if the shared cache is populated it will be used, but acted on locally rather than via IPC messages. If the shared cache is not populated, a new LRUCache instance is returned.
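As a rough sketch of how these pieces fit together (the 'sessions' namespace and the two-worker count are illustrative assumptions, not anything the package requires):

// one file run by both the master and its forked workers (illustrative sketch)
const cluster = require('cluster');
const LRUCache = require('lru-cache-for-clusters-as-promised');

if (cluster.isMaster) {
  // the master owns the real lru-cache instances, one per namespace
  LRUCache.init();
  // each fork runs this same file again as a worker
  for (let i = 0; i < 2; i += 1) {
    cluster.fork();
  }
} else {
  // workers with the same namespace read and write the same master-side cache via IPC
  const sessions = new LRUCache({ namespace: 'sessions', max: 1000 });
  sessions
    .set(`worker-${cluster.worker.id}`, Date.now())
    .then(() => sessions.keys())
    .then((keys) => console.log('keys visible to this worker:', keys));
}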
Install

npm install --save lru-cache-for-clusters-as-promised
yarn add lru-cache-for-clusters-as-promised
Options

- namespace: string, default "default"
  - the namespace for this cache; caches are kept per namespace on the master thread
- timeout: integer, default 100
  - the number of milliseconds a worker will wait for a response from the master before settling the Promise
- failsafe: string, default resolve
  - on a timeout the Promise will return resolve(undefined) by default, or with a value of reject the return will be reject(Error)
- max: number
  - the maximum number of items stored in the cache
- maxAge: milliseconds
  - the maximum age for an item to be considered valid
- stale: true|false
  - when true, expired items are returned before they are removed rather than undefined
- prune: false|crontime string, defaults to false
  - schedules a job that calls prune() on your cache at regular intervals specified in "crontime", for example "*/30 * * * * *" would prune the cache every 30 seconds (see node-cron patterns for more info). Also works in single threaded environments not using the cluster module. Passing false to an existing namespace will disable any jobs that are scheduled.
- parse: function, defaults to JSON.parse
  - used to parse stored object values; it is set on the LRUCacheForClustersAsPromised instance and in theory could be different per worker
- stringify: function, defaults to JSON.stringify
  - used to stringify object values before they are stored

! note that length and dispose are missing as it is not possible to pass functions via IPC messages.
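A sketch of how these options might be combined in one constructor call; the namespace, sizes, and crontime below are arbitrary values chosen for illustration:

const LRUCache = require('lru-cache-for-clusters-as-promised');

const cache = new LRUCache({
  namespace: 'api-responses',  // illustrative namespace
  max: 500,                    // keep at most 500 entries
  maxAge: 60 * 1000,           // entries are valid for one minute
  stale: false,                // expired entries come back as undefined
  timeout: 200,                // wait up to 200ms for the master to respond
  failsafe: 'resolve',         // resolve(undefined) instead of rejecting on timeout
  prune: '*/30 * * * * *',     // prune expired items every 30 seconds via node-cron
});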
API

- init(): void
  - should be called when cluster.isMaster === true to initialize the caches
- getInstance(options): Promise<LRUCacheForClustersAsPromised>
  - asynchronously returns an LRUCacheForClustersAsPromised instance once the underlying LRUCache is guaranteed to exist. Uses the same options you would pass to the constructor. When constructed synchronously other methods will ensure the underlying cache is created, but this method can be useful from a worker when you plan to interact with the caches directly. Note that this will slow down construction on the worker by a few milliseconds while the cache creation is confirmed.
- getAllCaches(): { key: LRUCache }
  - returns all of the underlying LRUCache caches keyed by namespace. Accessible only when cluster.isMaster === true, otherwise throws an exception.
- getCache(): LRUCache
  - returns the underlying LRUCache. Accessible only when cluster.isMaster === true, otherwise throws an exception.
- set(key, value, maxAge): Promise<void>
  - specifying the maxAge will cause the value to expire per the stale value or when pruned
- setObject(key, object, maxAge): Promise<void>
  - stores the object via cache.stringify(), which defaults to JSON.stringify(). Use a custom serializer like flatted to handle cases like circular object references.
- mSet({ key1: 1, key2: 2, ... }, maxAge): Promise<void>
- mSetObjects({ key1: { obj: 1 }, key2: { obj: 2 }, ... }, maxAge): Promise<void>
  - passes the values through cache.stringify(), see cache.setObject()
- get(key): Promise<string | number | null | undefined>
- getObject(key): Promise<Object | null | undefined>
  - reads the object via cache.parse(), which defaults to JSON.parse(). Use a custom parser like flatted to handle cases like circular object references.
- mGet([key1, key2, ...]): Promise<{ key: string | number | null | undefined }?>
  - returns an object like { key1: '1', key2: '2' }
- mGetObjects([key1, key2, ...]): Promise<{ key: Object | null | undefined }?>
  - returns an object like { key1: '1', key2: '2' }, passing the values through cache.parse(), see cache.getObject()
- peek(key): Promise<string | number | null | undefined>
- del(key): Promise<void>
- mDel([key1, key2, ...]): Promise<void>
- has(key): Promise<boolean>
- incr(key, [amount]): Promise<number>
  - increments the value by the amount, which defaults to 1. More atomic in a clustered environment.
- decr(key, [amount]): Promise<number>
  - decrements the value by the amount, which defaults to 1. More atomic in a clustered environment.
- reset(): Promise<void>
- keys(): Promise<Array<string>>
- values(): Promise<Array<string | number>>
- dump()
- prune(): Promise<void>
- length(): Promise<number>
- itemCount(): Promise<number>
  - returns the same value as length()
- max([max]): Promise<number | void>
  - gets or updates the max value for the cache
- maxAge([maxAge]): Promise<number | void>
  - gets or updates the maxAge value for the cache
- allowStale([true|false]): Promise<boolean | void>
  - gets or updates the allowStale value for the cache (set via stale in options). The stale() method is deprecated.
- execute(command, [arg1, arg2, ...]): Promise<any>
  - executes a command (an LRUCache function) on the cache and returns whatever value was returned
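A short sketch that exercises a few of these methods together; the 'counters' namespace and page keys are made-up examples:

const LRUCache = require('lru-cache-for-clusters-as-promised');
const cache = new LRUCache({ namespace: 'counters' });

(async function () {
  // mSet/mGet batch several keys into a single round trip to the master
  await cache.mSet({ pageA: 0, pageB: 0 });
  // incr is applied on the master, so concurrent workers do not clobber
  // each other the way a separate get-then-set would
  const hits = await cache.incr('pageA');
  console.log('pageA hits:', hits);
  const pages = await cache.mGet(['pageA', 'pageB']);
  console.log('both counters:', pages);
  console.log('cache size:', await cache.length());
}());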
Example usage

Master
// require the module in your master thread that creates workers to initialize
require('lru-cache-for-clusters-as-promised').init();
Worker
// worker code
const LRUCache = require('lru-cache-for-clusters-as-promised');
const cache = new LRUCache({
namespace: 'users',
max: 50,
stale: false,
timeout: 100,
failsafe: 'resolve',
});
// async cache
(async function() {
const options = { /* ...options */ };
const cache = await LRUCache.getInstance(options);
}());
const user = { name: 'user name' };
const key = 'userKey';
// set a user for the key
cache.set(key, user)
.then(() => {
console.log('set the user to the cache');
// get the same user back out of the cache
return cache.get(key);
})
.then((cachedUser) => {
console.log('got the user from cache', cachedUser);
// check the number of users in the cache
return cache.length();
})
.then((size) => {
console.log('user cache size/length', size);
// remove all the items from the cache
return cache.reset();
})
.then(() => {
console.log('the user cache is empty');
// return user count, this will return the same value as calling length()
return cache.itemCount();
})
.then((size) => {
console.log('user cache size/itemCount', size);
});
Use a custom object parser for the cache to handle cases like circular object references that JSON.parse() and JSON.stringify() cannot, or use custom revivers, etc.
const flatted = require('flatted');
const LRUCache = require('lru-cache-for-clusters-as-promised');
const cache = new LRUCache({
namespace: 'circular-objects',
max: 50,
parse: flatted.parse,
stringify: flatted.stringify,
});
// create a circular reference
const a = { b: null };
const b = { a };
b.a.b = b;
(async function () {
  // this will work
  await cache.setObject(1, a);
  // this will return an object with the same circular reference via flatted
  const c = await cache.getObject(1);
  // c is a copy of a, but the circular reference is preserved
  if (c !== a && c.b.a === c) {
    console.log('yes, the circular reference survived!');
  }
}());
- Clustered cache on master thread for clustered environments
- Promisified for non-clustered environments
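When the cluster module is not in play, the same calls resolve against a local lru-cache; a minimal sketch (the 'local-only' namespace is just an example):

// no cluster.fork() anywhere in this process
const LRUCache = require('lru-cache-for-clusters-as-promised');
const cache = new LRUCache({ namespace: 'local-only', max: 100 });

cache.set('greeting', 'hello')
  .then(() => cache.get('greeting'))
  .then((value) => console.log(value)); // 'hello'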
FAQs
LRU Cache that is safe for clusters, based on `lru-cache`. Save memory by only caching items on the main thread via a promisified interface.
We found that lru-cache-for-clusters-as-promised demonstrates an unhealthy version release cadence and low project activity because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.