# Distributed callback queue

The purpose of this module is to allow only one instance of a given action to run at a time across any number of machines. This is solved by acquiring a lock before the action is performed. Currently Redis is used as the backing provider: it is fast and more than reliable for this task, especially when combined with Redis Sentinel, or with Redis Cluster in the future (once it's production ready).

The library requires you to pass two Redis instances: one is used for acquiring and releasing locks, and the other for pub/sub events (which is how the system becomes distributed).
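The lock-before-action idea described above can be illustrated with an in-memory stand-in for a Redis `SET key value NX PX ttl` call (the `acquireLock`/`releaseLock` helpers below are hypothetical and not part of dlock):

```javascript
// In-memory sketch of expiring-lock acquisition; Redis would do this
// atomically with SET NX PX. Names are illustrative only.
const store = new Map();

function acquireLock(key, ttlMs) {
  const now = Date.now();
  const expiresAt = store.get(key);
  if (expiresAt !== undefined && expiresAt > now) {
    return false; // somebody else holds an unexpired lock
  }
  store.set(key, now + ttlMs); // lock auto-expires after ttlMs
  return true;
}

function releaseLock(key) {
  store.delete(key);
}
```

Only the process that successfully acquires the lock performs the action; everyone else either retries or waits for the result over pub/sub.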
```sh
npm install dlock -S
```

## Usage
Sample configuration with option descriptions:

```js
const CallbackQueue = require('dlock');

const opts = {
  client: redis,                      // redis instance used for acquiring and releasing locks
  pubsub: pubsub,                     // separate redis instance used for pub/sub events
  pubsubChannel: '{mypubsubchannel}', // channel used to broadcast job results
  lock: {
    timeout: 20000, // lock timeout, ms
    retries: 0,     // number of acquisition retries
    delay: 500,     // delay between retries, ms
  },
  lockPrefix: '{mylockprefix}', // prefix for lock keys
  log: false,                   // disable logging
};

const callbackQueue = new CallbackQueue(opts);
```
## In-flight Request Caching

Perform a request only once and fan out its results via Redis pub/sub across the network, so that we never perform more than one request to the same resource in parallel.
```js
const noop = () => {};
const errPredicate = err => err.name !== 'LockAcquisitionError';

callbackQueue
  .push('jobid', onJobCompleted)
  .then(completed => {
    // We hold the lock: do the work, then broadcast the result
    // to every subscriber of this job id.
    const nastylongcalculations = 1 + 1;
    completed(null, nastylongcalculations);
  })
  // Somebody else is already working on this job - we'll simply
  // receive the result in `onJobCompleted`. Predicate-style .catch
  // requires bluebird-style promises.
  .catch({ name: 'LockAcquisitionError' }, noop)
  .catch(errPredicate, err => {
    // an error unrelated to lock acquisition - handle it here
  });

// Invoked with the job's results, whether we performed the work
// ourselves or received it over pub/sub.
function onJobCompleted(err, ...args) {
  if (err) {
    return console.error('Failed to complete job: ', err);
  }
  console.log(args[0]);
}
```
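The core idea behind in-flight request caching can be shown with an in-process sketch (the `dedupe` helper below is hypothetical, not dlock's API): concurrent callers with the same job id share a single execution of the work.

```javascript
// Map of jobId -> in-flight promise. While a request for a job id is
// running, later callers piggyback on the same promise.
const inFlight = new Map();

function dedupe(jobId, work) {
  if (inFlight.has(jobId)) return inFlight.get(jobId);
  const promise = Promise.resolve()
    .then(work)
    .finally(() => inFlight.delete(jobId)); // allow future requests again
  inFlight.set(jobId, promise);
  return promise;
}
```

dlock extends this idea across machines: the promise map becomes a Redis lock, and the shared result travels over pub/sub instead of a local promise.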
## Async/Await In-flight Request Caching

### fanout(jobId: String, [timeout: Number], job: Function)

Use the `fanout(...)` method as the easiest way to handle job subscriptions where one actor must perform a long-running job, and as soon as it's done, everyone who queued for the results of this job must be notified. The following sample makes use of this feature:
```js
const jobId = 'xxx';
const job = async () => {
  await Promise.delay(10000); // simulate a long-running job (bluebird)
  return 'done';
};

let result;

try {
  // The job takes ~10s, so waiting at most 2000ms rejects with a
  // timeout error - the job itself keeps running.
  result = await callbackQueue.fanout(jobId, 2000, job);
} catch (e) {
  // timeout error - the job did not finish within 2000ms
}

try {
  // 20000ms is enough for the job to complete
  result = await callbackQueue.fanout(jobId, 20000, job);
} catch (e) {
  // handle other errors
}
```
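The timeout behaviour assumed in the sample above can be sketched in isolation (the `withTimeout` helper is hypothetical, not a dlock internal): the waiter gives up after `ms` milliseconds while the underlying job keeps running.

```javascript
// Reject after `ms` milliseconds unless `promise` settles first.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```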
## Distributed Resource Locking

Allows acquiring a lock across multiple processes, backed by Redis.
```js
callbackQueue
  .once('app:job:processing')
  .then(lock => {
    // lock acquired - do the work, then release it;
    // .reflect() (bluebird) swallows a possible release error
    return lock.release().reflect();
  })
  .catch(err => {
    // failed to acquire the lock - somebody else holds it
  });
```
## Distributed Locking on Multiple Keys

A slightly more complex lock, which ensures that we can acquire every lock from a list. If at least one lock is not acquired, we can't proceed. This is helpful when part of a resource can be altered by a separate action whose side effects would invalidate further actions performed under the multi-lock.
```js
const { MultiLockError } = CallbackQueue;

callbackQueue
  .multi('1', '2', '3', ['4', '5'])
  .then(multiLock => {
    return multiLock
      .extend(1000) // prolong the underlying locks by 1000ms
      .then(() => longRunningJob())
      .then(() => multiLock.release())
      .catch(MultiLockError, e => {
        // failed to release the locks
      });
  })
  .catch(MultiLockError, e => {
    // failed to acquire one or more locks
  })
  .catch(e => {
    // any other error
  });
```
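The all-or-nothing acquisition can be sketched as follows (the `acquireAll` helper and the lock shape with `acquire()`/`release()` are assumptions for illustration, not dlock internals): if any acquisition fails, locks already held are released before the error propagates.

```javascript
// Acquire every lock in order; on the first failure, roll back the
// locks acquired so far and rethrow.
async function acquireAll(locks) {
  const held = [];
  try {
    for (const lock of locks) {
      held.push(await lock.acquire());
    }
    return held;
  } catch (err) {
    await Promise.all(held.map(l => l.release()));
    throw err;
  }
}
```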
## Semaphore

Ensures that requests are processed one by one. It doesn't guarantee FIFO ordering, but it does guarantee that no more than one request runs at any given time.
```js
const Promise = require('bluebird');
const semaphore = callbackQueue.semaphore('job:id');

// Disposer-based usage: the semaphore is released automatically
// when the handler settles.
Promise.using(semaphore.take(), () => {
  // critical section
});

// Manual usage: release the semaphore yourself in a finally block.
await semaphore.take(false);
try {
  // critical section
} finally {
  semaphore.leave();
}
```
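The take/leave semantics can be sketched with a minimal in-process semaphore (the `SimpleSemaphore` class is hypothetical, not dlock's Redis-backed implementation): at most one holder at a time, with waiters queued until `leave()` hands over ownership.

```javascript
// Binary semaphore: take() resolves immediately when free, otherwise
// queues the caller; leave() wakes the next waiter, if any.
class SimpleSemaphore {
  constructor() {
    this.busy = false;
    this.waiters = [];
  }

  take() {
    if (!this.busy) {
      this.busy = true;
      return Promise.resolve();
    }
    return new Promise(resolve => this.waiters.push(resolve));
  }

  leave() {
    const next = this.waiters.shift();
    if (next) next(); // hand ownership straight to the next waiter
    else this.busy = false;
  }
}
```

Note that `shift()` makes this sketch FIFO; as stated above, the real semaphore makes no such ordering guarantee, only mutual exclusion.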