poolifier
A fast, easy to use Node.js Worker Thread Pool and Cluster Pool implementation
Poolifier is used to perform CPU intensive and I/O intensive tasks on Node.js servers. It implements worker pools using the worker_threads module and cluster pools using the Node.js cluster module.
With poolifier you can improve your performance and resolve problems related to the event loop.
Moreover, you can execute your tasks using an API designed to improve the developer experience.
Please consult our general guidelines.
Poolifier contains two worker-threads/cluster worker pool implementations; you don't have to deal with worker-threads/cluster worker complexity.
The first implementation is a static worker pool, with a defined number of workers that are started at creation time and will be reused.
The second implementation is a dynamic worker pool, with a number of workers started at creation time (these workers will always be active and reused) and other workers created when the load increases (with an upper limit; these workers will be reused while active); the newly created workers will be stopped after a configurable period of inactivity.
You have to implement your worker by extending the ThreadWorker or ClusterWorker class.
npm install poolifier --save
You can implement a worker-threads worker in a simple way by extending the class ThreadWorker:
'use strict'
const { ThreadWorker } = require('poolifier')

function yourFunction(data) {
  // this will be executed in the worker thread,
  // the data is the argument passed to the execute method
  return { ok: 1 }
}

module.exports = new ThreadWorker(yourFunction, {
  maxInactiveTime: 60000
})
Instantiate your pool based on your needs:
'use strict'
const { DynamicThreadPool, FixedThreadPool, PoolEvents } = require('poolifier')

// a fixed worker-threads pool
const fixedPool = new FixedThreadPool(15, './yourWorker.js', {
  errorHandler: (e) => console.error(e),
  onlineHandler: () => console.log('worker is online')
})
fixedPool.emitter.on(PoolEvents.busy, () => console.log('Pool is busy'))

// or a dynamic worker-threads pool
const dynamicPool = new DynamicThreadPool(10, 100, './yourWorker.js', {
  errorHandler: (e) => console.error(e),
  onlineHandler: () => console.log('worker is online')
})
dynamicPool.emitter.on(PoolEvents.full, () => console.log('Pool is full'))
dynamicPool.emitter.on(PoolEvents.busy, () => console.log('Pool is busy'))

// the execute method signature is the same for both implementations,
// so you can easily switch from one to the other
dynamicPool
  .execute({})
  .then((res) => console.log(res))
  .catch((err) => console.error(err))
You can do the same with the classes ClusterWorker, FixedClusterPool and DynamicClusterPool.
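For example, a cluster worker can be implemented the same way (a minimal sketch; the file name yourClusterWorker.js is just a placeholder):
'use strict'
const { ClusterWorker } = require('poolifier')

function yourFunction(data) {
  // this will be executed in a forked child process,
  // the data is the argument passed to the execute method
  return { ok: 1 }
}

module.exports = new ClusterWorker(yourFunction, {
  maxInactiveTime: 60000
})
and consumed from a cluster pool:
'use strict'
const { FixedClusterPool } = require('poolifier')

// a fixed cluster pool backed by Node.js child processes
const pool = new FixedClusterPool(8, './yourClusterWorker.js', {
  errorHandler: (e) => console.error(e)
})

pool
  .execute({})
  .then((res) => console.log(res))
  .catch((err) => console.error(err))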
See the examples folder for more details (in particular if you want to use a pool with multiple worker functions).
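For that multiple task functions use case, a minimal sketch (the function names multiplier and adder are only illustrative):
'use strict'
const { ThreadWorker } = require('poolifier')

function multiplier(data) {
  return { result: data.n * 2 }
}

function adder(data) {
  return { result: data.n + 1 }
}

// expose several task functions as an object; the object keys are the task function names
module.exports = new ThreadWorker({ multiplier, adder })
On the pool side, pool.execute({ n: 21 }, 'adder') would then run the adder task function on a worker.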
Remember that workers can only send and receive structured-cloneable data.
Node versions >= 16.14.x are supported.
PoolOptions
An object with these properties:
messageHandler
(optional) - A function that will listen for the message event on each worker.
errorHandler
(optional) - A function that will listen for the error event on each worker.
onlineHandler
(optional) - A function that will listen for the online event on each worker.
exitHandler
(optional) - A function that will listen for the exit event on each worker.
workerChoiceStrategy
(optional) - The worker choice strategy to use in this pool:
WorkerChoiceStrategies.ROUND_ROBIN: Submit tasks to worker in a round robin fashion
WorkerChoiceStrategies.LEAST_USED: Submit tasks to the worker with the minimum number of executed, executing and queued tasks
WorkerChoiceStrategies.LEAST_BUSY: Submit tasks to the worker with the minimum tasks total execution and wait time
WorkerChoiceStrategies.LEAST_ELU: Submit tasks to the worker with the minimum event loop utilization (ELU) (experimental)
WorkerChoiceStrategies.WEIGHTED_ROUND_ROBIN: Submit tasks to worker by using a weighted round robin scheduling algorithm based on tasks execution time
WorkerChoiceStrategies.INTERLEAVED_WEIGHTED_ROUND_ROBIN: Submit tasks to worker by using an interleaved weighted round robin scheduling algorithm based on tasks execution time (experimental)
WorkerChoiceStrategies.FAIR_SHARE: Submit tasks to worker by using a fair share scheduling algorithm based on tasks execution time (the default) or ELU active time
The WorkerChoiceStrategies.WEIGHTED_ROUND_ROBIN, WorkerChoiceStrategies.INTERLEAVED_WEIGHTED_ROUND_ROBIN and WorkerChoiceStrategies.FAIR_SHARE strategies are targeted to heavy and long tasks.
Default: WorkerChoiceStrategies.ROUND_ROBIN
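For example, the strategy is selected through the pool options at creation time (a minimal sketch reusing the worker file from the earlier examples):
'use strict'
const { FixedThreadPool, WorkerChoiceStrategies } = require('poolifier')

// submit tasks to the least used worker instead of using the default round robin
const pool = new FixedThreadPool(8, './yourWorker.js', {
  workerChoiceStrategy: WorkerChoiceStrategies.LEAST_USED
})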
workerChoiceStrategyOptions
(optional) - The worker choice strategy options object to use in this pool.
Properties:
measurement
(optional) - The measurement to use in worker choice strategies: runTime, waitTime or elu.
runTime
(optional) - Use the tasks median runtime instead of the tasks average runtime in worker choice strategies.
waitTime
(optional) - Use the tasks median wait time instead of the tasks average wait time in worker choice strategies.
elu
(optional) - Use the tasks median ELU instead of the tasks average ELU in worker choice strategies.
weights
(optional) - The worker weights to use in weighted round robin worker choice strategies: { 0: 200, 1: 300, ..., n: 100 }.
Default: { runTime: { median: false }, waitTime: { median: false }, elu: { median: false } }
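As an illustration (the weights and median settings below are arbitrary example values):
'use strict'
const { FixedThreadPool, WorkerChoiceStrategies } = require('poolifier')

const pool = new FixedThreadPool(2, './yourWorker.js', {
  workerChoiceStrategy: WorkerChoiceStrategies.WEIGHTED_ROUND_ROBIN,
  workerChoiceStrategyOptions: {
    // use the tasks median runtime instead of the average
    runTime: { median: true },
    // per worker weights (arbitrary example values)
    weights: { 0: 200, 1: 300 }
  }
})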
restartWorkerOnError
(optional) - Restart worker on uncaught error in this pool.
Default: true
enableEvents
(optional) - Events emission enablement in this pool.
Default: true
enableTasksQueue
(optional) - Tasks queue per worker enablement in this pool.
Default: false
tasksQueueOptions
(optional) - The worker tasks queue options object to use in this pool.
Properties:
concurrency
(optional) - The maximum number of tasks that can be executed concurrently on a worker.
Default: { concurrency: 1 }
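A short sketch of enabling the per-worker tasks queue with a custom concurrency:
'use strict'
const { FixedThreadPool } = require('poolifier')

// queue incoming tasks per worker and allow up to 2 concurrent tasks on each worker
const pool = new FixedThreadPool(4, './yourWorker.js', {
  enableTasksQueue: true,
  tasksQueueOptions: { concurrency: 2 }
})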
ThreadPoolOptions extends PoolOptions
workerOptions
(optional) - An object with the worker options to pass to worker. See worker_threads for more details.
ClusterPoolOptions extends PoolOptions
env
(optional) - An object with the environment variables to pass to worker. See cluster for more details.
settings
(optional) - An object with the cluster settings. See cluster for more details.
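For example (a sketch; the resourceLimits, environment values and cluster settings below are illustrative only):
'use strict'
const { FixedThreadPool, FixedClusterPool } = require('poolifier')

// thread pool: forward worker_threads options to each worker thread
const threadPool = new FixedThreadPool(4, './yourWorker.js', {
  workerOptions: { resourceLimits: { maxOldGenerationSizeMb: 256 } }
})

// cluster pool: forward environment variables and cluster settings to each worker process
const clusterPool = new FixedClusterPool(4, './yourClusterWorker.js', {
  env: { NODE_ENV: 'production' },
  settings: { windowsHide: true }
})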
pool = new FixedThreadPool/FixedClusterPool(numberOfThreads/numberOfWorkers, filePath, opts)
numberOfThreads/numberOfWorkers
(mandatory) Number of workers for this pool
filePath
(mandatory) Path to a file with a worker implementation
opts
(optional) An object with the pool options properties described above
pool = new DynamicThreadPool/DynamicClusterPool(min, max, filePath, opts)
min
(mandatory) Same as FixedThreadPool/FixedClusterPool numberOfThreads/numberOfWorkers, this number of workers will always be active
max
(mandatory) Max number of workers that this pool can contain, the newly created workers will die after a threshold (default is 1 minute, you can override it in your worker implementation).
filePath
(mandatory) Path to a file with a worker implementation
opts
(optional) An object with the pool options properties described above
pool.execute(data, name)
data
(optional) An object that you want to pass to your worker implementation
name
(optional) A string with the task function name that you want to execute on the worker. Default: 'default'
This method is available on both pool implementations and returns a promise.
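For instance, with async/await (a sketch; the data shape and the task function name 'adder' are just examples):
// somewhere in an async function
const result = await pool.execute({ n: 42 }) // runs the 'default' task function
const sum = await pool.execute({ n: 42 }, 'adder') // runs the task function named 'adder', if defined
console.log(result, sum)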
pool.destroy()
The destroy method is available on both pool implementations.
This method will call the terminate method on each worker.
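A typical use is a graceful shutdown hook (a sketch, assuming destroy() resolves once all workers have terminated):
// terminate all pool workers before exiting
process.on('SIGINT', async () => {
  await pool.destroy()
  process.exit(0)
})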
class YourWorker extends ThreadWorker/ClusterWorker
taskFunctions
(mandatory) The task function or task functions object that you want to execute on the worker
opts
(optional) An object with these properties:
maxInactiveTime
(optional) - Max time in milliseconds to wait for tasks to work on; after this period the newly created worker will die.
The last active time of your worker will be updated when a task is submitted to it or when it terminates a task.
If killBehavior is set to KillBehaviors.HARD, this value also represents the timeout for the tasks that you submit to the pool: when this timeout expires, your task is interrupted and the worker is killed if it is not part of the minimum size of the pool.
If killBehavior is set to KillBehaviors.SOFT, your tasks have no timeout and your workers will not be terminated until your task is completed.
Default: 60000
killBehavior
(optional) - Dictates if your async unit (worker/process) will be deleted in case a task is active on it.
KillBehaviors.SOFT: If currentTime - lastActiveTime is greater than maxInactiveTime but a task is still executing or queued, then the worker won't be deleted.
KillBehaviors.HARD: If currentTime - lastActiveTime is greater than maxInactiveTime but a task is still executing or queued, then the worker will be deleted.
This option only applies to the newly created workers.
Default: KillBehaviors.SOFT
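As a sketch, both options are passed when constructing the worker (assuming the KillBehaviors enum is exported by poolifier):
'use strict'
const { ThreadWorker, KillBehaviors } = require('poolifier')

function yourFunction(data) {
  return { ok: 1 }
}

module.exports = new ThreadWorker(yourFunction, {
  // kill an idle newly created worker after 30 seconds and interrupt overrunning tasks
  maxInactiveTime: 30000,
  killBehavior: KillBehaviors.HARD
})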
Performance is one of the main targets of these worker pool implementations; we want to keep a strong focus on it.
We already have a bench folder where you can find some comparisons.
Before jumping into each poolifier pool type, let's highlight that Node.js already comes with a thread pool, the libuv thread pool, where some particular tasks already run by default.
Please take a look at which tasks run on the libuv thread pool.
If your task runs on the libuv thread pool, you can try to:
Tune the libuv thread pool size by setting the UV_THREADPOOL_SIZE environment variable
and/or
Use poolifier cluster pools: spawning additional processes also increases the total number of libuv threads, since the libuv thread pool size is per process.
If your task does not run on the libuv thread pool and is CPU intensive, then poolifier thread pools (FixedThreadPool and DynamicThreadPool) are suggested for running it. You can still run I/O intensive tasks in thread pools, but the performance enhancement is expected to be minimal.
Thread pools are built on top of Node.js worker-threads module.
If your task does not run on the libuv thread pool and is I/O intensive, then poolifier cluster pools (FixedClusterPool and DynamicClusterPool) are suggested for running it. Again, you can still run CPU intensive tasks in cluster pools, but the performance enhancement is expected to be minimal.
Consider that by default Node.js already has great performance for I/O tasks (asynchronous I/O).
Cluster pools are built on top of Node.js cluster module.
If your task contains code that runs on libuv plus code that is CPU intensive or I/O intensive, you can either split it or combine multiple strategies (i.e. tune the number of libuv threads and use cluster/thread pools).
But in general, always profile your application.
To choose your pool, consider that with a FixedThreadPool/FixedClusterPool or a DynamicThreadPool/DynamicClusterPool (in this case the min parameter passed to the constructor is important) your application memory footprint will increase.
By increasing the memory footprint, your application will be ready to accept more tasks, but it will consume more memory during idle time.
One good choice from the poolifier team's point of view is to profile your application using a fixed or dynamic worker pool, and to watch your application metrics as you increase/decrease the number of workers.
For example, you could keep the memory footprint low by choosing a DynamicThreadPool/DynamicClusterPool with 5 workers and allow new workers to be created, up to 50/100, only when needed; this is the advantage of using a DynamicThreadPool/DynamicClusterPool.
But in general, always profile your application.
Choose your task here (2.6.x): propose an idea, a fix, an improvement.
See CONTRIBUTING guidelines.
Creator/Owner:
Contributors
Changelog
[2.6.5] - 2023-06-27
destroy() gracefully shuts down the worker's server.
MessageChannel internal usage for IPC.