@tcortega/puppeteer-cluster
Create a cluster of puppeteer workers. This library spawns a pool of Chromium instances via Puppeteer and helps to keep track of jobs and errors. This is helpful if you want to crawl multiple pages or run tests in parallel. Puppeteer Cluster takes care of reusing Chromium and restarting the browser in case of errors.
Install puppeteer (if you don't already have it installed):
npm install --save puppeteer
Install puppeteer-cluster:
npm install --save puppeteer-cluster
The following is a typical example of using puppeteer-cluster. A cluster is created with 2 concurrent workers. Then a task is defined which includes going to the URL and taking a screenshot. We then queue two jobs and wait for the cluster to finish.
const { Cluster } = require('puppeteer-cluster');

(async () => {
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT,
    maxConcurrency: 2,
  });

  await cluster.task(async ({ page, data: url }) => {
    await page.goto(url);
    const screen = await page.screenshot();
    // Store screenshot, do something else
  });

  cluster.queue('http://www.google.com/');
  cluster.queue('http://www.wikipedia.org/');
  // many more pages

  await cluster.idle();
  await cluster.close();
})();
There are different concurrency models, which define how isolated each job runs. You can set the model via the concurrency option when calling Cluster.launch. The default is Cluster.CONCURRENCY_CONTEXT, but it is recommended to always specify the model you want to use explicitly; see the launch sketch after the table below.
Concurrency | Description | Shared data
---|---|---
CONCURRENCY_PAGE | One Page for each URL | Shares everything (cookies, localStorage, etc.) between jobs.
CONCURRENCY_CONTEXT | Incognito page (see IncognitoBrowserContext) for each URL | No shared data.
CONCURRENCY_BROWSER | One browser (using an incognito page) per URL. If one browser instance crashes for any reason, this will not affect other jobs. | No shared data.
Custom concurrency (experimental) | You can create your own concurrency implementation. Copy one of the files in the concurrency/built-in directory and implement ConcurrencyImplementation, then provide the class via the concurrency option. This part of the library is currently experimental and might break in the future, even in a minor version upgrade, while the version has not reached 1.0. | Depends on your implementation.
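A minimal sketch of choosing a concurrency model at launch time. The option names come from the table above and the Cluster.launch options below; CONCURRENCY_BROWSER and the values used here are just one possible choice:

const { Cluster } = require('puppeteer-cluster');

(async () => {
  // One full browser per job: slower, but a crash in one browser
  // cannot affect other jobs and nothing is shared between them.
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_BROWSER,
    maxConcurrency: 4,
  });

  await cluster.task(async ({ page, data: url }) => {
    await page.goto(url);
  });

  cluster.queue('https://example.com/');
  await cluster.idle();
  await cluster.close();
})();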
To allow proper type checks with TypeScript you can provide generics. In case no types are provided, any is assumed for input and output. See the following minimal example or check out the more complex typings example for more information.
const cluster: Cluster<string, number> = await Cluster.launch(/* ... */);

await cluster.task(async ({ page, data }) => {
  // TypeScript knows that data is a string and expects this function to return a number
  return 123;
});

// TypeScript expects a string as argument ...
cluster.queue('http://...');

// ... and will return a number when execute is called.
const result = await cluster.execute('https://www.google.com');
Check out the puppeteer debugging tips first; your problem might be related to puppeteer itself rather than puppeteer-cluster. Additionally, you can enable verbose logging to see which data is consumed by which worker and other cluster information. Set the DEBUG environment variable to puppeteer-cluster:*. See the examples below or check out the debug docs for more information.
# Linux
DEBUG='puppeteer-cluster:*' node examples/minimal
# Windows Powershell
$env:DEBUG='puppeteer-cluster:*'; node examples/minimal
The Cluster class provides a method to launch a cluster of Chromium instances and emits events you can subscribe to via cluster.on.

The 'taskerror' event is emitted when a queued task ends in an error for some reason. Reasons might be a network error, your code throwing an error, a timeout being hit, etc. The first argument will be the error itself. The second argument is the URL or data of the job (as given to Cluster.queue). If retryLimit is set to a value greater than 0, the cluster will automatically requeue the job and retry it later. The third argument is a boolean which indicates whether this task will be retried. If the task was queued via Cluster.execute, no event will be fired.
cluster.on('taskerror', (err, data, willRetry) => {
  if (willRetry) {
    console.warn(`Encountered an error while crawling ${data}. ${err.message}\nThis job will be retried`);
  } else {
    console.error(`Failed to crawl ${data}: ${err.message}`);
  }
});
A second event is emitted when a task is queued via Cluster.queue or Cluster.execute. The first argument is the object containing the data (if any data is provided). The second argument is the queued function (if any). If only a function is provided via Cluster.queue or Cluster.execute, the first argument will be undefined; if only data is provided, the second argument will be undefined.
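A minimal listener sketch for this event. The event name 'queue' is an assumption here (the text above describes the event but does not name it); check the upstream puppeteer-cluster documentation for the exact name:

// Assumption: the queued-task event is named 'queue'.
cluster.on('queue', (data, taskFunction) => {
  // Either argument may be undefined, depending on how the job was queued.
  const hasFunction = typeof taskFunction === 'function';
  console.log(`Job queued with data: ${JSON.stringify(data)}, job-specific function: ${hasFunction}`);
});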
Cluster.launch(options)

- options <Object> Set of configurable options for the cluster. Can have the following fields:
  - concurrency <Cluster.CONCURRENCY_PAGE|Cluster.CONCURRENCY_CONTEXT|Cluster.CONCURRENCY_BROWSER|ConcurrencyImplementation> The chosen concurrency model. See Concurrency models for more information. Defaults to Cluster.CONCURRENCY_CONTEXT. Alternatively you can provide a class implementing ConcurrencyImplementation.
  - maxConcurrency <number> Maximal number of parallel workers. Defaults to 1.
  - puppeteerOptions <Object> Object passed to puppeteer.launch. See puppeteer documentation for more information. Defaults to {}.
  - perBrowserOptions <Array<Object>> Object passed to puppeteer.launch for each individual browser. If set, puppeteerOptions will be ignored. Defaults to undefined (meaning that puppeteerOptions will be used).
  - retryLimit <number> How often to retry a job before marking it as failed. Ignored by tasks queued via Cluster.execute. Defaults to 0.
  - retryDelay <number> How much time should pass at minimum between the job execution and its retry. Ignored by tasks queued via Cluster.execute. Defaults to 0.
  - sameDomainDelay <number> How much time should pass at minimum between two requests to the same domain. If you use this field, the queued data must be your URL, or data must be an object containing a field called url.
  - skipDuplicateUrls <boolean> If set to true, will skip URLs which were already crawled by the cluster. Defaults to false. If you use this field, the queued data must be your URL, or data must be an object containing a field called url.
  - timeout <number> Specify a timeout for all tasks. Defaults to 30000 (30 seconds).
  - monitor <boolean> If set to true, will provide a small command line output with information about the crawling process. Defaults to false.
  - rawMonitor <boolean> Just like monitor, but without displaying the stats in the terminal. The raw information is accessible via cluster.stats(). Defaults to false.
  - workerCreationDelay <number> Time between creation of two workers. Set this to a value like 100 (0.1 seconds) in case you want some time to pass before another worker is created. You can use this to prevent a network peak right at the start. Defaults to 0 (no delay).
  - puppeteer <Object> In case you want to use a different puppeteer library (like puppeteer-core or puppeteer-extra), pass the object here. If not set, will default to using puppeteer. When using puppeteer-core, make sure to also provide puppeteerOptions.executablePath.

The method launches a cluster instance.
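For illustration, a sketch of a launch call combining several of the options above. The specific values are arbitrary, and headless is just one of the many fields accepted by puppeteer.launch:

const { Cluster } = require('puppeteer-cluster');

(async () => {
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT,
    maxConcurrency: 4,       // up to 4 parallel workers
    retryLimit: 2,           // retry failed jobs twice before marking them as failed
    retryDelay: 1000,        // wait at least 1 second before retrying
    timeout: 60000,          // allow each task up to 60 seconds
    skipDuplicateUrls: true, // skip URLs that were already crawled
    monitor: true,           // print crawling progress to the terminal
    puppeteerOptions: {
      headless: true,        // passed straight to puppeteer.launch
    },
  });

  // ... define a task and queue jobs here ...

  await cluster.idle();
  await cluster.close();
})();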
Cluster.task(taskFunction)

- taskFunction <function(string|Object, Page, Object)> Sets the function which will be called for each job. The function will be called with an object having the following fields:
  - page <Page> The page given by puppeteer, which provides methods to interact with a single tab in Chromium.
  - data The data of the job you provided to Cluster.queue.
  - worker <Object> An object containing information about the worker executing the current job.
    - id <number> ID of the worker. Worker IDs start at 0.

Specifies a task for the cluster. A task is called for each job you queue via Cluster.queue. Alternatively you can directly queue the function that you want to be executed. See Cluster.queue for an example.
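A minimal sketch of a task that uses all three fields of the argument object (the logged message is only illustrative):

await cluster.task(async ({ page, data: url, worker }) => {
  // Navigate to the queued URL and record which worker handled the job.
  await page.goto(url);
  const title = await page.title();
  console.log(`Worker ${worker.id} crawled ${url}: "${title}"`);
});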
Cluster.queue([data][, taskFunction])

- data Data to be queued. This might be your URL (a string) or a more complex object containing data. The data given will be provided to your task function(s). See [examples] for a more complex usage of this argument.
- taskFunction <function> Function like the one given to Cluster.task. If a function is provided, it will be called (only for this job) instead of the function provided to Cluster.task. The function will be called with an object having the following fields:
  - page <Page> The page given by puppeteer, which provides methods to interact with a single tab in Chromium.
  - data The data of the job you provided as first argument to Cluster.queue. This might be undefined in case you only specified a function.
  - worker <Object> An object containing information about the worker executing the current job.
    - id <number> ID of the worker. Worker IDs start at 0.

Puts a URL or data into the queue. Alternatively (or even additionally) you can queue functions. See the examples about function queuing for more information (simple function queuing, complex function queuing).
Be aware that this function only returns a Promise for backward compatibility reasons. This function does not run asynchronously and will return immediately.
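A sketch of the two queuing styles described above: queuing a plain data object for the task registered via Cluster.task, and queuing a job-specific function. The field names inside the data object are arbitrary:

// Queue a data object; it is passed to the function registered via Cluster.task.
cluster.queue({ url: 'https://example.com/', label: 'homepage' });

// Queue a job-specific function; it runs instead of the default task.
cluster.queue(async ({ page, data }) => {
  // data is undefined here because only a function was queued.
  await page.goto('https://example.org/');
  console.log(await page.title());
});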
Cluster.execute([data][, taskFunction])

- data Data to be queued. This might be your URL (a string) or a more complex object containing data. The data given will be provided to your task function(s). See [examples] for a more complex usage of this argument.
- taskFunction <function> Function like the one given to Cluster.task. If a function is provided, it will be called (only for this job) instead of the function provided to Cluster.task. The function will be called with an object having the following fields:
  - page <Page> The page given by puppeteer, which provides methods to interact with a single tab in Chromium.
  - data The data of the job you provided as first argument to Cluster.queue. This might be undefined in case you only specified a function.
  - worker <Object> An object containing information about the worker executing the current job.
    - id <number> ID of the worker. Worker IDs start at 0.

Works like Cluster.queue, but this function returns a Promise which will be resolved after the task is executed. That means the job is still queued, but the script will wait for it to be finished. In case an error happens during execution, this function rejects the Promise with the thrown error. No "taskerror" event will be fired. In addition, tasks queued via execute will ignore retryLimit and retryDelay. For an example see the Execute example.
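A minimal sketch of awaiting a job queued via Cluster.execute, including error handling. The URL is a placeholder, and the resolved value is whatever your task function returns:

try {
  // Resolves once the job has actually run; rejects if the task throws.
  const result = await cluster.execute('https://example.com/');
  console.log('Task finished with result:', result);
} catch (err) {
  // No 'taskerror' event is fired for execute, so handle the error here.
  console.error('Task failed:', err.message);
}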
Cluster.stats()

Returns information about the crawling process, including:

- now <datetime>
- startTime <datetime> Cluster start time
- timeRunningMillis <number> Time running (milliseconds)
- timeRunning <string> Time running (readable format)
- doneTargets <number> Targets completed
- allTargetCount <number> Total number of targets queued
- donePercStr <string> % targets done (readable format)
- errorCount <number> Number of errors
- errorPerc <string> % errors (readable format)
- timeRemainingMillis <number> Calculated time remaining (milliseconds)
- timeRemining <string> Calculated time remaining (readable format)
- pagesPerSecond <string> Average rate of pages completed per second (readable format)
- cpuUsage <string> CPU load (readable format)
- memoryUsage <string> Memory load (readable format)
- workersStarting <number> Number of workers starting
- workersAvail <number> Total number of workers available
- workersIdle <number> Number of currently idle workers
- workersWorking <number> Number of currently active workers
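A sketch of polling the raw monitoring data while a crawl is running, assuming rawMonitor is enabled as described above. The one-second interval is arbitrary, and the logged fields are a subset of the list above:

// Inside an async function:
const cluster = await Cluster.launch({
  concurrency: Cluster.CONCURRENCY_CONTEXT,
  maxConcurrency: 2,
  rawMonitor: true, // collect stats without printing them to the terminal
});

// Periodically inspect a few of the raw monitoring fields.
const statsTimer = setInterval(() => {
  const stats = cluster.stats();
  console.log(`${stats.doneTargets}/${stats.allTargetCount} targets done, ` +
    `${stats.errorCount} errors, ${stats.workersWorking} workers busy`);
}, 1000);

// ... queue jobs here ...

await cluster.idle();
clearInterval(statsTimer);
await cluster.close();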
Cluster.idle()

Returns a Promise that is resolved when the queue becomes empty.

Cluster.close()

Closes the cluster and all opened Chromium instances, including all open pages (if any were opened). It is recommended to run Cluster.idle before calling this function. The Cluster object itself is considered to be disposed and cannot be used anymore.
FAQs
Cluster management for puppeteer
The npm package @tcortega/puppeteer-cluster receives a total of 0 weekly downloads; as such, its popularity is classified as not popular.
We found that @tcortega/puppeteer-cluster demonstrates an unhealthy version release cadence and low project activity because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.