Most powerful, popular and production crawling/scraping package for Node, happy hacking :)
Features:
- Server-side DOM and automatic jQuery insertion with Cheerio (default) or JSDOM
- Configurable pool size, retries, rate limits and request priorities
- forceUTF8 mode to let the crawler handle charset detection and conversion for you
- HTTP/2 support

Here is the CHANGELOG.

Thanks to Authuir, we have Chinese docs. Other languages are welcome!
$ npm install crawler
const Crawler = require('crawler');
const c = new Crawler({
maxConnections: 10,
// This will be called for each crawled page
callback: (error, res, done) => {
if (error) {
console.log(error);
} else {
const $ = res.$;
// $ is Cheerio by default
//a lean implementation of core jQuery designed specifically for the server
console.log($('title').text());
}
done();
}
});
// Queue just one URL, with default callback
c.queue('http://www.amazon.com');
// Queue a list of URLs
c.queue(['http://www.google.com/','http://www.yahoo.com']);
// Queue URLs with custom callbacks & parameters
c.queue([{
uri: 'http://parishackers.org/',
jQuery: false,
// The global callback won't be called
callback: (error, res, done) => {
if (error) {
console.log(error);
} else {
console.log('Grabbed', res.body.length, 'bytes');
}
done();
}
}]);
// Queue some HTML code directly without grabbing (mostly for tests)
c.queue([{
html: '<p>This is a <strong>test</strong></p>'
}]);
Use rateLimit to slow down when you are visiting web sites.
const Crawler = require('crawler');
const c = new Crawler({
rateLimit: 1000, // `maxConnections` will be forced to 1
callback: (err, res, done) => {
console.log(res.$('title').text());
done();
}
});
c.queue(tasks); // 'tasks' is an array of URIs or option objects; the minimum time gap between two tasks is 1000 ms
Sometimes you have to access variables from a previous request/response session. What you should do is pass them as parameters, the same way as options:
c.queue({
uri: 'http://www.google.com',
parameter1: 'value1',
parameter2: 'value2',
parameter3: 'value3'
})
Then access them in the callback via res.options:
console.log(res.options.parameter1);
Crawler passes on only the options needed by request, so don't worry about the redundancy.
If you are downloading files like images, PDFs, Word documents, etc., you have to save the raw response body, which means Crawler shouldn't convert it to a string. To make that happen, set encoding to null:
const Crawler = require('crawler');
const fs = require('fs');
const c = new Crawler({
encoding: null,
jQuery: false,// set false to suppress warning message.
callback: (err, res, done) => {
if (err) {
console.error(err.stack);
} else {
fs.createWriteStream(res.options.filename).write(res.body);
}
done();
}
});
c.queue({
uri: 'https://nodejs.org/static/images/logos/nodejs-1920x1200.png',
filename: 'nodejs-1920x1200.png'
});
If you want to do something either synchronously or asynchronously before each request, you can try the code below. Note that direct requests won't trigger preRequest.
const c = new Crawler({
preRequest: (options, done) => {
// 'options' here is not the 'options' you pass to 'c.queue', instead, it's the options that is going to be passed to 'request' module
console.log(options);
// when done is called, the request will start
done();
},
callback: (err, res, done) => {
if (err) {
console.log(err);
} else {
console.log(res.statusCode);
}
done(); // always release the task when the callback has finished
}
});
c.queue({
uri: 'http://www.google.com',
// this will override the 'preRequest' defined in crawler
preRequest: (options, done) => {
setTimeout(() => {
console.log(options);
done();
}, 1000);
}
});
In case you want to send a request directly without going through the scheduler in Crawler, try the code below. direct takes the same options as queue; please refer to the options for details. The difference is that when calling direct, callback must be defined explicitly, with two arguments, error and response, which are the same as those of the callback of the queue method.
crawler.direct({
uri: 'http://www.google.com',
skipEventRequest: false, // default to true, direct requests won't trigger Event:'request'
callback: (error, response) => {
if (error) {
console.log(error)
} else {
console.log(response.statusCode);
}
}
});
Node-crawler now supports HTTP/2 requests. Proxy functionality for HTTP/2 requests is not included yet; it will be added in the future.
crawler.queue({
// the unit tests work with the httpbin http2 server; it can be used for testing
uri: 'https://nghttp2.org/httpbin/status/200',
method: 'GET',
http2: true, //set http2 to be true will make a http2 request
callback: (error, response, done) => {
if (error) {
console.error(error);
return done();
}
console.log(`inside callback`);
console.log(response.body);
return done();
}
})
Control the rate limit with the limiter. All tasks submitted to a limiter will abide by the rateLimit and maxConnections restrictions of that limiter. rateLimit is the minimum time gap between two tasks. maxConnections is the maximum number of tasks that can run at the same time. Limiters are independent of each other. One common use case is setting different limiters for different proxies. One thing worth noticing: when rateLimit is set to a non-zero value, maxConnections will be forced to 1.
const Crawler = require('crawler');
const c = new Crawler({
rateLimit: 2000,
maxConnections: 1,
callback: (error, res, done) => {
if (error) {
console.log(error);
} else {
const $ = res.$;
console.log($('title').text());
}
done();
}
});
// if you want to crawl some website with 2000ms gap between requests
c.queue('http://www.somewebsite.com/page/1');
c.queue('http://www.somewebsite.com/page/2');
c.queue('http://www.somewebsite.com/page/3');
// if you want to crawl some website using proxy with 2000ms gap between requests for each proxy
c.queue({
uri:'http://www.somewebsite.com/page/1',
limiter:'proxy_1',
proxy:'proxy_1'
});
c.queue({
uri:'http://www.somewebsite.com/page/2',
limiter:'proxy_2',
proxy:'proxy_2'
});
c.queue({
uri:'http://www.somewebsite.com/page/3',
limiter:'proxy_3',
proxy:'proxy_3'
});
c.queue({
uri:'http://www.somewebsite.com/page/4',
limiter:'proxy_1',
proxy:'proxy_1'
});
Normally, all limiter instances in the crawler's limiter cluster are instantiated with the options specified in the crawler constructor. You can change a property of any limiter by calling the code below. Currently, only the 'rateLimit' property of a limiter can be changed. Note that the default limiter can be accessed via c.setLimiterProperty('default', 'rateLimit', 3000). We strongly recommend that you leave limiters unchanged after their instantiation unless you clearly know what you are doing.
const c = new Crawler({});
c.setLimiterProperty('limiterName', 'propertyName', value);
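For example, following the proxy limiters above, you could slow down a single proxy's limiter at runtime (a small sketch; the limiter name 'proxy_1' is taken from the earlier example):

// Increase the minimum gap between tasks scheduled on the 'proxy_1' limiter
c.setLimiterProperty('proxy_1', 'rateLimit', 5000);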
Event: 'schedule'
options: Options
Emitted when a task is being added to the scheduler.
crawler.on('schedule', (options) => {
options.proxy = 'http://proxy:port';
});
Event: 'limiterChange'
Emitted when the limiter has been changed.
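A minimal listener sketch; the handler arguments shown here (the task options and the limiter name) are an assumption and may differ from the actual event payload:

crawler.on('limiterChange', (options, limiter) => {
  // Log which limiter the next task will be scheduled on
  console.log('Limiter changed to', limiter, 'for', options.uri);
});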
Event: 'request'
options: Options
Emitted when the crawler is ready to send a request.
If you are going to modify options at the last stage before requesting, just listen for this event.
crawler.on('request', (options) => {
options.qs.timestamp = new Date().getTime();
});
Event: 'drain'
Emitted when the queue is empty.
crawler.on('drain', () => {
// For example, release a connection to database.
db.end();// close connection to MySQL
});
crawler.queue(uri|options)
Enqueue a task and wait for it to be executed.
crawler.queueSize
Size of the queue, read-only.
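A tiny usage sketch, assuming a Crawler instance named c as in the examples above and that the property is exposed as c.queueSize:

c.queue(['http://www.google.com/', 'http://www.yahoo.com']);
// Read the current size of the queue (read-only)
console.log(c.queueSize);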
You can pass these options to the Crawler() constructor if you want them to be global, or as items in the queue() calls if you want them to be specific to that item (overwriting global options).
This options list is a strict superset of mikeal's request options and will be directly passed to the request() method.
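For instance, a global option set in the constructor can be overridden per task; a sketch using the documented timeout option:

const Crawler = require('crawler');
const c = new Crawler({
  timeout: 15000, // global default for every task
  callback: (error, res, done) => {
    if (error) {
      console.log(error);
    }
    done();
  }
});
c.queue({
  uri: 'http://www.google.com',
  timeout: 3000 // overrides the global timeout for this task only
});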
- options.uri: String The url you want to crawl.
- options.timeout: Number In milliseconds (Default 15000).
- callback(error, res, done): Function that will be called after a request was completed
  - error: Error
  - res: http.IncomingMessage A response of standard IncomingMessage, includes $ and options
    - res.statusCode: Number HTTP status code, e.g. 200
    - res.body: Buffer | String HTTP response content, which could be an html page, plain text or xml document, etc.
    - res.headers: Object HTTP response headers
    - res.request: Request An instance of Mikeal's Request instead of http.ClientRequest
    - res.options: Options of this task
    - $: jQuery Selector A selector for html or xml documents.
  - done: Function It must be called when you've done your work in the callback.
- options.maxConnections: Number Size of the worker pool (Default 10).
- options.rateLimit: Number Number of milliseconds to delay between each request (Default 0).
- options.priorityRange: Number Range of acceptable priorities starting from 0 (Default 10).
- options.priority: Number Priority of this request (Default 5). Low values have higher priority.
- options.retries: Number Number of retries if the request fails (Default 3).
- options.retryTimeout: Number Number of milliseconds to wait before retrying (Default 10000).
- options.jQuery: Boolean|String|Object Use cheerio with default configurations to inject the document if true or 'cheerio'. Or use a customized cheerio if an object with parser options. Disable injecting the jQuery selector if false. If you have a memory leak issue in your project, use 'whacko', an alternative parser, to avoid that. (Default true)
- options.forceUTF8: Boolean If true, crawler will get the charset from HTTP headers or the meta tag in html and convert it to UTF8 if necessary. Never worry about encoding anymore! (Default true)
- options.incomingEncoding: String With forceUTF8: true, set the encoding manually (Default null) so that crawler will not have to detect the charset by itself. For example, incomingEncoding: 'windows-1255'. See all supported encodings.
- options.skipDuplicates: Boolean If true, skips URIs that were already crawled, without even calling callback() (Default false). This is not recommended; it's better to handle it outside Crawler, using seenreq.
- options.rotateUA: Boolean If true, userAgent should be an array and will be rotated (Default false).
- options.userAgent: String|Array If rotateUA is false but userAgent is an array, crawler will use the first one.
- options.referer: String If truthy, sets the HTTP referer header.
- options.removeRefererHeader: Boolean If true, preserves the set referer during redirects.
- options.headers: Object Raw key-value http headers.
- options.http2: Boolean If true, the request will be sent over the http2 protocol (Default false).

Because these options are passed straight through to request, you can also configure a custom agent, for example a SOCKS5 agent:

const Agent = require('socks5-https-client/lib/Agent');
//...
const c = new Crawler({
// rateLimit: 2000,
maxConnections: 20,
agentClass: Agent, //adding socks5 https agent
method: 'GET',
strictSSL: true,
agentOptions: {
socksHost: 'localhost',
socksPort: 9050
},
// debug: true,
callback: (error, res, done) => {
if (error) {
console.log(error);
}
done();
}
});
Crawler uses Cheerio by default instead of JSDOM. JSDOM is more robust; if you want to use JSDOM, you will have to require it (require('jsdom')) in your own script before passing it to crawler.
jQuery: true //(default)
//OR
jQuery: 'cheerio'
//OR
jQuery: {
name: 'cheerio',
options: {
normalizeWhitespace: true,
xmlMode: true
}
}
These parsing options are taken directly from htmlparser2, therefore any options that can be used in htmlparser2
are valid in cheerio as well. The default options are:
{
normalizeWhitespace: false,
xmlMode: false,
decodeEntities: true
}
For a full list of options and their effects, see the cheerio documentation and htmlparser2's options.
In order to work with JSDOM, you will have to install it in your project folder (npm install jsdom) and pass it to crawler.
const jsdom = require('jsdom');
const Crawler = require('crawler');
const c = new Crawler({
jQuery: jsdom
});
Crawler uses nock to mock http requests, so the tests no longer rely on an http server.
$ npm install
$ npm test
$ npm run cover # code coverage
After installing Docker, you can run:
# Builds the local test environment
$ docker build -t node-crawler .
# Runs tests
$ docker run node-crawler sh -c "npm install && npm test"
# You can also ssh into the container for easier debugging
$ docker run -i -t node-crawler bash