node-crawler
Best crawling/scraping package for Node. Version 1.0.0 is released, happy hacking :)
Features:
- server-side DOM & automatic jQuery insertion with Cheerio (default) or JSDOM
- Configurable pool size and retries
- Control rate limit
- Priority queue of requests
- forceUTF8 mode to let crawler handle charset detection and conversion for you
Here is the CHANGELOG
How to install
$ npm install crawler
Crash course
var Crawler = require("crawler");

var c = new Crawler({
    maxConnections : 10,
    // This will be called for each crawled page
    callback : function (error, result, $) {
        if(error){
            console.log(error);
        }else{
            // $ is Cheerio by default, a lean server-side implementation of core jQuery
            console.log($("title").text());
        }
    }
});

// Queue just one URL, with the default callback
c.queue('http://www.amazon.com');

// Queue a list of URLs
c.queue(['http://www.google.com/','http://www.yahoo.com']);

// Queue URLs with custom callbacks & parameters
c.queue([{
    uri: 'http://parishackers.org/',
    jQuery: false,

    // The global callback won't be called for this item
    callback: function (error, result) {
        if(error){
            console.log(error);
        }else{
            console.log('Grabbed', result.body.length, 'bytes');
        }
    }
}]);

// Queue some HTML code directly without grabbing (mostly for tests)
c.queue([{
    html: '<p>This is a <strong>test</strong></p>'
}]);
Work with bottleneck
Control the rate limit for each connection; this is usually used together with a proxy.
var Crawler = require("crawler");

var c = new Crawler({
    maxConnections : 3,
    rateLimits : 2000,
    callback : function (error, result, $) {
        if(error){
            console.error(error);
        }else{
            console.log($('title').text());
        }
    }
});

// Give each proxy its own limiter key so they are rate-limited independently
c.queue({
    uri:"http://www.google.com",
    limiter:"key1",
    proxy:"http://user:pass@127.0.0.1:8080"
});
c.queue({
    uri:"http://www.google.com",
    limiter:"key2",
    proxy:"http://user:pass@127.0.0.1:8082"
});
c.queue({
    uri:"http://www.google.com",
    limiter:"key3",
    proxy:"http://user:pass@127.0.0.1:8081"
});
Options reference
You can pass these options to the Crawler() constructor if you want them to be global, or as
items in the queue() calls if you want them to be specific to that item (overriding the global options).
This options list is a strict superset of mikeal's request options and is passed directly to
the request() method.
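For example, here is a rough sketch of a global option coexisting with a per-item override (the URLs are placeholders):

var Crawler = require('crawler');

var c = new Crawler({
    forceUTF8: true,          // global option, applies to every queued item
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
        } else {
            console.log($('title').text());
        }
    }
});

// This item overrides the global settings: jQuery is disabled and it uses its own callback
c.queue({
    uri: 'http://www.example.com/',   // placeholder URL
    jQuery: false,
    callback: function (error, result) {
        if (error) {
            console.error(error);
        } else {
            console.log('Raw body length:', result.body.length);
        }
    }
});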
Basic request options:
Callbacks:
- callback(error, result, $) : called when a request has completed
Pool options:
- maxConnections : Number, size of the worker pool (Default 10)
- priorityRange : Number, range of acceptable priorities starting from 0 (Default 10)
- priority : Number, priority of this request (Default 5)
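For instance, a minimal sketch of queueing items with different priorities (the URLs are placeholders; a lower value is assumed to be picked up before the default of 5):

var Crawler = require('crawler');

var c = new Crawler({
    maxConnections: 2,
    priorityRange: 10,
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
        } else {
            console.log($('title').text());
        }
    }
});

// Assumed: the smaller priority value is served before the larger one
c.queue({ uri: 'http://www.example.com/important', priority: 0 });
c.queue({ uri: 'http://www.example.com/whenever',  priority: 9 });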
Retry options:
- retries : Number of retries if the request fails (Default 3)
- retryTimeout : Number of milliseconds to wait before retrying (Default 10000)
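A small sketch combining the retry options with a global callback; the endpoint is a placeholder that is assumed to fail intermittently:

var Crawler = require('crawler');

var c = new Crawler({
    retries: 5,           // try each failed request up to 5 more times
    retryTimeout: 5000,   // wait 5 seconds between attempts
    callback: function (error, result, $) {
        if (error) {
            // Reached only after all retries have been exhausted
            console.error('Giving up:', error);
        } else {
            console.log($('title').text());
        }
    }
});

c.queue('http://www.example.com/flaky-endpoint');   // placeholder URI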
Server-side DOM options:
- jQuery : true, false, "whacko" or ConfObject (Default true). Crawler will use Cheerio by default; see the Working with Cheerio or JSDOM section below.
Charset encoding:
- forceUTF8 : Boolean, if true will get the charset from the HTTP headers or the meta tag in the HTML and convert it to UTF8 if necessary. Never worry about encoding anymore! (Default true)
- incomingEncoding : String, use with forceUTF8: true to set the encoding manually, e.g. incomingEncoding: 'windows-1255' (Default null)
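For example, a sketch of crawling a page that is assumed to be served as windows-1255, forcing conversion to UTF-8 (the URL is a placeholder):

var Crawler = require('crawler');

var c = new Crawler({
    forceUTF8: true,
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
        } else {
            // result.body has already been converted to UTF-8 at this point
            console.log($('title').text());
        }
    }
});

// incomingEncoding overrides charset detection for this item only
c.queue({
    uri: 'http://www.example.com/hebrew-page',   // placeholder, assumed windows-1255
    incomingEncoding: 'windows-1255'
});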
Cache:
- skipDuplicates : Boolean, if true skips URIs that were already crawled, without even calling callback() (Default false). This is not recommended; it's better to handle duplicate checking outside Crawler, for example with seenreq.
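One way to handle deduplication outside Crawler, sketched here with a plain in-memory Set rather than seenreq (whose API is not covered in this README):

var Crawler = require('crawler');

var seen = new Set();   // swap in seenreq or a persistent store for real crawls

var c = new Crawler({
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
        } else {
            console.log($('title').text());
        }
    }
});

function enqueueOnce(uri) {
    if (seen.has(uri)) return;   // skip URIs we have already queued
    seen.add(uri);
    c.queue(uri);
}

enqueueOnce('http://www.example.com/');
enqueueOnce('http://www.example.com/');   // silently ignored the second time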
Other:
- rotateUA : Boolean, if true, userAgent should be an array and will be rotated (Default false)
- userAgent : String or Array; if rotateUA is false but userAgent is an array, only the first entry is used
- referer : String, if truthy sets the HTTP referer header
- rateLimits : Number of milliseconds to delay between each request (Default 0)
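Putting these together, a hedged sketch that rotates the user agent, sets a referer, and delays between requests (URLs and UA strings are placeholders):

var Crawler = require('crawler');

var c = new Crawler({
    rotateUA: true,
    userAgent: [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12)'
    ],
    referer: 'http://www.example.com/',   // placeholder referer
    rateLimits: 1000,                     // wait 1 second between requests
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
        } else {
            console.log($('title').text());
        }
    }
});

c.queue(['http://www.example.com/a', 'http://www.example.com/b']);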
Class: Crawler
Event: 'limiterChange'
Emitted when limiter has been changed.
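A minimal listener sketch; the README does not document the event's arguments, so they are only logged here rather than assumed:

crawler.on('limiterChange', function () {
    // Inspect the arguments when the event actually fires
    console.log('limiter changed:', arguments);
});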
Event: 'request'
Emitted when crawler is ready to send a request.
If you want to modify options at the last moment before a request is sent, listen for this event.
crawler.on('request',function(options){
    // e.g. add a timestamp query-string parameter (assumes options.qs was set on the queued item)
    options.qs.timestamp = new Date().getTime();
});
Event: 'drain'
Emitted when queue is empty.
crawler.on('drain',function(){
    // For example, release a connection to database.
    db.end();   // close connection to MySQL
});
crawler.queue(uri|options)
Enqueue a task and wait for it to be executed.
crawler.queueSize
Size of queue, read-only
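For example, queueSize can be polled to watch the crawl progress (a rough sketch, reusing the crawler instance from the event examples above):

var timer = setInterval(function () {
    console.log('Remaining in queue:', crawler.queueSize);
}, 1000);

crawler.on('drain', function () {
    clearInterval(timer);
    console.log('Queue drained');
});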
Working with Cheerio or JSDOM
Crawler uses Cheerio by default instead of JSDOM. JSDOM is more robust, but it can be hard to install (especially on Windows) because of contextify.
That is why, if you want to use JSDOM, you have to build it yourself and require('jsdom')
in your own script before passing it to Crawler. This keeps Cheerio users from having to build JSDOM when installing Crawler.
Working with Cheerio
jQuery: true // (default)
// OR
jQuery: 'cheerio'
// OR
jQuery: {
    name: 'cheerio',
    options: {
        normalizeWhitespace: true,
        xmlMode: true
    }
}
These parsing options are taken directly from htmlparser2, therefore any options that can be used in htmlparser2
are valid in cheerio as well. The default options are:
{
    normalizeWhitespace: false,
    xmlMode: false,
    decodeEntities: true
}
For a full list of options and their effects, see the cheerio and htmlparser2 documentation.
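As an illustration, xmlMode is useful when crawling an XML feed; a sketch assuming a placeholder feed URL:

var Crawler = require('crawler');

var c = new Crawler({
    // Parsing options are passed through to cheerio/htmlparser2
    jQuery: {
        name: 'cheerio',
        options: {
            xmlMode: true,
            normalizeWhitespace: true
        }
    },
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
        } else {
            // With xmlMode, feed tags like <item><title> are kept as-is
            $('item > title').each(function () {
                console.log($(this).text());
            });
        }
    }
});

c.queue('http://www.example.com/feed.xml');   // placeholder feed URL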
Working with JSDOM
In order to work with JSDOM you have to install it in your project folder (npm install jsdom), deal with compiling C++, and pass it to Crawler.
var jsdom = require('jsdom');
var Crawler = require('crawler');

var c = new Crawler({
    jQuery: jsdom
});
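With that constructor in place, a hedged usage sketch: the callback receives $ backed by JSDOM, and selectors are assumed to work the same way as with Cheerio (the URL is a placeholder):

var jsdom = require('jsdom');
var Crawler = require('crawler');

var c = new Crawler({
    jQuery: jsdom,   // use JSDOM instead of the default Cheerio
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
        } else {
            // $ is backed by JSDOM here; selectors work as with Cheerio
            console.log($('title').text());
        }
    }
});

c.queue('http://www.example.com/');   // placeholder URL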
How to test
Install and run Httpbin
Crawler uses a local httpbin for testing purposes. You can install httpbin as a library from PyPI and run it as a WSGI app. For example, using Gunicorn:
$ pip install httpbin
// launch httpbin as a daemon with 6 workers on localhost
$ gunicorn httpbin:app -b 127.0.0.1:8000 -w 6 --daemon
// Finally
$ npm install && npm test
Alternative: Docker
After installing Docker, you can run:
// Builds the local test environment
$ docker build -t node-crawler .
// Runs tests
$ docker run node-crawler sh -c "gunicorn httpbin:app -b 127.0.0.1:8000 -w 6 --daemon && npm install && npm test"
// You can also open an interactive shell in the container for easier debugging
$ docker run -i -t node-crawler bash
Rough todolist
ChangeLog
See CHANGELOG