node-web-crawler

Node Web Crawler is a web spider written in Node.js. It gives you the full power of jQuery on the server to parse a large number of pages as they are downloaded, asynchronously. Scraping should be simple and fun!


This is an updated version of the crawler module (https://www.npmjs.com/package/node-crawler). We are looking for people who would like to maintain it.

Have a look at alternative modules:


node-web-crawler aims to be the best crawling/scraping package for Node.

It features:

  • A clean, simple API
  • Server-side DOM & automatic jQuery insertion with Cheerio (default) or JSDOM
  • Configurable pool size and retries
  • Priority of requests
  • forceUTF8 mode to let node-web-crawler handle charset detection and conversion for you
  • A local cache
  • Node 0.10 and 0.12 support
  • Fixes for event leaks

Help & forks welcome!

How to install

$ npm install node-web-crawler

Crash course

var Crawler = require("node-web-crawler");
var url = require('url');

var c = new Crawler({
    maxConnections : 10,
    // This will be called for each crawled page
    callback : function (error, result, $) {
        // $ is Cheerio by default
        // a lean implementation of core jQuery designed specifically for the server
        $('a').each(function(index, a) {
            var toQueueUrl = $(a).attr('href');
            c.queue(toQueueUrl);
        });
    }
});

// Queue just one URL, with default callback
c.queue('http://joshfire.com');

// Queue a list of URLs
c.queue(['http://jamendo.com/','http://tedxparis.com']);

// Queue URLs with custom callbacks & parameters
c.queue([{
    uri: 'http://parishackers.org/',
    jQuery: false,

    // The global callback won't be called
    callback: function (error, result) {
        console.log('Grabbed', result.body.length, 'bytes');
    }
}]);

// Queue using a function
var googleSearch = function(search) {
  return 'http://www.google.fr/search?q=' + search;
};
c.queue({
  uri: googleSearch('cheese')
});

// Queue some HTML code directly without grabbing (mostly for tests)
c.queue([{
    html: '<p>This is a <strong>test</strong></p>'
}]);

For more examples, look at the tests.

Options reference

You can pass these options to the Crawler() constructor if you want them to be global, or as items in the queue() calls if you want them to be specific to that item (overriding the global options).

This options list is a strict superset of mikeal's request options and is passed directly to the request() method.
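For instance, here is a minimal sketch of a global option being overridden for one queued item (the URL is only a placeholder):

var c = new Crawler({
    retries: 3,                // global default
    callback: function (error, result, $) { /* default handler */ }
});

c.queue([{
    uri: 'http://example.com/flaky-page',   // placeholder URL
    retries: 5,                // overrides the global value for this item only
    retryTimeout: 5000
}]);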

Basic request options:

Callbacks:

  • callback(error, result, $): A request was completed
  • onDrain(): There are no more queued requests
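As a short sketch of how these two callbacks fit together (URLs are placeholders):

var c = new Crawler({
    // Called once per completed request
    callback: function (error, result, $) {
        if (error) {
            console.error(error);
            return;
        }
        console.log('Grabbed', result.body.length, 'bytes');
    },
    // Called once the queue is empty
    onDrain: function () {
        console.log('No more queued requests');
    }
});

c.queue(['http://example.com/a', 'http://example.com/b']);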

Pool options:

  • maxConnections: Number, Size of the worker pool (Default 10),
  • priorityRange: Number, Range of acceptable priorities starting from 0 (Default 10),
  • priority: Number, Priority of this request (Default 5),

Retry options:

  • retries: Number of retries if the request fails (Default 3),
  • retryTimeout: Number of milliseconds to wait before retrying (Default 10000),
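A hedged sketch combining the pool and retry options above (the priority ordering comment reflects the usual generic-pool convention and is an assumption; values and URLs are illustrative):

var c = new Crawler({
    maxConnections: 2,      // worker pool size
    priorityRange: 10,      // accepted priorities: 0..9
    retries: 3,             // retry a failed request up to 3 times
    retryTimeout: 10000,    // wait 10s between retries
    callback: function (error, result, $) { /* ... */ }
});

// priority is per request; lower numbers are assumed to be served first
c.queue({ uri: 'http://example.com/important', priority: 0 });
c.queue({ uri: 'http://example.com/later', priority: 9 });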

Server-side DOM options:

Charset encoding:

  • forceUTF8: Boolean, if true will try to detect the page charset and convert it to UTF8 if necessary. Never worry about encoding anymore! (Default false),
  • incomingEncoding: String, used with forceUTF8: true to set the encoding manually (Default null), e.g. incomingEncoding: 'windows-1255'

Cache:

  • cache: Boolean, if true stores requests in memory (Default false)
  • skipDuplicates: Boolean, if true skips URIs that were already crawled, without even calling callback() (Default false)

Other:

  • userAgent: String, defaults to "node-web-crawler/[version]"
  • referer: String, if truthy sets the HTTP referer header
  • rateLimits: Number of milliseconds to wait between each request (Default 0). Note that this option will force the crawler to use only one connection (for now)
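Putting the charset, cache, and rate-limiting options together in one illustrative sketch (the user agent and referer values are placeholders):

var c = new Crawler({
    forceUTF8: true,                  // detect the page charset and convert to UTF-8
    cache: true,                      // keep responses in memory
    skipDuplicates: true,             // never crawl the same URI twice
    rateLimits: 1000,                 // wait 1s between requests (forces a single connection)
    userAgent: 'my-crawler/1.0',      // placeholder user agent string
    referer: 'http://example.com',    // sets the HTTP referer header
    callback: function (error, result, $) { /* ... */ }
});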

Working with Cheerio or JSDOM

Node Web Crawler uses Cheerio by default instead of JSDOM. JSDOM is more robust but can be hard to install (especially on Windows) because of contextify. That is why, if you want to use JSDOM, you have to build it yourself and require('jsdom') in your own script before passing it to the crawler. This keeps Cheerio users from having to build JSDOM when installing the crawler.

Working with Cheerio

jQuery: true //(default)
//OR
jQuery: 'cheerio'
//OR
jQuery: {
    name: 'cheerio',
    options: {
        normalizeWhitespace: true,
        xmlMode: true
    }
}

These parsing options are taken directly from htmlparser2; therefore, any options that can be used in htmlparser2 are valid in cheerio as well. The default options are:

{
    normalizeWhitespace: false,
    xmlMode: false,
    decodeEntities: true
}

For a full list of options and their effects, see the cheerio documentation and htmlparser2's options.
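For example, a minimal sketch of passing these cheerio/htmlparser2 options when constructing the crawler (following the option shape shown above; the URL is a placeholder):

var Crawler = require('node-web-crawler');

var c = new Crawler({
    jQuery: {
        name: 'cheerio',
        options: {
            normalizeWhitespace: true,
            xmlMode: true
        }
    },
    callback: function (error, result, $) {
        // $ behaves like jQuery, backed by cheerio with the options above
        console.log($('title').text());
    }
});

c.queue('http://example.com');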

Working with JSDOM

In order to work with JSDOM you have to install it in your project folder (npm install jsdom), deal with compiling its C++ dependencies, and pass it to the crawler.

var jsdom = require('jsdom');
var Crawler = require('node-web-crawler');

var c = new Crawler({
    jQuery: jsdom
});

How to test

Install and run Httpbin

node-web-crawler uses a local httpbin instance for testing. You can install httpbin as a library from PyPI and run it as a WSGI app, for example using Gunicorn:

$ pip install httpbin
# launch httpbin as a daemon with 6 workers on localhost
$ gunicorn httpbin:app -b 127.0.0.1:8000 -w 6 --daemon

# Finally
$ npm install && npm test

Alternative: Docker

After installing Docker, you can run:

# Builds the local test environment
$ docker build -t node-web-crawler .

# Runs the tests
$ docker run node-web-crawler sh -c "gunicorn httpbin:app -b 127.0.0.1:8000 -w 6 --daemon && npm install && npm test"

# You can also open a shell inside the container for easier debugging
$ docker run -i -t node-web-crawler bash

Rough todolist

  • Refactor the code to be more maintainable; it's spaghetti code in there!
  • Have a look at the Cache feature and refactor it
  • Same for the Pool
  • Proxy feature
  • Make Sizzle tests pass (jsdom bug? https://github.com/tmpvar/jsdom/issues#issue/81)
  • More crawling tests
  • Document the API more (+ the result object)
  • Option to wait for callback to finish before freeing the pool resource (via another callback like next())
