node Krawler

Fast and lightweight promise/event-based web krawler with built-in cheerio, XML and JSON parsers. And of course ... the best :)

How to install

npm install krawler

Basic example

var Krawler = require('krawler');

var urls = [
    'http://ondraplsek.cz'
];

var krawler = new Krawler();

krawler
    .queue(urls)
    .on('data', function($, url, response) {
        // $ - cheerio instance of the parsed page
        // url - URL of the current web page
        // response - response object from mikeal/request
    })
    .on('error', function(err, url) {
        // there has been an 'error' on 'url'
    })
    .on('end', function() {
        // all URLs have been fetched
    });
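
Inside the data handler, the cheerio instance works like server-side jQuery. As a minimal sketch (assuming only the standard cheerio selector API), the handler above could print each page's title:

krawler
    .on('data', function($, url, response) {
        // extract and print the <title> text of the fetched page
        console.log(url + ': ' + $('title').text());
    });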

Options

Krawler's constructor accepts the following options:

var krawler = new Krawler({
    maxConnections: 10, // max number of simultaneously open connections, default 10
    parser: 'cheerio',  // web page parser, default 'cheerio'
                        // other options are 'xml', 'json' or false
                        // (no parser is used, raw data is returned)
    forceUTF8: false    // whether Krawler should convert the source string to UTF-8, default false
});

mikeal/request is used for fetching web pages, so any option supported by that package can also be passed to Krawler's constructor.
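
For example, a minimal sketch passing options through to mikeal/request (timeout and headers are standard request options; which options take effect depends on the request version installed):

var krawler = new Krawler({
    maxConnections: 5,    // Krawler's own option
    timeout: 5000,        // passed through to mikeal/request: give up after 5 s
    headers: {            // passed through to mikeal/request
        'User-Agent': 'krawler-example'
    }
});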

Advanced example

var urls = [
    'https://graph.facebook.com/nodejs',
    'https://graph.facebook.com/facebook',
    'https://graph.facebook.com/cocacola',
    'https://graph.facebook.com/google',
    'https://graph.facebook.com/microsoft',
];

var krawler = new Krawler({
    maxConnections: 5,
    parser: 'json',
    forceUTF8: true
});

krawler
    .queue(urls)
    .on('data', function(json, url, response) {
        // do something with json...
    })
    .on('error', function(err, url) {
        // there has been an 'error' on 'url'
    })
    .on('end', function() {
        // all URLs have been fetched
    });

Promises

If your program flow is based on promises, you can easily attach Krawler to your promise chain. The fetchUrl() method returns a Q promise. When the promise is fulfilled, the callback function is called with a result object.

The object has two properties:

  • data - parsed/raw content of the web page, depending on the parser setting
  • response - response object from mikeal/request

var krawler = new Krawler();

findUrl()
    .then(function(url) {
        return krawler.fetchUrl(url);
    })
    .then(function(result) {
        // in this case result.data is a cheerio instance
        return processData(result.data);
    })
    // and so on ...
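
Because fetchUrl() returns a Q promise, a rejection anywhere in the chain can be handled in one place. A minimal sketch using Q's fail() handler (findUrl() and processData() are the same hypothetical helpers as above):

findUrl()
    .then(function(url) {
        return krawler.fetchUrl(url);
    })
    .then(function(result) {
        return processData(result.data);
    })
    .fail(function(err) {
        // any error thrown or rejected earlier in the chain lands here
        console.error('Crawling failed:', err);
    });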
