ngrab

A lightweight node spider

Intro

A lightweight node spider. Supports:

  • Link following (followLinks)
  • Custom headers
  • Bloom filter
  • Retry mechanism
  • Proxy requests
  • Routing
  • Resuming from the last visited link
  • Bring your own parser and storage

Usage

import { Crawler, userAgent } from 'ngrab'
import cheerio from 'cheerio'

// For example, crawling the hottest projects on Github
let crawler = new Crawler({
    // required && unique
    name: 'myCrawler',
    // enable bloom filter
    bloom: true,
    // set random intervals(ms) between requests
    interval: () => (Math.random() * 16 + 4) * 1000, // [4s, 20s]
    // initial Link
    startUrls: ['https://github.com/trending'],
})

// download(name, cb)
crawler.download('trending', async ({ req, res, followLinks, resolveLink }) => {
    if (!res) return
    // parsing HTML strings
    let $ = cheerio.load(res.body.toString())
    // extract data
    let repoList: Array<{ name: string; href: string }> = [],
        $rows = $('.Box-row')
    if ($rows.length) {
        $rows.each(function (index) {
            let $item = $(this)

            repoList.push({
                name: $('.lh-condensed a .text-normal', $item)
                    .text()
                    .replace(/\s+/g, ' ')
                    .trim(),
                href: $('.lh-condensed a', $item).attr('href') as string,
            })
        })
        // print
        console.log(repoList) // or store in your Database
        // follow links
        // repoList.forEach((v) => followLinks(resolveLink(v.href)))
    }
})

// start crawling
crawler.run()
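
As a stand-in for the "store in your Database" comment above, the extracted rows could simply be written to disk. This is an arbitrary illustration using Node's fs/promises, not part of ngrab:

import { writeFile } from 'fs/promises'

// Persist the scraped rows as JSON. In a real crawler this is where a
// database insert (or queue publish) would go instead.
async function saveRepos(repoList: Array<{ name: string; href: string }>) {
    await writeFile('trending.json', JSON.stringify(repoList, null, 2), 'utf8')
}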

Custom Headers

The request hook will execute before each request:

// request(name, cb)
crawler.request('headers', async (context) => {
    // set custom headers
    Object.assign(context.req.headers, {
        'Cache-Control': 'no-cache',
        'User-Agent': userAgent(), // set random UserAgent
        Accept: '*/*',
        'Accept-Encoding': 'gzip, deflate, compress',
        Connection: 'keep-alive',
    })
})
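
If different targets need different headers, the same hook can branch on the outgoing request. Treat this as a sketch: it assumes context.req.url exposes the target URL, which the README does not document, and GITHUB_TOKEN is a hypothetical environment variable:

crawler.request('conditional-headers', async (context) => {
    // Assumption: context.req.url holds the target URL (not documented).
    const url: string = (context.req as any).url ?? ''
    if (url.startsWith('https://github.com/')) {
        // Only attach the (hypothetical) token when talking to GitHub.
        Object.assign(context.req.headers, {
            Authorization: `token ${process.env.GITHUB_TOKEN ?? ''}`,
        })
    }
})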

Routes

Instead of parsing everything in 'crawler.download()', you can split the parsing code into different routes:

crawler.route({
    url: 'https://github.com/trending', // for the trending page (patterns are compatible with minimatch)
    async download({ req, res }) {
        // parsing ...
    },
})

crawler.route({
    url: 'https://github.com/*/*', // for repository pages
    async download({ req, res }) {
        // parsing ...
    },
})

crawler.route({
    url: 'https://github.com/*/*/issues', // for issue pages
    async download({ req, res }) {
        // parsing ...
    },
})
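
Routes pair naturally with followLinks from the download context: the trending route can queue repository links, which should then be matched against the 'https://github.com/*/*' route above. This is a sketch; destructuring followLinks and resolveLink inside a route handler is an assumption carried over from crawler.download():

crawler.route({
    url: 'https://github.com/trending',
    async download({ res, followLinks, resolveLink }) {
        if (!res) return
        let $ = cheerio.load(res.body.toString())
        // Queue every repository link on the trending page; each followed
        // page should be picked up by the matching route.
        $('.Box-row .lh-condensed a').each(function () {
            let href = $(this).attr('href')
            if (href) followLinks(resolveLink(href))
        })
    },
})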

Proxy

You can provide a proxy getter when initializing the crawler:

let crawler = new Crawler({
    name: 'myCrawler',
    startUrls: ['https://github.com/trending'],
    async proxy() {
        let url = await getProxyUrlFromSomeWhere()
        // The return value will be used as a proxy when sending a request
        return url
    },
})
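
For example, a minimal getter could rotate through a fixed pool of proxy URLs; the addresses below are placeholders, not real servers:

// Placeholder proxy pool; replace with your own servers or an async
// lookup against a proxy provider.
let proxies = ['http://127.0.0.1:8001', 'http://127.0.0.1:8002']
let next = 0

let crawler = new Crawler({
    name: 'myCrawler',
    startUrls: ['https://github.com/trending'],
    async proxy() {
        // Hand out proxies in round-robin order, one per request.
        return proxies[next++ % proxies.length]
    },
})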

Keywords

crawler
