
x-crawl

English | 简体中文

x-crawl is a flexible Node.js multifunctional crawler library. It can crawl pages, interfaces (APIs), and files, and run polling crawls.

If you like x-crawl, you can give the x-crawl repository a star to support it. Thank you for your support!

Features

  • 🔥 Async/Sync - Switch between asynchronous and synchronous crawling modes just by changing the mode property.
  • ⚙️ Multiple functions - Can crawl pages, interfaces, and files, and run polling crawls, supporting single or multiple targets.
  • 🖋️ Flexible writing - A single function adapts to multiple crawling configurations and ways of obtaining results; usage is very flexible.
  • 👀 Device fingerprinting - Zero or custom configuration to avoid being identified and tracked through fingerprinting from different locations.
  • ⏱️ Interval crawling - No interval, fixed interval, or random interval; make effective use of concurrency or avoid crawling too concurrently.
  • 🔄 Retry on failure - Failed retries can be set for all crawl requests, for a single crawl call, or for an individual request.
  • 🚀 Priority queue - Crawl targets in order of each request's priority.
  • ☁️ Crawl SPA - Batch-crawl SPAs (Single Page Applications) to generate pre-rendered content (i.e. "SSR", Server-Side Rendering).
  • ⚒️ Control pages - Headless browsers can submit forms, send keystrokes, trigger events, generate page screenshots, and more.
  • 🧾 Capture records - Capture and record crawl results, highlighting them on the console.
  • 🦾 TypeScript - Ships its own types, implementing complete typing through generics.

Relationship with puppeteer

The crawlPage API has puppeteer built in. You only need to pass in some configuration options to complete operations, and the result exposes Browser and Page instances.


Install

Take NPM as an example:

npm install x-crawl

Example

As an example, automatically take some pictures of Airbnb Hawaii experiences and Plus listings every day:

// 1. Import the module (ES/CJS)
import xCrawl from 'x-crawl'

// 2. Create a crawler instance
const myXCrawl = xCrawl({ maxRetry: 3, intervalTime: { max: 3000, min: 2000 } })

// 3. Set the crawling task
/*
  Call the startPolling API to start the polling function,
  and the callback will be executed once every day
*/
myXCrawl.startPolling({ d: 1 }, async (count, stopPolling) => {
  // Call crawlPage API to crawl Page
  const res = await myXCrawl.crawlPage([
    'https://zh.airbnb.com/s/hawaii/experiences',
    'https://zh.airbnb.com/s/hawaii/plus_homes'
  ])

  // Store the image URL to targets
  const targets = []
  const elSelectorMap = ['.c14whb16', '.a1stauiv']
  for (const item of res) {
    const { id } = item
    const { page } = item.data

    // Get the URLs of the page's carousel image elements
    const boxHandle = await page.$(elSelectorMap[id - 1])
    const urls = await boxHandle!.$$eval('picture img', (imgEls) => {
      return imgEls.map((item) => item.src)
    })
    targets.push(...urls)

    // Close page
    page.close()
  }

  // Call the crawlFile API to crawl pictures
  myXCrawl.crawlFile({ targets, storeDir: './upload' })
})


Note: Do not crawl sites at will; check the robots.txt protocol before crawling. This example is only intended to demonstrate how to use x-crawl.

Core concepts

Create application

Create a new application instance via xCrawl():

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  // options
})

For related options, see XCrawlBaseConfig.

Crawl mode

A crawler application instance has two crawling modes, asynchronous and synchronous; each crawler instance can use only one of them.

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  mode: 'async'
})

The mode option defaults to async.

  • async: asynchronous requests; in batch requests, the next request is made without waiting for the current one to complete
  • sync: synchronous requests; in batch requests, each request must complete before the next one is sent

If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
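
For illustration, a minimal sketch of a synchronous-mode instance (the URLs are placeholders):

import xCrawl from 'x-crawl'

// In sync mode, each target in a batch is crawled one after another
const mySyncCrawl = xCrawl({ mode: 'sync' })

mySyncCrawl
  .crawlData(['https://www.example.com/api-1', 'https://www.example.com/api-2'])
  .then((res) => {
    // api-2 was only requested after api-1 completed
    console.log(res.length)
  })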

Device fingerprint

The enableRandomFingerprint property controls whether the default random fingerprint is used; custom fingerprints can also be configured through the advanced or detailed target configuration of subsequent crawls.

Device fingerprinting is set up to avoid being identified and tracked from different locations through fingerprinting.

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  enableRandomFingerprint: true
})

The enableRandomFingerprint option defaults to true.

  • true: enable random device fingerprinting. The fingerprint of a target can still be specified through the advanced configuration or the detailed target configuration.
  • false: disable random device fingerprinting. This does not affect fingerprints specified for targets by the advanced or detailed target configuration (see the sketch below).
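
For illustration, a minimal sketch that disables the random fingerprint globally while still specifying one for a single target (URL and values are placeholders):

import xCrawl from 'x-crawl'

// Turn off the default random fingerprint for this instance
const myXCrawl = xCrawl({ enableRandomFingerprint: false })

// A fingerprint set on the detailed target configuration still applies
myXCrawl
  .crawlPage({
    url: 'https://www.example.com',
    fingerprint: {
      maxWidth: 1920,
      maxHeight: 1080,
      platform: 'Windows'
    }
  })
  .then((res) => {})
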
Multiple crawler application instances
import xCrawl from 'x-crawl'

const myXCrawl1 = xCrawl({
  // options
})

const myXCrawl2 = xCrawl({
  // options
})

Crawl page

Crawl a page via crawlPage().

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlPage('https://www.example.com').then((res) => {
  const { browser, page } = res.data

  // Close the browser
  browser.close()
})
browser instance

It is an instance object of Browser. For specific usage, please refer to Browser.

The browser instance is a headless browser without a UI shell. What it does is bring all the modern web platform features provided by the browser's rendering engine to your code.

Note: The browser stays open, which keeps the process from terminating. If you want to stop, execute browser.close() to close it. Do not close it if you still need to use crawlPage or page afterwards. The browser instance is shared within the crawlPage API of the same crawler instance, so modifying its properties affects every page instance returned in results.

page instance

It is an instance object of Page. The instance can also perform interactive operations such as events. For specific usage, please refer to Page (https://pptr.dev/api/puppeteer.page).

The browser instance retains a reference to the page instance. If the page is no longer used, it needs to be closed manually, otherwise it will cause a memory leak.

Take Screenshot

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlPage('https://www.example.com').then(async (res) => {
  const { browser, page } = res.data

  // Get a screenshot of the rendered page
  await page.screenshot({ path: './upload/page.png' })

  console.log('Screen capture is complete')

  browser.close()
})
life cycle

Lifecycle functions owned by the crawlPage API:

  • onCrawlItemComplete: executed when each crawl item is finished and processed
onCrawlItemComplete

In the onCrawlItemComplete function you can get the result of each crawl object.

Note: If you need to crawl many pages at once, you need to use this lifecycle function to process the results of each target and close the page instance after each page is crawled. If you do not close the page instances, the program may crash because too many pages are open.
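
A minimal sketch of this pattern, using the onCrawlItemComplete option of the advanced configuration (see CrawlPageAdvancedConfig below; URLs are placeholders):

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlPage({
  targets: [
    'https://www.example.com/page-1',
    'https://www.example.com/page-2'
  ],
  onCrawlItemComplete(crawlPageSingleRes) {
    // Process each result as it finishes, then release its page
    const { id, data } = crawlPageSingleRes
    console.log(`page ${id} crawled`)
    data.page.close()
  }
})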

Crawl interface

Crawl interface data through crawlData().

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ intervalTime: { max: 3000, min: 1000 } })

const targets = [
  'https://www.example.com/api-1',
  'https://www.example.com/api-2',
  {
    url: 'https://www.example.com/api-3',
    method: 'POST',
    data: { name: 'coderhxl' }
  }
]

myXCrawl.crawlData({ targets }).then((res) => {
  // handle the result
})
life cycle

Lifecycle functions owned by the crawlData API:

  • onCrawlItemComplete: executed when each crawl item is finished and processed
onCrawlItemComplete

In the onCrawlItemComplete function you can get the result of each crawl object.

Note: If you need to crawl many targets at once, you can use this lifecycle function to process the result of each target as soon as it completes.
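
A minimal sketch, again using the advanced configuration's onCrawlItemComplete (see CrawlDataAdvancedConfig below; URLs are placeholders):

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlData({
  targets: ['https://www.example.com/api-1', 'https://www.example.com/api-2'],
  onCrawlItemComplete(crawlDataSingleRes) {
    // Handle each result as soon as its crawl completes
    if (crawlDataSingleRes.isSuccess) {
      console.log(crawlDataSingleRes.data?.data)
    }
  }
})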

Crawl files

Crawl file data via crawlFile().

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ intervalTime: { max: 3000, min: 1000 } })

myXCrawl
  .crawlFile({
    targets: [
      'https://www.example.com/file-1',
      'https://www.example.com/file-2'
    ],
    fileConfig: {
      storeDir: './upload' // storage folder
    }
  })
  .then((res) => {
    console.log(res)
  })
life cycle

Lifecycle functions owned by the crawlFile API:

  • onCrawlItemComplete: executed when each crawl item is finished and processed

  • onBeforeSaveItemFile: executed before saving the file

onCrawlItemComplete

In the onCrawlItemComplete function you can get the result of each crawl object.

onBeforeSaveItemFile

In the onBeforeSaveItemFile function you receive the file as a Buffer. You can process the Buffer and must return a Promise that resolves to a Buffer, which will replace the original data to be saved.

Resize picture

Use the sharp library to resize the images to be crawled:

import xCrawl from 'x-crawl'
import sharp from 'sharp'

const myXCrawl = xCrawl()

myXCrawl
  .crawlFile({
    targets: [
      'https://www.example.com/file-1.jpg',
      'https://www.example.com/file-2.jpg'
    ],
    fileConfig: {
      onBeforeSaveItemFile(info) {
        return sharp(info.data).resize(200).toBuffer()
      }
    }
  })
  .then((res) => {
    res.forEach((item) => {
      console.log(item.data?.data.isSuccess)
    })
  })

Start polling

Start a polling crawl with startPolling().

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  timeout: 10000,
  intervalTime: { max: 3000, min: 1000 }
})

myXCrawl.startPolling({ h: 2, m: 30 }, async (count, stopPolling) => {
  // will be executed every two and a half hours
  // crawlPage/crawlData/crawlFile
  const res = await myXCrawl.crawlPage('https://www.example.com')
  res.data.page.close()
})

Note on using crawlPage in polling: the purpose of calling page.close() is to prevent the browser instance from retaining a reference to the page instance. If the page is no longer used, it needs to be closed manually, otherwise it will cause a memory leak.

Callback function parameters:

  • The count attribute records the number of polling runs so far.
  • stopPolling is a callback function; calling it terminates subsequent polling runs (see the sketch below).
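
For illustration, a minimal sketch that stops polling after the third run (the interval is a placeholder):

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.startPolling({ m: 30 }, (count, stopPolling) => {
  console.log(`polling run ${count}`)

  // Terminate subsequent polling runs after the third one
  if (count >= 3) stopPolling()
})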

Config priority

Some common configurations can be set in these three places:

  • Application instance configuration (global)
  • Advanced configuration (per call)
  • Detailed target configuration (per target)

The priority is: detailed target configuration > advanced configuration > application instance configuration

Take crawlPage to crawl two pages as an example:

import xCrawl from 'x-crawl'

// Application instance configuration
const myXCrawl = xCrawl({
  intervalTime: { max: 3000, min: 1000 }
})

// advanced configuration
myXCrawl.crawlPage({
  targets: [
    'https://www.example.com/page-1',
    {
      // Detailed target configuration
      url: 'https://www.example.com/page-2',
      viewport: { width: 1920, height: 1080 }
    }
  ],
  intervalTime: 1000,
  viewport: { width: 800, height: 600 }
})

Device fingerprint

Customize the configuration to avoid being identified and tracked through fingerprinting from different locations.

Multiple pieces of fingerprint information can be passed via the advanced configuration's fingerprint option, and internally one will be randomly assigned to each target in targets. A specific fingerprint can also be set for a target directly with the detailed target configuration.

Take crawlPage as an example:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ intervalTime: { max: 5000, min: 3000 } })

myXCrawl
  .crawlPage({
    targets: [
      'https://www.example.com/page-1',
      {
        // Specify the fingerprint
        url: 'https://www.example.com/page-2',
        fingerprint: {
          maxWidth: 1980,
          minWidth: 1980,
          maxHeight: 1080,
          minHidth: 1080,
          platform: 'Android'
        }
      }
    ],
    fingerprint: {
      // set fingerprint for each target in targets
      maxWidth: 1980,
      maxHeight: 1080,
      userAgents: [
        'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0'
      ],
      platforms: ['Chromium OS', 'iOS', 'Linux', 'macOS', 'Windows']
    }
  })
  .then((res) => {})

For more fingerprint options, see the corresponding configuration.

In the above example, the interval time is set in both the application instance configuration and the advanced configuration, so the advanced configuration's interval time prevails. The viewport is set in both the advanced configuration and the detailed target configuration, so the second target uses the viewport from its detailed target configuration.

Interval time

The interval time can prevent excessive concurrency and avoid putting too much pressure on the server.

The crawling interval is controlled internally by each crawl method call, not globally by the crawler instance.

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData({
    targets: ['https://www.example.com/api-1', 'https://www.example.com/api-2'],
    intervalTime: { max: 2000, min: 1000 }
  })
  .then((res) => {})

The intervalTime option defaults to undefined. If a value is set, the crawler waits for a period of time before each request, which can prevent excessive concurrency and avoid putting too much pressure on the server.

  • number: a fixed time that must be waited before each request
  • Object: a random value between min and max, which is more human-like

Note: The first request will not trigger the interval.
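
For illustration, a minimal sketch of the fixed-number form (URLs and value are placeholders):

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData({
    targets: ['https://www.example.com/api-1', 'https://www.example.com/api-2'],
    // Wait a fixed 2000 ms before each request after the first
    intervalTime: 2000
  })
  .then((res) => {})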

Fail retry

When an error such as a timeout occurs, the request waits for the current round to end and is then retried.

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData({ url: 'https://www.example.com/api', maxRetry: 1 })
  .then((res) => {})

The maxRetry attribute determines how many times to retry.

Priority queue

A priority queue allows a request to be sent first.

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData([
    { url: 'https://www.example.com/api-1', priority: 1 },
    { url: 'https://www.example.com/api-2', priority: 10 },
    { url: 'https://www.example.com/api-3', priority: 8 }
  ])
  .then((res) => {})

The larger the value of the priority attribute, the higher the priority in the current crawling queue.

About results

Each request's result is uniformly wrapped in an object that provides information about that request: id, the result, whether it succeeded, the maximum retry count, the number of retries, collected error information, and so on. Whether the return value is wrapped in an array is determined automatically by the configuration you choose, and the types fit perfectly in TypeScript.

The id of each object is determined by the order of requests in your configuration; if priority is used, they are sorted by priority.
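
For illustration, a minimal sketch reading the wrapper fields listed in CrawlCommonRes below (URLs are placeholders):

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData(['https://www.example.com/api-1', 'https://www.example.com/api-2'])
  .then((res) => {
    // An array config yields an array of wrapped results
    res.forEach((item) => {
      const { id, isSuccess, retryCount, crawlErrorQueue } = item
      console.log(id, isSuccess, retryCount, crawlErrorQueue.length)
    })
  })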

Details about configuration methods and results are as follows: crawlPage config, crawlData config, crawlFile config.

TypeScript

Type systems like TypeScript can detect many common errors at compile time through static analysis. This reduces runtime errors and gives us more confidence when refactoring large projects. TypeScript also improves the development experience and efficiency through type-based auto-completion in the IDE.

x-crawl itself is written in TypeScript and supports TypeScript. It comes with type declaration files and works out of the box.
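
For illustration, a minimal sketch passing a type parameter to crawlData so the response data is typed (the UserRes interface and URL are hypothetical):

import xCrawl from 'x-crawl'

// A hypothetical shape for the API's JSON payload
interface UserRes {
  name: string
  age: number
}

const myXCrawl = xCrawl()

// With the type parameter, res.data?.data is typed as UserRes
myXCrawl.crawlData<UserRes>('https://www.example.com/api/user').then((res) => {
  console.log(res.data?.data.name)
})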

API

xCrawl

Create a crawler instance by calling xCrawl. The request queue is maintained internally by each crawl method call, not by the instance itself.

Type

The xCrawl API is a function.

function xCrawl(baseConfig?: XCrawlBaseConfig): XCrawlInstance

Parameter and return value types: see the Types section below.

Example
import xCrawl from 'x-crawl'

// xCrawl API
const myXCrawl = xCrawl({
  baseUrl: 'https://www.example.com',
  timeout: 10000,
  intervalTime: { max: 2000, min: 1000 }
})

Note: To avoid creating instances repeatedly in subsequent examples, myXCrawl here will be the crawler instance used in the crawlPage/crawlData/crawlFile examples.

crawlPage

crawlPage is a method of the crawler instance, usually used to crawl pages.

Type

The crawlPage API is a function whose type is overloaded: it can be called (in terms of types) with different configuration parameters.

type crawlPage = {
  (
    config: string,
    callback?: (res: CrawlPageSingleRes) => void
  ): Promise<CrawlPageSingleRes>

  (
    config: CrawlPageDetailTargetConfig,
    callback?: (res: CrawlPageSingleRes) => void
  ): Promise<CrawlPageSingleRes>

  (
    config: (string | CrawlPageDetailTargetConfig)[],
    callback?: (res: CrawlPageSingleRes[]) => void
  ): Promise<CrawlPageSingleRes[]>

  (
    config: CrawlPageAdvancedConfig,
    callback?: (res: CrawlPageSingleRes[]) => void
  ): Promise<CrawlPageSingleRes[]>
}

Parameter and return value types: see the Types section below.

Example
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

// crawlPage API
myXCrawl.crawlPage('https://www.example.com').then((res) => {
  const { browser, page } = res.data

  // Close the browser
  browser.close()
})
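
The overloads above also accept an optional callback. A minimal sketch, assuming the callback receives the same result object that the returned Promise resolves with:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

// The second argument is the optional callback from the type above
myXCrawl.crawlPage('https://www.example.com', (res) => {
  const { browser, page } = res.data

  page.close()
  browser.close()
})
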
Config

There are 4 types:

  • string
  • CrawlPageDetailTargetConfig
  • (string | CrawlPageDetailTargetConfig)[]
  • CrawlPageAdvancedConfig

1. string

This is a simple target configuration. If you just want to simply crawl this page, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlPage('https://www.example.com').then((res) => {})

The res you get will be an object.

2. CrawlPageDetailTargetConfig

This is the detailed target configuration. If you want to crawl this page and need to retry on failure, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlPage({
    url: 'https://www.example.com',
    proxy: 'xxx',
    maxRetry: 1
  })
  .then((res) => {})

The res you get will be an object.

For more configuration options, see CrawlPageDetailTargetConfig.

3. (string | CrawlPageDetailTargetConfig)[]

This is a mixed target array configuration. If you want to crawl multiple pages, and some pages need to retry on failure, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlPage([
    'https://www.example.com/page-1',
    { url: 'https://www.example.com/page-2', maxRetry: 2 }
  ])
  .then((res) => {})

The res you get will be an array of objects.

For more configuration options, see CrawlPageDetailTargetConfig.

4. CrawlPageAdvancedConfig

This is an advanced configuration; targets is a mixed target array configuration. If you want to crawl multiple pages without repeating the request configuration (proxy, cookies, retry, etc.), and you need an interval time, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlPage({
    targets: [
      'https://www.example.com/page-1',
      { url: 'https://www.example.com/page-2', maxRetry: 6 }
    ],
    intervalTime: { max: 3000, min: 1000 },
    cookies: 'xxx',
    maxRetry: 1
  })
  .then((res) => {})

The res you get will be an array of objects.

For more configuration options, see CrawlPageAdvancedConfig.

More information about the results can be found in About results; choose the form that fits your actual situation.

crawlData

crawlData is a method of the crawler instance, usually used to crawl APIs and obtain JSON data, and so on.

Type

The crawlData API is a function whose type is overloaded: it can be called (in terms of types) with different configuration parameters.

type crawlData = {
  <T = any>(
    config: CrawlDataDetailTargetConfig,
    callback?: (res: CrawlDataSingleRes<T>) => void
  ): Promise<CrawlDataSingleRes<T>>

  <T = any>(
    config: string,
    callback?: (res: CrawlDataSingleRes<T>) => void
  ): Promise<CrawlDataSingleRes<T>>

  <T = any>(
    config: (string | CrawlDataDetailTargetConfig)[],
    callback?: (res: CrawlDataSingleRes<T>[]) => void
  ): Promise<CrawlDataSingleRes<T>[]>

  <T = any>(
    config: CrawlDataAdvancedConfig<T>,
    callback?: (res: CrawlDataSingleRes<T>[]) => void
  ): Promise<CrawlDataSingleRes<T>[]>
}

Parameter and return value types: see the Types section below.

Example
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  timeout: 10000,
  intervalTime: { max: 2000, min: 1000 }
})

myXCrawl
  .crawlData({
    targets: ['https://www.example.com/api-1', 'https://www.example.com/api-2'],
    intervalTime: { max: 3000, min: 1000 },
    cookies: 'xxx',
    maxRetry: 1
  })
  .then((res) => {
    console.log(res)
  })
Config

There are 4 types:

  • string
  • CrawlDataDetailTargetConfig
  • (string | CrawlDataDetailTargetConfig)[]
  • CrawlDataAdvancedConfig

1. string

This is a simple target configuration. If you just want to simply crawl the data, and the API is a GET interface, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlData('https://www.example.com/api').then((res) => {})

The res you get will be an object.

2. CrawlDataDetailTargetConfig

This is the detailed target configuration. If you want to crawl this data and need to retry on failure, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData({
    url: 'https://www.example.com/api',
    proxy: 'xxx',
    maxRetry: 1
  })
  .then((res) => {})

The res you get will be an object.

For more configuration options, see CrawlDataDetailTargetConfig.

3. (string | CrawlDataDetailTargetConfig)[]

This is a mixed target array configuration. If you want to crawl multiple data targets, and some need to retry on failure, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData([
    'https://www.example.com/api-1',
    { url: 'https://www.example.com/api-2', maxRetry: 2 }
  ])
  .then((res) => {})

The res you get will be an array of objects.

For more configuration options, see CrawlDataDetailTargetConfig.

4. CrawlDataAdvancedConfig

This is an advanced configuration; targets is a mixed target array configuration. If you want to crawl multiple data targets without repeating the request configuration (proxy, cookies, retry, etc.), and you need an interval time, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlData({
    targets: [
      'https://www.example.com/api-1',
      { url: 'https://www.example.com/api-2', maxRetry: 6 }
    ],
    intervalTime: { max: 3000, min: 1000 },
    cookies: 'xxx',
    maxRetry: 1
  })
  .then((res) => {})

The res you get will be an array of objects.

For more configuration options, see CrawlDataAdvancedConfig.

More information about the results can be found in About results; choose the form that fits your actual situation.

crawlFile

crawlFile is a method of the crawler instance, usually used to crawl files, such as pictures, PDF files, etc.

Type

The crawlFile API is a function whose type is overloaded: it can be called (in terms of types) with different configuration parameters.

type crawlFile = {
  (
    config: CrawlFileDetailTargetConfig,
    callback?: (res: CrawlFileSingleRes) => void
  ): Promise<CrawlFileSingleRes>

  (
    config: CrawlFileDetailTargetConfig[],
    callback?: (res: CrawlFileSingleRes[]) => void
  ): Promise<CrawlFileSingleRes[]>

  (
    config: CrawlFileAdvancedConfig,
    callback?: (res: CrawlFileSingleRes[]) => void
  ): Promise<CrawlFileSingleRes[]>
}

Parameter and return value types: see the Types section below.

Example
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  timeout: 10000,
  intervalTime: { max: 2000, min: 1000 }
})

// crawlFile API
myXCrawl
  .crawlFile({
    targets: [
      'https://www.example.com/file-1',
      'https://www.example.com/file-2'
    ],
    storeDir: './upload',
    intervalTime: { max: 3000, min: 1000 },
    maxRetry: 1
  })
  .then((res) => {})
Config

There are 3 types:

  • CrawlFileDetailTargetConfig

  • CrawlFileDetailTargetConfig[]

  • CrawlFileAdvancedConfig

1. CrawlFileDetailTargetConfig

This is the detailed target configuration. If you want to crawl this file and need to retry on failure, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlFile({
    url: 'https://www.example.com/file',
    proxy: 'xxx',
    maxRetry: 1,
    storeDir: './upload',
    fileName: 'xxx'
  })
  .then((res) => {})

The res you get will be an object.

For more configuration options, see CrawlFileDetailTargetConfig.

2. CrawlFileDetailTargetConfig[]

This is the detailed target array configuration. If you want to crawl multiple files, and some need to retry after failure, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlFile([
    { url: 'https://www.example.com/file-1', storeDir: './upload' },
    { url: 'https://www.example.com/file-2', storeDir: './upload', maxRetry: 2 }
  ])
  .then((res) => {})

The res you get will be an array of objects.

For more configuration options, see CrawlFileDetailTargetConfig.

3. CrawlFileAdvancedConfig

This is an advanced configuration; targets is a mixed target array configuration. If you want to crawl multiple files without repeating the request configuration (storeDir, proxy, retry, etc.), and you need an interval time, you can try this way of writing:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlFile({
    targets: [
      'https://www.example.com/file-1',
      { url: 'https://www.example.com/file-2', storeDir: './upload/xxx' }
    ],
    storeDir: './upload',
    intervalTime: { max: 3000, min: 1000 },
    maxRetry: 1
  })
  .then((res) => {})

The res you get will be an array of objects.

For more configuration options, see CrawlFileAdvancedConfig.

More information about the results can be found in About results; choose the form that fits your actual situation.

startPolling

startPolling is a method of the crawler instance, typically used to perform polling operations, such as fetching news at regular intervals.

Type
function startPolling(
  config: StartPollingConfig,
  callback: (count: number, stopPolling: () => void) => void
): void
Example
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  timeout: 10000,
  intervalTime: { max: 2000, min: 1000 }
})

// startPolling API
myXCrawl.startPolling({ h: 2, m: 30 }, (count, stopPolling) => {
  // will be executed every two and a half hours
  // crawlPage/crawlData/crawlFile
})

Types

API config

XCrawlConfig
export interface XCrawlConfig extends CrawlCommonConfig {
  mode?: 'async' | 'sync'
  enableRandomFingerprint?: boolean
  baseUrl?: string
  intervalTime?: IntervalTime
  crawlPage?: {
    launchBrowser?: PuppeteerLaunchOptions // puppeteer
  }
}
Detail target config
CrawlPageDetailTargetConfig
export interface CrawlPageDetailTargetConfig extends CrawlCommonConfig {
  url: string
  headers?: AnyObject | null
  cookies?: PageCookies | null
  priority?: number
  viewport?: Viewport | null // puppeteer
  fingerprint?:
    | (DetailTargetFingerprintCommon & {
        maxWidth: number
        minWidth?: number
        maxHeight: number
        minHidth?: number
      })
    | null
}
CrawlDataDetailTargetConfig
export interface CrawlDataDetailTargetConfig extends CrawlCommonConfig {
  url: string
  method?: Method
  headers?: AnyObject | null
  params?: AnyObject
  data?: any
  priority?: number
  fingerprint?: DetailTargetFingerprintCommon | null
}
CrawlFileDetailTargetConfig
export interface CrawlFileDetailTargetConfig extends CrawlCommonConfig {
  url: string
  headers?: AnyObject | null
  priority?: number
  storeDir?: string | null
  fileName?: string
  extension?: string | null
  fingerprint?: DetailTargetFingerprintCommon | null
}
Advanced config
CrawlPageAdvancedConfig
export interface CrawlPageAdvancedConfig extends CrawlCommonConfig {
  targets: (string | CrawlPageDetailTargetConfig)[]
  intervalTime?: IntervalTime
  fingerprint?: AdvancedFingerprintCommon & {
    maxWidth: number
    minWidth?: number
    maxHeight: number
    minHidth?: number
  }

  headers?: AnyObject
  cookies?: PageCookies
  viewport?: Viewport // puppeteer

  onCrawlItemComplete?: (crawlPageSingleRes: CrawlPageSingleRes) => void
}
CrawlDataAdvancedConfig
export interface CrawlDataAdvancedConfig<T> extends CrawlCommonConfig {
  targets: (string | CrawlDataDetailTargetConfig)[]
  intervalTime?: IntervalTime
  fingerprint?: AdvancedFingerprintCommon

  headers?: AnyObject

  onCrawlItemComplete?: (crawlDataSingleRes: CrawlDataSingleRes<T>) => void
}
CrawlFileAdvancedConfig
export interface CrawlFileAdvancedConfig extends CrawlCommonConfig {
  targets: (string | CrawlFileDetailTargetConfig)[]
  intervalTime?: IntervalTime
  fingerprint?: AdvancedFingerprintCommon

  headers?: AnyObject
  storeDir?: string
  extension?: string

  onCrawlItemComplete?: (crawlFileSingleRes: CrawlFileSingleRes) => void
  onBeforeSaveItemFile?: (info: {
    id: number
    fileName: string
    filePath: string
    data: Buffer
  }) => Promise<Buffer>
}
StartPollingConfig
export interface StartPollingConfig {
  d?: number
  h?: number
  m?: number
}
Crawl other config
CrawlCommonConfig
export interface CrawlCommonConfig {
  timeout?: number
  proxy?: string
  maxRetry?: number
}
DetailTargetFingerprintCommon
export interface DetailTargetFingerprintCommon {
  userAgent?: string
  ua?: string
  platform?: Platform
  platformVersion?: string
  mobile?: Mobile
  acceptLanguage?: string
}
AdvancedFingerprintCommon
export interface AdvancedFingerprintCommon {
  userAgents?: string[]
  uas?: string[]
  platforms?: Platform[]
  platformVersions?: string[]
  mobiles?: Mobile[]
  acceptLanguages?: string[]
}
Mobile
export type Mobile = '?0' | '?1'
Platform
export type Platform =
  | 'Android'
  | 'Chrome OS'
  | 'Chromium OS'
  | 'iOS'
  | 'Linux'
  | 'macOS'
  | 'Windows'
  | 'Unknown'
PageCookies
export type PageCookies =
  | string
  | Protocol.Network.CookieParam
  | Protocol.Network.CookieParam[]
Method
export type Method =
  | 'get'
  | 'GET'
  | 'delete'
  | 'DELETE'
  | 'head'
  | 'HEAD'
  | 'options'
  | 'OPTIONS'
  | 'post'
  | 'POST'
  | 'put'
  | 'PUT'
  | 'patch'
  | 'PATCH'
  | 'purge'
  | 'PURGE'
  | 'link'
  | 'LINK'
  | 'unlink'
  | 'UNLINK'
IntervalTime
export type IntervalTime = number | { max: number; min?: number }

API result

XCrawlInstance
export interface XCrawlInstance {
  crawlPage: {
    (
      config: string,
      callback?: (res: CrawlPageSingleRes) => void
    ): Promise<CrawlPageSingleRes>

    (
      config: CrawlPageDetailTargetConfig,
      callback?: (res: CrawlPageSingleRes) => void
    ): Promise<CrawlPageSingleRes>

    (
      config: (string | CrawlPageDetailTargetConfig)[],
      callback?: (res: CrawlPageSingleRes[]) => void
    ): Promise<CrawlPageSingleRes[]>

    (
      config: CrawlPageAdvancedConfig,
      callback?: (res: CrawlPageSingleRes[]) => void
    ): Promise<CrawlPageSingleRes[]>
  }

  crawlData: {
    <T = any>(
      config: CrawlDataDetailTargetConfig,
      callback?: (res: CrawlDataSingleRes<T>) => void
    ): Promise<CrawlDataSingleRes<T>>

    <T = any>(
      config: string,
      callback?: (res: CrawlDataSingleRes<T>) => void
    ): Promise<CrawlDataSingleRes<T>>

    <T = any>(
      config: (string | CrawlDataDetailTargetConfig)[],
      callback?: (res: CrawlDataSingleRes<T>[]) => void
    ): Promise<CrawlDataSingleRes<T>[]>

    <T = any>(
      config: CrawlDataAdvancedConfig<T>,
      callback?: (res: CrawlDataSingleRes<T>[]) => void
    ): Promise<CrawlDataSingleRes<T>[]>
  }

  crawlFile: {
    (
      config: CrawlFileDetailTargetConfig,
      callback?: (res: CrawlFileSingleRes) => void
    ): Promise<CrawlFileSingleRes>

    (
      config: CrawlFileDetailTargetConfig[],
      callback?: (res: CrawlFileSingleRes[]) => void
    ): Promise<CrawlFileSingleRes[]>

    (
      config: CrawlFileAdvancedConfig,
      callback?: (res: CrawlFileSingleRes[]) => void
    ): Promise<CrawlFileSingleRes[]>
  }

  startPolling: (
    config: StartPollingConfig,
    callback: (count: number, stopPolling: () => void) => void
  ) => void
}
CrawlCommonRes
export interface CrawlCommonRes {
  id: number
  isSuccess: boolean
  maxRetry: number
  retryCount: number
  crawlErrorQueue: Error[]
}
CrawlPageSingleRes
export interface CrawlPageSingleRes extends CrawlCommonRes {
  data: {
    browser: Browser // puppeteer
    response: HTTPResponse | null // puppeteer
    page: Page // puppeteer
  }
}
CrawlDataSingleRes
export interface CrawlDataSingleRes<D> extends CrawlCommonRes {
  data: {
    statusCode: number | undefined
    headers: IncomingHttpHeaders // node http
    data: D
  } | null
}
CrawlFileSingleRes
export interface CrawlFileSingleRes extends CrawlCommonRes {
  data: {
    statusCode: number | undefined
    headers: IncomingHttpHeaders // node http
    data: {
      isSuccess: boolean
      fileName: string
      fileExtension: string
      mimeType: string
      size: number
      filePath: string
    }
  } | null
}

API Other

AnyObject
export interface AnyObject extends Object {
  [key: string | number | symbol]: any
}

More

If you have questions, needs, or good suggestions, please raise an Issue at https://github.com/coder-hxl/x-crawl/issues.

Thank you all for your support.
