x-crawl

English | 简体中文

x-crawl is a multifunctional Node.js crawler library.

Features

  • Crawl HTML, JSON, file resources, etc. with simple configuration
  • Uses puppeteer to crawl HTML and the jsdom library to parse it, or parse the HTML yourself
  • Supports asynchronous and synchronous batch crawling
  • Supports Promise and callback styles for getting results
  • Polling support
  • Human-like (randomized) request intervals
  • Written in TypeScript, with generics

Install

Using npm as an example:

npm install x-crawl

Example

As an example, get the title of https://docs.github.com/zh/get-started:

// Import module ES/CJS
import xCrawl from 'x-crawl'

// Create a crawler instance
const docsXCrawl = xCrawl({
  baseUrl: 'https://docs.github.com',
  timeout: 10000,
  intervalTime: { max: 2000, min: 1000 }
})

// Call fetchHTML API to crawl
docsXCrawl.fetchHTML('/zh/get-started').then((res) => {
  const { jsdom } = res.data
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})

Core concepts

x-crawl

Create a crawler instance by calling xCrawl. The request queue is maintained internally by each instance method, not by the instance itself.

Type

For more detailed types, please see the Types section

function xCrawl(baseConfig?: XCrawlBaseConifg): XCrawlInstance
Example
const myXCrawl = xCrawl({
  baseUrl: 'https://xxx.com',
  timeout: 10000,
  // The interval between requests, multiple requests are valid
  intervalTime: {
    max: 2000,
    min: 1000
  }
})

Passing baseConfig lets fetchHTML/fetchData/fetchFile use these values by default.

Note: To avoid repeatedly creating instances in the following examples, myXCrawl refers to this crawler instance in the fetchHTML/fetchData/fetchFile examples.

Mode

The mode option defaults to async.

  • async: in batch requests, the next request is sent without waiting for the current one to complete
  • sync: in batch requests, each request waits for the previous one to complete before being sent

If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
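The difference between the two modes can be sketched with plain promises. This is a hypothetical runBatch helper for illustration only, not x-crawl's actual internals:

```javascript
// Hypothetical sketch of the two batch modes (not x-crawl's real implementation)
async function runBatch(requests, mode = 'async') {
  if (mode === 'sync') {
    // sync: each request waits for the previous one to finish
    const results = []
    for (const request of requests) {
      results.push(await request())
    }
    return results
  }
  // async: all requests are started immediately
  return Promise.all(requests.map((request) => request()))
}
```

Either way the results come back in request order; the modes differ only in whether the requests overlap in time.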

IntervalTime

The intervalTime option defaults to undefined. If a value is set, the crawler waits for a period of time before each request, which limits concurrency and avoids putting too much pressure on the target server.

  • number: a fixed time to wait before each request
  • Object: a random time between min and max, which looks more human-like

The first request does not trigger the interval.
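How an IntervalTime value could be resolved into a concrete wait can be sketched as follows. The resolveInterval helper is a hypothetical illustration, not part of the x-crawl API:

```javascript
// Hypothetical sketch of resolving an IntervalTime setting (illustration only)
function resolveInterval(intervalTime) {
  if (typeof intervalTime === 'number') {
    return intervalTime // fixed wait before each request
  }
  // random wait between min (default 0) and max, which looks more human-like
  const { max, min = 0 } = intervalTime
  return min + Math.floor(Math.random() * (max - min + 1))
}
```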

fetchHTML

fetchHTML is a method of the myXCrawl instance above, usually used to crawl HTML.

Type
function fetchHTML: (
  config: FetchHTMLConfig,
  callback?: (res: FetchHTML) => void
) => Promise<FetchHTML>
Example
myXCrawl.fetchHTML('/xxx').then((res) => {
  const { jsdom } = res.data
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})

fetchData

fetchData is a method of the myXCrawl instance above, usually used to call APIs and obtain JSON data.

Type
function fetchData: <T = any>(
  config: FetchDataConfig,
  callback?: (res: FetchResCommonV1<T>) => void
) => Promise<FetchResCommonArrV1<T>>
Example
const requestConifg = [
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' }
]

myXCrawl.fetchData({ 
  requestConifg, // Request configuration, can be RequestConfig | RequestConfig[]
  intervalTime: { max: 5000, min: 1000 } // Per-call intervalTime, overriding the instance default
}).then(res => {
  console.log(res)
})

fetchFile

fetchFile is a method of the myXCrawl instance above, usually used to crawl files, such as images and PDF files.

Type
function fetchFile: (
  config: FetchFileConfig,
  callback?: (res: FetchResCommonV1<FileInfo>) => void
) => Promise<FetchResCommonArrV1<FileInfo>>
Example
import path from 'node:path'

const requestConifg = [
  { url: '/xxxx' },
  { url: '/xxxx' },
  { url: '/xxxx' }
]

myXCrawl.fetchFile({
  requestConifg,
  fileConfig: {
    storeDir: path.resolve(__dirname, './upload') // storage folder
  }
}).then((fileInfos) => {
  console.log(fileInfos)
})

startPolling

startPolling is a method of the myXCrawl instance, typically used for polling operations, such as fetching news at regular intervals.

Type
function startPolling(
  config: StartPollingConfig,
  callback: (count: number) => void
): void
Example
myXCrawl.startPolling({ h: 1, m: 30 }, () => {
  // will be executed every one and a half hours
  // fetchHTML/fetchData/fetchFile
})

Types

AnyObject

interface AnyObject extends Object {
  [key: string | number | symbol]: any
}

Method

type Method =
  | 'get' | 'GET'
  | 'delete' | 'DELETE'
  | 'head' | 'HEAD'
  | 'options' | 'OPTIONS'
  | 'post' | 'POST'
  | 'put' | 'PUT'
  | 'patch' | 'PATCH'
  | 'purge' | 'PURGE'
  | 'link' | 'LINK'
  | 'unlink' | 'UNLINK'

RequestConfig

interface RequestConfig {
  url: string
  method?: Method
  headers?: AnyObject
  params?: AnyObject
  data?: any
  timeout?: number
  proxy?: string
}
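A filled-in RequestConfig might look like the following. The endpoint, header values, and proxy URL are placeholders, not values from the library:

```javascript
// Example RequestConfig (endpoint, headers, and proxy are placeholder values)
const requestConifg = {
  url: '/api/list',
  method: 'GET',
  headers: { 'User-Agent': 'my-crawler' },
  params: { page: 1 }, // query string parameters
  timeout: 10000, // per-request timeout in milliseconds
  proxy: 'http://127.0.0.1:7890'
}
```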

IntervalTime

type IntervalTime = number | {
  max: number
  min?: number
}

XCrawlBaseConifg

interface XCrawlBaseConifg {
  baseUrl?: string
  timeout?: number
  intervalTime?: IntervalTime
  mode?: 'async' | 'sync'
  proxy?: string
}

FetchBaseConifgV1

interface FetchBaseConifgV1 {
  requestConifg: RequestConfig | RequestConfig[]
  intervalTime?: IntervalTime
}

FetchBaseConifgV2

interface FetchBaseConifgV2 {
  url: string
  header?: AnyObject
  timeout?: number
  proxy?: string
}

FetchHTMLConfig

type FetchHTMLConfig = string | FetchBaseConifgV2
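Because FetchHTMLConfig is a union of a string and FetchBaseConifgV2, a plain string acts as shorthand for an object with just a url. A hypothetical normalization step (illustration only, not x-crawl's code) could look like:

```javascript
// Hypothetical sketch: a string FetchHTMLConfig is shorthand for { url: string }
function normalizeFetchHTMLConfig(config) {
  return typeof config === 'string' ? { url: config } : config
}
```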

FetchDataConfig

interface FetchDataConfig extends FetchBaseConifgV1 {
}

FetchFileConfig

interface FetchFileConfig extends FetchBaseConifgV1 {
  fileConfig: {
    storeDir: string // Store folder
    extension?: string // filename extension
  }
}

StartPollingConfig

interface StartPollingConfig {
  d?: number // day
  h?: number // hour
  m?: number // minute
}
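The d/h/m fields combine into a single repeat interval. A hypothetical conversion to milliseconds (an assumption about the semantics, not the library's code) could be:

```javascript
// Hypothetical sketch: combine StartPollingConfig fields into milliseconds
function pollingIntervalMs({ d = 0, h = 0, m = 0 }) {
  return ((d * 24 + h) * 60 + m) * 60 * 1000
}
```

Under this reading, { h: 1, m: 30 } from the startPolling example above means a 90-minute interval.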

FetchResCommonV1

interface FetchCommon<T> {
  id: number
  statusCode: number | undefined
  headers: IncomingHttpHeaders // node: http type
  data: T
}

FetchResCommonArrV1

type FetchCommonArr<T> = FetchCommon<T>[]

FileInfo

interface FileInfo {
  fileName: string
  mimeType: string
  size: number
  filePath: string
}

FetchHTML

interface FetchHTML {
  httpResponse: HTTPResponse | null // The type of HTTPResponse in the puppeteer library
  data: {
    page: Page
    content: string
    jsdom: JSDOM // The type of JSDOM in the jsdom library
  }
}

More

If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues.

Package last updated on 27 Feb 2023
