
x-crawl

English | 简体中文

x-crawl is a multifunctional Node.js crawler library. Crawl HTML, JSON, file resources, and more through simple configuration.

Highlights

  • Call the API to crawl HTML, JSON, file resources, etc.
  • Batch requests can be sent either asynchronously or synchronously

Install

Take npm as an example:

npm install x-crawl

Example

Take getting the title of https://docs.github.com/zh/get-started as an example:

// Import module ES/CJS
import XCrawl from 'x-crawl'

// Create a crawler instance
const docsXCrawl = new XCrawl({
  baseUrl: 'https://docs.github.com',
  timeout: 10000,
  intervalTime: { max: 2000, min: 1000 }
})

// Call fetchHTML API to crawl
docsXCrawl.fetchHTML('/zh/get-started').then((jsdom) => {
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})

Core concepts

XCrawl

Create a crawler instance via new XCrawl.

Type
class XCrawl {
  private readonly baseConfig
  constructor(baseConfig?: IXCrawlBaseConifg)
  fetchHTML(config: string | IFetchHTMLConfig): Promise<JSDOM>
  fetchData<T = any>(config: IFetchDataConfig): Promise<IFetchCommon<T>>
  fetchFile(config: IFetchFileConfig): Promise<IFetchCommon<IFileInfo>>
}
Example

myXCrawl is the crawler instance used in the examples below.

const myXCrawl = new XCrawl({
  baseUrl: 'https://xxx.com',
  timeout: 10000,
  // Interval between requests; only takes effect when multiple requests are made
  intervalTime: {
    max: 2000,
    min: 1000
  }
})
About the mode

The mode option defaults to 'async'.

  • async: in batch requests, the next request is sent without waiting for the current request to complete
  • sync: in batch requests, the current request must complete before the next request is sent

If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
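For example, an instance that sends batch requests synchronously with a 1000-2000 ms interval could be configured as follows (a minimal sketch using the mode and intervalTime options from IXCrawlBaseConifg; the base URL is a placeholder):

const syncXCrawl = new XCrawl({
  baseUrl: 'https://xxx.com', // placeholder base URL
  mode: 'sync', // wait for each request to complete before sending the next
  intervalTime: { max: 2000, min: 1000 } // wait between requests
})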

fetchHTML

fetchHTML is a method of the myXCrawl instance above, usually used to crawl HTML.

Type
function fetchHTML(config: string | IFetchHTMLConfig): Promise<JSDOM>
Example
myXCrawl.fetchHTML('/xxx').then((jsdom) => {
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})
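Because the config parameter can also be an IFetchHTMLConfig object (see Types below), request options such as headers and timeout can be set per call. A minimal sketch; the path and header value are placeholders:

myXCrawl.fetchHTML({
  url: '/xxx', // placeholder path, resolved against the instance baseUrl
  timeout: 5000,
  headers: { 'user-agent': 'my-crawler' } // placeholder header
}).then((jsdom) => {
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})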

fetchData

fetchData is a method of the myXCrawl instance above, usually used to crawl APIs and obtain JSON data.

Type
function fetchData<T = any>(config: IFetchDataConfig): Promise<IFetchCommon<T>>
Example
const requestConifg = [
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' }
]

myXCrawl.fetchData({ 
  requestConifg, // Request configuration, can be IRequestConfig | IRequestConfig[]
  intervalTime: 800 // Interval between requests; only takes effect when multiple requests are made
}).then(res => {
  console.log(res)
})
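Per the IFetchCommon type below, the resolved value is an array with one entry per request. A minimal sketch of consuming the result (the logging is illustrative):

myXCrawl.fetchData({ requestConifg }).then((res) => {
  res.forEach((item) => {
    // each entry carries the request id, response status code, headers and parsed data
    console.log(item.id, item.statusCode, item.data)
  })
})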

fetchFile

fetchFile is a method of the myXCrawl instance above, usually used to crawl files such as images and PDF files.

Type
function fetchFile(config: IFetchFileConfig): Promise<IFetchCommon<IFileInfo>>
Example
import path from 'node:path'

const requestConifg = [
  { url: '/xxxx' },
  { url: '/xxxx' },
  { url: '/xxxx' }
]

myXCrawl.fetchFile({
  requestConifg,
  fileConfig: {
    storeDir: path.resolve(__dirname, './upload') // storage folder
  }
}).then(fileInfos => {
  console.log(fileInfos)
})
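The resolved fileInfos value follows the IFetchCommon<IFileInfo> type below, so each entry's data field describes the stored file. An illustrative sketch of reading it:

myXCrawl.fetchFile({
  requestConifg,
  fileConfig: { storeDir: path.resolve(__dirname, './upload') }
}).then((fileInfos) => {
  fileInfos.forEach(({ data }) => {
    // data is an IFileInfo: fileName, mimeType, size and filePath
    console.log(`${data.fileName} (${data.mimeType}) -> ${data.filePath}`)
  })
})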

Types

  • IAnyObject
interface IAnyObject extends Object {
  [key: string | number | symbol]: any
}
  • IMethod
type IMethod = 'get' | 'GET' | 'delete' | 'DELETE' | 'head' | 'HEAD' | 'options' | 'OPTIONS' | 'post' | 'POST' | 'put' | 'PUT' | 'patch' | 'PATCH' | 'purge' | 'PURGE' | 'link' | 'LINK' | 'unlink' | 'UNLINK'
  • IRequestConfig
interface IRequestConfig {
  url: string
  method?: IMethod
  headers?: IAnyObject
  params?: IAnyObject
  data?: any
  timeout?: number
}
  • IIntervalTime
type IIntervalTime = number | {
  max: number
  min?: number
}
  • IFetchBaseConifg
interface IFetchBaseConifg {
  requestConifg: IRequestConfig | IRequestConfig[]
  intervalTime?: IIntervalTime
}
  • IFetchCommon
type IFetchCommon<T> = {
  id: number
  statusCode: number | undefined
  headers: IncomingHttpHeaders // node:http type
  data: T
}[]
  • IFileInfo
interface IFileInfo {
  fileName: string
  mimeType: string
  size: number
  filePath: string
}
  • IXCrawlBaseConifg
interface IXCrawlBaseConifg {
  baseUrl?: string
  timeout?: number
  intervalTime?: IIntervalTime
  mode?: 'async' | 'sync' // default: 'async'
}
  • IFetchHTMLConfig
interface IFetchHTMLConfig extends IRequestConfig {}
  • IFetchDataConfig
interface IFetchDataConfig extends IFetchBaseConifg {
}
  • IFetchFileConfig
interface IFetchFileConfig extends IFetchBaseConifg {
  fileConfig: {
    storeDir: string
  }
}
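Putting several of these types together: a fetchData call can combine per-request options from IRequestConfig with an IIntervalTime range. A sketch under the type definitions above; the URLs, params, and data values are placeholders:

import XCrawl from 'x-crawl'

const apiXCrawl = new XCrawl({ baseUrl: 'https://xxx.com', mode: 'async' })

// IRequestConfig[] with per-request method, params, data and timeout (placeholder values)
const requestConifg = [
  { url: '/xxxx', method: 'GET', params: { page: 1 }, timeout: 5000 },
  { url: '/xxxx', method: 'POST', data: { name: 'demo' } }
]

// intervalTime given as an IIntervalTime range rather than a fixed number
apiXCrawl.fetchData({ requestConifg, intervalTime: { max: 3000, min: 1000 } })
  .then((res) => console.log(res))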

More

If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues.
