x-crawl
English | 简体中文
x-crawl is a multifunctional Node.js crawler library.
Features
- Crawl HTML, JSON, file resources, etc. with simple configuration
- Use the JSDOM library to parse HTML, or parse the HTML yourself
- Requests can run in asynchronous or synchronous mode
- Supports both Promise and callback styles
- Polling support for scheduled crawls
- Human-like (randomized) request intervals
- Written in TypeScript
Install
Taking npm as an example:
npm install x-crawl
Example
As an example, get the title of https://docs.github.com/zh/get-started:
import XCrawl from 'x-crawl'
const docsXCrawl = new XCrawl({
baseUrl: 'https://docs.github.com',
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
docsXCrawl.fetchHTML('/zh/get-started').then((res) => {
const { jsdom } = res.data
console.log(jsdom.window.document.querySelector('title')?.textContent)
})
Core concepts
XCrawl
Create a crawler instance via new XCrawl. Each instance method maintains its own request queue; queues are not shared between methods.
Type
class XCrawl {
constructor(baseConfig?: IXCrawlBaseConifg)
fetchHTML(
config: IFetchHTMLConfig,
callback?: (res: IFetchHTML) => void
): Promise<IFetchHTML>
fetchData<T = any>(
config: IFetchDataConfig,
callback?: (res: IFetchCommon<T>) => void
): Promise<IFetchCommonArr<T>>
fetchFile(
config: IFetchFileConfig,
callback?: (res: IFetchCommon<IFileInfo>) => void
): Promise<IFetchCommonArr<IFileInfo>>
fetchPolling(
config: IFetchPollingConfig,
callback: (count: number) => void
): void
}
Example
const myXCrawl = new XCrawl({
baseUrl: 'https://xxx.com',
timeout: 10000,
intervalTime: {
max: 2000,
min: 1000
}
})
The values passed in baseConfig become the defaults used by fetchHTML/fetchData/fetchFile.
Note: To avoid creating instances repeatedly, the myXCrawl created here is the crawler instance used in the fetchHTML/fetchData/fetchFile examples below.
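How base defaults combine with a per-request config can be sketched as follows. This is an illustrative, hypothetical helper (mergeConfig is not part of the library, and the interfaces are cut down to the relevant fields), not x-crawl's internal merging logic:

```typescript
// Cut-down versions of the config types, for illustration only
interface IXCrawlBaseConifg {
  baseUrl?: string
  timeout?: number
}
interface IRequestConfig {
  url: string
  timeout?: number
}

// Hypothetical helper: request-level values override base defaults
function mergeConfig(
  base: IXCrawlBaseConifg,
  request: IRequestConfig
): IRequestConfig {
  return {
    ...request,
    // a request-level timeout takes precedence over the base default
    timeout: request.timeout ?? base.timeout,
    // baseUrl is prepended to relative request URLs
    url: base.baseUrl ? base.baseUrl + request.url : request.url
  }
}
```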
Mode
The mode option defaults to 'async'.
- async: in batch requests, the next request is sent without waiting for the current request to complete
- sync: in batch requests, each request must complete before the next one is sent
If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
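The difference between the two modes can be sketched with plain promises. This is a minimal illustration of the dispatch behavior described above (runBatch is a hypothetical helper, not x-crawl's internal code, and interval waiting is omitted):

```typescript
type Mode = 'async' | 'sync'

// Hypothetical helper illustrating async vs sync batch dispatch
async function runBatch<T>(
  tasks: (() => Promise<T>)[],
  mode: Mode
): Promise<T[]> {
  if (mode === 'async') {
    // fire every request without waiting for the previous one to finish
    return Promise.all(tasks.map((task) => task()))
  }
  // 'sync': each request must complete before the next one is sent
  const results: T[] = []
  for (const task of tasks) {
    results.push(await task())
  }
  return results
}
```

In both modes the results come back in the order the tasks were given; the modes differ only in when each request is allowed to start.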
IntervalTime
The intervalTime option defaults to undefined. When set, the crawler waits between requests, which limits concurrency and avoids putting too much pressure on the target server.
- number: a fixed time to wait before each request
- Object: a random wait time chosen between min and max, which is more human-like
The first request never triggers the interval.
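Resolving an interval setting into a concrete wait time might look like the sketch below (illustrative only; resolveInterval is a hypothetical helper, not x-crawl's internal code):

```typescript
type IIntervalTime = number | { max: number; min?: number }

// Hypothetical helper: turn an interval setting into a wait time in ms
function resolveInterval(intervalTime: IIntervalTime): number {
  if (typeof intervalTime === 'number') {
    // fixed wait before each request
    return intervalTime
  }
  const { max, min = 0 } = intervalTime
  // random value in [min, max], which varies the timing like a human would
  return min + Math.floor(Math.random() * (max - min + 1))
}
```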
fetchHTML
fetchHTML is a method of the myXCrawl instance above, usually used to crawl HTML pages.
Type
fetchHTML(
config: IFetchHTMLConfig,
callback?: (res: IFetchHTML) => void
): Promise<IFetchHTML>
Example
myXCrawl.fetchHTML('/xxx').then((res) => {
const { jsdom } = res.data
console.log(jsdom.window.document.querySelector('title')?.textContent)
})
fetchData
fetchData is a method of the myXCrawl instance above, usually used to crawl APIs that return JSON data and the like.
Type
fetchData<T = any>(
config: IFetchDataConfig,
callback?: (res: IFetchCommon<T>) => void
): Promise<IFetchCommonArr<T>>
Example
const requestConifg = [
{ url: '/xxxx', method: 'GET' },
{ url: '/xxxx', method: 'GET' },
{ url: '/xxxx', method: 'GET' }
]
myXCrawl.fetchData({
requestConifg,
intervalTime: { max: 5000, min: 1000 }
}).then(res => {
console.log(res)
})
fetchFile
fetchFile is a method of the myXCrawl instance above, usually used to crawl files such as images and PDF files.
Type
fetchFile(
config: IFetchFileConfig,
callback?: (res: IFetchCommon<IFileInfo>) => void
): Promise<IFetchCommonArr<IFileInfo>>
Example
import path from 'node:path'

const requestConifg = [
{ url: '/xxxx' },
{ url: '/xxxx' },
{ url: '/xxxx' }
]
myXCrawl.fetchFile({
requestConifg,
fileConfig: {
storeDir: path.resolve(__dirname, './upload')
}
}).then(fileInfos => {
console.log(fileInfos)
})
fetchPolling
fetchPolling is a method of the myXCrawl instance, typically used for polling operations, such as fetching news at regular intervals.
Type
function fetchPolling(
config: IFetchPollingConfig,
callback: (count: number) => void
): void
Example
myXCrawl.fetchPolling({ h: 1, m: 30 }, () => {
  // executed every 1 hour and 30 minutes
})
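The polling config fields (see IFetchPollingConfig below) can be combined into a single millisecond interval. The conversion below is an illustrative sketch, not x-crawl's internal code, and it approximates a year as 365 days and a month as 30 days:

```typescript
interface IFetchPollingConfig {
  Y?: number // years
  M?: number // months
  d?: number // days
  h?: number // hours
  m?: number // minutes
}

// Hypothetical helper: total polling interval in milliseconds
function pollingConfigToMs(config: IFetchPollingConfig): number {
  const { Y = 0, M = 0, d = 0, h = 0, m = 0 } = config
  const day = 24 * 3600_000
  // approximation: 1 year = 365 days, 1 month = 30 days
  return Y * 365 * day + M * 30 * day + d * day + h * 3600_000 + m * 60_000
}
```

Under this reading, { h: 1, m: 30 } in the example above would fire the callback every 90 minutes.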
Types
IAnyObject
interface IAnyObject extends Object {
[key: string | number | symbol]: any
}
IMethod
type IMethod = 'get' | 'GET' | 'delete' | 'DELETE' | 'head' | 'HEAD' | 'options' | 'OPTIONS' | 'post' | 'POST' | 'put' | 'PUT' | 'patch' | 'PATCH' | 'purge' | 'PURGE' | 'link' | 'LINK' | 'unlink' | 'UNLINK'
IRequestConfig
interface IRequestConfig {
url: string
method?: IMethod
headers?: IAnyObject
params?: IAnyObject
data?: any
timeout?: number
proxy?: string
}
IIntervalTime
type IIntervalTime = number | {
max: number
min?: number
}
IFetchBaseConifg
interface IFetchBaseConifg {
requestConifg: IRequestConfig | IRequestConfig[]
intervalTime?: IIntervalTime
}
IXCrawlBaseConifg
interface IXCrawlBaseConifg {
baseUrl?: string
timeout?: number
intervalTime?: IIntervalTime
mode?: 'async' | 'sync'
proxy?: string
}
IFetchHTMLConfig
type IFetchHTMLConfig = string | IRequestConfig
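Because IFetchHTMLConfig is a union of a plain URL string and a full request config, a caller-side normalization could look like this (hypothetical helper with a cut-down IRequestConfig, for illustration only):

```typescript
// Cut-down request config, for illustration
interface IRequestConfig {
  url: string
  timeout?: number
}
type IFetchHTMLConfig = string | IRequestConfig

// Hypothetical helper: widen a bare URL string into a request config
function normalizeConfig(config: IFetchHTMLConfig): IRequestConfig {
  return typeof config === 'string' ? { url: config } : config
}
```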
IFetchDataConfig
interface IFetchDataConfig extends IFetchBaseConifg {
}
IFetchFileConfig
interface IFetchFileConfig extends IFetchBaseConifg {
fileConfig: {
storeDir: string
extension?: string
}
}
IFetchPollingConfig
interface IFetchPollingConfig {
Y?: number
M?: number
d?: number
h?: number
m?: number
}
IFetchCommon
interface IFetchCommon<T> {
id: number
statusCode: number | undefined
headers: IncomingHttpHeaders
data: T
}
IFetchCommonArr
type IFetchCommonArr<T> = IFetchCommon<T>[]
IFileInfo
interface IFileInfo {
fileName: string
mimeType: string
size: number
filePath: string
}
IFetchHTML
interface IFetchHTML {
statusCode: number | undefined
headers: IncomingHttpHeaders
data: {
html: string
jsdom: JSDOM
}
}
More
If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues.