x-crawl
English | 简体中文
x-crawl is a multifunctional Node.js crawler library.
Features
- Crawl pages, JSON, file resources, etc. with simple configuration.
- Pages are crawled with the built-in puppeteer and parsed with the jsdom library.
- Supports crawling data asynchronously or synchronously.
- Supports getting results via Promise or callback.
- Polling function for timed crawling.
- Human-like (randomized) request intervals.
- Written in TypeScript, providing generics.
Relationship with puppeteer
The fetchPage API internally uses the puppeteer library to crawl pages.
The following can be done:
- Generate screenshots and PDFs of pages.
- Crawl a SPA (Single-Page Application) and generate pre-rendered content (i.e. "SSR" (Server-Side Rendering)).
- Automate form submission, UI testing, keyboard input, etc.
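For example, since the page instance returned by fetchPage is a puppeteer Page (see the FetchPage type below), a screenshot or PDF of a crawled page could be generated roughly like this. This is a minimal sketch: myXCrawl is a crawler instance as created in the Example section, and the URL and output paths are placeholders.
// Minimal sketch: generate a screenshot and a PDF of a crawled page.
// myXCrawl is a crawler instance (see the Example below); paths are placeholders.
myXCrawl.fetchPage('https://www.example.com').then(async (res) => {
  const { page } = res.data
  await page.screenshot({ path: './example.png' })
  await page.pdf({ path: './example.pdf' })
})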
Install
Take NPM as an example:
npm install x-crawl
Example
Example of fetching the featured video cover images from the YouTube homepage every day:
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  timeout: 10000,
  intervalTime: { max: 3000, min: 2000 }
})

myXCrawl.startPolling({ d: 1 }, () => {
  myXCrawl.fetchPage('https://www.youtube.com/').then((res) => {
    const { jsdom } = res.data
    const imgEls = jsdom.window.document.querySelectorAll(
      '.yt-core-image--fill-parent-width'
    )

    const requestConfig = []
    imgEls.forEach((item) => {
      if (item.src) {
        requestConfig.push({ url: item.src })
      }
    })

    myXCrawl.fetchFile({ requestConfig, fileConfig: { storeDir: './upload' } })
  })
})
running result:
Note: Do not crawl indiscriminately. This is only a demonstration of how to use x-crawl, with the request interval kept between 2000ms and 3000ms.
Core concepts
x-crawl
Create a crawler instance by calling xCrawl. The request queue is maintained by each instance method, not by the instance itself.
Type
For more detailed types, please see the Types section
function xCrawl(baseConfig?: XCrawlBaseConfig): XCrawlInstance
Example
const myXCrawl = xCrawl({
  baseUrl: 'https://xxx.com',
  timeout: 10000,
  intervalTime: {
    max: 2000,
    min: 1000
  }
})
The baseConfig passed in provides default values for fetchPage/fetchData/fetchFile.
Note: To avoid repeatedly creating instances in subsequent examples, myXCrawl here will be the crawler instance used in the fetchPage/fetchData/fetchFile examples.
Mode
The mode option defaults to async.
- async: In batch requests, the next request is made without waiting for the current request to complete.
- sync: In batch requests, each request must complete before the next one is made.
If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
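For example, a sync-mode instance could be created like this (a sketch; the instance name and values are arbitrary):
// Sketch: in sync mode, batch requests run one after another.
const mySyncXCrawl = xCrawl({
  mode: 'sync',
  intervalTime: { max: 3000, min: 2000 }
})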
IntervalTime
The intervalTime option defaults to undefined. If a value is set, the crawler waits for a period of time before each request, which can prevent excessive concurrency and avoid putting too much pressure on the server.
- number: A fixed time to wait before each request.
- Object: A value chosen randomly between min and max, which is more human-like.
The first request does not trigger the interval.
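For example, both forms could be set in the base config (a sketch; the instance names and values are arbitrary):
// Fixed interval: always wait 2000ms before each request
const myXCrawl1 = xCrawl({ intervalTime: 2000 })

// Random interval: wait between 1000ms and 3000ms before each request
const myXCrawl2 = xCrawl({ intervalTime: { max: 3000, min: 1000 } })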
fetchPage
fetchPage is a method of the above myXCrawl instance, usually used to crawl a page.
Type
function fetchPage: (
  config: FetchPageConfig,
  callback?: (res: FetchPage) => void
) => Promise<FetchPage>
Example
myXCrawl.fetchPage('/xxx').then((res) => {
  const { jsdom } = res.data
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})
About page
Get the page instance from res.data.page. It can be used for interactive operations such as events. For specific usage, refer to page.
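A minimal sketch of interacting with the page instance (the selectors and input text are placeholders):
myXCrawl.fetchPage('/xxx').then(async (res) => {
  const { page } = res.data

  // Placeholder selectors: type into an input and click a button
  await page.type('#search-input', 'x-crawl')
  await page.click('#search-button')
})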
fetchData
fetchData is a method of the above myXCrawl instance, usually used to crawl APIs to obtain JSON data and so on.
Type
function fetchData: <T = any>(
  config: FetchDataConfig,
  callback?: (res: FetchResCommonV1<T>) => void
) => Promise<FetchResCommonArrV1<T>>
Example
const requestConfig = [
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' }
]

myXCrawl.fetchData({
  requestConfig,
  intervalTime: { max: 5000, min: 1000 }
}).then((res) => {
  console.log(res)
})
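Based on the type above, the optional callback appears to receive each result individually, while the Promise resolves with an array of all results. A sketch:
myXCrawl.fetchData({ requestConfig }, (res) => {
  // Called with a single result (FetchResCommonV1)
  console.log(res.id, res.statusCode)
}).then((resArr) => {
  // Resolved with all results (FetchResCommonArrV1)
  console.log(resArr.length)
})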
fetchFile
fetchFile is a method of the above myXCrawl instance, usually used to crawl files such as images and PDF files.
Type
function fetchFile: (
  config: FetchFileConfig,
  callback?: (res: FetchResCommonV1<FileInfo>) => void
) => Promise<FetchResCommonArrV1<FileInfo>>
Example
import path from 'node:path'

const requestConfig = [
  { url: '/xxxx' },
  { url: '/xxxx' },
  { url: '/xxxx' }
]

myXCrawl.fetchFile({
  requestConfig,
  fileConfig: {
    storeDir: path.resolve(__dirname, './upload')
  }
}).then((fileInfos) => {
  console.log(fileInfos)
})
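Each resolved item follows FetchResCommonV1<FileInfo> (see the Types section), so the details of each stored file could be read like this (a sketch):
myXCrawl.fetchFile({
  requestConfig,
  fileConfig: { storeDir: path.resolve(__dirname, './upload') }
}).then((fileInfos) => {
  fileInfos.forEach((res) => {
    const { fileName, mimeType, size, filePath } = res.data
    console.log(fileName, mimeType, size, filePath)
  })
})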
startPolling
startPolling is a method of the myXCrawl instance, typically used to perform polling operations, such as fetching news at regular intervals.
Type
function startPolling(
  config: StartPollingConfig,
  callback: (count: number) => void
): void
Example
myXCrawl.startPolling({ h: 1, m: 30 }, () => {
  // will be executed every one and a half hours
})
Types
AnyObject
interface AnyObject extends Object {
  [key: string | number | symbol]: any
}
Method
type Method = 'get' | 'GET' | 'delete' | 'DELETE' | 'head' | 'HEAD' | 'options' | 'OPTIONS' | 'post' | 'POST' | 'put' | 'PUT' | 'patch' | 'PATCH' | 'purge' | 'PURGE' | 'link' | 'LINK' | 'unlink' | 'UNLINK'
RequestBaseConfig
interface RequestBaseConfig {
  url: string
  timeout?: number
  proxy?: string
}
RequestConfig
interface RequestConfig extends RequestBaseConfig {
  method?: Method
  headers?: AnyObject
  params?: AnyObject
  data?: any
}
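A sketch of a single request config using these fields (the URL, headers, and values are placeholders):
const requestConfig = {
  url: '/xxxx',
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  params: { id: 1 },
  data: { name: 'x-crawl' },
  timeout: 10000,
  proxy: 'http://localhost:7890'
}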
IntervalTime
type IntervalTime = number | {
  max: number
  min?: number
}
XCrawlBaseConfig
interface XCrawlBaseConfig {
  baseUrl?: string
  timeout?: number
  intervalTime?: IntervalTime
  mode?: 'async' | 'sync'
  proxy?: string
}
FetchBaseConfigV1
interface FetchBaseConfigV1 {
  requestConfig: RequestConfig | RequestConfig[]
  intervalTime?: IntervalTime
}
FetchPageConfig
type FetchPageConfig = string | RequestBaseConfig
FetchDataConfig
interface FetchDataConfig extends FetchBaseConfigV1 {}
FetchFileConfig
interface FetchFileConfig extends FetchBaseConfigV1 {
  fileConfig: {
    storeDir: string
    extension?: string
  }
}
StartPollingConfig
interface StartPollingConfig {
  d?: number // day
  h?: number // hour
  m?: number // minute
}
FetchResCommonV1
interface FetchResCommonV1<T> {
  id: number
  statusCode: number | undefined
  headers: IncomingHttpHeaders
  data: T
}
FetchResCommonArrV1
type FetchResCommonArrV1<T> = FetchResCommonV1<T>[]
FileInfo
interface FileInfo {
  fileName: string
  mimeType: string
  size: number
  filePath: string
}
FetchPage
interface FetchPage {
  httpResponse: HTTPResponse | null
  data: {
    page: Page
    jsdom: JSDOM
  }
}
More
If you have any questions or needs, please submit an Issue at https://github.com/coder-hxl/x-crawl/issues.