English | 简体中文
x-crawl is a flexible Node.js crawler library. It can crawl pages in batches, make network requests in batches, download file resources in batches, poll and crawl on a schedule, and more. It is flexible, simple to use, and friendly to JS/TS developers.
If you like x-crawl, you can give the x-crawl repository a star to support it, both as recognition of the project and as encouragement for the developer.
The crawlPage API internally uses the puppeteer library to crawl pages and exposes Browser instances and Page instances.
Take NPM as an example:
npm install x-crawl
Timed crawling: take automatically capturing the cover images of Airbnb Plus listings every day as an example:
// 1.Import module ES/CJS
import xCrawl from 'x-crawl'
// 2.Create a crawler instance
const myXCrawl = xCrawl({ intervalTime: { max: 3000, min: 2000 } })
// 3.Set the crawling task
/*
Call the startPolling API to start the polling function,
and the callback function will be called once per day
*/
myXCrawl.startPolling({ d: 1 }, async (count, stopPolling) => {
// Call crawlPage API to crawl Page
const res = await myXCrawl.crawlPage('https://zh.airbnb.com/s/*/plus_homes')
const { page } = res.data
// set request configuration
const plusBoxHandle = await page.$('.a1stauiv')
const requestConfigs = await plusBoxHandle!.$$eval(
'picture img',
(imgEls) => {
return imgEls.map((item) => item.src)
}
)
// Call the crawlFile API to crawl pictures
myXCrawl.crawlFile({ requestConfigs, fileConfig: { storeDir: './upload' } })
// Close page
page.close()
})
Running result:
Note: Do not crawl sites arbitrarily; check the site's robots.txt before crawling. This example is only intended to demonstrate how to use x-crawl.
Create a new application instance via xCrawl():
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
// options
})
For the related options, refer to XCrawlBaseConfig.
A crawler application instance has two crawling modes, asynchronous and synchronous; each crawler instance can only use one of them.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
mode: 'async'
})
The mode option defaults to async.
If an interval time is set, the crawler must wait for the interval to end before sending the next request.
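For example, a crawler instance that crawls synchronously can be created by setting mode to 'sync' (the intervalTime value here is only illustrative):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  mode: 'sync',
  intervalTime: { max: 3000, min: 2000 }
})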
import xCrawl from 'x-crawl'
const myXCrawl1 = xCrawl({
// options
})
const myXCrawl2 = xCrawl({
// options
})
Crawl a page via crawlPage().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlPage('https://xxx.com').then((res) => {
const { browser, page } = res.data
// Close the browser
browser.close()
})
It is an instance object of Browser. For specific usage, please refer to Browser.
The browser instance is a headless browser without a UI shell. What it does is bring all the modern web platform features provided by the browser rendering engine to code.
Note: The browser instance keeps an event loop running internally, which prevents the process from terminating. If you want to stop, call browser.close() to close it; do not close it if you still need to call crawlPage or use page later. Because the browser instance is shared within the crawlPage API of the same crawler instance, modifying its properties affects the browser instance inside crawlPage as well as the browser and page instances in the returned results.
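As a minimal sketch of this behavior (URLs are placeholders), the same browser instance is reused across crawlPage calls of one crawler instance, so close it only once all crawling is finished:
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

// Both calls share the same browser instance internally
myXCrawl.crawlPage('https://xxx.com/xxxx').then(async (res1) => {
  const res2 = await myXCrawl.crawlPage('https://xxx.com/xxxx')

  // Close the pages once they are no longer needed
  await res1.data.page.close()
  await res2.data.page.close()

  // Close the shared browser only when no further crawlPage calls are planned
  await res1.data.browser.close()
})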
It is an instance object of Page. The instance can also perform interactive operations such as handling events. For specific usage, please refer to [Page](https://pptr.dev/api/puppeteer.page).
The browser instance retains a reference to the page instance. If the page is no longer used later, it needs to be closed with page.close(), otherwise it will cause a memory leak.
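Because page is a regular Puppeteer Page, data can be extracted from the rendered page before closing it. A minimal sketch (the URL and selector are placeholders):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlPage('https://xxx.com').then(async (res) => {
  const { browser, page } = res.data

  // Extract the title of the rendered page
  const title = await page.$eval('title', (el) => el.textContent)
  console.log(title)

  // Close the page and browser when they are no longer needed
  await page.close()
  await browser.close()
})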
Take Screenshot
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlPage('https://xxx.com').then(async (res) => {
const { browser, page } = res.data
// Get a screenshot of the rendered page
await page.screenshot({ path: './upload/page.png' })
console.log('Screen capture is complete')
browser.close()
})
Crawl interface data through crawlData().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ intervalTime: { max: 3000, min: 1000 } })
const requestConfigs = [
'https://xxx.com/xxxx',
'https://xxx.com/xxxx',
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } }
]
myXCrawl.crawlData({ requestConfigs }).then((res) => {
// handle the results
})
Crawl file data via crawlFile().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ intervalTime: { max: 3000, min: 1000 } })
myXCrawl
.crawlFile({
requestConfigs: ['https://xxx.com/xxxx', 'https://xxx.com/xxxx'],
fileConfig: {
storeDir: './upload' // storage folder
}
})
.then((res) => {
console.log(res)
})
Start a polling crawl with startPolling().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 3000, min: 1000 }
})
myXCrawl.startPolling({ h: 2, m: 30 }, async (count, stopPolling) => {
// will be executed every two and a half hours
// crawlPage/crawlData/crawlFile
const res = await myXCrawl.crawlPage('https://xxx.com')
res.data.page.close()
})
Note on using crawlPage in polling: page.close() is called to stop the browser instance from retaining a reference to the page instance. If the current page will no longer be used, it needs to be closed explicitly, otherwise it will cause a memory leak.
Callback function parameters: count is the number of times the polling callback has run, and stopPolling is a function that ends the polling when called.
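For example, polling can be ended after a fixed number of runs by calling stopPolling. A minimal sketch (the limit of 10 runs and the 30-minute interval are illustrative, and count is assumed to increase by one per run):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.startPolling({ m: 30 }, async (count, stopPolling) => {
  const res = await myXCrawl.crawlPage('https://xxx.com')
  res.data.page.close()

  // End the polling once the callback has run 10 times
  if (count >= 10) stopPolling()
})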
Some general configuration can be set in three places: the request config, the API config, and the base config.
The priority is: request config > API config > base config.
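A minimal sketch of how the three levels interact, using maxRetry as an example (the values are only illustrative):
import xCrawl from 'x-crawl'

// Base config: maxRetry 1 applies to every crawl of this instance by default
const myXCrawl = xCrawl({ maxRetry: 1 })

myXCrawl
  .crawlData({
    // API config: maxRetry 3 overrides the base config for this call
    maxRetry: 3,
    requestConfigs: [
      'https://xxx.com/xxxx',
      // Request config: maxRetry 5 overrides both for this single request
      { url: 'https://xxx.com/xxxx', maxRetry: 5 }
    ]
  })
  .then((res) => {})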
The interval time can prevent too much concurrency and avoid putting too much pressure on the server.
The crawling interval is controlled internally by each instance method; the instance itself does not control one interval across all crawling.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData({
requestConfigs: ['https://xxx.com/xxxx', 'https://xxx.com/xxxx'],
intervalTime: { max: 2000, min: 1000 }
})
.then((res) => {})
The intervalTime option defaults to undefined. If it is set, the crawler waits for a period of time before each request, which can prevent too much concurrency and avoid putting too much pressure on the server.
Note: The first request will not trigger the interval.
Failed retry allows a request to be sent again when it fails because of a timeout or a similar error.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
intervalTime: { max: 3000, min: 1000 }
})
myXCrawl.crawlData({ url: 'https://xxx.com/xxxx', maxRetry: 1 }).then((res) => {})
The maxRetry attribute determines how many times to retry.
A priority queue allows a request to be sent first.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
intervalTime: { max: 3000, min: 1000 }
})
myXCrawl
.crawlData([
{ url: 'https://xxx.com/xxxx', priority: 1 },
{ url: 'https://xxx.com/xxxx', priority: 10 },
{ url: 'https://xxx.com/xxxx', priority: 8 }
])
.then((res) => {})
The larger the value of the priority attribute, the higher the priority in the current crawling queue.
Each request's result is uniformly wrapped in an object that provides information about that request, such as the id, the result, whether it succeeded, the maximum retry count, the number of retries, the error information collected, and so on. Whether the return value is wrapped in an array is determined automatically by the configuration you pass, and the types fit perfectly in TS.
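A minimal sketch of the two result shapes (URLs are placeholders): a single config yields a single result object, while an array (or object) config yields an array of result objects, each exposing fields such as id and isSuccess:
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

// A single config returns a single result object
myXCrawl.crawlData('https://xxx.com/xxxx').then((res) => {
  console.log(res.id, res.isSuccess, res.data)
})

// An array config returns an array of result objects
myXCrawl.crawlData(['https://xxx.com/xxxx', 'https://xxx.com/xxxx']).then((resArr) => {
  resArr.forEach((res) => {
    if (res.isSuccess) console.log(res.data?.data)
  })
})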
Create a crawler instance by calling xCrawl. The request queue is maintained by each instance method itself, not by the instance.
function xCrawl(baseConfig?: XCrawlBaseConfig): XCrawlInstance
import xCrawl from 'x-crawl'
// xCrawl API
const myXCrawl = xCrawl({
baseUrl: 'https://xxx.com',
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
Note: To avoid creating instances repeatedly, myXCrawl here refers to the crawler instance used in the crawlPage/crawlData/crawlFile examples below.
crawlPage is a method of the crawler instance, usually used to crawl pages.
function crawlPage<T extends CrawlPageConfig>(
  config: T,
  callback?: ((res: CrawlPageSingleRes) => void) | undefined
): Promise<CrawlPageRes<T>>
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
// crawlPage API
myXCrawl.crawlPage('https://xxx.com/xxxx').then((res) => {
const { browser, page } = res.data
// Close the browser
browser.close()
})
There are 4 types:
1. string
If you just want to simply crawl this page, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlPage('https://xxx.com/xxxx').then((res) => {})
2. PageRequestConfig
More configuration options of PageRequestConfig can be found in PageRequestConfig.
If you want to crawl this page and need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlPage({
url: 'https://xxx.com/xxxx',
proxy: 'xxx',
maxRetry: 1
})
.then((res) => {})
3. (string | PageRequestConfig)[]
More configuration options of PageRequestConfig can be found in PageRequestConfig.
If you want to crawl multiple pages, and some pages need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlPage(['https://xxx.com/xxxx', { url: 'https://xxx.com/xxxx', maxRetry: 2 }])
.then((res) => {})
4. CrawlPageConfigObject
For more configuration options of CrawlPageConfigObject, please refer to CrawlPageConfigObject.
If you want to crawl multiple pages, do not want to repeat the request configuration (proxy, cookies, retry, etc.), and also need an interval time, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlPage({
requestConfigs: [
'https://xxx.com/xxxx',
{ url: 'https://xxx.com/xxxx', maxRetry: 6 }
],
intervalTime: { max: 3000, min: 1000 },
cookies: 'xxx',
maxRetry: 1
}).then((res) => {})
You can choose the form that fits your actual situation.
crawlData is a method of the crawler instance, usually used to crawl APIs and obtain JSON data.
function crawlData<D = any, T extends CrawlDataConfig = CrawlDataConfig>(
  config: T,
  callback?: ((res: CrawlDataSingleRes<D>) => void) | undefined
): Promise<CrawlDataRes<D, T>>
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
myXCrawl
.crawlData({
requestConfigs: ['https://xxx.com/xxxx', 'https://xxx.com/xxxx'],
intervalTime: { max: 3000, min: 1000 },
cookies: 'xxx',
maxRetry: 1
})
.then((res) => {
console.log(res)
})
There are 4 types:
1. string
If you just want to simply crawl the data and the API uses GET, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlData('https://xxx.com/xxxx').then((res) => {})
2. DataRequestConfig
More configuration options of DataRequestConfig can be found in DataRequestConfig.
If you want to crawl this data and need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData({
url: 'https://xxx.com/xxxx',
proxy: 'xxx',
maxRetry: 1
})
.then((res) => {})
3. (string | DataRequestConfig)[]
More configuration options of DataRequestConfig can be found in DataRequestConfig.
If you want to crawl multiple pieces of data, and some of them need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData(['https://xxx.com/xxxx', { url: 'https://xxx.com/xxxx', maxRetry: 2 }])
.then((res) => {})
4. CrawlDataConfigObject
For more configuration options of CrawlDataConfigObject, please refer to CrawlDataConfigObject.
If you want to crawl multiple pieces of data, do not want to repeat the request configuration (proxy, cookies, retry, etc.), and also need an interval time, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlData({
requestConfigs: [
'https://xxx.com/xxxx',
{ url: 'https://xxx.com/xxxx', maxRetry: 6 }
],
intervalTime: { max: 3000, min: 1000 },
cookies: 'xxx',
maxRetry: 1
}).then((res) => {})
You can choose the form that fits your actual situation.
crawlFile is a method of the crawler instance, usually used to crawl files such as images and PDF files.
function crawlFile<T extends CrawlFileConfig>(
  config: T,
  callback?: ((res: CrawlFileSingleRes) => void) | undefined
): Promise<CrawlFileRes<T>>
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
// crawlFile API
myXCrawl
.crawlFile({
requestConfigs: ['https://xxx.com/xxxx', 'https://xxx.com/xxxx'],
storeDir: './upload',
intervalTime: { max: 3000, min: 1000 },
maxRetry: 1
})
.then((res) => {})
There are 3 types:
FileRequestConfig
FileRequestConfig[]
CrawlFileConfigObject
1. FileRequestConfig
More configuration options of FileRequestConfig can be found in FileRequestConfig.
If you want to crawl this file and need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlFile({
url: 'https://xxx.com/xxxx',
proxy: 'xxx',
maxRetry: 1,
storeDir: './upload',
fileName: 'xxx'
})
.then((res) => {})
2. FileRequestConfig[]
More configuration options of FileRequestConfig can be found in FileRequestConfig.
If you want to crawl multiple files, and some of them need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlFile([
{ url: 'https://xxx.com/xxxx', storeDir: './upload' },
{ url: 'https://xxx.com/xxxx', storeDir: './upload', maxRetry: 2 }
])
.then((res) => {})
3. CrawlFileConfigObject
For more configuration options of CrawlFileConfigObject, please refer to CrawlFileConfigObject.
If you want to crawl multiple files, do not want to repeat the request configuration (storeDir, proxy, retry, etc.), and also need an interval time, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlFile({
requestConfigs: [
'https://xxx.com/xxxx',
{ url: 'https://xxx.com/xxxx', storeDir: './upload/xxx' }
],
storeDir: './upload',
intervalTime: { max: 3000, min: 1000 },
maxRetry: 1
}).then((res) => {})
You can choose the form that fits your actual situation.
startPolling is a method of the crawler instance, typically used to perform polling operations, such as fetching news at regular intervals.
function startPolling(
config: StartPollingConfig,
callback: (count: number, stopPolling: () => void) => void
): void
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
// startPolling API
myXCrawl.startPolling({ h: 2, m: 30 }, (count, stopPolling) => {
// will be executed every two and a half hours
// crawlPage/crawlData/crawlFile
})
export type IntervalTime = number | { max: number; min?: number }
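As a quick illustration of this type (the values are illustrative and the unit is assumed to be milliseconds), intervalTime can be a fixed number or a random range:
import xCrawl from 'x-crawl'

// Fixed interval: wait the same amount of time between requests
const myXCrawl1 = xCrawl({ intervalTime: 2000 })

// Random interval: wait a random time between min and max before each request
const myXCrawl2 = xCrawl({ intervalTime: { max: 3000, min: 1000 } })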
export type Method =
| 'get'
| 'GET'
| 'delete'
| 'DELETE'
| 'head'
| 'HEAD'
| 'options'
| 'OPTIONS'
| 'post'
| 'POST'
| 'put'
| 'PUT'
| 'patch'
| 'PATCH'
| 'purge'
| 'PURGE'
| 'link'
| 'LINK'
| 'unlink'
| 'UNLINK'
export type PageRequestConfigCookies =
| string
| Protocol.Network.CookieParam
| Protocol.Network.CookieParam[]
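As a sketch of the three accepted cookie forms for page crawling (URL, names, and values are placeholders), the object form follows Puppeteer's CookieParam shape:
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlPage({
    url: 'https://xxx.com',
    // As a cookie string:
    cookies: 'name1=value1; name2=value2'
    // Or as a single CookieParam object:
    // cookies: { name: 'name1', value: 'value1' }
    // Or as an array of CookieParam objects:
    // cookies: [{ name: 'name1', value: 'value1' }, { name: 'name2', value: 'value2' }]
  })
  .then((res) => {
    res.data.page.close()
  })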
export interface PageRequestConfig {
url: string
headers?: AnyObject
timeout?: number
proxy?: string
cookies?: PageRequestConfigCookies
maxRetry?: number
priority?: number
}
export interface DataRequestConfig {
url: string
method?: Method
headers?: AnyObject
params?: AnyObject
data?: any
timeout?: number
proxy?: string
maxRetry?: number
priority?: number
}
export interface FileRequestConfig {
url: string
headers?: AnyObject
timeout?: number
proxy?: string
maxRetry?: number
priority?: number
storeDir?: string
fileName?: string
extension?: string
}
export interface CrawlPageConfigObject {
requestConfigs: (string | PageRequestConfig)[]
proxy?: string
timeout?: number
cookies?: PageRequestConfigCookies
intervalTime?: IntervalTime
maxRetry?: number
}
export interface CrawlDataConfigObject {
requestConfigs: (string | DataRequestConfig)[]
proxy?: string
timeout?: number
intervalTime?: IntervalTime
maxRetry?: number
}
export interface CrawlFileConfigObject {
requestConfigs: (string | FileRequestConfig)[]
proxy?: string
timeout?: number
intervalTime?: IntervalTime
maxRetry?: number
fileConfig?: {
storeDir?: string
extension?: string
beforeSave?: (info: {
id: number
fileName: string
filePath: string
data: Buffer
}) => Buffer | void
}
}
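The fileConfig.beforeSave hook receives the downloaded file's id, fileName, filePath, and data Buffer before the file is written; its Buffer | void return type suggests that a returned Buffer replaces the data to be saved. A minimal sketch (URL and storeDir are placeholders):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl
  .crawlFile({
    requestConfigs: ['https://xxx.com/xxxx'],
    fileConfig: {
      storeDir: './upload',
      beforeSave(info) {
        // Inspect the file before it is written to disk
        console.log(info.id, info.fileName, info.filePath, info.data.length)

        // Return a Buffer to replace the saved data, or return nothing to keep it as is
        return info.data
      }
    }
  })
  .then((res) => {})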
export interface XCrawlBaseConfig {
baseUrl?: string
timeout?: number
intervalTime?: IntervalTime
mode?: 'async' | 'sync'
proxy?: string
maxRetry?: number
}
export type CrawlPageConfig =
| string
| PageRequestConfig
| (string | PageRequestConfig)[]
| CrawlPageConfigObject
export type CrawlDataConfig =
| string
| DataRequestConfig
| (string | DataRequestConfig)[]
| CrawlDataConfigObject
export type CrawlFileConfig = FileRequestConfig | FileRequestConfig[] | CrawlFileConfigObject
export interface StartPollingConfig {
d?: number
h?: number
m?: number
}
export interface XCrawlInstance {
crawlPage: <T extends CrawlPageConfig>(
config: T,
callback?: ((res: CrawlPageSingleRes) => void) | undefined
) => Promise<CrawlPageRes<T>>
crawlData: <D = any, T extends CrawlDataConfig = CrawlDataConfig>(
config: T,
callback?: ((res: CrawlDataSingleRes<D>) => void) | undefined
) => Promise<CrawlDataRes<D, T>>
crawlFile: <T extends CrawlFileConfig>(
config: T,
callback?: ((res: CrawlFileSingleRes) => void) | undefined
) => Promise<CrawlFileRes<T>>
startPolling: (
config: StartPollingConfig,
callback: (count: number, stopPolling: () => void) => void
) => void
}
export interface CrawlCommonRes {
id: number
isSuccess: boolean
maxRetry: number
crawlCount: number
retryCount: number
errorQueue: Error[]
}
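Every result extends CrawlCommonRes, so a failed crawl can be inspected through these fields. A minimal sketch (the URL is a placeholder):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlData({ url: 'https://xxx.com/xxxx', maxRetry: 2 }).then((res) => {
  if (!res.isSuccess) {
    // How many times this request was crawled and retried
    console.log(res.id, res.crawlCount, res.retryCount)

    // The errors collected across the attempts
    console.log(res.errorQueue.map((err) => err.message))
  }
})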
export interface CrawlPageSingleRes extends CrawlCommonRes {
data: {
browser: Browser
response: HTTPResponse | null
page: Page
}
}
export interface CrawlDataSingleRes<D> extends CrawlCommonRes {
data: {
statusCode: number | undefined
headers: IncomingHttpHeaders
data: D
} | null
}
export interface CrawlFileSingleRes extends CrawlCommonRes {
data: {
statusCode: number | undefined
headers: IncomingHttpHeaders
data: {
isSuccess: boolean
fileName: string
fileExtension: string
mimeType: string
size: number
filePath: string
}
} | null
}
export type CrawlPageRes<R extends CrawlPageConfig> = R extends
| (string | PageRequestConfig)[]
| CrawlPageConfigObject
? CrawlPageSingleRes[]
: CrawlPageSingleRes
export type CrawlDataRes<D, R extends CrawlDataConfig> = R extends
| (string | DataRequestConfig)[]
| CrawlDataConfigObject
? CrawlDataSingleRes<D>[]
: CrawlDataSingleRes<D>
export type CrawlFileRes<R extends CrawlFileConfig> = R extends
| FileRequestConfig[]
| CrawlFileConfigObject
? CrawlFileSingleRes[]
: CrawlFileSingleRes
export interface AnyObject extends Object {
[key: string | number | symbol]: any
}
If you have problems, needs, or good suggestions, please raise an Issue at https://github.com/coder-hxl/x-crawl/issues.
v5.0.0 (2023-04-06)