x-crawl is a flexible, multifunctional Node.js crawler library. Its flexible usage and numerous features help you crawl pages, interfaces, and files quickly, safely, and stably.
If you like x-crawl, you can give the x-crawl repository a star to support it. Thank you for your support!
x-crawl is an open source project under the MIT license, completely free to use. If you benefit from the projects I develop and maintain at work, please consider supporting my work through the Afdian platform.
Take NPM as an example:
npm install x-crawl
Take automatically crawling some photos of experiences and homes around the world every day as an example:
// 1. Import the module (ES/CJS)
import xCrawl from 'x-crawl'
// 2. Create a crawler instance
const myXCrawl = xCrawl({ maxRetry: 3, intervalTime: { max: 3000, min: 2000 } })
// 3. Set the crawling task
/*
Call the startPolling API to start the polling function,
and the callback function will be called once a day
*/
myXCrawl.startPolling({ d: 1 }, async (count, stopPolling) => {
// Call crawlPage API to crawl Page
const res = await myXCrawl.crawlPage({
targets: [
'https://www.airbnb.cn/s/experiences',
'https://www.airbnb.cn/s/plus_homes'
],
viewport: { width: 1920, height: 1080 }
})
// Store the image URLs in targets
const targets = []
const elSelectorMap = ['._fig15y', '._aov0j6']
for (const item of res) {
const { id } = item
const { page } = item.data
// Wait for the page to load
await new Promise((r) => setTimeout(r, 300))
// Get the URLs of the images on the page
const urls = await page.$$eval(`${elSelectorMap[id - 1]} img`, (imgEls) => {
return imgEls.map((item) => item.src)
})
targets.push(...urls)
// Close page
page.close()
}
// Call the crawlFile API to crawl pictures
myXCrawl.crawlFile({ targets, storeDirs: './upload' })
})
Running result:
Note: Do not crawl arbitrarily; check the site's robots.txt before crawling. This example is only intended to demonstrate how to use x-crawl.
Create a new application instance via xCrawl():
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
// options
})
For related options, refer to XCrawlBaseConfig.
A crawler application instance has two crawling modes, asynchronous and synchronous, and each crawler instance can only use one of them.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
mode: 'async'
})
The mode option defaults to async.
If there is an interval time set, it is necessary to wait for the end of the interval time before crawling the next target.
Note: The crawling process of the crawling API is performed separately, and this mode is only valid for batch crawling targets.
The enableRandomFingerprint property controls whether the default random fingerprint is used; you can also configure custom fingerprints through the subsequent crawling options.
Device fingerprints are set to prevent websites from identifying and tracking us from different locations through fingerprint recognition.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
enableRandomFingerprint: true
})
The enableRandomFingerprint option defaults to true.
import xCrawl from 'x-crawl'
const myXCrawl1 = xCrawl({
// options
})
const myXCrawl2 = xCrawl({
// options
})
Crawl a page via crawlPage().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlPage('https://www.example.com').then((res) => {
const { browser, page } = res.data
// Close the browser
browser.close()
})
When you call the crawlPage API to crawl pages in the same crawler instance, the same browser instance is used, because the browser instance is shared across crawlPage calls within that crawler instance. For specific usage, please refer to Browser.
Note: The browser keeps running, so the process will not terminate on its own. If you want to stop, execute browser.close() to close it. Do not close the browser if you still need to call crawlPage or use a page later, because the browser instance is shared across crawlPage calls within the same crawler instance.
When you call the crawlPage API to crawl pages in the same crawler instance, a new page instance is generated from the shared browser instance. For specific usage, please refer to Page.
The browser instance retains a reference to each page instance. If a page is no longer used, you need to close it yourself, otherwise it will cause a memory leak.
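As a rough sketch (the URLs are placeholders), two crawlPage calls in the same crawler instance reuse one browser, while each call yields its own page that should be closed once it is no longer needed:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

;(async () => {
  // Both calls share the same browser instance within this crawler instance
  const res1 = await myXCrawl.crawlPage('https://www.example.com/page-1')
  const res2 = await myXCrawl.crawlPage('https://www.example.com/page-2')

  // Each call creates its own page instance; close it to avoid memory leaks
  await res1.data.page.close()
  await res2.data.page.close()

  // Close the shared browser only when no further crawlPage calls are planned
  await res1.data.browser.close()
})()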
Take Screenshot
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlPage('https://www.example.com').then(async (res) => {
const { browser, page } = res.data
// Get a screenshot of the rendered page
await page.screenshot({ path: './upload/page.png' })
console.log('Screen capture is complete')
browser.close()
})
Lifecycle functions owned by the crawlPage API:
onCrawlItemComplete: called back when each crawl is complete
In the onCrawlItemComplete function, you can get the result of each crawled target in advance.
Note: If you need to crawl many pages at one time, use this lifecycle function to process the result of each target and close its page instance once that page has been crawled. If you do not close the page instances, the program may crash because too many pages are open.
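For example, a minimal sketch (the target URLs are illustrative) that uses onCrawlItemComplete to handle and close each page as soon as it finishes, rather than keeping every page open:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlPage({
  targets: [
    'https://www.example.com/page-1',
    'https://www.example.com/page-2'
  ],
  onCrawlItemComplete(res) {
    const { page } = res.data
    // Process the result of this target here, then close its page
    // so that many open pages do not exhaust memory
    page.close()
  }
})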
Disable running the browser in headless mode.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
maxRetry: 3,
// Cancel running the browser in headless mode
crawlPage: { launchBrowser: { headless: false } }
})
myXCrawl.crawlPage('https://www.example.com').then((res) => {})
Crawl interface data through crawlData().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ intervalTime: { max: 3000, min: 1000 } })
const targets = [
'https://www.example.com/api-1',
'https://www.example.com/api-2',
{
url: 'https://www.example.com/api-3',
method: 'POST',
data: { name: 'coderhxl' }
}
]
myXCrawl.crawlData({ targets }).then((res) => {
// handle the results
})
Lifecycle functions owned by the crawlData API:
onCrawlItemComplete: called back when each crawl is complete
In the onCrawlItemComplete function, you can get the result of each crawled target in advance.
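For example, a minimal sketch (the API URLs are placeholders) that logs each result as soon as it completes via onCrawlItemComplete:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlData({
  targets: ['https://www.example.com/api-1', 'https://www.example.com/api-2'],
  onCrawlItemComplete(res) {
    // Called once per target as soon as its crawl finishes
    console.log(res.id, res.isSuccess, res.data?.data)
  }
})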
Crawl file data via crawlFile().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ intervalTime: { max: 3000, min: 1000 } })
myXCrawl
.crawlFile({
targets: [
'https://www.example.com/file-1',
'https://www.example.com/file-2'
],
storeDirs: './upload' // storage folder
})
.then((res) => {
console.log(res)
})
Lifecycle functions owned by the crawlFile API:
onCrawlItemComplete: called back when each crawl is complete
onBeforeSaveItemFile: called back before saving the file
In the onCrawlItemComplete function, you can get the result of each crawled target in advance.
In the onBeforeSaveItemFile function, you can get a Buffer of the file and process it as needed. You then need to return a Promise that resolves to a Buffer, which will replace the original Buffer and be stored in the file.
Resize Picture
Use the sharp library to resize the images to be crawled:
import xCrawl from 'x-crawl'
import sharp from 'sharp'
const myXCrawl = xCrawl()
myXCrawl
.crawlFile({
targets: [
'https://www.example.com/file-1.jpg',
'https://www.example.com/file-2.jpg'
],
onBeforeSaveItemFile(info) {
return sharp(info.data).resize(200).toBuffer()
}
})
.then((res) => {
res.forEach((item) => {
console.log(item.data?.data.isSuccess)
})
})
Start a polling crawl with startPolling().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 3000, min: 1000 }
})
myXCrawl.startPolling({ h: 2, m: 30 }, async (count, stopPolling) => {
// will be executed every two and a half hours
// crawlPage/crawlData/crawlFile
const res = await myXCrawl.crawlPage('https://www.example.com')
res.data.page.close()
})
Note on using crawlPage in polling: the browser instance retains a reference to the page instance. If a page is no longer used, you need to close it yourself, otherwise it will cause a memory leak.
Callback function parameters: count (the number of times polling has run) and stopPolling (a function that stops subsequent polling).
Some common configurations can be set in three places: application instance configuration, advanced configuration, and detailed target configuration.
The priority is: detailed target configuration > advanced configuration > application instance configuration
Take crawlPage to crawl two pages as an example:
import xCrawl from 'x-crawl'
// Application instance configuration
const testXCrawl = xCrawl({
proxy: {
urls: [
'https://www.example.com/proxy-1',
'https://www.example.com/proxy-2',
'https://www.example.com/proxy-3'
],
switchByErrorCount: 3,
switchByHttpStatus: [401, 403]
}
})
// Advanced configuration
testXCrawl
.crawlPage({
targets: [
'https://www.example.com/page-1',
'https://www.example.com/page-2',
// Detailed target configuration
{
url: 'https://www.example.com/page-3',
proxy: { urls: ['https://www.example.com/proxy-5'] }
}
],
maxRetry: 10,
proxy: {
urls: [
'https://www.example.com/proxy-3',
'https://www.example.com/proxy-4'
],
switchByErrorCount: 3,
switchByHttpStatus: [401, 403]
}
})
.then((res) => {})
In the above example, a proxy is set in the application instance configuration, the advanced configuration, and the detailed target configuration. page-3 will use its own proxy configuration, while page-1 and page-2 will use the proxy configuration from the advanced configuration.
The interval time can prevent too much concurrency and avoid too much pressure on the server.
The crawling interval is controlled by the crawling API itself, not by the crawler instance.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData({
targets: ['https://www.example.com/api-1', 'https://www.example.com/api-2'],
intervalTime: { max: 2000, min: 1000 }
})
.then((res) => {})
The intervalTime option defaults to undefined. If a value is set, the crawler will wait for a period of time before each request, which prevents excessive concurrency and avoids putting too much pressure on the server.
Note: The first crawl target will not trigger the interval.
Failure retry can avoid crawling failures caused by temporary problems; a failed target will be crawled again after the current round of crawling targets has finished.
It can be set in three places: the crawler application instance, advanced configuration, and detailed target configuration.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData({ url: 'https://www.example.com/api', maxRetry: 9 })
.then((res) => {})
The maxRetry attribute determines how many times to retry.
Combined with failure retry, custom error counts, and HTTP status codes, the proxy is automatically rotated for crawling targets.
It can be set in three places: the crawler application instance, advanced configuration, and detailed target configuration.
Take crawlPage as an example:
import xCrawl from 'x-crawl'
const testXCrawl = xCrawl()
testXCrawl
.crawlPage({
targets: [
'https://www.example.com/page-1',
'https://www.example.com/page-2',
'https://www.example.com/page-3',
'https://www.example.com/page-4',
// Cancel the proxy for this target
{ url: 'https://www.example.com/page-6', proxy: null },
// Set the proxy individually for this target
{
url: 'https://www.example.com/page-6',
proxy: {
urls: [
'https://www.example.com/proxy-4',
'https://www.example.com/proxy-5'
],
switchByErrorCount: 3
}
}
],
maxRetry: 10,
// Set the proxy uniformly for these targets
proxy: {
urls: [
'https://www.example.com/proxy-1',
'https://www.example.com/proxy-2',
'https://www.example.com/proxy-3'
],
switchByErrorCount: 3,
switchByHttpStatus: [401, 403]
}
})
.then((res) => {})
Note: This feature only works in combination with failure retry.
Customizing device fingerprint configuration can prevent websites from identifying and tracking us from different locations through fingerprint recognition.
Multiple fingerprints can be passed through the fingerprints option in the advanced configuration, and x-crawl will internally assign one at random to each target in targets. A specific fingerprint can also be set for a target directly in its detailed target configuration.
Take crawlPage as an example:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ intervalTime: { max: 5000, min: 3000 } })
myXCrawl.crawlPage({
targets: [
'https://www.example.com/page-1',
'https://www.example.com/page-2',
'https://www.example.com/page-3',
// Cancel the fingerprint for this target
{ url: 'https://www.example.com/page-4', fingerprint: null },
// Set a separate fingerprint for this target
{
url: 'https://www.example.com/page-5',
fingerprint: {
mobile: 'random',
platform: 'Windows',
acceptLanguage: `zh-CN,zh;q=0.9,en;q=0.8`,
userAgent: {
value:
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
versions: [
{ name: 'Chrome', maxMinorVersion: 10, maxPatchVersion: 5615 },
{ name: 'Safari', maxMinorVersion: 36, maxPatchVersion: 2333 }
]
}
}
}
],
// Set fingerprints uniformly for these targets
fingerprints: [
// Device fingerprint 1
{
maxWidth: 1024,
maxHeight: 800,
platform: 'Windows',
mobile: 'random',
userAgent: {
value:
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36',
versions: [
{
name: 'Chrome',
// Browser version
maxMajorVersion: 112,
minMajorVersion: 100,
maxMinorVersion: 20,
maxPatchVersion: 5000
},
{
name: 'Safari',
maxMajorVersion: 537,
minMajorVersion: 500,
maxMinorVersion: 36,
maxPatchVersion: 5000
}
]
}
},
// Device fingerprint 2
{
platform: 'Windows',
mobile: 'random',
userAgent: {
value:
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.59',
versions: [
{
name: 'Chrome',
maxMajorVersion: 91,
minMajorVersion: 88,
maxMinorVersion: 10,
maxPatchVersion: 5615
},
{ name: 'Safari', maxMinorVersion: 36, maxPatchVersion: 2333 },
{ name: 'Edg', maxMinorVersion: 10, maxPatchVersion: 864 }
]
}
},
// Device fingerprint 3
{
platform: 'Windows',
mobile: 'random',
userAgent: {
value:
'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0',
versions: [
{
name: 'Firefox',
maxMajorVersion: 47,
minMajorVersion: 43,
maxMinorVersion: 10,
maxPatchVersion: 5000
}
]
}
}
]
})
For more fingerprint options, see the corresponding configuration.
A priority queue allows crawl targets with higher priority to be sent first.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData([
{ url: 'https://www.example.com/api-1', priority: 1 },
{ url: 'https://www.example.com/api-2', priority: 10 },
{ url: 'https://www.example.com/api-3', priority: 8 }
])
.then((res) => {})
The larger the value of the priority attribute, the higher the priority in the current crawling queue.
Each crawl target generates a detail object, which contains properties such as id, isSuccess, maxRetry, retryCount, crawlErrorQueue, and data (see the result types below).
Whether the detail objects are returned as an array or as a single object is determined automatically by the configuration method you choose, and the return types match perfectly in TypeScript.
Details about configuration methods and results are as follows: crawlPage config, crawlData config, crawlFile config.
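For instance, a brief sketch (the example URLs are placeholders) that reads the common result properties for a single target and for multiple targets:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

// A single target returns a single detail object
myXCrawl.crawlData('https://www.example.com/api').then((res) => {
  console.log(res.id, res.isSuccess, res.retryCount, res.crawlErrorQueue)
})

// An array of targets returns an array of detail objects
myXCrawl
  .crawlData(['https://www.example.com/api-1', 'https://www.example.com/api-2'])
  .then((resArr) => {
    resArr.forEach((res) => console.log(res.isSuccess))
  })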
Type systems like TypeScript can detect many common errors at compile time through static analysis. This reduces runtime errors and gives us more confidence when refactoring large projects. TypeScript also improves the development experience and efficiency through type-based auto-completion in the IDE.
x-crawl itself is written in TypeScript and supports TypeScript. It comes with a type declaration file and works out of the box.
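As an illustration (the API URL and the ApiData shape are hypothetical), the generic crawlData<T> call types the returned data:

import xCrawl from 'x-crawl'

// Hypothetical shape of the JSON returned by the interface
interface ApiData {
  name: string
  age: number
}

const myXCrawl = xCrawl()

myXCrawl.crawlData<ApiData>('https://www.example.com/api').then((res) => {
  // res.data.data is typed as ApiData
  console.log(res.data?.data.name)
})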
Create a crawler instance by calling xCrawl. The crawl target queue is maintained internally by the crawling method itself, not by the crawler instance.
The xCrawl API is a function.
function xCrawl(baseConfig?: XCrawlBaseConfig): XCrawlInstance
Parameter Type:
Return value type:
import xCrawl from 'x-crawl'
// xCrawl API
const myXCrawl = xCrawl({
baseUrl: 'https://www.example.com',
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
Note: To avoid repeatedly creating instances in subsequent examples, myXCrawl here refers to the crawler instance used in the crawlPage/crawlData/crawlFile examples.
crawlPage is a method of the crawler instance, usually used to crawl pages.
The crawlPage API is a function. Its type is an overloaded function, so it can be called with different configuration parameters (in terms of types).
type crawlPage = {
(
config: string,
callback?: (res: CrawlPageSingleResult) => void
): Promise<CrawlPageSingleResult>
(
config: CrawlPageDetailTargetConfig,
callback?: (res: CrawlPageSingleResult) => void
): Promise<CrawlPageSingleResult>
(
config: (string | CrawlPageDetailTargetConfig)[],
callback?: (res: CrawlPageSingleResult[]) => void
): Promise<CrawlPageSingleResult[]>
(
config: CrawlPageAdvancedConfig,
callback?: (res: CrawlPageSingleResult[]) => void
): Promise<CrawlPageSingleResult[]>
}
Parameter Type:
Return value type:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
// crawlPage API
myXCrawl.crawlPage('https://www.example.com').then((res) => {
const { browser, page } = res.data
// Close the browser
browser.close()
})
There are 4 types:
This is a simple target configuration. If you just want to crawl this page, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlPage('https://www.example.com').then((res) => {})
The res you get will be an object.
This is the detailed target configuration. If you want to crawl this page and need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlPage({
url: 'https://www.example.com',
proxy: 'xxx',
maxRetry: 1
})
.then((res) => {})
The res you get will be an object.
More configuration options can be viewed in CrawlPageDetailTargetConfig.
This is a mixed target array configuration. If you want to crawl multiple pages and some pages need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlPage([
'https://www.example.com/page-1',
{ url: 'https://www.example.com/page-2', maxRetry: 2 }
])
.then((res) => {})
The res you get will be an array of objects.
More configuration options can be viewed in CrawlPageDetailTargetConfig.
This is an advanced configuration, where targets is a mixed target array configuration. If you want to crawl multiple pages without repeating crawl target configurations (proxy, cookies, retry, etc.), and also need interval time, device fingerprints, lifecycle functions, and so on, try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlPage({
targets: [
'https://www.example.com/page-1',
{ url: 'https://www.example.com/page-2', maxRetry: 6 }
],
intervalTime: { max: 3000, min: 1000 },
cookies: 'xxx',
maxRetry: 1
})
.then((res) => {})
The res you get will be an array of objects.
More configuration options can be viewed in CrawlPageAdvancedConfig.
More information about the results can be found in About results; choose what fits the actual situation.
crawlData is a method of the crawler instance, usually used to crawl APIs and obtain JSON data, and so on.
The crawlData API is a function. Its type is an overloaded function, so it can be called with different configuration parameters (in terms of types).
type crawlData = {
<T = any>(
config: CrawlDataDetailTargetConfig,
callback?: (res: CrawlDataSingleResult<T>) => void
): Promise<CrawlDataSingleResult<T>>
<T = any>(
config: string,
callback?: (res: CrawlDataSingleResult<T>) => void
): Promise<CrawlDataSingleResult<T>>
<T = any>(
config: (string | CrawlDataDetailTargetConfig)[],
callback?: (res: CrawlDataSingleResult<T>[]) => void
): Promise<CrawlDataSingleResult<T>[]>
<T = any>(
config: CrawlDataAdvancedConfig<T>,
callback?: (res: CrawlDataSingleResult<T>[]) => void
): Promise<CrawlDataSingleResult<T>[]>
}
Parameter Type:
Return value type:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
myXCrawl
.crawlData({
targets: ['https://www.example.com/api-1', 'https://www.example.com/api-2'],
intervalTime: { max: 3000, min: 1000 },
cookies: 'xxx',
maxRetry: 1
})
.then((res) => {
console.log(res)
})
There are 4 types:
This is a simple target configuration. If you just want to crawl the data and the interface is a GET request, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl.crawlData('https://www.example.com/api').then((res) => {})
The res you get will be an object.
This is the detailed target configuration. If you want to crawl this data and need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData({
url: 'https://www.example.com/api',
proxy: 'xxx',
maxRetry: 1
})
.then((res) => {})
The res you get will be an object.
More configuration options can be viewed in CrawlDataDetailTargetConfig.
This is a mixed target array configuration. If you want to crawl multiple pieces of data and some of them need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData([
'https://www.example.com/api-1',
{ url: 'https://www.example.com/api-2', maxRetry: 2 }
])
.then((res) => {})
The res you get will be an array of objects.
More configuration options can be viewed in CrawlDataDetailTargetConfig.
This is an advanced configuration, where targets is a mixed target array configuration. If you want to crawl multiple pieces of data without repeating crawl target configurations (proxy, cookies, retry, etc.), and also need interval time, device fingerprints, lifecycle functions, and so on, try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlData({
targets: [
'https://www.example.com/api-1',
{ url: 'https://www.example.com/api-2', maxRetry: 6 }
],
intervalTime: { max: 3000, min: 1000 },
cookies: 'xxx',
maxRetry: 1
})
.then((res) => {})
The res you get will be an array of objects.
More configuration options can be viewed in CrawlDataAdvancedConfig.
More information about the results can be found in About results; choose what fits the actual situation.
crawlFile is a method of the crawler instance, usually used to crawl files such as images and PDF files.
The crawlFile API is a function. Its type is an overloaded function, so it can be called with different configuration parameters (in terms of types).
type crawlFile = {
(
config: CrawlFileDetailTargetConfig,
callback?: (res: CrawlFileSingleResult) => void
): Promise<CrawlFileSingleResult>
(
config: CrawlFileDetailTargetConfig[],
callback?: (res: CrawlFileSingleResult[]) => void
): Promise<CrawlFileSingleResult[]>
(
config: CrawlFileAdvancedConfig,
callback?: (res: CrawlFileSingleResult[]) => void
): Promise<CrawlFileSingleResult[]>
}
Parameter Type:
Return value type:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
// crawlFile API
myXCrawl
.crawlFile({
targets: [
'https://www.example.com/file-1',
'https://www.example.com/file-2'
],
storeDirs: './upload',
intervalTime: { max: 3000, min: 1000 },
maxRetry: 1
})
.then((res) => {})
There are 3 types:
This is the detailed target configuration. If you want to crawl this file and need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlFile({
url: 'https://www.example.com/file',
proxy: 'xxx',
maxRetry: 1,
storeDir: './upload',
fileName: 'xxx'
})
.then((res) => {})
The res you get will be an object.
More configuration options can be viewed in CrawlFileDetailTargetConfig.
This is a detailed target array configuration. If you want to crawl multiple files and some of them need to retry on failure, you can try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlFile([
{ url: 'https://www.example.com/file-1', storeDir: './upload' },
{ url: 'https://www.example.com/file-2', storeDir: './upload', maxRetry: 2 }
])
.then((res) => {})
The res you get will be an array of objects.
More configuration options can be viewed in CrawlFileDetailTargetConfig.
This is an advanced configuration, where targets is a mixed target array configuration. If you want to crawl multiple files without repeating crawl target configurations (proxy, storeDir, retry, etc.), and also need interval time, device fingerprints, lifecycle functions, and so on, try this way of writing:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl()
myXCrawl
.crawlFile({
targets: [
'https://www.example.com/file-1',
{ url: 'https://www.example.com/file-2', storeDir: './upload/xxx' }
],
storeDirs: './upload',
intervalTime: { max: 3000, min: 1000 },
maxRetry: 1
})
.then((res) => {})
The res you get will be an array of objects.
More configuration options can be viewed in CrawlFileAdvancedConfig.
More information about the results can be found in About results; choose what fits the actual situation.
startPolling is a method of the crawler instance, typically used to perform polling operations, such as fetching news at regular intervals.
function startPolling(
config: StartPollingConfig,
callback: (count: number, stopPolling: () => void) => void
): void
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
// startPolling API
myXCrawl.startPolling({ h: 2, m: 30 }, (count, stopPolling) => {
// will be executed every two and a half hours
// crawlPage/crawlData/crawlFile
})
export interface XCrawlConfig extends CrawlCommonConfig {
mode?: 'async' | 'sync'
enableRandomFingerprint?: boolean
baseUrl?: string
intervalTime?: IntervalTime
crawlPage?: {
launchBrowser?: PuppeteerLaunchOptions // puppeteer
}
}
Default Value
export interface CrawlPageDetailTargetConfig extends CrawlCommonConfig {
url: string
headers?: AnyObject | null
cookies?: PageCookies | null
priority?: number
viewport?: Viewport | null // puppeteer
fingerprint?:
| (DetailTargetFingerprintCommon & {
maxWidth?: number
minWidth?: number
maxHeight?: number
minHidth?: number
})
| null
}
Default Value
export interface CrawlDataDetailTargetConfig extends CrawlCommonConfig {
url: string
method?: Method
headers?: AnyObject | null
params?: AnyObject
data?: any
priority?: number
fingerprint?: DetailTargetFingerprintCommon | null
}
Default Value
export interface CrawlFileDetailTargetConfig extends CrawlCommonConfig {
url: string
headers?: AnyObject | null
priority?: number
storeDir?: string | null
fileName?: string | null
extension?: string | null
fingerprint?: DetailTargetFingerprintCommon | null
}
Default Value
export interface CrawlPageAdvancedConfig extends CrawlCommonConfig {
targets: (string | CrawlPageDetailTargetConfig)[]
intervalTime?: IntervalTime
fingerprints?: (DetailTargetFingerprintCommon & {
maxWidth?: number
minWidth?: number
maxHeight?: number
minHidth?: number
})[]
headers?: AnyObject
cookies?: PageCookies
viewport?: Viewport
onCrawlItemComplete?: (crawlPageSingleResult: CrawlPageSingleResult) => void
}
Default Value
export interface CrawlDataAdvancedConfig<T> extends CrawlCommonConfig {
targets: (string | CrawlDataDetailTargetConfig)[]
intervalTime?: IntervalTime
fingerprints?: DetailTargetFingerprintCommon[]
headers?: AnyObject
onCrawlItemComplete?: (
crawlDataSingleResult: CrawlDataSingleResult<T>
) => void
}
Default Value
export interface CrawlFileAdvancedConfig extends CrawlCommonConfig {
targets: (string | CrawlFileDetailTargetConfig)[]
intervalTime?: IntervalTime
fingerprints?: DetailTargetFingerprintCommon[]
storeDirs?: string | (string | null)[]
extensions?: string | (string | null)[]
fileNames?: (string | null)[]
headers?: AnyObject
onCrawlItemComplete?: (crawlFileSingleResult: CrawlFileSingleResult) => void
onBeforeSaveItemFile?: (info: {
id: number
fileName: string
filePath: string
data: Buffer
}) => Promise<Buffer>
}
Default Value
export interface StartPollingConfig {
d?: number
h?: number
m?: number
}
Default Value
export interface CrawlCommonConfig {
timeout?: number | null
proxy?: {
urls: string[]
switchByHttpStatus?: number[]
switchByErrorCount?: number
} | null
maxRetry?: number | null
}
Default Value
export interface DetailTargetFingerprintCommon {
ua?: string
mobile?: '?0' | '?1' | 'random'
platform?: Platform
platformVersion?: string
acceptLanguage?: string
userAgent?: {
value: string
versions?: {
name: string
maxMajorVersion?: number
minMajorVersion?: number
maxMinorVersion?: number
minMinorVersion?: number
maxPatchVersion?: number
minPatchVersion?: number
}[]
}
}
Default Value
export type Mobile = '?0' | '?1'
export type Platform =
| 'Android'
| 'Chrome OS'
| 'Chromium OS'
| 'iOS'
| 'Linux'
| 'macOS'
| 'Windows'
| 'Unknown'
export type PageCookies =
| string
| Protocol.Network.CookieParam // puppeteer
| Protocol.Network.CookieParam[] // puppeteer
export type Method =
| 'get'
| 'GET'
| 'delete'
| 'DELETE'
| 'head'
| 'HEAD'
| 'options'
| 'OPTIONS'
| 'post'
| 'POST'
| 'put'
| 'PUT'
| 'patch'
| 'PATCH'
| 'purge'
| 'PURGE'
| 'link'
| 'LINK'
| 'unlink'
| 'UNLINK'
export type IntervalTime = number | { max: number; min?: number }
export interface XCrawlInstance {
crawlPage: {
(
config: string,
callback?: (res: CrawlPageSingleResult) => void
): Promise<CrawlPageSingleResult>
(
config: CrawlPageDetailTargetConfig,
callback?: (res: CrawlPageSingleResult) => void
): Promise<CrawlPageSingleResult>
(
config: (string | CrawlPageDetailTargetConfig)[],
callback?: (res: CrawlPageSingleResult[]) => void
): Promise<CrawlPageSingleResult[]>
(
config: CrawlPageAdvancedConfig,
callback?: (res: CrawlPageSingleResult[]) => void
): Promise<CrawlPageSingleResult[]>
}
crawlData: {
<T = any>(
config: CrawlDataDetailTargetConfig,
callback?: (res: CrawlDataSingleResult<T>) => void
): Promise<CrawlDataSingleResult<T>>
<T = any>(
config: string,
callback?: (res: CrawlDataSingleResult<T>) => void
): Promise<CrawlDataSingleResult<T>>
<T = any>(
config: (string | CrawlDataDetailTargetConfig)[],
callback?: (res: CrawlDataSingleResult<T>[]) => void
): Promise<CrawlDataSingleResult<T>[]>
<T = any>(
config: CrawlDataAdvancedConfig<T>,
callback?: (res: CrawlDataSingleResult<T>[]) => void
): Promise<CrawlDataSingleResult<T>[]>
}
crawlFile: {
(
config: CrawlFileDetailTargetConfig,
callback?: (res: CrawlFileSingleResult) => void
): Promise<CrawlFileSingleResult>
(
config: CrawlFileDetailTargetConfig[],
callback?: (res: CrawlFileSingleResult[]) => void
): Promise<CrawlFileSingleResult[]>
(
config: CrawlFileAdvancedConfig,
callback?: (res: CrawlFileSingleResult[]) => void
): Promise<CrawlFileSingleResult[]>
}
startPolling: (
config: StartPollingConfig,
callback: (count: number, stopPolling: () => void) => void
) => void
}
export interface CrawlCommonResult {
id: number
isSuccess: boolean
maxRetry: number
retryCount: number
proxyDetails: ProxyDetails
crawlErrorQueue: Error[]
}
export interface CrawlPageSingleResult extends CrawlCommonResult {
data: {
browser: Browser // puppeteer
response: HTTPResponse | null // puppeteer
page: Page // puppeteer
}
}
export interface CrawlDataSingleResult<D> extends CrawlCommonResult {
data: {
statusCode: number | undefined
headers: IncomingHttpHeaders // nodejs http
data: D
} | null
}
export interface CrawlFileSingleResult extends CrawlCommonResult {
data: {
statusCode: number | undefined
headers: IncomingHttpHeaders // nodejs http
data: {
isSuccess: boolean
fileName: string
fileExtension: string
mimeType: string
size: number
filePath: string
}
} | null
}
export interface AnyObject extends Object {
[key: string | number | symbol]: any
}
The crawlPage API has puppeteer built in. You only need to pass in some configuration options and x-crawl will simplify the operation for you, handing over the intact Browser instance and Page instance; x-crawl does not override them.
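For example, a short sketch (the URL and the extracted value are placeholders) that works with the intact Page instance through regular Puppeteer APIs before closing everything:

import xCrawl from 'x-crawl'

const myXCrawl = xCrawl()

myXCrawl.crawlPage('https://www.example.com').then(async (res) => {
  const { browser, page } = res.data

  // Use any Puppeteer Page API directly, e.g. evaluate code in the page context
  const title = await page.evaluate(() => document.title)
  console.log(title)

  await page.close()
  await browser.close()
})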
Discord Chat: Ask questions and discuss live with other x-crawl users via Discord.
GitHub Discussions: Use GitHub Discussions for message board-style questions and discussions.
If you have questions, needs, or good suggestions, you can raise them at GitHub Issues.