English | 简体中文
x-crawl is a flexible Node.js crawler library. It can crawl pages, control page operations, make batch network requests, and batch-download file resources, and it supports crawling data in asynchronous or synchronous mode. It runs on Node.js, is flexible and simple to use, and is friendly to JS/TS developers.
If you find x-crawl useful, you can give the x-crawl repository a Star to support it; your Star will be the motivation for further updates.
The crawlPage API uses the puppeteer library internally to crawl pages.
The return value of the crawlPage API exposes the httpResponse, browser, page, and jsdom instances, which are described in detail below.
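For example, a minimal sketch of using the returned instances (the URL is a placeholder):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ timeout: 10000 })

myXCrawl.crawlPage('https://xxx.com').then(async (res) => {
  const { jsdom, page, browser } = res
  // Query the DOM parsed by jsdom
  console.log(jsdom.window.document.querySelector('title')?.textContent)
  // Take a screenshot of the rendered page
  await page.screenshot({ path: './upload/page.png' })
  // Close the shared browser when it is no longer needed
  await browser.close()
})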
Take NPM as an example:
npm install x-crawl
Timed crawling: as an example, fetch the cover images of recommended videos from the YouTube homepage once a day:
// 1.Import module ES/CJS
import xCrawl from 'x-crawl'
// 2.Create a crawler instance
const myXCrawl = xCrawl({
timeout: 10000, // request timeout
intervalTime: { max: 3000, min: 2000 } // crawl interval
})
// 3.Set the crawling task
// Call the startPolling API to start polling; the callback function will be called once a day
myXCrawl.startPolling({ d: 1 }, () => {
// Call crawlPage API to crawl Page
myXCrawl.crawlPage('https://www.youtube.com/').then((res) => {
const { browser, jsdom } = res // By default, the JSDOM library is used to parse Page
// Get the cover image elements of the recommended videos
const imgEls = jsdom.window.document.querySelectorAll(
'.yt-core-image--fill-parent-width'
)
// set request configuration
const requestConfig = []
imgEls.forEach((item) => {
if (item.src) {
requestConfig.push(item.src)
}
})
// Call the crawlFile API to crawl pictures
myXCrawl.crawlFile({ requestConfig, fileConfig: { storeDir: './upload' } })
})
})
Running result: (image omitted)
Note: Do not crawl sites indiscriminately; check a site's robots.txt before crawling. This example is only intended to demonstrate how to use x-crawl.
Create a new application instance via xCrawl():
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
// options
})
For the related options, refer to XCrawlBaseConfig.
A crawler application instance has two crawling modes, asynchronous and synchronous, and each crawler instance can only use one of them.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
mode: 'async'
})
The mode option defaults to async.
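For comparison, a minimal sketch of a synchronous-mode instance (in sync mode, each batch request waits for the previous one to finish):
import xCrawl from 'x-crawl'

// Crawl in synchronous mode: batch requests are sent one after another
const myXCrawl = xCrawl({
  mode: 'sync',
  intervalTime: { max: 3000, min: 1000 }
})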
If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
Multiple crawler application instances can also be created:
import xCrawl from 'x-crawl'
const myXCrawl1 = xCrawl({
// options
})
const myXCrawl2 = xCrawl({
// options
})
Crawl a page via crawlPage().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000
})
myXCrawl.crawlPage('https://xxx.com').then(res => {
const { jsdom, browser, page } = res
// Close the browser
browser.close()
})
jsdom is an instance object of JSDOM; for specific usage, please refer to jsdom.
Note: The jsdom instance only parses the content of the page instance at crawl time. If you use the page instance for event operations afterwards, you may need to parse the latest content yourself; for details, see "Parse the page by yourself" below.
browser is an instance object of Browser; for specific usage, please refer to Browser.
The browser instance is a headless browser without a UI shell. It brings all the modern web platform features provided by the browser rendering engine to your code.
Note: The browser instance keeps an internal event loop running, which prevents the process from exiting. If you want to stop, you can execute browser.close() to close it, but do not close it if you still need to call crawlPage or use the page instance later. The browser instance is shared within the crawlPage API of the same crawler instance, so modifying or closing it affects the browser used internally by crawlPage and every page instance returned in its results.
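For example, a minimal sketch that crawls two placeholder pages with the same crawler instance and closes the shared browser only after both are done:
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ timeout: 10000 })

;(async () => {
  // Both calls reuse the browser instance shared inside crawlPage
  const res1 = await myXCrawl.crawlPage('https://xxx.com/xxxx')
  const res2 = await myXCrawl.crawlPage('https://xxx.com/xxxx')

  console.log(res1.jsdom.window.document.querySelector('title')?.textContent)
  console.log(res2.jsdom.window.document.querySelector('title')?.textContent)

  // Close the shared browser only after every crawlPage call is finished
  await res1.browser.close()
})()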
page is an instance object of Page. The instance can also perform interactive operations such as events. For specific usage, please refer to [page](https://pptr.dev/api/puppeteer.page).
Parse the page by yourself
Take the jsdom library as an example:
import xCrawl from 'x-crawl'
import { JSDOM } from 'jsdom'
const myXCrawl = xCrawl({ timeout: 10000 })
myXCrawl.crawlPage('https://www.xxx.com').then(async (res) => {
const { page } = res
// Get the latest page content
const content = await page.content()
// Use the jsdom library to parse it yourself
const jsdom = new JSDOM(content)
console.log(jsdom.window.document.querySelector('title').textContent)
})
Take Screenshot
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ timeout: 10000 })
myXCrawl
.crawlPage('https://xxx.com')
.then(async (res) => {
const { page } = res
// Get a screenshot of the rendered page
await page.screenshot({ path: './upload/page.png' })
console.log('Screen capture is complete')
})
Crawl interface data through crawlData().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 3000, min: 1000 }
})
const requestConfig = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
{ url: 'https://xxx.com/xxxx' }
]
myXCrawl.crawlData({ requestConfig }).then(res => {
// handle the result
})
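Since crawlData is generic, TypeScript users can type the returned data. A minimal sketch, assuming a hypothetical UserInfo shape for the target API:
import xCrawl from 'x-crawl'

// Hypothetical shape of the JSON returned by the target API
interface UserInfo {
  name: string
  age: number
}

const myXCrawl = xCrawl({ timeout: 10000 })

myXCrawl.crawlData<UserInfo>({ requestConfig: 'https://xxx.com/xxxx' }).then((res) => {
  // res is an array with one entry per request (CrawlResCommonArrV1<UserInfo>)
  res.forEach((item) => {
    console.log(item.statusCode, item.data.name)
  })
})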
Crawl file data via crawlFile().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 3000, min: 1000 }
})
const requestConfig = [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
myXCrawl
.crawlFile({
requestConfig,
fileConfig: {
storeDir: './upload' // storage folder
}
})
.then((fileInfos) => {
console.log(fileInfos)
})
Start a polling crawl with startPolling().
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 3000, min: 1000 }
})
myXCrawl.startPolling({ h: 2, m: 30 }, (count, stopPolling) => {
// will be executed every two and a half hours
// crawlPage/crawlData/crawlFile
myXCrawl.crawlPage('https://xxx.com').then(res => {
const { jsdom, browser, page } = res
})
})
Callback function parameters: count (the number of times the polling callback has run) and stopPolling (a function that terminates the polling).
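A minimal sketch that stops polling after five runs (the limit and URL are placeholders):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ timeout: 10000 })

// Poll every 10 minutes
myXCrawl.startPolling({ m: 10 }, (count, stopPolling) => {
  myXCrawl.crawlData({ requestConfig: 'https://xxx.com/xxxx' }).then((res) => {
    // handle the result
  })

  // Stop once the callback has run five times (assuming count reflects the number of runs)
  if (count >= 5) {
    stopPolling()
  }
})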
Setting a request interval time can limit concurrency and avoid putting too much pressure on the target server.
It can be set when creating a crawler instance, or set separately for a specific API call. The crawl interval is controlled internally by each instance method, not globally by the instance.
import xCrawl from 'x-crawl'
// Unified settings
const myXCrawl = xCrawl({
intervalTime: { max: 3000, min: 1000 }
})
// Set individually (high priority)
myXCrawl.crawlFile({
requestConfig: [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ],
intervalTime: { max: 2000, min: 1000 }
})
The intervalTime option defaults to undefined. If a value is set, the crawler waits for that period of time before each request, which can prevent too much concurrency and avoid putting too much pressure on the server.
Note: The first request will not trigger the interval.
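Per the IntervalTime type, a plain number can also be used as a fixed interval. A minimal sketch:
import xCrawl from 'x-crawl'

// Fixed interval: wait 2000 ms between batch requests
const myXCrawl = xCrawl({
  intervalTime: 2000
})

myXCrawl.crawlData({
  requestConfig: [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
}).then((res) => {
  // handle the result
})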
requestConfig is very flexible to write; there are 5 forms in total:
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 3000, min: 1000 }
})
// requestConfig writing method 1:
const requestConfig1 = 'https://xxx.com/xxxx'
// requestConfig writing method 2:
const requestConfig2 = [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
// requestConfig writing method 3:
const requestConfig3 = {
url: 'https://xxx.com/xxxx',
method: 'POST',
data: { name: 'coderhxl' }
}
// requestConfig writing method 4:
const requestConfig4 = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
{ url: 'https://xxx.com/xxxx' }
]
// requestConfig writing method 5:
const requestConfig5 = [
'https://xxx.com/xxxx',
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
'https://xxx.com/xxxx'
]
myXCrawl.crawlData({ requestConfig: requestConfig5 }).then(res => {
console.log(res)
})
Choose whichever form fits your actual situation.
There are three ways to get the result: Promise, Callback and Promise + Callback.
These three methods apply to crawlPage, crawlData and crawlFile.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 3000, min: 1000 }
})
const requestConfig = [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
// Method 1: Promise
myXCrawl
.crawlFile({
requestConfig,
fileConfig: { storeDir: './upload' }
})
.then((fileInfos) => {
console.log('Promise: ', fileInfos)
})
// Method 2: Callback
myXCrawl.crawlFile(
{
requestConfig,
fileConfig: { storeDir: './upload' }
},
(fileInfo) => {
console.log('Callback: ', fileInfo)
}
)
// Method 3: Promise + Callback
myXCrawl
.crawlFile(
{
requestConfig,
fileConfig: { storeDir: './upload' }
},
(fileInfo) => {
console.log('Callback: ', fileInfo)
}
)
.then((fileInfos) => {
console.log('Promise: ', fileInfos)
})
Choose whichever method fits your actual situation.
Create a crawler instance by calling xCrawl. The request queue is maintained internally by each instance method, not by the instance itself.
function xCrawl(baseConfig?: XCrawlBaseConfig): XCrawlInstance
import xCrawl from 'x-crawl'
// xCrawl API
const myXCrawl = xCrawl({
baseUrl: 'https://xxx.com',
timeout: 10000,
// Crawl interval; only takes effect for batch crawling
intervalTime: {
max: 2000,
min: 1000
}
})
Note: To avoid repeatedly creating instances in subsequent examples, the myXCrawl here refers to the crawler instance used in the crawlPage/crawlData/crawlFile examples.
crawlPage is a method of the crawler instance, usually used to crawl a page.
function crawlPage: (
config: CrawlPageConfig,
callback?: (res: CrawlPage) => void
) => Promise<CrawlPage>
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({ timeout: 10000 })
// crawlPage API
myXCrawl.crawlPage('https://xxx.com/xxxx').then((res) => {
const { jsdom, browser, page } = res
console.log(jsdom.window.document.querySelector('title')?.textContent)
// Close the browser
browser.close()
})
crawlData is a method of the crawler instance, usually used to crawl APIs and obtain JSON data.
function crawlData: <T = any>(
config: CrawlDataConfig,
callback?: (res: CrawlResCommonV1<T>) => void
) => Promise<CrawlResCommonArrV1<T>>
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
const requestConfig = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx', method: 'POST', data: { name: 'coderhxl' } },
{ url: 'https://xxx.com/xxxx' }
]
// crawlData API
myXCrawl.crawlData({ requestConfig }).then(res => {
console.log(res)
})
crawlFile is a method of the crawler instance, usually used to crawl files such as images and PDF files.
function crawlFile: (
config: CrawlFileConfig,
callback?: (res: CrawlResCommonV1<FileInfo>) => void
) => Promise<CrawlResCommonArrV1<FileInfo>>
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
const requestConfig = [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ]
myXCrawl
.crawlFile({
requestConfig,
fileConfig: {
storeDir: './upload' // storage folder
}
})
.then((fileInfos) => {
console.log(fileInfos)
})
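CrawlFileConfig also accepts an optional extension, and the per-file callback receives a CrawlResCommonV1<FileInfo>. A minimal sketch (the extension value and URLs are placeholders):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ timeout: 10000 })

myXCrawl.crawlFile(
  {
    requestConfig: [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ],
    fileConfig: {
      storeDir: './upload',
      extension: '.jpg' // optional filename extension (placeholder value)
    }
  },
  (fileInfo) => {
    // Called once per downloaded file
    console.log(fileInfo.data.fileName, fileInfo.data.size)
  }
)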
startPolling is a method of the crawler instance, typically used to perform polling operations, such as fetching news at regular intervals.
function startPolling(
config: StartPollingConfig,
callback: (count: number, stopPolling: () => void) => void
): void
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
// startPolling API
myXCrawl.startPolling({ h: 2, m: 30 }, (count, stopPolling) => {
// will be executed every two and a half hours
// crawlPage/crawlData/crawlFile
})
interface AnyObject extends Object {
[key: string | number | symbol]: any
}
type Method = 'get' | 'GET' | 'delete' | 'DELETE' | 'head' | 'HEAD' | 'options' | 'OPTIONS' | 'post' | 'POST' | 'put' | 'PUT' | 'patch' | 'PATCH' | 'purge' | 'PURGE' | 'link' | 'LINK' | 'unlink' | 'UNLINK'
interface RequestConfigObjectV1 {
url: string
headers?: AnyObject
timeout?: number
proxy?: string
}
interface RequestConfigObjectV2 {
url: string
method?: Method
headers?: AnyObject
params?: AnyObject
data?: any
timeout?: number
proxy?: string
}
type RequestConfig = string | RequestConfigObjectV2
type IntervalTime = number | {
max: number
min?: number
}
interface XCrawlBaseConfig {
baseUrl?: string
timeout?: number
intervalTime?: IntervalTime
mode?: 'async' | 'sync'
proxy?: string
}
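For reference, a minimal sketch that sets every XCrawlBaseConfig field (the proxy address is a placeholder):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({
  baseUrl: 'https://xxx.com',             // base URL for requests
  timeout: 10000,                         // request timeout in ms
  intervalTime: { max: 3000, min: 1000 }, // interval between batch requests
  mode: 'async',                          // 'async' (default) or 'sync'
  proxy: 'http://localhost:7890'          // proxy server (placeholder address)
})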
type CrawlPageConfig = string | RequestConfigObjectV1
interface CrawlBaseConfigV1 {
requestConfig: RequestConfig | RequestConfig[]
intervalTime?: IntervalTime
}
interface CrawlDataConfig extends CrawlBaseConfigV1 {
}
interface CrawlFileConfig extends CrawlBaseConfigV1 {
fileConfig: {
storeDir: string // Store folder
extension?: string // Filename extension
}
}
interface StartPollingConfig {
d?: number // day
h?: number // hour
m?: number // minute
}
interface XCrawlInstance {
crawlPage: (
config: CrawlPageConfig,
callback?: (res: CrawlPage) => void
) => Promise<CrawlPage>
crawlData: <T = any>(
config: CrawlDataConfig,
callback?: (res: CrawlResCommonV1<T>) => void
) => Promise<CrawlResCommonArrV1<T>>
crawlFile: (
config: CrawlFileConfig,
callback?: (res: CrawlResCommonV1<FileInfo>) => void
) => Promise<CrawlResCommonArrV1<FileInfo>>
startPolling: (
config: StartPollingConfig,
callback: (count: number, stopPolling: () => void) => void
) => void
}
interface CrawlResCommonV1<T> {
id: number
statusCode: number | undefined
headers: IncomingHttpHeaders // nodejs: http type
data: T
}
type CrawlResCommonArrV1<T> = CrawlResCommonV1<T>[]
interface CrawlPage {
httpResponse: HTTPResponse | null // The type of HTTPResponse in the puppeteer library
browser: Browser // The Browser type of the puppeteer library
page: Page // The Page type of the puppeteer library
jsdom: JSDOM // jsdom type of the JSDOM library
}
interface FileInfo {
fileName: string
mimeType: string
size: number
filePath: string
}
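For example, a minimal sketch that aggregates the FileInfo results returned by crawlFile (URLs are placeholders):
import xCrawl from 'x-crawl'

const myXCrawl = xCrawl({ timeout: 10000 })

myXCrawl
  .crawlFile({
    requestConfig: [ 'https://xxx.com/xxxx', 'https://xxx.com/xxxx' ],
    fileConfig: { storeDir: './upload' }
  })
  .then((fileInfos) => {
    // Each entry is a CrawlResCommonV1<FileInfo>; the FileInfo is in .data
    const totalBytes = fileInfos.reduce((sum, item) => sum + item.data.size, 0)
    console.log(`Downloaded ${fileInfos.length} files, ${totalBytes} bytes in total`)
  })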
If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues.