English | 简体中文
x-crawl is a Node.js multifunctional crawler library.
If it helps you, please give the repository a Star to support it.
The fetchPage API internally uses the puppeteer library to crawl pages.
The following can be done: crawl pages (fetchPage), crawl interface data (fetchData), crawl file resources such as pictures (fetchFile), and run crawls on a polling schedule (startPolling).
To install, take npm as an example:
npm install x-crawl
Example: fetch the cover images of the featured videos on the YouTube homepage every day:
// 1. Import module (ES/CJS)
import xCrawl from 'x-crawl'
// 2.Create a crawler instance
const myXCrawl = xCrawl({
timeout: 10000, // request timeout in ms
intervalTime: { max: 3000, min: 2000 } // control request frequency
})
// 3.Set the crawling task
// Call the startPolling API to start polling; the callback runs once a day
myXCrawl.startPolling({ d: 1 }, () => {
// Call fetchPage API to crawl Page
myXCrawl.fetchPage('https://www.youtube.com/').then((res) => {
const { jsdom } = res.data // By default, the JSDOM library is used to parse Page
// Get the cover image element of the Promoted Video
const imgEls = jsdom.window.document.querySelectorAll(
'.yt-core-image--fill-parent-width'
)
// set request configuration
const requestConfig = []
imgEls.forEach((item) => {
if (item.src) {
requestConfig.push({ url: item.src })
}
})
// Call the fetchFile API to crawl pictures
myXCrawl.fetchFile({ requestConfig, fileConfig: { storeDir: './upload' } })
})
})
Note: Do not crawl indiscriminately; this only demonstrates how to use x-crawl, with the request frequency kept between 2000 ms and 3000 ms.
Create a new application instance via xCrawl():
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
// options
})
For the related options, see XCrawlBaseConfig.
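As a hedged sketch, an instance configured with every option listed in XCrawlBaseConfig might look like this (all values, including the baseUrl and proxy address, are placeholders):

import xCrawl from 'x-crawl'

// Illustrative values only; see the XCrawlBaseConfig type for the available options
const myXCrawl = xCrawl({
  baseUrl: 'https://xxx.com', // prefix prepended to relative request URLs
  timeout: 10000, // request timeout in ms
  intervalTime: { max: 3000, min: 1000 }, // wait 1000-3000 ms between requests
  mode: 'async', // 'async' (default) or 'sync'
  proxy: 'http://localhost:8080' // placeholder proxy address
})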
A crawler application instance has two crawling modes, asynchronous and synchronous, and each instance can use only one of them.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
mode: 'async'
})
The mode option defaults to 'async'.
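For example, a synchronous instance, using the 'sync' value allowed by XCrawlBaseConfig.mode, can be created like this:

import xCrawl from 'x-crawl'

// 'sync' is the other value allowed by XCrawlBaseConfig.mode
const mySyncXCrawl = xCrawl({
  mode: 'sync'
})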
If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
Setting an interval limits concurrency and avoids putting too much pressure on the target server.
import xCrawl from 'x-crawl'
const myXCrawl = xCrawl({
intervalTime: { max: 3000, min: 1000 }
})
The intervalTime option defaults to undefined. If a value is set, the crawler waits for that period before each request, which limits concurrency and avoids putting too much pressure on the target server.
The first request does not trigger the interval.
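Since the FetchBaseConfigV1 type (see the Types section below) used by fetchData and fetchFile also accepts an intervalTime, the interval can be supplied per call as well; a minimal sketch with placeholder URLs (fetchData itself is covered below):

// intervalTime here applies to the requests of this particular call
myXCrawl.fetchData({
  requestConfig: [
    { url: 'https://xxx.com/xxxx' },
    { url: 'https://xxx.com/xxxx' }
  ],
  intervalTime: { max: 5000, min: 2000 }
})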
Multiple crawler application instances can be created:
import xCrawl from 'x-crawl'
const myXCrawl1 = xCrawl({
// options
})
const myXCrawl2 = xCrawl({
// options
})
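For example, a hedged sketch of two independent instances with different settings (all values are placeholders):

import xCrawl from 'x-crawl'

// Each instance keeps its own configuration
const myXCrawl1 = xCrawl({
  mode: 'async',
  intervalTime: { max: 3000, min: 1000 }
})

const myXCrawl2 = xCrawl({
  mode: 'sync',
  timeout: 5000
})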
Fetch a page via fetchPage()
myXCrawl.fetchPage('https://xxx.com').then(res => {
const { jsdom, page } = res.data
})
Crawl interface data through fetchData()
const requestConfig = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx' }
]
myXCrawl.fetchData({ requestConfig }).then(res => {
// deal with
})
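Going by the FetchResCommonArrV1 type in the Types section, the resolved value should be an array of per-request results with id, statusCode, headers, and data; a sketch of the handling step under that assumption:

myXCrawl.fetchData({ requestConfig }).then((res) => {
  // Assuming res is FetchResCommonArrV1<T>: one entry per request
  res.forEach((item) => {
    console.log(item.id, item.statusCode) // which request it was and its HTTP status
    console.log(item.data) // the response body
  })
})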
Fetch file data via fetchFile()
import path from 'node:path'
const requestConfig = [
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx' },
{ url: 'https://xxx.com/xxxx' }
]
myXCrawl.fetchFile({
requestConfig,
fileConfig: {
storeDir: path.resolve(__dirname, './upload') // storage folder
}
}).then(fileInfos => {
console.log(fileInfos)
})
Create a crawler instance by calling xCrawl. The request queue is maintained by each instance method itself, not by the instance.
For more detailed types, please see the Types section.
function xCrawl(baseConfig?: XCrawlBaseConfig): XCrawlInstance
const myXCrawl = xCrawl({
baseUrl: 'https://xxx.com',
timeout: 10000,
// The interval between requests; only takes effect when there are multiple requests
intervalTime: {
max: 2000,
min: 1000
}
})
The values passed in baseConfig are used by fetchPage/fetchData/fetchFile as defaults.
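For example, with the baseUrl above, later calls can pass paths relative to it, as the fetchData and fetchFile examples below do (the path is a placeholder):

// '/xxxx' is resolved against the instance's baseUrl ('https://xxx.com')
myXCrawl.fetchData({
  requestConfig: [{ url: '/xxxx' }]
})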
Note: To avoid creating instances repeatedly, the myXCrawl in the fetchPage/fetchData/fetchFile examples below refers to this crawler instance.
The mode option defaults to 'async'.
If an interval time is set, the crawler waits for the interval to elapse before sending the next request.
The intervalTime option defaults to undefined. If a value is set, the crawler waits for that period before each request, which limits concurrency and avoids putting too much pressure on the target server.
The first request does not trigger the interval.
fetchPage is a method of the myXCrawl instance above, usually used to crawl a page.
function fetchPage: (
config: FetchPageConfig,
callback?: (res: FetchPage) => void
) => Promise<FetchPage>
myXCrawl.fetchPage('/xxx').then((res) => {
const { jsdom } = res.data
console.log(jsdom.window.document.querySelector('title')?.textContent)
})
The page instance can be obtained from res.data.page; it supports interactive operations such as events. For specific usage, refer to page.
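A minimal sketch of such interactive use, assuming res.data.page behaves like a standard Puppeteer Page (the selector and file name are placeholders):

myXCrawl.fetchPage('/xxx').then(async (res) => {
  const { page } = res.data

  // Standard Puppeteer Page methods: click an element, then save a screenshot
  await page.click('.some-button')
  await page.screenshot({ path: './page.png' })
})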
fetchData is a method of the myXCrawl instance above, usually used to crawl APIs and obtain JSON data, and so on.
function fetchData: <T = any>(
config: FetchDataConfig,
callback?: (res: FetchResCommonV1<T>) => void
) => Promise<FetchResCommonArrV1<T>>
const requestConfig = [
{ url: '/xxxx' },
{ url: '/xxxx' },
{ url: '/xxxx' }
]
myXCrawl.fetchData({ requestConfig }).then(res => {
console.log(res)
})
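The optional callback parameter in the signature receives a FetchResCommonV1<T>, which suggests it is invoked once per completed request; a hedged sketch:

// Second argument: per-request callback (assumed to fire once per requestConfig entry)
myXCrawl.fetchData({ requestConfig }, (res) => {
  console.log(res.id, res.statusCode, res.data)
})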
fetchFile is a method of the myXCrawl instance above, usually used to crawl files such as pictures and PDF files.
function fetchFile: (
config: FetchFileConfig,
callback?: (res: FetchResCommonV1<FileInfo>) => void
) => Promise<FetchResCommonArrV1<FileInfo>>
const requestConfig = [
{ url: '/xxxx' },
{ url: '/xxxx' },
{ url: '/xxxx' }
]
myXCrawl.fetchFile({
requestConfig,
fileConfig: {
storeDir: path.resolve(__dirname, './upload') // storage folder
}
}).then(fileInfos => {
console.log(fileInfos)
})
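fileConfig also allows an optional extension per the FetchFileConfig type; a sketch that also reads the FileInfo fields from the result (the extension value is a placeholder):

import path from 'node:path'

myXCrawl.fetchFile({
  requestConfig,
  fileConfig: {
    storeDir: path.resolve(__dirname, './upload'),
    extension: 'jpg' // optional; exact format (with or without a leading dot) is not specified here
  }
}).then((fileInfos) => {
  // Each FileInfo has fileName, mimeType, size and filePath
  fileInfos.forEach((info) => console.log(info.filePath, info.size))
})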
startPolling is a method of the myXCrawl instance, typically used to perform polling operations, such as fetching news at regular intervals.
function startPolling(
config: StartPollingConfig,
callback: (count: number) => void
): void
myXCrawl.startPolling({ h: 1, m: 30 }, () => {
// will be executed every one and a half hours
// fetchPage/fetchData/fetchFile
})
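Filling in the placeholder comment above, a sketch that crawls an API on each round (the URL is a placeholder):

// count is the number of polling rounds so far (per the startPolling signature)
myXCrawl.startPolling({ h: 1, m: 30 }, (count) => {
  console.log(`polling round ${count}`)
  myXCrawl.fetchData({ requestConfig: [{ url: 'https://xxx.com/xxxx' }] })
})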
interface AnyObject extends Object {
[key: string | number | symbol]: any
}
type Method = 'get' | 'GET' | 'delete' | 'DELETE' | 'head' | 'HEAD' | 'options' | 'OPTIONS' | 'post' | 'POST' | 'put' | 'PUT' | 'patch' | 'PATCH' | 'purge' | 'PURGE' | 'link' | 'LINK' | 'unlink' | 'UNLINK'
interface RequestBaseConfig {
url: string
timeout?: number
proxy?: string
}
interface RequestConfig extends RequestBaseConfig {
method?: Method
headers?: AnyObject
params?: AnyObject
data?: any
}
type IntervalTime = number | {
max: number
min?: number
}
interface XCrawlBaseConfig {
baseUrl?: string
timeout?: number
intervalTime?: IntervalTime
mode?: 'async' | 'sync'
proxy?: string
}
interface FetchBaseConfigV1 {
requestConfig: RequestConfig | RequestConfig[]
intervalTime?: IntervalTime
}
type FetchPageConfig = string | RequestBaseConfig
interface FetchDataConfig extends FetchBaseConfigV1 {
}
interface FetchFileConfig extends FetchBaseConfigV1 {
fileConfig: {
storeDir: string // Store folder
extension?: string // Filename extension
}
}
interface StartPollingConfig {
d?: number // day
h?: number // hour
m?: number // minute
}
interface FetchResCommonV1<T> {
id: number
statusCode: number | undefined
headers: IncomingHttpHeaders // type from the Node.js 'http' module
data: T
}
type FetchResCommonArrV1<T> = FetchResCommonV1<T>[]
interface FileInfo {
fileName: string
mimeType: string
size: number
filePath: string
}
interface FetchPage {
httpResponse: HTTPResponse | null // The type of HTTPResponse in the puppeteer library
data: {
page: Page // The type of Page in the puppeteer library
jsdom: JSDOM // The type of JSDOM in the jsdom library
}
}
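As a hedged illustration of how RequestConfig's optional fields from the types above fit into a fetchData call (all values are placeholders):

const requestConfig = [
  {
    url: 'https://xxx.com/xxxx', // placeholder URL
    method: 'POST', // one of the Method values
    headers: { 'Content-Type': 'application/json' },
    params: { page: 1 }, // query parameters
    data: { keyword: 'x-crawl' }, // request body
    timeout: 5000,
    proxy: 'http://localhost:8080' // placeholder proxy
  }
]

myXCrawl.fetchData({ requestConfig }).then((res) => {
  console.log(res)
})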
If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues.