
ngrab is a lightweight Node.js spider. It supports bloom-filter URL deduplication, randomized request intervals, request hooks, URL-pattern routes, and proxy getters:
import { Crawler, userAgent } from 'ngrab'
import cheerio from 'cheerio'

// init
let crawler = new Crawler({
  // required && unique
  name: 'myCrawler',
  // enable bloom filter deduplication
  bloom: true,
  // set random intervals (ms) between requests
  interval: () => (Math.random() * 16 + 4) * 1000, // [4s, 20s]
  // initial links
  startUrls: ['https://github.com/trending'],
})

// download(name, cb)
crawler.download('trending', async ({ req, res, followLinks, resolveLink }) => {
  // parse the HTML string
  let $ = cheerio.load(res.body.toString())

  // extract data
  let repoList = [],
    $rows = $('.Box-row')
  if ($rows.length) {
    $rows.each(function () {
      let $item = $(this)
      repoList.push({
        name: $('.lh-condensed a .text-normal', $item)
          .text()
          .replace(/\s+/g, ' ')
          .trim(),
        href: $('.lh-condensed a', $item).attr('href'),
      })
    })

    // print
    console.log(repoList) // or store in your database

    // follow links
    repoList.forEach((v) => followLinks(resolveLink(v.href)))
  }
})

// start crawling
crawler.run()
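The console.log above is only a placeholder. As a minimal sketch of persisting results (the file name and the NDJSON format are my choice, not part of ngrab), you could append each batch to a newline-delimited JSON file:

import { appendFile } from 'node:fs/promises'

// append each scraped repository as one JSON line (NDJSON)
async function saveRepos(repoList) {
  let lines = repoList.map((repo) => JSON.stringify(repo)).join('\n')
  await appendFile('trending.ndjson', lines + '\n', 'utf8')
}

Calling await saveRepos(repoList) in place of the console.log keeps the download callback otherwise unchanged.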
The request hook will execute before each request:
// request(name, cb)
crawler.request('headers', async (context) => {
  // set custom headers
  Object.assign(context.req.headers, {
    'Cache-Control': 'no-cache',
    'User-Agent': userAgent(), // set a random User-Agent
    Accept: '*/*',
    'Accept-Encoding': 'gzip, deflate, compress',
    Connection: 'keep-alive',
  })
})
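Hooks are also handy for lightweight debugging. As a sketch using the same request() API (this assumes context.req exposes the target url, which the README itself only shows for headers), you could log every outgoing request:

crawler.request('logger', async (context) => {
  // print each URL before the request is sent
  // NOTE: context.req.url is an assumption about the request object's shape
  console.log('fetching:', context.req.url)
})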
Instead of parsing everything in 'crawler.download()', you can split the parsing code into different routes:
crawler.route({
  url: 'https://github.com/trending', // for the trending page (patterns are compatible with minimatch)
  async download({ req, res }) {
    // parsing ...
  },
})

crawler.route({
  url: 'https://github.com/*/*', // for repository pages
  async download({ req, res }) {
    // parsing ...
  },
})

crawler.route({
  url: 'https://github.com/*/*/issues', // for issues pages
  async download({ req, res }) {
    // parsing ...
  },
})
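To illustrate what a route body might contain (the CSS selector here is a guess at GitHub's markup, not something ngrab provides), a repository route could pull the page description:

crawler.route({
  url: 'https://github.com/*/*',
  async download({ req, res }) {
    let $ = cheerio.load(res.body.toString())
    // NOTE: '.f4.my-3' is an assumed selector for the repo description and may break if GitHub's markup changes
    let description = $('.f4.my-3').first().text().trim()
    console.log(req.url, description)
  },
})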
You can provide a proxy server getter when initializing the crawler:
let crawler = new Crawler({
  name: 'myCrawler',
  startUrls: ['https://github.com/trending'],
  async proxy() {
    let url = await getProxyUrlFromSomeWhere()
    // the return value will be used as the proxy when sending a request
    return url
  },
})
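getProxyUrlFromSomeWhere() is left undefined in the README. As a minimal sketch (the proxy URLs below are placeholders), it could simply rotate round-robin through a static list:

let proxies = ['http://proxy-1.example.com:8080', 'http://proxy-2.example.com:8080']
let next = 0

// rotate through the list; swap this out for a real proxy provider
async function getProxyUrlFromSomeWhere() {
  let url = proxies[next % proxies.length]
  next++
  return url
}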
FAQs
A lightweight node spider
The npm package ngrab receives a total of 10 weekly downloads, which classifies it as not popular. We found that ngrab has an unhealthy version release cadence and low project activity because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.