nodejs-web-scraper
Nodejs-web-scraper is a simple yet powerful tool for Node programmers who want to quickly set up a complex scraping job for server-side rendered websites. It supports features such as automatic retries of failed requests, concurrency limiting, and request delays.
$ npm install nodejs-web-scraper
const { Scraper, Root, DownloadContent, OpenLinks, CollectContent } = require('nodejs-web-scraper');

(async () => {
    const config = {
        baseSiteUrl: `https://www.nytimes.com/`,
        startUrl: `https://www.nytimes.com/`,
        concurrency: 10,
        maxRetries: 3,//The scraper will retry a failed request a few times (excluding 404s).
        cloneImages: true,
        filePath: './images/',
        logPath: './logs/'
    };

    const scraper = new Scraper(config);//Create a new Scraper instance, and pass the config to it.

    const root = new Root();//The root object fetches the start URL and starts the process.

    const category = new OpenLinks('.css-1wjnrbv');//Opens each category page.
    const article = new OpenLinks('article a');//Opens each article page.
    const image = new DownloadContent('img');//Downloads every image from a given page.
    const h1 = new CollectContent('h1');//"Collects" the text from each h1 element.

    root.addOperation(category);//Then we create a scraping "tree":
    category.addOperation(article);
    article.addOperation(image);
    article.addOperation(h1);

    await scraper.scrape(root);//Pass the root object to the Scraper.scrape method, and the work begins.
})();
This basically means: "Go to www.nytimes.com; open every category; then open every article on each category page; then collect the h1 tags in each article, and download all images on that page."
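Once the awaited scrape call resolves, each operation object holds whatever it gathered. As a minimal sketch (the getData method is shown later in this README; the assumption that it returns an array of collected items is mine), you could read the headlines back right after the await:

//Inside the same async block, right after "await scraper.scrape(root);".
//Assumption: getData() returns an array of the collected h1 texts.
const headlines = h1.getData();
console.log(`Collected ${headlines.length} headline entries`);
console.log(headlines.slice(0, 5));//Peek at the first few.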
const ads = [];

const afterOneAdPageScraped = async (dataFromAd) => {
    ads.push(dataFromAd);
};//This is passed as the "afterOneLinkScrape" callback in the jobAd object. It receives the formatted data as an argument.

const config = {
    baseSiteUrl: `https://www.profesia.sk`,
    startUrl: `https://www.profesia.sk/praca/`,
    filePath: './images/',
    logPath: './logs/'
};

(async () => {
    const scraper = new Scraper(config);

    const root = new Root({ pagination: { queryString: 'page_num', begin: 1, end: 10 } });//Open pages 1-10. You need to supply the query string that the site uses (more details in the API docs).

    const jobAd = new OpenLinks('.list-row a.title', { afterOneLinkScrape: afterOneAdPageScraped });//Opens every job ad, and calls the callback after each page is done.

    const image = new DownloadContent('img:first', { name: 'Good Image' });//Notice that you can give each operation a name, for clarity in the logs.

    const span = new CollectContent('span');
    const header = new CollectContent('h4,h2');

    root.addOperation(jobAd);
    jobAd.addOperation(span);
    jobAd.addOperation(header);
    jobAd.addOperation(image);
    root.addOperation(header);//Notice that the same "header" object is used as a child of two different operations. This means the data will be collected both from the root and from each job ad page. You can compose your scraping tree as you wish.

    await scraper.scrape(root);

    console.log(ads);//Doing something with the array we created from the callbacks...
})();
Let's describe in words what's going on here: "Go to https://www.profesia.sk/praca/; then paginate the root page from 1 to 10; then, on each pagination page, open every job ad; then collect the span, h2, and h4 elements and download the first image; also collect the h2 and h4 elements in the root (each pagination page)."
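Because the ads array is filled by the afterOneAdPageScraped callback, everything collected is available in plain JavaScript once the scrape finishes. As a minimal sketch (the output filename is arbitrary, and only Node's built-in fs module is used, nothing from nodejs-web-scraper), you could persist it like this:

const fs = require('fs');

//Right after "await scraper.scrape(root);" and "console.log(ads);".
//Writes whatever the callbacks accumulated to a JSON file for later processing.
fs.writeFileSync('./ads.json', JSON.stringify(ads, null, 2));
console.log(`Saved ${ads.length} job ads to ads.json`);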
const processElementContent = (contentString) => {
    if (contentString.includes('Hey!')) {
        return `${contentString} some appended phrase...`;//You need to return a new string.
    }
    //If you don't return anything, the original string is used.
};

const config = {
    baseSiteUrl: `https://www.some-content-site.com`,
    startUrl: `https://www.some-content-site.com/videos`,
    filePath: './videos/',
    logPath: './logs/'
};

(async () => {
    const scraper = new Scraper(config);

    const root = new Root();

    const video = new DownloadContent('a.video', { contentType: 'file' });//The "contentType" makes it clear to the scraper that this is NOT an image (therefore the "href" is used instead of the "src").

    const description = new CollectContent('h1', { processElementContent });//Run a callback on each element's text.

    root.addOperation(video);
    root.addOperation(description);

    await scraper.scrape(root);

    console.log(description.getData());//You can call the "getData" method on every operation object to get the aggregated data it collected.
})();
Description: "Go to https://www.some-content-site.com; Download every video; Collect each h1, while processing the content with a callback; At the end, get the entire data from the "description" object;
FAQs
A web scraper for Node.js.
The npm package nodejs-web-scraper receives a total of 854 weekly downloads. As such, nodejs-web-scraper was classified as not popular.
We found that nodejs-web-scraper demonstrated an unhealthy version release cadence and project activity because the last version was released a year ago. It has 1 open source maintainer collaborating on the project.