Security News
GitHub Removes Malicious Pull Requests Targeting Open Source Repositories
GitHub removed 27 malicious pull requests attempting to inject harmful code across multiple open source repositories, in another round of low-effort attacks.
nodejs-web-scraper
Nodejs-web-scraper is a simple yet powerful tool for Node programmers who want to quickly set up a complex scraping job for server-side-rendered websites. It supports features like automatic retries of failed requests, concurrency limitation, request delay, etc.
$ npm install nodejs-web-scraper
const { Scraper, Root, DownloadContent, OpenLinks, CollectContent } = require('nodejs-web-scraper');

(async () => {
    const config = {
        baseSiteUrl: `https://www.nytimes.com/`,
        startUrl: `https://www.nytimes.com/`,
        concurrency: 10,
        maxRetries: 3, // The scraper will retry a failed request a few times (excluding 404).
        cloneImages: true, // Will create a new image file with a modified name, if the name already exists.
        filePath: './images/',
        logPath: './logs/' // Highly recommended: specify a path for logs. One is created for each object.
    };

    const scraper = new Scraper(config); // Create a new Scraper instance, and pass the config to it.

    const root = new Root(); // The root object fetches the start URL, and starts the process.
    const category = new OpenLinks('.css-1wjnrbv'); // Opens each category page.
    const article = new OpenLinks('article a'); // Opens each article page.
    const image = new DownloadContent('img'); // Downloads every image from a given page.
    const h1 = new CollectContent('h1'); // "Collects" the text from each h1 element.

    root.addOperation(category); // Then we create a scraping "tree":
    category.addOperation(article);
    article.addOperation(image);
    article.addOperation(h1);

    await scraper.scrape(root); // Pass the root object to the Scraper.scrape method, and the work begins.
})();
This basically means: "Go to www.nytimes.com; open every category; then open every article on each category page; then collect the h1 tags in each article, and download all images on that page."
const ads = [];

const afterOneAdPageScraped = async (dataFromAd) => {
    ads.push(dataFromAd);
}; // This is passed as a callback to "afterOneLinkScrape", in the jobAd object. Receives formatted data as an argument.
const config = {
    baseSiteUrl: `https://www.profesia.sk`,
    startUrl: `https://www.profesia.sk/praca/`,
    filePath: './images/',
    logPath: './logs/'
};
const scraper = new Scraper(config);
const root = new Root({ pagination: { queryString: 'page_num', begin: 1, end: 10 } }); // Open pages 1-10. You need to supply the query string that the site uses (more details in the API docs).
const jobAd = new OpenLinks('.list-row a.title', { afterOneLinkScrape: afterOneAdPageScraped }); // Opens every job ad, and calls a callback after every page is done.
const image = new DownloadContent('img:first', { name: 'Good Image' }); // Notice that you can give each operation a name, for clarity in the logs.
const span = new CollectContent('span');
const header = new CollectContent('h4,h2');
root.addOperation(jobAd);
jobAd.addOperation(span);
jobAd.addOperation(header);
jobAd.addOperation(image);
root.addOperation(header); // Notice that I use the same "header" object as a child of two different operations. This means the data will be collected both from the root and from each job ad page. You can compose your scraping tree as you wish.
await scraper.scrape(root);
console.log(ads); // Doing something with the array we created from the callbacks...
Let's describe again in words what's going on here: "Go to https://www.profesia.sk/praca/; then paginate the root page, from 1 to 10; then, on each pagination page, open every job ad; then collect the span, h2 and h4 elements and download the first image; also collect h2 and h4 in the root (each pagination page)."
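The pagination config presumably works by expanding the start URL with the given query string, one URL per page. A minimal sketch of that behavior (the `buildPaginationUrls` helper below is hypothetical, shown only to illustrate the idea; it is not part of the library's API):

```javascript
// Hypothetical helper: expand a start URL into one URL per pagination page.
function buildPaginationUrls(startUrl, { queryString, begin, end }) {
    const urls = [];
    for (let page = begin; page <= end; page++) {
        const url = new URL(startUrl);
        url.searchParams.set(queryString, String(page));
        urls.push(url.toString());
    }
    return urls;
}

const pages = buildPaginationUrls('https://www.profesia.sk/praca/', {
    queryString: 'page_num', begin: 1, end: 10,
});
console.log(pages[0]); // https://www.profesia.sk/praca/?page_num=1
console.log(pages.length); // 10
```

Each of these URLs is then treated like the root page, with all child operations run against it.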
const processElementContent = (contentString) => {
    if (contentString.includes('Hey!')) {
        return `${contentString} some appended phrase...`; // You need to return a new string.
    }
    // If you don't return anything, the original string is used.
};
const config = {
    baseSiteUrl: `https://www.some-content-site.com`,
    startUrl: `https://www.some-content-site.com/videos`,
    filePath: './videos/',
    logPath: './logs/'
};
const scraper = new Scraper(config);
const root = new Root();
const video = new DownloadContent('a.video', { contentType: 'file' }); // The "contentType" makes it clear to the scraper that this is NOT an image (therefore the "href" attribute is used instead of "src").
const description = new CollectContent('h1', { processElementContent }); // Using a callback on each node's text.
root.addOperation(video);
root.addOperation(description); // Collects the text of every h1 on the root page.
await scraper.scrape(root);
console.log(description.getData()); // You can call the "getData" method on every operation object, to get the aggregated data it collected.
Description: "Go to https://www.some-content-site.com; download every video; collect each h1, while processing the content with a callback; at the end, get the entire data from the 'description' object."
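The contract of processElementContent — a returned string replaces the collected text, while returning nothing keeps the original — can be sketched in plain JavaScript (the `applyProcessor` helper and `exampleProcessor` below are hypothetical, shown only to illustrate the callback semantics):

```javascript
// Hypothetical model of how a text-processing callback is applied:
// a returned string replaces the text; undefined keeps the original.
function applyProcessor(text, processor) {
    const result = processor(text);
    return result === undefined ? text : result;
}

const exampleProcessor = (contentString) => {
    if (contentString.includes('Hey!')) {
        return `${contentString} some appended phrase...`;
    }
    // No return value: the original string is kept.
};

console.log(applyProcessor('Hey! you', exampleProcessor)); // 'Hey! you some appended phrase...'
console.log(applyProcessor('plain text', exampleProcessor)); // 'plain text'
```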
The main nodejs-web-scraper object. Starts the entire scraping process via Scraper.scrape(Root). Holds the configuration and global state.
These are available options for the scraper, with their default values:
const config = {
    baseSiteUrl: '', // Mandatory. If your site sits in a subfolder, provide the path WITHOUT it.
    startUrl: '', // Mandatory. The page from which the process begins.
    logPath: null, // Highly recommended. Will create a log for each scraping operation (object).
    cloneImages: true, // If an image with the same name exists, a new file with a number appended to it is created. Otherwise, it's overwritten.
    fileFlag: 'w', // The flag provided to the file-saving function.
    concurrency: 3, // Maximum concurrent requests. Highly recommended to keep it at 10 at most.
    maxRetries: 5, // Maximum number of retries of a failed request.
    imageResponseType: 'arraybuffer', // Either 'stream' or 'arraybuffer'.
    delay: 100,
    timeout: 5000,
    filePath: null, // Needs to be provided only if a "DownloadContent" operation is created.
    auth: null, // Can provide basic auth credentials (no clue what sites actually use it).
    headers: null // Provide custom headers for the requests.
};
Public methods:
scrape(Root){} // After all operations have been created and assembled, you begin the process by calling this method, passing the root object.
Root is responsible for fetching the first page, and then scraping its children. It can also be paginated, hence the optional config. For instance:
const root = new Root({ pagination: { queryString: 'page', begin: 1, end: 100 } });
Public methods:
getData(){} // Gets all data collected by this operation. In the case of root, it will just be the entire scraping tree.
getErrors(){} // Gets all errors encountered by this operation.
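A rough mental model of this per-operation bookkeeping (the `OperationModel` class below is hypothetical, not the library's code; it only illustrates what getData() and getErrors() expose):

```javascript
// Hypothetical model: each operation accumulates its own results and errors,
// which getData() and getErrors() then return.
class OperationModel {
    constructor(name) {
        this.name = name;
        this.data = [];
        this.errors = [];
    }
    record(result) { this.data.push(result); }
    recordError(err) { this.errors.push(err); }
    getData() { return this.data; }
    getErrors() { return this.errors; }
}

const h1Op = new OperationModel('h1 collector');
h1Op.record('First headline');
h1Op.record('Second headline');
h1Op.recordError(new Error('request timed out'));
console.log(h1Op.getData()); // [ 'First headline', 'Second headline' ]
console.log(h1Op.getErrors().length); // 1
```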
Responsible for "opening links" in a given page. Basically, it creates a node list of anchor elements, fetches their HTML, and continues the scraping process in those pages, according to the user-defined scraping tree.
The optional config can have these properties:
{
    name: 'some name', // Like every operation object, you can specify a name, for better clarity in the logs.
    pagination: {}, // Look at the pagination API for more details.
    getElementList: (elementList) => {}, // Called each time an element list is created. In the case of OpenLinks, this happens with each list of anchor tags that it collects. Those elements all have Cheerio methods available to them.
    afterOneLinkScrape: (cleanData) => {}, // Called after all data was collected from a link opened by this operation (if a given page has 10 links, it will be called 10 times, with the child data).
    beforeOneLinkScrape: (axiosResponse) => {}, // Called after a link's HTML was fetched, but BEFORE the child operations are performed on it (like collecting some data from it). Is passed the Axios response object. Notice that any modification to this object might result in unexpected behavior in the child operations of that page.
    afterScrape: (data) => {} // Called after all scraping associated with the current "OpenLinks" operation is completed (like opening 10 pages, and downloading all images from them). Notice that if this operation was added as a child (via "addOperation()") in more than one place, this callback will be called multiple times, each time with its corresponding data.
}
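The order in which these hooks fire can be sketched as follows. This is a simplified, hypothetical model of the per-link flow (synchronous here for brevity, whereas the real hooks may be async; `runOpenLinks`, `fetchPage`, and `scrapeChildren` are stand-ins, not library APIs):

```javascript
// Hypothetical model of an OpenLinks-style operation's hook order.
function runOpenLinks(links, fetchPage, scrapeChildren, hooks = {}) {
    const allData = [];
    if (hooks.getElementList) hooks.getElementList(links); // once, with the element list
    for (const link of links) {
        const response = fetchPage(link); // fetch the link's HTML
        if (hooks.beforeOneLinkScrape) hooks.beforeOneLinkScrape(response);
        const cleanData = scrapeChildren(response); // run the child operations
        if (hooks.afterOneLinkScrape) hooks.afterOneLinkScrape(cleanData);
        allData.push(cleanData);
    }
    if (hooks.afterScrape) hooks.afterScrape(allData); // once, with everything
    return allData;
}

const calls = [];
runOpenLinks(
    ['a.html', 'b.html'],
    (link) => ({ link, html: `<h1>${link}</h1>` }), // stand-in for an HTTP fetch
    (res) => res.link.toUpperCase(),                // stand-in for child operations
    {
        afterOneLinkScrape: (d) => calls.push(d),
        afterScrape: (all) => calls.push(all.length),
    }
);
console.log(calls); // [ 'A.HTML', 'B.HTML', 2 ]
```

The key point the sketch captures: afterOneLinkScrape fires once per link with that link's child data, while afterScrape fires once with the aggregate.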
FAQs
A web scraper for NodeJs
The npm package nodejs-web-scraper receives a total of 854 weekly downloads. As such, its popularity was classified as not popular.
We found that nodejs-web-scraper demonstrated an unhealthy version release cadence and project activity because the last version was released a year ago. It has 1 open source maintainer.