# headless-crawler 👻

A crawler implemented using a headless browser (Chrome).
## Features

- Configurable concurrency
- Respects robots.txt (via robots-agent)
- Customisable handlers for filtering links, extracting content, sorting the queue and processing results

## Usage
```js
import puppeteer from 'puppeteer';
import {
  createHeadlessCrawler
} from 'headless-crawler';

const main = async () => {
  const browser = await puppeteer.launch();

  const headlessCrawler = createHeadlessCrawler({
    onResult: (resource) => {
      console.log(resource.content.title);
    },
    browser
  });

  await headlessCrawler.queue('http://gajus.com/');
};

main();
```
## Configuration
```js
type HeadlessCrawlerUserConfigurationType = {|
  +browser: PuppeteerBrowserType,
  +concurrency: number,
  +extractContent?: ExtractContentHandlerType,
  +filterLink?: FilterLinkHandlerType,
  +onPage?: PageHandlerType,
  +onResult?: ResultHandlerType,
  +sortQueuedLinks?: SortQueuedLinksHandlerType,
  +waitFor?: WaitForHandlerType
|};
```
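As a minimal sketch of how these options are passed to createHeadlessCrawler (the concurrency value here is illustrative):

```js
import puppeteer from 'puppeteer';
import {
  createHeadlessCrawler
} from 'headless-crawler';

const main = async () => {
  const browser = await puppeteer.launch();

  const headlessCrawler = createHeadlessCrawler({
    browser,
    // Number of pages to scrape concurrently (illustrative value).
    concurrency: 5
  });
};

main();
```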
### Default headlessCrawlerConfiguration.extractContent

The default extractContent function extracts the page title.
```js
(): ExtractContentHandlerType => {
  return `(() => {
    return {
      title: document.title
    };
  })();`;
};
```
### Default headlessCrawlerConfiguration.filterLink

The default filterLink function follows all URLs allowed by robots.txt, skips previously scraped URLs and stops descending once the configured maximum link depth is reached.
```js
type DefaultFilterLinkHandlerConfigurationType = {|
  +maxLinkDepth: number,
  +respectRobots: boolean
|};

type DefaultFilterLinkHandlerUserConfigurationType = {|
  +maxLinkDepth?: number,
  +respectRobots?: boolean
|};

(userConfiguration: DefaultFilterLinkHandlerUserConfigurationType): FilterLinkHandlerType => {
  const configuration: DefaultFilterLinkHandlerConfigurationType = {
    maxLinkDepth: 10,
    respectRobots: true,
    ...userConfiguration
  };

  let robotsAgent;

  if (configuration.respectRobots) {
    robotsAgent = createRobotsAgent();
  }

  return async (link, scrapedLinkHistory) => {
    if (link.linkDepth > configuration.maxLinkDepth) {
      return false;
    }

    if (configuration.respectRobots && robotsAgent.isRobotsAvailable(link.linkUrl) && !robotsAgent.isAllowed(link.linkUrl)) {
      return false;
    }

    for (const scrapedLink of scrapedLinkHistory) {
      if (scrapedLink.linkUrl === link.linkUrl) {
        return false;
      }
    }

    return true;
  };
};
```
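Since the factory accepts the user configuration shown above, the default behaviour can be adjusted when creating your own handler (the factory is exported as described under Create default handlers below). A sketch with illustrative values:

```js
import {
  createDefaultFilterLinkHandler
} from 'headless-crawler';

// Follow links at most 2 levels deep and ignore robots.txt (illustrative values).
const filterLink = createDefaultFilterLinkHandler({
  maxLinkDepth: 2,
  respectRobots: false
});
```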
Note: robots.txt support is implemented using robots-agent.
### Default headlessCrawlerConfiguration.onError

The default onError handler ignores the error.
```js
(): ErrorHandlerType => {
  return (error) => {};
};
```
### Default headlessCrawlerConfiguration.onResult

The default onResult handler logs the result and advances the crawler to the next URL.
```js
(): ResultHandlerType => {
  return (scrapeResult) => {
    log.debug({
      scrapeResult
    }, 'new result');

    return true;
  };
};
```
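Because the return value controls whether the crawl continues, a custom onResult can collect results and end the crawl early. A sketch, assuming that returning false stops the crawl (the default handler signals continuation by returning true); the 100-result limit is illustrative:

```js
const scrapeResults = [];

const onResult = (scrapeResult) => {
  scrapeResults.push(scrapeResult);

  // Assumption: returning false stops the crawl; 100 is an illustrative limit.
  return scrapeResults.length < 100;
};
```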
### Default headlessCrawlerConfiguration.sortQueuedLinks

The default sortQueuedLinks handler returns the queued links in their original order.
```js
(): SortQueuedLinksHandlerType => {
  return (links) => {
    return links;
  };
};
```
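A custom sortQueuedLinks handler can reorder the queue, e.g. a sketch that prioritises shallower links (linkDepth is the same property used by the default filterLink handler):

```js
const sortQueuedLinks = (links) => {
  // Crawl shallower links before deeper ones.
  return [...links].sort((a, b) => {
    return a.linkDepth - b.linkDepth;
  });
};
```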
### Default headlessCrawlerConfiguration.waitFor

The default waitFor handler waits until navigation settles, i.e. there are no more than 2 network connections for at least 500 ms (networkidle2).
```js
(): WaitForHandlerType => {
  return (page) => {
    return page.waitForNavigation({
      waitUntil: 'networkidle2'
    });
  };
};
```
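If the pages you crawl never become network-idle (e.g. they poll continuously), a custom waitFor handler can wait for something else instead, e.g. a sketch that waits for a selector (#content is a hypothetical placeholder):

```js
const waitFor = (page) => {
  // Resolve once the main content element appears in the DOM.
  return page.waitForSelector('#content');
};
```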
### Create default handlers
You can import factory functions to create default handlers:
```js
import {
  createDefaultExtractContentHandler,
  createDefaultFilterLinkHandler,
  createDefaultResultHandler,
  createDefaultSortQueuedLinksHandler,
  createDefaultWaitForHandler
} from 'headless-crawler';
```
This is useful for extending the default handlers, e.g.
```js
const defaultFilterHandler = createDefaultFilterLinkHandler();

const myCustomFilterLinkHandler = (link, scrapedLinkHistory) => {
  if (link.linkUrl.startsWith('https://google.com/')) {
    return false;
  }

  return defaultFilterHandler(link, scrapedLinkHistory);
};
```
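The resulting handler is then passed to createHeadlessCrawler like any other configuration value, e.g. (continuing the sketch above, with browser as in the Usage example):

```js
const headlessCrawler = createHeadlessCrawler({
  browser,
  filterLink: myCustomFilterLinkHandler
});
```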
## Recipes

### Inject jQuery
Use extractContent to manipulate the Puppeteer Page object after it has been determined to be ready (see waitFor), and to create the function used to extract content from the website.
```js
import puppeteer from 'puppeteer';
import {
  createHeadlessCrawler
} from 'headless-crawler';

const main = async () => {
  const browser = await puppeteer.launch();

  const headlessCrawler = createHeadlessCrawler({
    browser,
    extractContent: async (page) => {
      await page.addScriptTag({
        url: 'https://code.jquery.com/jquery-3.3.1.min.js'
      });

      return `(() => {
        return $('title').text();
      })()`;
    }
  });
};

main();
```
### Configure request parameters

Request parameters (such as geolocation, user-agent and viewport) can be configured using the onPage handler, e.g.
```js
import puppeteer from 'puppeteer';
import {
  createHeadlessCrawler
} from 'headless-crawler';

const main = async () => {
  const browser = await puppeteer.launch();

  const onPage = async (page, scrapeConfiguration) => {
    await page.setGeolocation({
      latitude: 59.95,
      longitude: 30.31667
    });
    await page.setUserAgent('headless-crawler');
  };

  const headlessCrawler = createHeadlessCrawler({
    browser,
    onPage
  });
};

main();
```
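The viewport mentioned above can be configured the same way using Puppeteer's page.setViewport (the dimensions here are illustrative), e.g.

```js
const onPage = async (page) => {
  // Emulate a full-HD viewport (illustrative dimensions).
  await page.setViewport({
    height: 1080,
    width: 1920
  });
};
```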
### Capture a screenshot

The extractContent handler can capture a screenshot of the website as it appears just before the content-extraction function is executed, e.g.
```js
const extractContent = async (page) => {
  await page.screenshot({
    fullPage: true,
    path: 'screenshot.png'
  });

  return `(() => {
    return {
      title: document.title
    };
  })()`;
};
```
Refer to the Puppeteer page#screenshot documentation for other options.
Configure a proxy
Note: These instructions are not specific headless-crawler; these are generic instructions for instructing Puppeteer to use HTTP proxy.
You must:
- Configure
ignoreHTTPSErrors
- Configure
--proxy-server
Example:
```js
import puppeteer from 'puppeteer';
import {
  createHeadlessCrawler
} from 'headless-crawler';

const main = async () => {
  const browser = await puppeteer.launch({
    args: [
      '--proxy-server=http://127.0.0.1:8080'
    ],
    ignoreHTTPSErrors: true
  });

  const headlessCrawler = createHeadlessCrawler({
    onResult: (resource) => {
      console.log(resource.content.title);
    },
    browser
  });

  await headlessCrawler.queue('http://gajus.com/');
};

main();
```
## Types

This package uses Flow type annotations.
Refer to ./src/types.js for method parameter and result types.
## Logging

This package uses the roarr logger to log the program's state.

Export the `ROARR_LOG=true` environment variable to enable printing of logs to stdout.

Use the roarr-cli program to pretty-print the logs.
## FAQ

### What makes headless-crawler different from headless-chrome-crawler?
headless-chrome-crawler is the only other headless crawler in the Node.js ecosystem.
It appears that headless-chrome-crawler is no longer maintained. At the time of this writing, the author of headless-chrome-crawler has not made public contributions in over 6 months and the package includes bugs as a result of hardcoded dependency versions.
Maintenance issues aside, headless-chrome-crawler is a feature-rich, configuration-driven framework. In contrast, headless-crawler provides a bare-bones framework for navigating a website and extracting its content. Consumers of the framework can extend the functionality using the provided handlers and by consuming the Puppeteer API directly (e.g. see Capture a screenshot).