# 🐿 linkinator

A super simple site crawler and broken link checker.

Behold my latest inator! The linkinator provides an API and CLI for crawling websites and validating links. It's got a ton of sweet features:
- 🔥 Easily perform scans on remote sites or local files
- 🔥 Scan any element that includes links, not just `<a href>`
- 🔥 Supports redirects, absolute links, relative links, all the things
- 🔥 Configure specific regex patterns to skip
- 🔥 Scan markdown files without transpilation
## Installation

```sh
$ npm install linkinator
```
Not into the whole node.js or npm thing? You can also download a standalone binary that bundles node, linkinator, and anything else you need. See releases.
## Command Usage

You can use this as a library, or as a CLI. Let's see the CLI!

```sh
$ linkinator LOCATIONS [ --arguments ]
```
### Positional arguments

#### LOCATIONS

Required. Either the URLs or the paths on disk to check for broken links. Supports multiple paths, and globs.
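For example, you could check several locations in a single run (the paths below are placeholders; note the quoted glob, explained in the Globbing note further down):

```sh
$ linkinator ./docs ./README.md "**/*.md"
```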
### Flags

#### --concurrency
The number of connections to make simultaneously. Defaults to 100.

#### --config
Path to the config file to use. Looks for `linkinator.config.json` by default.

#### --format, -f
Return the data in CSV or JSON format.

#### --help
Show this command.

#### --include, -i
List of urls in regexy form to include. The opposite of `--skip`.

#### --markdown
Automatically parse and scan markdown if scanning from a location on disk.

#### --recurse, -r
Recursively follow links on the same root domain.

#### --server-root
When scanning a local directory, customize the location on disk where the server is started. Defaults to the path passed in [LOCATIONS]. See the example after this list.

#### --silent
Only output broken links.

#### --skip, -s
List of urls in regexy form to not include in the check.

#### --timeout
Request timeout in ms. Defaults to 0 (no timeout).
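As promised above, here's a sketch of `--server-root` in action. The paths are hypothetical; the idea is that the static server serves the whole `./docs` tree, while the scan starts from a single file inside it:

```sh
$ linkinator ./docs/setup.md --server-root ./docs --markdown
```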
## Command Examples

You can run a shallow scan of a website for busted links:

```sh
$ npx linkinator http://jbeckwith.com
```

That was fun. What about local files? The linkinator will stand up a static web server for yinz:

```sh
$ npx linkinator ./docs
```

But that only gets the top level of links. Let's go deeper and do a full recursive scan!

```sh
$ npx linkinator ./docs --recurse
```

Aw, snap. I didn't want it to check those links. Let's skip 'em:

```sh
$ npx linkinator ./docs --skip www.googleapis.com
```

The `--skip` parameter will accept any regex! You can do more complex matching, or even tell it to only scan links with a given domain:

```sh
$ linkinator http://jbeckwith.com --skip '^(?!http://jbeckwith.com)'
```

Maybe you're going to pipe the output to another program. Use the `--format` option to get JSON or CSV!

```sh
$ linkinator ./docs --format CSV
```

Let's make sure the `README.md` in our repo doesn't have any busted links:

```sh
$ linkinator ./README.md --markdown
```

You know what, we better check all of the markdown files!

```sh
$ linkinator "**/*.md" --markdown
```
## Configuration file

You can pass options directly to the `linkinator` CLI, or you can define a config file. By default, `linkinator` will look for a `linkinator.config.json` file in the current working directory.

All options are optional. It should look like this:

```json
{
  "format": "json",
  "recurse": true,
  "silent": true,
  "concurrency": 100,
  "timeout": 0,
  "markdown": true,
  "skip": "www.googleapis.com"
}
```

To load config settings outside the CWD, you can pass the `--config` flag to the `linkinator` CLI:

```sh
$ linkinator --config /some/path/your-config.json
```
## API Usage

### linkinator.check(options)

Asynchronous method that runs a site wide scan. Options come in the form of an object that includes:

- `path` (string|string[]) - A fully qualified path to the url to be scanned, or the path(s) to the directory on disk that contains files to be scanned. *required*.
- `concurrency` (number) - The number of connections to make simultaneously. Defaults to 100.
- `port` (number) - When the `path` is provided as a local path on disk, the `port` on which to start the temporary web server. Defaults to a random high range order port.
- `recurse` (boolean) - By default, all scans are shallow. Only the top level links on the requested page will be scanned. By setting `recurse` to `true`, the crawler will follow all links on the page, and continue scanning links on the same domain for as long as it can go. Results are cached, so no worries about loops.
- `serverRoot` (string) - When scanning a local directory, customize the location on disk where the server is started. Defaults to the path passed in `path`.
- `timeout` (number) - By default, requests made by linkinator do not time out (or follow the settings of the OS). This option (in milliseconds) will fail requests after the configured amount of time.
- `markdown` (boolean) - Automatically parse and scan markdown if scanning from a location on disk.
- `linksToSkip` (array | function) - An array of regular expression strings that should be skipped, OR an async function that's called for each link with the link URL as its only argument. Return a Promise that resolves to `true` to skip the link or `false` to check it.
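To make the function form of `linksToSkip` concrete, here's a minimal sketch (the domain is just a placeholder) that skips every link pointing away from the site being scanned:

```js
const link = require('linkinator');

async function skipExternal() {
  const results = await link.check({
    path: 'http://example.com',
    recurse: true,
    // Called once per discovered link; resolve to true to skip it.
    linksToSkip: async url => !url.startsWith('http://example.com'),
  });
  console.log(`Passed: ${results.passed}`);
}

skipExternal();
```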
### linkinator.LinkChecker()

Constructor method that can be used to create a new `LinkChecker` instance. This is particularly useful if you want to receive events as the crawler crawls. Exposes the following events:

- `pagestart` (string) - Provides the url that the crawler has just started to scan.
- `link` (object) - Provides an object with:
  - `url` (string) - The url that was scanned
  - `state` (string) - The result of the scan. Potential values include `BROKEN`, `OK`, or `SKIPPED`.
  - `status` (number) - The HTTP status code of the request.
### Simple example

```js
const link = require('linkinator');

async function simple() {
  const results = await link.check({
    path: 'http://example.com'
  });
  console.log(`Passed: ${results.passed}`);
  console.log(results);
}
simple();
```
### Complete example

In most cases you're going to want to respond to events, as running the check command can kinda take a long time.

```js
const link = require('linkinator');

async function complex() {
  const checker = new link.LinkChecker();

  // Respond to the beginning of each new page being scanned.
  checker.on('pagestart', url => {
    console.log(`Scanning ${url}`);
  });

  // As each link is checked, log its result.
  checker.on('link', result => {
    console.log(`  ${result.url}`);
    console.log(`  ${result.state}`);
    console.log(`  ${result.status}`);
    console.log(`  ${result.parent}`);
  });

  const result = await checker.check({
    path: 'http://example.com',
  });

  console.log(result.passed ? 'PASSED :D' : 'FAILED :(');
  console.log(`Scanned total of ${result.links.length} links!`);

  const brokenLinks = result.links.filter(x => x.state === 'BROKEN');
  console.log(`Detected ${brokenLinks.length} broken links.`);
}

complex();
```
## Notes

### Using a proxy

This library supports proxies via the `HTTP_PROXY` and `HTTPS_PROXY` environment variables. This guide provides a nice overview of how to format and set these variables.
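As a sketch, a run through a proxy might look like this (the proxy URL is a placeholder for whatever your environment provides):

```sh
$ HTTPS_PROXY=http://proxy.example.com:3128 linkinator http://jbeckwith.com
```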
### Globbing

You may have noticed in the examples above that when using a glob, the pattern is encapsulated in quotes:

```sh
$ linkinator "**/*.md" --markdown
```

Without the quotes, some shells will attempt to expand the glob paths on their own. Various shells (bash, zsh) have different, somewhat unpredictable behaviors when left to their own devices. Using the quotes ensures consistent, predictable behavior by letting the library expand the pattern.
## License
MIT