# Sitemap Generator

Easily create XML sitemaps for your website.
Generates a sitemap by crawling your site. Uses streams to efficiently write the sitemap to disk and runs asynchronously to avoid blocking the thread. It is capable of creating multiple sitemaps if a threshold is reached, and it respects robots.txt and robots meta tags.
## Install
This module is available on npm.
```bash
$ npm install -S sitemap-generator
```
This module only runs with Node.js and is not meant to be used in the browser.
```js
const SitemapGenerator = require('sitemap-generator');
```
## Usage
```js
const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator('http://example.com', {
  stripQuerystring: false
});

generator.on('done', () => {
  // sitemaps created
});

generator.start();
```
The crawler will fetch all folder URLs and file types parsed by Google. If a `robots.txt` is present, it is taken into account and its rules are applied to each URL to decide whether it should be added to the sitemap. The crawler will not follow links on a page if a robots meta tag with the value `nofollow` is present, and it ignores a page completely if the `noindex` rule is present. The crawler is also able to apply the `<base>` value to found links.
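To see which URLs were excluded by `robots.txt` or robots meta rules, you can listen for the `ignore` event (described in the Events section below). A minimal sketch:

```js
const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator('http://example.com');

// logs every URL that was skipped because of a disallow rule or a noindex meta tag
generator.on('ignore', (url) => {
  console.log('skipped by robots rules:', url);
});

generator.start();
```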
## API
The generator offers straightforward methods to start and stop it. You can also query some information about status and output.
### getPaths()

Returns an array of paths to the generated sitemaps. The array is empty until the crawler is done.
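For example, you can read the paths once the crawler has finished (building on the generator from the Usage section; the output path shown is illustrative):

```js
generator.on('done', () => {
  console.log(generator.getPaths()); // e.g. [ '/var/www/project/sitemap.xml' ]
});
```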
### getStats()

Returns an object with information about fetched URLs. It is updated live during the crawling process.

```js
{
  added: 0,
  ignored: 0,
  errored: 0
}
```
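If you want to watch progress while the crawler is running, you could poll these counters. A small sketch (the interval length is arbitrary):

```js
const progress = setInterval(() => {
  const { added, ignored, errored } = generator.getStats();
  console.log(`added: ${added}, ignored: ${ignored}, errored: ${errored}`);
}, 5000);

generator.on('done', () => clearInterval(progress));
```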
### getStatus()

Returns the status of the generator. Possible values are `waiting`, `started`, `stopped` and `done`.
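A minimal sketch that logs the status once the crawler finishes:

```js
generator.on('done', () => {
  console.log(generator.getStatus()); // 'done'
});
```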
### start()

Starts the crawler asynchronously and writes the sitemap to disk.
### stop()
Stops the running crawler and halts the sitemap generation.
### queueURL(url)

Adds a URL to the crawler's queue. Useful to help the crawler fetch pages it can't find by following links.
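A small sketch, where `/unlinked-page` stands in for a page that is not reachable by following links:

```js
// manually queue a page the crawler cannot discover on its own
generator.queueURL('http://example.com/unlinked-page');

generator.start();
```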
## Options
You can provide some options to alter the behaviour of the crawler.
```js
const path = require('path');

const generator = SitemapGenerator('http://example.com', {
  crawlerMaxDepth: 0,
  filepath: path.join(process.cwd(), 'sitemap.xml'),
  maxEntriesPerFile: 50000,
  stripQuerystring: true
});
```
### authUser

Type: `string`
Default: `undefined`

Provides a username for basic authentication. Requires the `authPass` option.
### authPass

Type: `string`
Default: `undefined`

Password for basic authentication. Has to be used together with the `authUser` option.
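A minimal sketch of crawling a site behind basic authentication (the credentials are placeholders):

```js
const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator('http://example.com', {
  authUser: 'admin',      // placeholder username
  authPass: 'supersecret' // placeholder password
});

generator.start();
```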
### changeFreq

Type: `string`
Default: `undefined`

If defined, adds a `<changefreq>` line to each URL in the sitemap. Possible values are `always`, `hourly`, `daily`, `weekly`, `monthly`, `yearly` and `never`. All other values are ignored.
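For example, to mark every URL as changing daily:

```js
const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator('http://example.com', {
  changeFreq: 'daily'
});
```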
### crawlerMaxDepth

Type: `number`
Default: `0`

Defines a maximum distance from the original request at which resources will be fetched.
### filepath

Type: `string`
Default: `./sitemap.xml`

Filepath for the new sitemap. If multiple sitemaps are created, `part_$index` is appended to each filename.
### httpAgent

Type: `HTTPAgent`
Default: `http.globalAgent`

Controls which HTTP agent to use. This is useful if you want to configure HTTP connections through an HTTP/HTTPS proxy (see http-proxy-agent).
### httpsAgent

Type: `HTTPAgent`
Default: `https.globalAgent`

Controls which HTTPS agent to use. This is useful if you want to configure HTTPS connections through an HTTP/HTTPS proxy (see https-proxy-agent).
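A sketch of routing the crawler through a proxy using the `http-proxy-agent` and `https-proxy-agent` packages (the proxy URL is a placeholder, and the exact import style depends on the version of those packages you install):

```js
const { HttpProxyAgent } = require('http-proxy-agent');
const { HttpsProxyAgent } = require('https-proxy-agent');
const SitemapGenerator = require('sitemap-generator');

const proxy = 'http://proxy.internal:8080'; // placeholder proxy address

const generator = SitemapGenerator('http://example.com', {
  httpAgent: new HttpProxyAgent(proxy),
  httpsAgent: new HttpsProxyAgent(proxy)
});

generator.start();
```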
### lastMod

Type: `boolean`
Default: `false`

Whether to add a `<lastmod>` line to each URL in the sitemap, filled with today's date.
### maxEntriesPerFile

Type: `number`
Default: `50000`

Google limits the maximum number of URLs in one sitemap to 50000. If this limit is reached, sitemap-generator creates another sitemap. A sitemap index file will be created as well.
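For a very large site you might lower the threshold yourself; a small sketch (the generated files are named according to the `filepath` option described above):

```js
const path = require('path');
const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator('http://example.com', {
  maxEntriesPerFile: 10000,
  filepath: path.join(process.cwd(), 'sitemap.xml')
});
```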
### priorityMap

Type: `array`
Default: `[]`

If provided, adds a `<priority>` line to each URL in the sitemap. Each value in the priorityMap array corresponds to the depth of the URL being added. For example, the priority value given to a URL equals `priorityMap[depth - 1]`. If a URL's depth is greater than the length of the priorityMap array, the last value in the array will be used. Valid values are between `1.0` and `0.0`.
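A sketch of a map that ranks pages by how deep they sit in the site, following the `priorityMap[depth - 1]` rule above:

```js
const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator('http://example.com', {
  // depth 1 gets 1.0, depth 2 gets 0.8, depth 3 gets 0.6,
  // deeper pages fall back to the last value (0.4)
  priorityMap: [1.0, 0.8, 0.6, 0.4]
});
```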
### stripQuerystring

Type: `boolean`
Default: `true`

Whether to strip the query string from URLs. If set to `false`, URLs with query strings like `http://www.example.com/?foo=bar` are treated as individual pages and added to the sitemap.
### userAgent

Type: `string`
Default: `Node/SitemapGenerator`

Sets the User-Agent used by the crawler.
### timeout

Type: `number`
Default: `300000`

The maximum time in milliseconds before continuing to gather URLs.
## Events
The Sitemap Generator emits several events which can be listened to.
### add

Triggered when the crawler successfully added a resource to the sitemap. Passes the URL as argument.

```js
generator.on('add', (url) => {
  // url was added to the sitemap
});
```
### done

Triggered when the crawler has finished and the sitemap is created. Provides statistics as the first argument. The stats are the same as returned by `getStats()`.

```js
generator.on('done', (stats) => {
  // sitemaps created
});
```
### error

Triggered if an error occurred while fetching a URL. Passes an object with the HTTP status code, a message and the URL as argument.

```js
generator.on('error', (error) => {
  console.log(error);
});
```
### ignore

Triggered if a URL matches a disallow rule in the `robots.txt` file or a robots meta tag with `noindex` is present. The URL will not be added to the sitemap. Passes the ignored URL as argument.

```js
generator.on('ignore', (url) => {
  // url was not added to the sitemap
});
```
## License
MIT © Lars Graubner