latest version: 0.9.0 (published on npm)
website-scrap-engine

Configurable website scraper library in TypeScript. Consumers provide a DownloadOptions config (which includes a ProcessingLifeCycle) and instantiate a downloader to recursively scrape websites to local disk.

Features

  • Configurable processing pipeline with hook arrays at every stage
  • Single-thread and multi-thread (native worker_threads) downloaders
  • HTML, CSS, SVG, and sitemap parsing with automatic link discovery
  • CSS url() extraction and rewriting
  • srcset, Open Graph meta tags, inline styles, and SVG xlink:href support
  • Automatic URL-to-relative-path rewriting so saved sites work offline
  • Streaming download support for large binary resources
  • PQueue-based concurrency with runtime adjustment
  • URL deduplication with configurable search-param stripping
  • Configurable retry with exponential backoff, jitter, and Retry-After header support
  • Local file:// source support for re-processing previously saved sites
  • Configurable logging via log4js with dedicated categories (skip, retry, error, notFound, etc.)
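
The retry feature above combines exponential backoff, jitter, and the Retry-After header. A minimal sketch of how such a delay computation typically works (parameter names `base` and `cap` are illustrative, not the library's actual option names):

```javascript
// Sketch of retry-delay computation: an explicit Retry-After header wins;
// otherwise exponential backoff capped at `cap`, with full jitter.
function retryDelayMs(attempt, retryAfterHeader, base = 500, cap = 30_000) {
  if (retryAfterHeader) {
    const seconds = Number(retryAfterHeader);
    // Non-numeric Retry-After values (e.g. HTTP dates) fall through to backoff.
    if (Number.isFinite(seconds)) return seconds * 1000;
  }
  const exp = Math.min(cap, base * 2 ** attempt);
  return Math.random() * exp; // full jitter: uniform in [0, exp)
}
```

In the library itself this is driven by got's retry hooks; the sketch only shows the delay math.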

Installation

npm install website-scrap-engine

Requires Node.js >= 18.17.0.

Usage

The downloader takes a path (or file:// URL) to a module that default-exports a DownloadOptions object. This pattern allows worker threads to independently load the same configuration.

Step 1: Create an options module (e.g. my-options.js)

import {lifeCycle, options, resource} from 'website-scrap-engine';

const {defaultLifeCycle} = lifeCycle;
const {defaultDownloadOptions} = options;
const {ResourceType} = resource;

const lc = defaultLifeCycle();

// Example: skip binary resources deeper than depth 2
lc.processBeforeDownload.push((res) => {
  if (res.depth > 2 && res.type === ResourceType.Binary) return;
  return res;
});

export default defaultDownloadOptions({
  ...lc,
  localRoot: '/path/to/save',
  maxDepth: 3,
  initialUrl: ['https://example.com'],
});

Step 2: Create and run the downloader

import path from 'path';
import {downloader} from 'website-scrap-engine';

const {SingleThreadDownloader} = downloader;

const d = new SingleThreadDownloader(
  'file://' + path.resolve('my-options.js')
);
d.start();
d.onIdle().then(() => d.dispose());

For CPU-intensive workloads, use MultiThreadDownloader instead (see Multi-Thread Processing).

You can also pass override options as the second argument to the downloader constructor, which are merged into the options module's export:

new SingleThreadDownloader('file://' + path.resolve('my-options.js'), {
  localRoot: '/different/path',
  concurrency: 8,
});

Adapter Helpers

The library provides adapter functions in lifeCycle.adapter for common customization patterns:

| Adapter | Stage | Description |
| --- | --- | --- |
| skipProcess(fn) | linkRedirect | Skip URLs matching a predicate |
| dropResource(fn) | processBeforeDownload | Mark matching resources as discard-only (replace link but don't download) |
| preProcess(fn) | processBeforeDownload | Inspect/modify resources before download |
| requestRedirect(fn) | processBeforeDownload | Rewrite the download URL |
| redirectFilter(fn) | processAfterDownload | Rewrite or discard redirect URLs |
| processHtml(fn) | processAfterDownload | Transform the parsed HTML (cheerio $) |
| processHtmlAsync(fn) | processAfterDownload | Async version of processHtml |

import {lifeCycle, resource} from 'website-scrap-engine';

const {ResourceType} = resource;
const lc = lifeCycle.defaultLifeCycle();

// Skip all URLs containing "/api/"
lc.linkRedirect.push(lifeCycle.adapter.skipProcess(
  (url) => url.includes('/api/')
));

// Drop images from download but still rewrite their links
lc.processBeforeDownload.push(lifeCycle.adapter.dropResource(
  (res) => res.type === ResourceType.Binary && res.url.endsWith('.png')
));

Architecture

Pipeline Life Cycle

Resources are processed through a sequential pipeline of hook arrays. Each stage is an array of functions executed in order. Returning void/undefined from any function discards the resource from that stage onward.

init (once per downloader/worker startup)
 |
 v
URL
 |
 v
1. linkRedirect -----> skip or redirect URLs before processing
 |
 v
2. detectResourceType -> determine type (Html, Css, Binary, Svg, SiteMap, etc.)
 |
 v
3. createResource ----> build a Resource with save paths and relative replacement paths
 |
 v
4. processBeforeDownload -> filter/modify resources; link replacement in parent happens after this
 |
 v
5. download ----------> fetch resource via HTTP (loop ends early once body is set)
 |
 v
6. processAfterDownload -> parse content, discover child resources via submit() callback
 |
 v
7. saveToDisk --------> write to local filesystem
 |
 v
dispose (once per downloader shutdown / worker exit)

Consumers extend the pipeline by prepending or appending functions to any stage array via defaultLifeCycle(). See Usage for examples.
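
The discard semantics above can be modeled as a small executor: each stage is an array of hooks, run in order, and a void/undefined return short-circuits the rest. This is a simplified model, not the library's actual PipelineExecutor:

```javascript
// Simplified model of one pipeline stage: run hooks in order, stop and
// discard as soon as a hook returns void/undefined.
async function runStage(hooks, resource, ...args) {
  let current = resource;
  for (const hook of hooks) {
    current = await hook(current, ...args);
    if (current == null) return undefined; // discarded from this stage onward
  }
  return current;
}
```

For example, `await runStage(lc.processBeforeDownload, res)` would either yield the (possibly modified) resource or `undefined` if any hook dropped it.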

Default Pipeline Handlers

| Stage | Default handlers |
| --- | --- |
| linkRedirect | skipLinks - filters out non-HTTP URI schemes (mailto, javascript, data, etc.) |
| detectResourceType | detectResourceType - infers type from element/context |
| createResource | createResource - builds Resource with URL resolution, save path, and replace path |
| download | downloadResource, downloadStreamingResource, readOrCopyLocalResource |
| processAfterDownload | processRedirectedUrl, processHtml, processHtmlMetaRefresh, processSvg, processCss, processSiteMap |
| saveToDisk | saveHtmlToDisk, saveResourceToDisk |

Resource Types

Defined in ResourceType enum:

| Type | Encoding | Description |
| --- | --- | --- |
| Binary | null | Not parsed, saved as-is |
| Html | utf8 | Parsed with cheerio, links discovered and rewritten |
| Css | utf8 | CSS url() references extracted and rewritten |
| CssInline | utf8 | Inline `<style>` blocks and style attributes |
| SiteMap | utf8 | URLs discovered but not rewritten |
| Svg | utf8 | Parsed with cheerio (same as HTML) |
| StreamingBinary | null | Streamed directly to disk, for large files |

HTML Source Definitions

The scraper discovers linked resources from HTML using configurable source definitions. The defaults cover:

  • Images: img[src], img[srcset], picture source[srcset]
  • Styles: link[rel="stylesheet"], <style> blocks, [style] attributes
  • Scripts: script[src]
  • Links: a[href], frame[src], iframe[src]
  • Media: video[src], video[poster], audio[src], source[src], track[src]
  • SVG: *[xlink:href], *[href]
  • Meta: meta[property="og:image"], og:audio, og:video and their variants
  • Other: embed[src], object[data], input[src], [background], link[rel*="icon"], link[rel*="preload"]

Override via options.sources with an array of {selector, attr, type} definitions.
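
For instance, a hypothetical extension covering lazy-loaded images might look like this (the `{selector, attr, type}` shape follows the README; `data-src` is an illustrative convention, not one of the defaults listed above):

```javascript
// Hypothetical extra source definitions to pass via options.sources.
import {resource} from 'website-scrap-engine';

const {ResourceType} = resource;

const extraSources = [
  // Lazy-loaded images using a data-src attribute
  {selector: 'img[data-src]', attr: 'data-src', type: ResourceType.Binary},
  // Web-app manifests
  {selector: 'link[rel="manifest"]', attr: 'href', type: ResourceType.Binary},
];
```

Whether `options.sources` replaces the defaults outright or should include them is worth checking in the library's source before relying on this.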

Key Abstractions

  • Resource (src/resource.ts) - Central data object carrying URL, save path, replacement path, body, and metadata. RawResource is the serializable subset used for cross-thread communication.
  • PipelineExecutor (interface in src/life-cycle/pipeline-executor.ts, impl in src/downloader/pipeline-executor-impl.ts) - Orchestrates life cycle execution. createAndProcessResource() runs stages 1-4 in one call.
  • AbstractDownloader (src/downloader/main.ts) - Base class with PQueue-based concurrency, URL deduplication, and the download loop.
  • SingleThreadDownloader (src/downloader/single.ts) - Runs all pipeline stages in the main thread.
  • MultiThreadDownloader (src/downloader/multi.ts) - Downloads in main thread, sends to worker pool for post-processing.

Multi-Thread Processing

Use multi-thread processing when post-download work (HTML/CSS parsing, link discovery) is CPU-intensive.

Main thread:

  • Runs the download queue with PQueue concurrency control
  • Executes stages 1-5 (linkRedirect through download)
  • Transfers downloaded resources to worker threads
  • Receives discovered child resources back and enqueues non-duplicates

Worker threads:

  • Receive downloaded resources from the main thread
  • Execute stages 6-7 (processAfterDownload + saveToDisk)
  • Parse HTML/CSS/SVG, discover child resources
  • Run stages 1-4 on discovered children to prepare them
  • Send prepared child resources back to the main thread as RawResource[]

Worker count defaults to Math.min(concurrency, workerCount). The worker pool uses a 2-pass water-fill algorithm to balance tasks across workers by load.
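
The water-fill idea can be sketched standalone (a simplified model of the balancing, not the library's code): pass 1 computes the target "water level" given current worker loads, pass 2 tops each worker up to that level and spreads any remainder:

```javascript
// Simplified 2-pass water-fill: distribute `taskCount` unit tasks across
// workers (loads.length > 0) so their total loads even out.
function waterFill(loads, taskCount) {
  const assigned = loads.map(() => 0);
  let remaining = taskCount;
  // Pass 1: level if tasks were poured evenly over current loads.
  const level = (loads.reduce((a, b) => a + b, 0) + taskCount) / loads.length;
  // Pass 2a: fill each worker up to the level.
  loads.forEach((load, i) => {
    const room = Math.max(0, Math.floor(level - load));
    const take = Math.min(room, remaining);
    assigned[i] += take;
    remaining -= take;
  });
  // Pass 2b: round-robin any remainder left by the flooring.
  for (let i = 0; remaining > 0; i = (i + 1) % loads.length) {
    assigned[i]++;
    remaining--;
  }
  return assigned;
}
```

So a batch of 4 tasks with worker loads `[0, 4]` all goes to the idle worker, leveling both at 4.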

Logging

The library uses log4js with dedicated logger categories:

| Logger | Purpose |
| --- | --- |
| skip | Resources filtered/discarded at any pipeline stage |
| skipExternal | External resources skipped by scope |
| retry | HTTP retry attempts with backoff details |
| error | Download and processing errors |
| notFound | 404 responses |
| request / response | HTTP request/response logging |
| complete | Successfully processed resources |
| mkdir | Directory creation |
| adjustConcurrency | Runtime concurrency changes |

Configure logging via options.configureLogger and options.logSubDir.
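
A minimal sketch of tuning these categories with log4js's standard configure() API (the appender and level choices here are illustrative; how this plugs into options.configureLogger depends on the signature the library expects):

```javascript
import log4js from 'log4js';

// Illustrative setup: route retry/error categories to stderr and
// silence the chatty request/response loggers.
log4js.configure({
  appenders: {out: {type: 'stdout'}, err: {type: 'stderr'}},
  categories: {
    default: {appenders: ['out'], level: 'info'},
    retry: {appenders: ['err'], level: 'warn'},
    error: {appenders: ['err'], level: 'error'},
    request: {appenders: ['out'], level: 'off'},
    response: {appenders: ['out'], level: 'off'},
  },
});
```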

Key Dependencies

  • cheerio - HTML/SVG parsing and manipulation
  • got - HTTP client with retry logic
  • p-queue - Download concurrency control
  • urijs - URL resolution and path generation
  • css-url-parser - CSS url() extraction
  • srcset - srcset attribute parsing

License

ISC

Keywords

typescript

Package last updated on 03 Apr 2026