scrapman

Retrieve real (with JavaScript executed) HTML code from a URL, ultra fast, with support for loading multiple pages in parallel.

  • Latest version: 2.3.1 (published 27 Jan 2017)
  • Versions published: 10
  • Maintainers: 1
  • Weekly downloads: 6 (down 75%)
  • Registry: npm

# Scrapman

Ski-bi dibby dib yo da dub dub
Yo da dub dub
Ski-bi dibby dib yo da dub dub
Yo da dub dub

I'm the Scrapman!

### THE FASTEST SCRAPER EVER*... AND IT SUPPORTS PARALLEL REQUESTS (*arguably)

Scrapman is a blazingly fast real (with JavaScript executed) HTML scraper, built from the ground up to support parallel fetches. With it you can get the HTML for 50+ URLs in roughly 30 seconds.

On Node.js you can easily use `request` to fetch the HTML from a page, but what if the page you are trying to load is NOT a static HTML page, but has dynamic content added with JavaScript? What do you do then? Well, you use The Scrapman.

It uses Electron to dynamically load web pages into several `<webview>` elements within a single Chromium instance. This is why it fetches the HTML exactly as you would see it if you inspected the page with DevTools.

This is NOT a browser automation tool (yet); it's a Node module that gives you the processed HTML from a URL, focused on multiple parallel operations and speed.

## USAGE

1. Install it:

```bash
npm install scrapman -S
```

2. Require it:

```js
var scrapman = require("scrapman");
```

3. Use it (as many times as you need).

Single URL request:

```js
scrapman.load("http://google.com", function(results) {
    // results contains the HTML obtained from the URL
    console.log(results);
});
```
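Since `load()` hands you the fully rendered HTML as a plain string, you can feed it to any parser you like. A minimal sketch using cheerio (cheerio is a separate package, not part of scrapman, and is used here purely as an illustration):

```js
var scrapman = require("scrapman");
var cheerio = require("cheerio"); // assumed to be installed separately: npm install cheerio -S

scrapman.load("http://google.com", function(results) {
    // Parse the rendered HTML and pull out the page title.
    var $ = cheerio.load(results);
    console.log($("title").text());
});
```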

Parallel URL requests:

```js
// Yes, you can use it within a loop.
for (var i = 1; i <= 50; i++) {
    scrapman.load("https://www.website.com/page/" + i, function(results) {
        console.log(results);
    });
}
```
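Each callback fires independently as its page finishes loading, so if you need to know when the whole batch is done you have to track completion yourself. A minimal sketch, assuming only `scrapman.load()` from the package API; the counting and bookkeeping below are illustrative, not part of scrapman:

```js
var scrapman = require("scrapman");

var pending = 50;   // how many loads we are still waiting for
var pages = {};     // page number -> rendered HTML

for (var i = 1; i <= 50; i++) {
    (function(page) { // capture the page number for this iteration
        scrapman.load("https://www.website.com/page/" + page, function(results) {
            pages[page] = results;
            pending--;
            if (pending === 0) {
                console.log("All " + Object.keys(pages).length + " pages fetched");
            }
        });
    })(i);
}
```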

## API

### scrapman.load(url, callback)

#### url

Type: `String`

The URL from which the HTML code is going to be obtained.

#### callback(results)

Type: `Function`

The callback function to be executed when the loading is done. The loaded HTML will be in the results parameter.

### scrapman.configure(config)

#### config

The configuration object. It can set the following values:

  • `maxConcurrentOperations`: Integer - how many URLs can be loaded at the same time (the intensity of processing). Default: 50

  • `wait`: Integer - the number of milliseconds to wait before returning the HTML of a webpage after it has completely loaded. Default: 0
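For example, a call combining both options might look like this (the option names come from the list above; the specific values are illustrative, not recommendations):

```js
var scrapman = require("scrapman");

scrapman.configure({
    maxConcurrentOperations: 20, // load at most 20 URLs at the same time
    wait: 500                    // wait 500 ms after a page finishes loading before returning its HTML
});
```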

## Questions

Feel free to open issues to ask questions about using this package. PRs are very welcome and encouraged.

Spanish is spoken here (se habla español).

## License

MIT © Daniel Nieto
