@crawlee/http - npm Package Compare versions

Comparing version 3.0.4-beta.8 to 3.0.4-beta.9


package.json
{
"name": "@crawlee/http",
- "version": "3.0.4-beta.8",
+ "version": "3.0.4-beta.9",
"description": "The scalable web crawling and scraping library for JavaScript/Node.js. Enables development of data extraction and web automation jobs (not only) with headless Chrome and Puppeteer.",

@@ -58,4 +58,4 @@ "engines": {

"@apify/utilities": "^2.0.0",
"@crawlee/basic": "^3.0.4-beta.8",
"@crawlee/types": "^3.0.4-beta.8",
"@crawlee/basic": "^3.0.4-beta.9",
"@crawlee/types": "^3.0.4-beta.9",
"@types/content-type": "^1.1.5",

@@ -62,0 +62,0 @@ "content-type": "^1.0.4",
README.md

@@ -9,3 +9,3 @@ <h1 align="center">

<br>
- <small>The scalable web crawling and scraping library for JavaScript</small>
+ <small>The web scraping and browser automation library</small>
</h1>

@@ -20,55 +20,34 @@

- >👉👉👉 Crawlee is the successor to [Apify SDK](https://sdk.apify.com). 🎉 Fully rewritten in **TypeScript** for a better developer experience, and with even more powerful anti-blocking features. The interface is almost the same as Apify SDK so upgrading is a breeze. Read [the upgrading guide](https://crawlee.dev/docs/upgrading/upgrading-to-v3) to learn about the changes. 👈👈👈
+ > ℹ️ Crawlee is the successor to [Apify SDK](https://sdk.apify.com). 🎉 Fully rewritten in **TypeScript** for a better developer experience, and with even more powerful anti-blocking features. The interface is almost the same as Apify SDK so upgrading is a breeze. Read [the upgrading guide](https://crawlee.dev/docs/upgrading/upgrading-to-v3) to learn about the changes. ℹ️
- Crawlee simplifies the development of web crawlers, scrapers, data extractors and web automation jobs. It provides tools to manage and automatically scale a pool of headless browsers, to maintain queues of URLs to crawl, store crawling results to a local filesystem or into the cloud, rotate proxies and much more. Crawlee is available as the [`crawlee`](https://www.npmjs.com/package/crawlee) NPM package. It can be used either stand-alone in your own applications or in [actors](https://docs.apify.com/actor) running on the [Apify Cloud](https://apify.com/).
+ Crawlee covers your crawling and scraping end-to-end and **helps you build reliable scrapers. Fast.**
- **View full documentation, guides and examples on the [Crawlee project website](https://crawlee.dev)**
+ Your crawlers will appear human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data, and store it to disk or cloud while staying configurable to suit your project's needs.
- > Would you like to work with us on Crawlee or similar projects? [We are hiring!](https://apify.com/jobs#senior-node.js-engineer)
+ Crawlee is available as the [`crawlee`](https://www.npmjs.com/package/crawlee) NPM package.
- ## Motivation
+ > 👉 **View full documentation, guides and examples on the [Crawlee project website](https://crawlee.dev)** 👈
- Thanks to tools like [Playwright](https://github.com/microsoft/playwright), [Puppeteer](https://github.com/puppeteer/puppeteer) or [Cheerio](https://www.npmjs.com/package/cheerio), it is easy to write Node.js code to extract data from web pages. But eventually things will get complicated. For example, when you try to:
+ ## Installation
- - Perform a deep crawl of an entire website using a persistent queue of URLs.
- - Run your scraping code on a list of 100k URLs in a CSV file, without losing any data when your code crashes.
- - Rotate proxies to hide your browser origin and keep user-like sessions.
- - Disable browser fingerprinting protections used by websites.
+ We recommend visiting the [Introduction tutorial](https://crawlee.dev/docs/introduction) in Crawlee documentation for more information.
- Python has [Scrapy](https://scrapy.org/) for these tasks, but there was no such library for **JavaScript, the language of the web**. The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a browser.
+ > Crawlee requires **Node.js 16 or higher**.
- The goal of Crawlee is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't reinvent the wheel every time you need data from the web, and focus on writing code specific to the target website, rather than developing commonalities.
+ ### With Crawlee CLI
- ## Overview
+ The fastest way to try Crawlee out is to use the **Crawlee CLI** and choose the **Getting started example**. The CLI will install all the necessary dependencies and add boilerplate code for you to play with.
- Crawlee is available as the [`crawlee`](https://www.npmjs.com/package/crawlee) NPM package and is also available via `@crawlee/*` packages. It provides the following tools:
+ ```bash
+ npx crawlee create my-crawler
+ ```
- - [`CheerioCrawler`](https://crawlee.dev/api/cheerio-crawler/class/CheerioCrawler) - Enables the parallel crawling of a large number of web pages using the [cheerio](https://www.npmjs.com/package/cheerio) HTML parser. This is the most efficient web crawler, but it does not work on websites that require JavaScript. Available also under `@crawlee/cheerio` package.
+ ```bash
+ cd my-crawler
+ npm start
+ ```
- - [`PuppeteerCrawler`](https://crawlee.dev/api/puppeteer-crawler/class/PuppeteerCrawler) - Enables the parallel crawling of a large number of web pages using the headless Chrome browser and [Puppeteer](https://github.com/puppeteer/puppeteer). The pool of Chrome browsers is automatically scaled up and down based on available system resources. Available also under `@crawlee/puppeteer` package.
+ ### Manual installation
+ If you prefer adding Crawlee **into your own project**, try the example below. Because it uses `PlaywrightCrawler` we also need to install [Playwright](https://playwright.dev). It's not bundled with Crawlee to reduce install size.
- - [`PlaywrightCrawler`](https://crawlee.dev/api/playwright-crawler/class/PlaywrightCrawler) - Unlike `PuppeteerCrawler` you can use [Playwright](https://github.com/microsoft/playwright) to manage almost any headless browser. It also provides a cleaner and more mature interface while keeping the ease of use and advanced features. Available also under `@crawlee/playwright` package.
- - [`BasicCrawler`](https://crawlee.dev/api/basic-crawler/class/BasicCrawler) - Provides a simple framework for the parallel crawling of web pages whose URLs are fed either from a static list or from a dynamic queue of URLs. This class serves as a base for the more specialized crawlers above. Available also under `@crawlee/basic` package.
- - [`RequestList`](https://crawlee.dev/api/core/class/RequestList) - Represents a list of URLs to crawl. The URLs can be passed in code or in a text file hosted on the web. The list persists its state so that crawling can resume when the Node.js process restarts. Available also under `@crawlee/core` package.
- - [`RequestQueue`](https://crawlee.dev/api/core/class/RequestQueue) - Represents a queue of URLs to crawl, which is stored either in memory, on a local filesystem, or in the [Apify Cloud](https://apify.com). The queue is used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders. Available also under `@crawlee/core` package.
- - [`Dataset`](https://crawlee.dev/api/core/class/Dataset) - Provides a store for structured data and enables their export to formats like JSON, JSONL, CSV, XML, Excel or HTML. The data is stored on a local filesystem or in the Apify Cloud. Datasets are useful for storing and sharing large tabular crawling results, such as a list of products or real estate offers. Available also under `@crawlee/core` package.
- - [`KeyValueStore`](https://crawlee.dev/api/core/class/KeyValueStore) - A simple key-value store for arbitrary data records or files, along with their MIME content type. It is ideal for saving screenshots of web pages, PDFs or to persist the state of your crawlers. The data is stored on a local filesystem or in the Apify Cloud. Available also under `@crawlee/core` package.
- - [`AutoscaledPool`](https://crawlee.dev/api/core/class/AutoscaledPool) - Runs asynchronous background tasks, while automatically adjusting the concurrency based on free system memory and CPU usage. This is useful for running web scraping tasks at the maximum capacity of the system. Available also under `@crawlee/core` package.
- Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus take advantage of its pool of proxies, job scheduler, data storage, etc. For more information, see the [Crawlee Programmer's Reference](https://crawlee.dev).
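*(Editorial aside, not part of either README revision: a minimal sketch of how two of the classes listed above, `CheerioCrawler` and `Dataset`, are typically combined; the target URL is only a placeholder.)*

```js
import { CheerioCrawler, Dataset } from 'crawlee';

// Fetch pages over plain HTTP and parse them with Cheerio (no browser).
const crawler = new CheerioCrawler({
    async requestHandler({ request, $ }) {
        // Extract the page title from the parsed HTML.
        const title = $('title').text();
        // Append one structured record per page to the default Dataset.
        await Dataset.pushData({ url: request.url, title });
    },
});

await crawler.run(['https://crawlee.dev']);
```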
- ## Quick Start
- This short tutorial will set you up to start using Crawlee in a minute or two. If you want to learn more, proceed to the [Getting Started](https://crawlee.dev/docs/introduction) tutorial that will take you step by step through creating your first scraper.
- ### Local stand-alone usage
- Crawlee requires [Node.js](https://nodejs.org/en/) 16 or later. Add Crawlee to any Node.js project by running:
```bash

@@ -78,77 +57,67 @@ npm install crawlee playwright

- > Neither `playwright` nor `puppeteer` are bundled with Crawlee to reduce install size and allow greater flexibility. That's why we install it with NPM. You can choose one, both, or neither.
- Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of Crawlee, [see the Examples section of the documentation](https://crawlee.dev/docs/examples/crawl-multiple-urls).
- ```javascript
+ ```js
import { PlaywrightCrawler, Dataset } from 'crawlee';
- const crawler = new PlaywrightCrawler();
+ // PlaywrightCrawler crawls the web using a headless
+ // browser controlled by the Playwright library.
+ const crawler = new PlaywrightCrawler({
+     // Use the requestHandler to process each of the crawled pages.
+     async requestHandler({ request, page, enqueueLinks, log }) {
+         const title = await page.title();
+         log.info(`Title of ${request.loadedUrl} is '${title}'`);
- crawler.router.addDefaultHandler(async ({ request, page, enqueueLinks }) => {
-     const title = await page.title();
-     console.log(`Title of ${request.loadedUrl} is '${title}'`);
+         // Save results as JSON to ./storage/datasets/default
+         await Dataset.pushData({ title, url: request.loadedUrl });
-     // save some results
-     await Dataset.pushData({ title, url: request.loadedUrl });
-     // enqueue all links targeting the same hostname
-     await enqueueLinks();
+         // Extract links from the current page
+         // and add them to the crawling queue.
+         await enqueueLinks();
+     },
+     // Uncomment this option to see the browser window.
+     // headless: false,
});
- await crawler.run(['https://www.iana.org/']);
+ // Add first URL to the queue and start the crawl.
+ await crawler.run(['https://crawlee.dev']);
```
+ When you run the example, you should see Crawlee automating a Chrome browser.
+ By default, Crawlee stores data to `./storage` in the current working directory. You can override this directory via Crawlee configuration. For details, see [Configuration guide](https://crawlee.dev/docs/guides/configuration), [Request storage](https://crawlee.dev/docs/guides/request-storage) and [Result storage](https://crawlee.dev/docs/guides/result-storage).
+ ![Chrome Scrape](https://crawlee.dev/img/chrome_scrape.gif)
+ ## 🛠 Features
- By default, Crawlee stores data to `./storage` in the current working directory. You can override this directory via `CRAWLEE_STORAGE_DIR` env var. For details, see [Environment variables](https://crawlee.dev/docs/guides/environment-variables), [Request storage](https://crawlee.dev/docs/guides/request-storage) and [Result storage](https://crawlee.dev/docs/guides/result-storage).
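*(Editorial aside, not part of either README revision: a minimal sketch of the storage override both paragraphs above describe, using the `CRAWLEE_STORAGE_DIR` variable named in the older text; the directory name is only a placeholder.)*

```js
// Point Crawlee's default storage (datasets, key-value stores, request
// queues) at a custom directory instead of ./storage. Set the variable
// before any crawler or storage class is used, e.g. at the very top of the
// entry script, or in the shell: CRAWLEE_STORAGE_DIR=./my-storage node main.js
process.env.CRAWLEE_STORAGE_DIR = './my-storage';
```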
+ - Single interface for **HTTP and headless browser** crawling
+ - Persistent **queue** for URLs to crawl (breadth & depth first)
+ - Pluggable **storage** of both tabular data and files
+ - Automatic **scaling** with available system resources
+ - Integrated **proxy rotation** and session management
+ - Lifecycles customizable with **hooks**
+ - **CLI** to bootstrap your projects
+ - Configurable **routing**, **error handling** and **retries**
+ - **Dockerfiles** ready to deploy
+ - Written in **TypeScript** with generics
- ### Local usage with Crawlee command-line interface (CLI)
+ ### 👾 HTTP crawling
- To create a boilerplate of your project we can use the [Crawlee command-line interface (CLI)](https://github.com/apify/apify-cli) tool.
+ - Zero config **HTTP2 support**, even for proxies
+ - Automatic generation of **browser-like headers**
+ - Replication of browser **TLS fingerprints**
+ - Integrated fast **HTML parsers**. Cheerio and JSDOM
+ - Yes, you can scrape **JSON APIs** as well
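*(Editorial aside, not part of either README revision: a minimal sketch of the JSON-API point above using the `HttpCrawler` shipped by this `@crawlee/http` package; the endpoint URL is only a placeholder, and the raw body is parsed manually rather than relying on any particular context helper.)*

```js
import { HttpCrawler } from '@crawlee/http';

// Plain HTTP crawling: fetch a JSON endpoint without launching a browser.
const crawler = new HttpCrawler({
    // Make sure JSON responses are accepted in addition to HTML.
    additionalMimeTypes: ['application/json'],
    async requestHandler({ request, body }) {
        // body holds the raw response; parse it as JSON ourselves.
        const data = JSON.parse(body.toString());
        console.log(`Fetched ${request.url}`, data);
    },
});

await crawler.run(['https://example.com/api/items.json']);
```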
- Let's create a boilerplate of your new web crawling project by running:
+ ### 💻 Real browser crawling
- ```bash
- npx crawlee create my-hello-world
- ```
+ - JavaScript **rendering** and **screenshots**
+ - **Headless** and **headful** support
+ - Zero-config generation of **human-like fingerprints**
+ - Automatic **browser management**
+ - Use **Playwright** and **Puppeteer** with the same interface
+ - **Chrome**, **Firefox**, **Webkit** and many others
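*(Editorial aside, not part of either README revision: a minimal sketch of the multi-browser point above, switching `PlaywrightCrawler` from the default Chromium to Firefox via the `launchContext.launcher` option; verify the option name against the Crawlee API docs for your version.)*

```js
import { PlaywrightCrawler } from 'crawlee';
import { firefox } from 'playwright';

// Same crawler interface as with the Chromium default, different launcher.
const crawler = new PlaywrightCrawler({
    launchContext: { launcher: firefox }, // assumed option; Chromium when omitted
    async requestHandler({ request, page, log }) {
        log.info(`Rendered ${request.loadedUrl} in Firefox: ${await page.title()}`);
    },
});

await crawler.run(['https://crawlee.dev']);
```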
- The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called `my-hello-world` with a Node.js project files. You can run the project as follows:
+ ## Usage on the Apify platform
- ```bash
- cd my-hello-world
- npx crawlee run
- ```
+ Crawlee is open-source and runs anywhere, but since it's developed by [Apify](https://apify.com), it's easy to set up on the Apify platform and run in the cloud. Visit the [Apify SDK website](https://sdk.apify.com) to learn more about deploying Crawlee to the Apify platform.
- By default, the crawling data will be stored in a local directory at `./storage`. For example, the input JSON file for the actor is expected to be in the default key-value store in `./storage/key_value_stores/default/INPUT.json`.
- ### Usage on the Apify platform
- Now if we want to run our new crawler on Apify Platform, we first need to download the `apify-cli` and login with our token:
- > We could also use the Apify CLI to generate a new project, which can be better suited if we want to run it on the Apify Platform.
- ```bash
- npm i -g apify-cli
- apify login
- ```
- Finally, we can easily deploy our code to the Apify platform by running:
- ```bash
- apify push
- ```
- Your script will be uploaded to the Apify platform and built there so that it can be run. For more information, view the
- [Apify Actor](https://docs.apify.com/cli) documentation.
- You can also develop your web scraping project in an online code editor directly on the [Apify platform](https://crawlee.dev/docs/guides/apify-platform). You'll need to have an Apify Account. Go to [Actors](https://console.apify.com/actors), page in the Apify Console, click <i>Create new</i> and then go to the <i>Source</i> tab and start writing your code or paste one of the examples from the Examples section.
- For more information, view the [Apify actors quick start guide](https://docs.apify.com/actor/quick-start).
## Support
- If you find any bug or issue with Crawlee, please [submit an issue on GitHub](https://github.com/apify/crawlee/issues). For questions, you can ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/apify) or contact support@apify.com
+ If you find any bug or issue with Crawlee, please [submit an issue on GitHub](https://github.com/apify/crawlee/issues). For questions, you can ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/apify), in GitHub Discussions or you can join our [Discord server](https://discord.com/invite/jyEM2PRvMU).

@@ -155,0 +124,0 @@ ## Contributing
