
@crawlee/memory-storage

@crawlee/memory-storage - npm Package Compare versions

Comparing version 3.0.0-beta.54 to 3.0.0-beta.55


package.json
{
"name": "@crawlee/memory-storage",
- "version": "3.0.0-beta.54",
+ "version": "3.0.0-beta.55",
"description": "A simple in-memory storage implementation of the Apify API",

@@ -52,4 +52,4 @@ "engines": {

"@apify/log": "^2.0.0",
- "@crawlee/types": "^3.0.0-beta.54",
- "@crawlee/utils": "^3.0.0-beta.54",
+ "@crawlee/types": "^3.0.0-beta.55",
+ "@crawlee/utils": "^3.0.0-beta.55",
"@sapphire/shapeshift": "^3.0.0",

@@ -56,0 +56,0 @@ "content-type": "^1.0.4",
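The package description above ("A simple in-memory storage implementation of the Apify API") and the `@crawlee/types` dependency suggest this package implements Crawlee's storage-client interface. As a rough, hedged sketch of how such a client might be used (the `persistStorage` option and the exact client methods shown here are assumptions based on the v3 interface, not confirmed by this diff):

```javascript
import { MemoryStorage } from '@crawlee/memory-storage';

// Assumed option: keep data purely in memory instead of writing to disk.
const storage = new MemoryStorage({ persistStorage: false });

// The client mirrors the Apify storage API, e.g. for request queues:
const { id } = await storage.requestQueues().getOrCreate('default');
const queue = storage.requestQueue(id);
await queue.addRequest({ url: 'https://example.com', uniqueKey: 'https://example.com' });
```

README.md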

@@ -23,3 +23,3 @@ <h1 align="center">

rotate proxies and much more.
- The SDK is available as the [`crawlee`](https://www.npmjs.com/package/crawlee) NPM package.
+ Crawlee is available as the [`crawlee`](https://www.npmjs.com/package/crawlee) NPM package.
It can be used either stand-alone in your own applications

@@ -35,5 +35,3 @@ or in [actors](https://docs.apify.com/actor)

- Thanks to tools like [Playwright](https://github.com/microsoft/playwright), [Puppeteer](https://github.com/puppeteer/puppeteer) or
- [Cheerio](https://www.npmjs.com/package/cheerio), it is easy to write Node.js code to extract data from web pages. But
- eventually things will get complicated. For example, when you try to:
+ Thanks to tools like [Playwright](https://github.com/microsoft/playwright), [Puppeteer](https://github.com/puppeteer/puppeteer) or [Cheerio](https://www.npmjs.com/package/cheerio), it is easy to write Node.js code to extract data from web pages. But eventually things will get complicated. For example, when you try to:

@@ -45,9 +43,5 @@ - Perform a deep crawl of an entire website using a persistent queue of URLs.

- Python has [Scrapy](https://scrapy.org/) for these tasks, but there was no such library for **JavaScript, the language of
- the web**. The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a
- browser.
+ Python has [Scrapy](https://scrapy.org/) for these tasks, but there was no such library for **JavaScript, the language of the web**. The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a browser.
- The goal of Crawlee is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't
- reinvent the wheel every time you need data from the web, and focus on writing code specific to the target website, rather than developing
- commonalities.
+ The goal of Crawlee is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't reinvent the wheel every time you need data from the web, and focus on writing code specific to the target website, rather than developing commonalities.

@@ -58,67 +52,40 @@ ## Overview

[//]: # (TODO add links to the docs about `@crawlee/` packages and the `crawlee` metapackage)
- - [`CheerioCrawler`](https://apify.github.io/apify-ts/api/cheerio-crawler/class/CheerioCrawler) - Enables the parallel crawling of a large
-   number of web pages using the [cheerio](https://www.npmjs.com/package/cheerio) HTML parser. This is the most
-   efficient web crawler, but it does not work on websites that require JavaScript.
+ - [`CheerioCrawler`](https://apify.github.io/apify-ts/api/cheerio-crawler/class/CheerioCrawler) - Enables the parallel crawling of a large number of web pages using the [cheerio](https://www.npmjs.com/package/cheerio) HTML parser. This is the most efficient web crawler, but it does not work on websites that require JavaScript. Available also under `@crawlee/cheerio` package.
- - [`PuppeteerCrawler`](https://apify.github.io/apify-ts/api/puppeteer-crawler/class/PuppeteerCrawler) - Enables the parallel crawling of
-   a large number of web pages using the headless Chrome browser and [Puppeteer](https://github.com/puppeteer/puppeteer).
-   The pool of Chrome browsers is automatically scaled up and down based on available system resources.
+ - [`PuppeteerCrawler`](https://apify.github.io/apify-ts/api/puppeteer-crawler/class/PuppeteerCrawler) - Enables the parallel crawling of a large number of web pages using the headless Chrome browser and [Puppeteer](https://github.com/puppeteer/puppeteer). The pool of Chrome browsers is automatically scaled up and down based on available system resources. Available also under `@crawlee/puppeteer` package.
- - [`PlaywrightCrawler`](https://apify.github.io/apify-ts/api/playwright-crawler/class/PlaywrightCrawler) - Unlike `PuppeteerCrawler`
-   you can use [Playwright](https://github.com/microsoft/playwright) to manage almost any headless browser.
-   It also provides a cleaner and more mature interface while keeping the ease of use and advanced features.
+ - [`PlaywrightCrawler`](https://apify.github.io/apify-ts/api/playwright-crawler/class/PlaywrightCrawler) - Unlike `PuppeteerCrawler` you can use [Playwright](https://github.com/microsoft/playwright) to manage almost any headless browser. It also provides a cleaner and more mature interface while keeping the ease of use and advanced features. Available also under `@crawlee/playwright` package.
- - [`BasicCrawler`](https://apify.github.io/apify-ts/api/basic-crawler/class/BasicCrawler) - Provides a simple framework for the parallel
-   crawling of web pages whose URLs are fed either from a static list or from a dynamic queue of URLs. This class
-   serves as a base for the more specialized crawlers above.
+ - [`BasicCrawler`](https://apify.github.io/apify-ts/api/basic-crawler/class/BasicCrawler) - Provides a simple framework for the parallel crawling of web pages whose URLs are fed either from a static list or from a dynamic queue of URLs. This class serves as a base for the more specialized crawlers above. Available also under `@crawlee/basic` package.
- - [`RequestList`](https://apify.github.io/apify-ts/api/core/class/RequestList) - Represents a list of URLs to crawl.
-   The URLs can be passed in code or in a text file hosted on the web. The list persists its state so that crawling
-   can resume when the Node.js process restarts.
+ - [`RequestList`](https://apify.github.io/apify-ts/api/core/class/RequestList) - Represents a list of URLs to crawl. The URLs can be passed in code or in a text file hosted on the web. The list persists its state so that crawling can resume when the Node.js process restarts. Available also under `@crawlee/core` package.
- - [`RequestQueue`](https://apify.github.io/apify-ts/api/core/class/RequestQueue) - Represents a queue of URLs to crawl,
-   which is stored either on a local filesystem or in the [Apify Cloud](https://apify.com). The queue is used
-   for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages.
-   The data structure supports both breadth-first and depth-first crawling orders.
+ - [`RequestQueue`](https://apify.github.io/apify-ts/api/core/class/RequestQueue) - Represents a queue of URLs to crawl, which is stored either in memory, on a local filesystem, or in the [Apify Cloud](https://apify.com). The queue is used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders. Available also under `@crawlee/core` package.
- - [`Dataset`](https://apify.github.io/apify-ts/api/core/class/Dataset) - Provides a store for structured data and enables their export
-   to formats like JSON, JSONL, CSV, XML, Excel or HTML. The data is stored on a local filesystem or in the Apify Cloud.
-   Datasets are useful for storing and sharing large tabular crawling results, such as a list of products or real estate offers.
+ - [`Dataset`](https://apify.github.io/apify-ts/api/core/class/Dataset) - Provides a store for structured data and enables their export to formats like JSON, JSONL, CSV, XML, Excel or HTML. The data is stored on a local filesystem or in the Apify Cloud. Datasets are useful for storing and sharing large tabular crawling results, such as a list of products or real estate offers. Available also under `@crawlee/core` package.
- - [`KeyValueStore`](https://apify.github.io/apify-ts/api/core/class/KeyValueStore) - A simple key-value store for arbitrary data
-   records or files, along with their MIME content type. It is ideal for saving screenshots of web pages, PDFs
-   or to persist the state of your crawlers. The data is stored on a local filesystem or in the Apify Cloud.
+ - [`KeyValueStore`](https://apify.github.io/apify-ts/api/core/class/KeyValueStore) - A simple key-value store for arbitrary data records or files, along with their MIME content type. It is ideal for saving screenshots of web pages, PDFs or to persist the state of your crawlers. The data is stored on a local filesystem or in the Apify Cloud. Available also under `@crawlee/core` package.
- - [`AutoscaledPool`](https://apify.github.io/apify-ts/api/core/class/AutoscaledPool) - Runs asynchronous background tasks,
-   while automatically adjusting the concurrency based on free system memory and CPU usage. This is useful for running
-   web scraping tasks at the maximum capacity of the system.
+ - [`AutoscaledPool`](https://apify.github.io/apify-ts/api/core/class/AutoscaledPool) - Runs asynchronous background tasks, while automatically adjusting the concurrency based on free system memory and CPU usage. This is useful for running web scraping tasks at the maximum capacity of the system. Available also under `@crawlee/core` package.
- Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus
- take advantage of its pool of proxies, job scheduler, data storage, etc.
- For more information, see the [Crawlee Programmer's Reference](https://apify.github.io/apify-ts/).
+ Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus take advantage of its pool of proxies, job scheduler, data storage, etc. For more information, see the [Crawlee Programmer's Reference](https://apify.github.io/apify-ts/).
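For a concrete feel of the storage classes listed above, here is a short, hedged sketch using the v3 `crawlee` metapackage API (the import path is an assumption; the same classes are also exported from `@crawlee/core`):

```javascript
import { Dataset, KeyValueStore } from 'crawlee';

// Structured results go to a Dataset and can later be exported
// to formats like JSON, JSONL, CSV or Excel.
await Dataset.pushData({ url: 'https://example.com', title: 'Example Domain' });

// Arbitrary records (crawler state, files) go to a KeyValueStore,
// optionally with an explicit MIME content type.
const store = await KeyValueStore.open();
await store.setValue('STATE', { pagesProcessed: 1 });
await store.setValue('page.html', '<html></html>', { contentType: 'text/html' });
```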
## Quick Start
- This short tutorial will set you up to start using Crawlee in a minute or two.
- If you want to learn more, proceed to the [Getting Started](https://apify.github.io/apify-ts/docs/guides/getting-started)
- tutorial that will take you step by step through creating your first scraper.
+ This short tutorial will set you up to start using Crawlee in a minute or two. If you want to learn more, proceed to the [Getting Started](https://apify.github.io/apify-ts/docs/guides/getting-started) tutorial that will take you step by step through creating your first scraper.
### Local stand-alone usage
- Crawlee requires [Node.js](https://nodejs.org/en/) 16 or later.
- Add Crawlee to any Node.js project by running:
+ Crawlee requires [Node.js](https://nodejs.org/en/) 16 or later. Add Crawlee to any Node.js project by running:
```bash
- npm install @crawlee/playwright playwright
+ npm install crawlee playwright
```
- > Neither `playwright` nor `puppeteer` are bundled with the SDK to reduce install size and allow greater flexibility. That's why we install it with NPM. You can choose one, both, or neither.
+ > Neither `playwright` nor `puppeteer` is bundled with Crawlee, to reduce install size and allow greater flexibility. That's why we install them with NPM. You can choose one, both, or neither.
- Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of Crawlee,
- [see the Examples section of the documentation](https://apify.github.io/apify-ts/docs/examples/crawl-multiple-urls).
+ Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of Crawlee, [see the Examples section of the documentation](https://apify.github.io/apify-ts/docs/examples/crawl-multiple-urls).
```javascript
- import { PlaywrightCrawler } from '@crawlee/playwright';
+ import { PlaywrightCrawler } from 'crawlee';
```

@@ -147,36 +114,34 @@ const crawler = new PlaywrightCrawler({
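The body of the example is elided by the diff above. For orientation, a minimal recursive crawl along the lines the README describes might look like this (a sketch assuming the v3 API, not the exact elided code):

```javascript
import { PlaywrightCrawler, Dataset } from 'crawlee';

const crawler = new PlaywrightCrawler({
    // Called once per page; `page` is a Playwright Page object.
    async requestHandler({ request, page, enqueueLinks }) {
        const title = await page.title();
        await Dataset.pushData({ url: request.url, title });
        // Recursively follow links found on the page.
        await enqueueLinks();
    },
});

await crawler.run(['https://crawlee.dev']);
```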

- ### Local usage with Apify command-line interface (CLI)
+ ### Local usage with Crawlee command-line interface (CLI)
- To avoid the need to set the environment variables manually, to create a boilerplate of your project, and to enable pushing and running your code on
- the [Apify platform](https://apify.github.io/apify-ts/docs/guides/apify-platform), you can use the [Apify command-line interface (CLI)](https://github.com/apify/apify-cli) tool.
+ To create a boilerplate of your project, we can use the [Crawlee command-line interface (CLI)](https://github.com/apify/apify-cli) tool.
- Install the CLI by running:
+ Let's create a boilerplate of your new web crawling project by running:
```bash
- npm -g install apify-cli
+ npx crawlee create my-hello-world
```
- Now create a boilerplate of your new web crawling project by running:
+ The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called `my-hello-world` with Node.js project files. You can run the project as follows:
```bash
- apify create my-hello-world
+ cd my-hello-world
+ npx crawlee run
```
- The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called `my-hello-world`
- with a Node.js project files. You can run the project as follows:
+ By default, the crawling data will be stored in a local directory at `./crawlee_storage`. For example, the input JSON file for the actor is expected to be in the default key-value store in `./crawlee_storage/key_value_stores/default/INPUT.json`.
```bash
- cd my-hello-world
- apify run
```
### Usage on the Apify platform
- By default, the crawling data will be stored in a local directory at `./crawlee_storage`. For example, the input JSON file for the actor is expected to
- be in the default key-value store in `./crawlee_storage/key_value_stores/default/INPUT.json`.
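As a hedged aside on that default input record: with the v3 API you could read it roughly like this (assuming the static `KeyValueStore.getInput()` helper behaves as in current Crawlee):

```javascript
import { KeyValueStore } from 'crawlee';

// Locally this resolves to ./crawlee_storage/key_value_stores/default/INPUT.json.
const input = await KeyValueStore.getInput();
console.log('Actor input:', input);
```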
+ Now if we want to run our new crawler on the Apify Platform, we first need to download the `apify-cli` and log in with our token:
- Now you can easily deploy your code to the Apify platform by running:
+ > We could also use the Apify CLI to generate a new project, which can be better suited if we want to run it on the Apify Platform.
```bash
+ npm i -g apify-cli
+ apify login
```
+ Finally, we can easily deploy our code to the Apify platform by running:
```bash

@@ -189,8 +154,4 @@ apify push

### Usage on the Apify platform
- You can also develop your web scraping project in an online code editor directly on the [Apify platform](https://apify.github.io/apify-ts/docs/guides/apify-platform).
- You'll need to have an Apify Account. Go to [Actors](https://console.apify.com/actors), page in the Apify Console, click <i>Create new</i>
- and then go to the <i>Source</i> tab and start writing your code or paste one of the examples from the Examples section.
+ You can also develop your web scraping project in an online code editor directly on the [Apify platform](https://apify.github.io/apify-ts/docs/guides/apify-platform). You'll need to have an Apify Account. Go to the [Actors](https://console.apify.com/actors) page in the Apify Console, click <i>Create new</i>, then go to the <i>Source</i> tab and start writing your code or paste one of the examples from the Examples section.
For more information, view the [Apify actors quick start guide](https://docs.apify.com/actor/quick-start).

@@ -200,20 +161,10 @@

- If you find any bug or issue with Crawlee, please [submit an issue on GitHub](https://github.com/apify/apify-js/issues).
- For questions, you can ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/apify) or contact support@apify.com
+ If you find any bug or issue with Crawlee, please [submit an issue on GitHub](https://github.com/apify/apify-ts/issues). For questions, you can ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/apify) or contact support@apify.com.
## Contributing
- Your code contributions are welcome and you'll be praised to eternity!
- If you have any ideas for improvements, either submit an issue or create a pull request.
- For contribution guidelines and the code of conduct,
- see [CONTRIBUTING.md](https://github.com/apify/apify-js/blob/master/CONTRIBUTING.md).
+ Your code contributions are welcome, and you'll be praised to eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see [CONTRIBUTING.md](https://github.com/apify/apify-ts/blob/master/CONTRIBUTING.md).
## License
- This project is licensed under the Apache License 2.0 -
- see the [LICENSE.md](https://github.com/apify/apify-js/blob/master/LICENSE.md) file for details.
+ This project is licensed under the Apache License 2.0 - see the [LICENSE.md](https://github.com/apify/apify-ts/blob/master/LICENSE.md) file for details.
- ## Acknowledgments
- Many thanks to [Chema Balsas](https://www.npmjs.com/~jbalsas) for giving up the `apify` package name
- on NPM and renaming his project to [jsdocify](https://www.npmjs.com/package/jsdocify).