@crawlee/basic
The scalable web crawling and scraping library for JavaScript/Node.js. Enables development of data extraction and web automation jobs, not only with headless Chrome and Puppeteer.
Crawlee simplifies the development of web crawlers, scrapers, data extractors and web automation jobs. It provides tools to manage and automatically scale a pool of headless browsers, maintain queues of URLs to crawl, store crawling results on a local filesystem or in the cloud, rotate proxies and much more.
The SDK is available as the crawlee NPM package. It can be used either stand-alone in your own applications or in actors running on the Apify Cloud.
View full documentation, guides and examples on the Crawlee project website.
Would you like to work with us on Crawlee or similar projects? We are hiring!
Thanks to tools like Playwright, Puppeteer or Cheerio, it is easy to write Node.js code to extract data from web pages. But eventually things get complicated, for example when you need to crawl an entire website, run many pages in parallel, rotate proxies, or persist the state of the crawl so it can resume after a restart.
Python has Scrapy for these tasks, but there was no such library for JavaScript, the language of the web. The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a browser.
The goal of Crawlee is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't reinvent the wheel every time you need data from the web, and focus on writing code specific to the target website, rather than developing commonalities.
Crawlee is available as the crawlee NPM package, as well as the individual @crawlee/* packages. It provides the following tools:
- CheerioCrawler - Enables the parallel crawling of a large number of web pages using the cheerio HTML parser. This is the most efficient web crawler, but it does not work on websites that require JavaScript. (See the sketch after this list for a minimal example.)
- PuppeteerCrawler - Enables the parallel crawling of a large number of web pages using the headless Chrome browser and Puppeteer. The pool of Chrome browsers is automatically scaled up and down based on available system resources.
- PlaywrightCrawler - Unlike PuppeteerCrawler, it uses Playwright, which can manage almost any headless browser. It also provides a cleaner and more mature interface while keeping the ease of use and advanced features.
- BasicCrawler - Provides a simple framework for the parallel crawling of web pages whose URLs are fed either from a static list or from a dynamic queue of URLs. This class serves as a base for the more specialized crawlers above.
- RequestList - Represents a list of URLs to crawl. The URLs can be passed in code or in a text file hosted on the web. The list persists its state so that crawling can resume when the Node.js process restarts.
- RequestQueue - Represents a queue of URLs to crawl, which is stored either on a local filesystem or in the Apify Cloud. The queue is used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders.
- Dataset - Provides a store for structured data and enables its export to formats like JSON, JSONL, CSV, XML, Excel or HTML. The data is stored on a local filesystem or in the Apify Cloud. Datasets are useful for storing and sharing large tabular crawling results, such as a list of products or real estate offers.
- KeyValueStore - A simple key-value store for arbitrary data records or files, along with their MIME content type. It is ideal for saving screenshots of web pages, PDFs, or for persisting the state of your crawlers. The data is stored on a local filesystem or in the Apify Cloud.
- AutoscaledPool - Runs asynchronous background tasks, while automatically adjusting the concurrency based on free system memory and CPU usage. This is useful for running web scraping tasks at the maximum capacity of the system.
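To give a flavour of how these pieces fit together, here is a minimal sketch of a CheerioCrawler that saves one record per page into the default Dataset and recursively enqueues the links it finds. It assumes the @crawlee/cheerio package is installed (Dataset is also exported by the main crawlee package); the start URL is only an example.

import { CheerioCrawler, Dataset } from '@crawlee/cheerio';

const crawler = new CheerioCrawler({
    // Called for every fetched page; `$` is the page parsed by cheerio.
    async requestHandler({ request, $, enqueueLinks }) {
        // Store one structured record per page into the default Dataset.
        await Dataset.pushData({
            url: request.url,
            title: $('title').text(),
        });

        // Follow links found on the page (deep crawl backed by a RequestQueue).
        await enqueueLinks();
    },
});

await crawler.addRequests(['https://crawlee.dev']);
await crawler.run();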
Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus take advantage of its pool of proxies, job scheduler, data storage, etc. For more information, see the Crawlee Programmer's Reference.
This short tutorial will set you up to start using Crawlee in a minute or two. If you want to learn more, proceed to the Getting Started tutorial that will take you step by step through creating your first scraper.
Crawlee requires Node.js 16 or later. Add Crawlee to any Node.js project by running:
npm install @crawlee/playwright playwright
Neither playwright nor puppeteer is bundled with the SDK, in order to reduce install size and allow greater flexibility. That's why we install them explicitly with NPM. You can choose one, both, or neither.
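For example, if you prefer Puppeteer over Playwright, you would install the Puppeteer variant of the crawler together with the browser automation library itself:
npm install @crawlee/puppeteer puppeteer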
Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of Crawlee, see the Examples section of the documentation.
import { PlaywrightCrawler } from '@crawlee/playwright';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, page, enqueueLinks }) {
        // Extract HTML title of the page.
        const title = await page.title();
        console.log(`Title of ${request.url}: ${title}`);

        // Add URLs from the same subdomain to the crawling queue.
        await enqueueLinks();
    },
});

// Choose the first URL to open and run the crawler.
await crawler.addRequests(['https://www.iana.org/']);
await crawler.run();
When you run the example, you should see Crawlee automating a Chrome browser.
By default, Crawlee stores data to ./crawlee_storage in the current working directory. You can override this directory via the CRAWLEE_STORAGE_DIR environment variable. For details, see Environment variables, Request storage and Result storage.
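For example, assuming your crawler lives in a hypothetical main.js, you could point Crawlee at a different storage directory for a single run like this (Linux/macOS shell syntax):
CRAWLEE_STORAGE_DIR=./my_storage node main.js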
To avoid the need to set the environment variables manually, to create a boilerplate of your project, and to enable pushing and running your code on the Apify platform, you can use the Apify command-line interface (CLI) tool.
Install the CLI by running:
npm -g install apify-cli
Now create a boilerplate of your new web crawling project by running:
apify create my-hello-world
The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called my-hello-world with the Node.js project files. You can run the project as follows:
cd my-hello-world
apify run
By default, the crawling data will be stored in a local directory at ./crawlee_storage. For example, the input JSON file for the actor is expected to be in the default key-value store in ./crawlee_storage/key_value_stores/default/INPUT.json.
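As a minimal sketch of how that input can be read from code, the snippet below uses KeyValueStore.getInput() from the main crawlee package, which reads the INPUT record of the default key-value store:

import { KeyValueStore } from 'crawlee';

// Read the INPUT record from the default key-value store
// (./crawlee_storage/key_value_stores/default/INPUT.json when running locally).
const input = await KeyValueStore.getInput();
console.log('Crawler input:', input);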
Now you can easily deploy your code to the Apify platform by running:
apify login
apify push
Your script will be uploaded to the Apify platform and built there so that it can be run. For more information, view the Apify Actor documentation.
You can also develop your web scraping project in an online code editor directly on the Apify platform. You'll need to have an Apify account. Go to the Actors page in the Apify Console, click Create new, then go to the Source tab and start writing your code or paste one of the examples from the Examples section.
For more information, view the Apify actors quick start guide.
If you find any bug or issue with Crawlee, please submit an issue on GitHub. For questions, you can ask on Stack Overflow or contact support@apify.com.
Your code contributions are welcome and you'll be praised to eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.
This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.
Many thanks to Chema Balsas for giving up the apify package name on NPM and renaming his project to jsdocify.