The scalable web crawling and scraping library for JavaScript/Node.js. Enables development of data extraction and web automation jobs (not only) with headless Chrome and Puppeteer.
👉👉👉 Crawlee is the successor to Apify SDK. 🎉 Fully rewritten in TypeScript for a better developer experience, and with even more powerful anti-blocking features. The interface is almost the same as Apify SDK so upgrading is a breeze. Read the upgrading guide to learn about the changes. 👈👈👈
Crawlee simplifies the development of web crawlers, scrapers, data extractors and web automation jobs. It provides tools to manage and automatically scale a pool of headless browsers, to maintain queues of URLs to crawl, store crawling results to a local filesystem or into the cloud, rotate proxies and much more. Crawlee is available as the crawlee NPM package. It can be used either stand-alone in your own applications or in actors running on the Apify Cloud.
Would you like to work with us on Crawlee or similar projects? We are hiring!
Motivation
Thanks to tools like Playwright, Puppeteer or Cheerio, it is easy to write Node.js code to extract data from web pages. But eventually things will get complicated. For example, when you try to:
Perform a deep crawl of an entire website using a persistent queue of URLs.
Run your scraping code on a list of 100k URLs in a CSV file, without losing any data when your code crashes.
Rotate proxies to hide your browser origin and keep user-like sessions.
Disable browser fingerprinting protections used by websites.
Python has Scrapy for these tasks, but there was no such library for JavaScript, the language of the web. The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a browser.
The goal of Crawlee is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't reinvent the wheel every time you need data from the web, and focus on writing code specific to the target website, rather than developing commonalities.
Overview
Crawlee is available as the crawlee NPM package, as well as via individual @crawlee/* packages. It provides the following tools:
CheerioCrawler - Enables the parallel crawling of a large number of web pages using the cheerio HTML parser. This is the most efficient web crawler, but it does not work on websites that require JavaScript. Available also under @crawlee/cheerio package. A minimal usage sketch follows this list.
PuppeteerCrawler - Enables the parallel crawling of a large number of web pages using the headless Chrome browser and Puppeteer. The pool of Chrome browsers is automatically scaled up and down based on available system resources. Available also under @crawlee/puppeteer package.
PlaywrightCrawler - Unlike PuppeteerCrawler, it uses Playwright, which can manage almost any headless browser. It also provides a cleaner and more mature interface while keeping the ease of use and advanced features. Available also under @crawlee/playwright package.
BasicCrawler - Provides a simple framework for the parallel crawling of web pages whose URLs are fed either from a static list or from a dynamic queue of URLs. This class serves as a base for the more specialized crawlers above. Available also under @crawlee/basic package.
RequestList - Represents a list of URLs to crawl. The URLs can be passed in code or in a text file hosted on the web. The list persists its state so that crawling can resume when the Node.js process restarts. Available also under @crawlee/core package.
RequestQueue - Represents a queue of URLs to crawl, which is stored either in memory, on a local filesystem, or in the Apify Cloud. The queue is used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders. Available also under @crawlee/core package.
Dataset - Provides a store for structured data and enables their export to formats like JSON, JSONL, CSV, XML, Excel or HTML. The data is stored on a local filesystem or in the Apify Cloud. Datasets are useful for storing and sharing large tabular crawling results, such as a list of products or real estate offers. Available also under @crawlee/core package.
KeyValueStore - A simple key-value store for arbitrary data records or files, along with their MIME content type. It is ideal for saving screenshots of web pages, PDFs or to persist the state of your crawlers. The data is stored on a local filesystem or in the Apify Cloud. Available also under @crawlee/core package.
AutoscaledPool - Runs asynchronous background tasks, while automatically adjusting the concurrency based on free system memory and CPU usage. This is useful for running web scraping tasks at the maximum capacity of the system. Available also under @crawlee/core package.
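To give a sense of how these tools fit together, here is a minimal CheerioCrawler sketch (the target URL is only a placeholder): it parses each page with cheerio and saves the page title to the default Dataset.

import { CheerioCrawler, Dataset } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request, $ }) {
        // extract the page title using the cheerio handle
        const title = $('title').text();
        // store the result in the default Dataset
        await Dataset.pushData({ url: request.url, title });
    },
});

await crawler.run(['https://crawlee.dev']);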
Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus take advantage of its pool of proxies, job scheduler, data storage, etc. For more information, see the Crawlee Programmer's Reference.
Quick Start
This short tutorial will set you up to start using Crawlee in a minute or two. If you want to learn more, proceed to the Getting Started tutorial that will take you step by step through creating your first scraper.
Local stand-alone usage
Crawlee requires Node.js 16 or later. Add Crawlee to any Node.js project by running:
npm install crawlee playwright
Neither playwright nor puppeteer is bundled with Crawlee, to reduce install size and allow greater flexibility. That's why we install them with NPM ourselves. You can choose one, both, or neither.
Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of Crawlee, see the Examples section of the documentation.
import { PlaywrightCrawler, Dataset } from 'crawlee';

const crawler = new PlaywrightCrawler();

crawler.router.addDefaultHandler(async ({ request, page, enqueueLinks }) => {
    const title = await page.title();
    console.log(`Title of ${request.loadedUrl} is '${title}'`);

    // save some results
    await Dataset.pushData({ title, url: request.loadedUrl });

    // enqueue all links targeting the same hostname
    await enqueueLinks();
});

await crawler.run(['https://www.iana.org/']);
When you run the example, you should see Crawlee automating a Chrome browser.
By default, Crawlee stores data to ./crawlee_storage in the current working directory. You can override this directory via the CRAWLEE_STORAGE_DIR environment variable. For details, see Environment variables, Request storage and Result storage.
Local usage with Crawlee command-line interface (CLI)
Let's create a boilerplate of your new web crawling project by running:
npx crawlee create my-hello-world
The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called my-hello-world containing the Node.js project files. You can run the project as follows:
cd my-hello-world
npx crawlee run
By default, the crawling data will be stored in a local directory at ./crawlee_storage. For example, the input JSON file for the actor is expected to be in the default key-value store in ./crawlee_storage/key_value_stores/default/INPUT.json.
Usage on the Apify platform
We could also have used the Apify CLI to generate the project in the first place, which can be better suited if we want to run it on the Apify Platform.
Now, if we want to run our new crawler on the Apify Platform, we first need to install the Apify CLI and log in with our token:
npm i -g apify-cli
apify login
Finally, we can easily deploy our code to the Apify platform by running:
apify push
Your script will be uploaded to the Apify platform and built there so that it can be run. For more information, view the Apify Actor documentation.
You can also develop your web scraping project in an online code editor directly on the Apify platform. You'll need to have an Apify account. Go to the Actors page in the Apify Console, click Create new, then go to the Source tab and start writing your code or paste one of the examples from the Examples section.
Your code contributions are welcome, and you'll be praised to eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.
License
This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.
This section summarizes most of the breaking changes between Crawlee (v3) and Apify SDK (v2). Crawlee is the spiritual successor to Apify SDK, so we decided to keep the versioning and release Crawlee as v3.
Crawlee vs Apify SDK
Up until version 3 of apify, the package contained both the scraping-related tools and the Apify platform-related helper methods. With v3, we are splitting the whole project into two main parts:
Crawlee, the new web-scraping library, available as crawlee package on NPM
Apify SDK, helpers for the Apify platform, available as apify package on NPM
Moreover, the Crawlee library is published as several packages under @crawlee namespace:
@crawlee/core: the base for all the crawler implementations, also contains things like Request, RequestQueue, RequestList or Dataset classes
@crawlee/basic: exports BasicCrawler
@crawlee/cheerio: exports CheerioCrawler
@crawlee/browser: exports BrowserCrawler (which is used for creating @crawlee/playwright and @crawlee/puppeteer)
@crawlee/playwright: exports PlaywrightCrawler
@crawlee/puppeteer: exports PuppeteerCrawler
@crawlee/memory-storage: @apify/storage-local alternative
@crawlee/types: holds TS interfaces mainly about the StorageClient
Installing Crawlee
As Crawlee is not yet released as latest, we need to install from the next distribution tag!
Most of the Crawlee packages extend and re-export each other, so it's enough to install just the one you plan on using, e.g. @crawlee/playwright if you plan on using Playwright - it already contains everything from the @crawlee/browser package, which includes everything from @crawlee/basic, which includes everything from @crawlee/core.
npm install crawlee@next
Or if all we need is cheerio support, we can install only @crawlee/cheerio
npm install @crawlee/cheerio@next
When using playwright or puppeteer, we still need to install those dependencies explicitly - this allows the users to be in control of which version will be used.
npm install crawlee@next playwright
# or npm install @crawlee/playwright@next playwright
Alternatively we can also use the crawlee meta-package which contains (re-exports) most of the @crawlee/* packages, and therefore contains all the crawler classes.
Sometimes you might want to use some utility methods from @crawlee/utils, so you might want to install that as well. This package contains some utilities that were previously available under Apify.utils. Browser related utilities can be also found in the crawler packages (e.g. @crawlee/playwright).
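For illustration, and assuming the re-exports described above, the following imports should be interchangeable; sleep() is shown as one example of a utility living in @crawlee/utils:

// both imports resolve to the same class, since `crawlee` re-exports the crawler packages
import { PlaywrightCrawler } from 'crawlee';
// import { PlaywrightCrawler } from '@crawlee/playwright';

// utility helpers such as sleep() live in @crawlee/utils
import { sleep } from '@crawlee/utils';

await sleep(1_000); // pause for one second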
Full TypeScript support
Both Crawlee and Apify SDK are full TypeScript rewrites, so they include up-to-date types in the package. For your TypeScript crawlers, we recommend using our predefined TypeScript configuration from the @apify/tsconfig package. Don't forget to set module and target to ES2022 or above to be able to use top-level await.
The @apify/tsconfig config has noImplicitAny enabled; you might want to disable it during initial development, as it will cause build failures if you leave some unused local variables in your code.
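A tsconfig.json along these lines could be a starting point (the outDir and include paths are just one common layout, not something prescribed by Crawlee):

{
    "extends": "@apify/tsconfig",
    "compilerOptions": {
        "module": "ES2022",
        "target": "ES2022",
        "outDir": "dist"
    },
    "include": ["./src/**/*"]
}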
For the Dockerfile, we recommend using a multi-stage build, so you don't install dev dependencies like TypeScript in your final image:
# using multistage build, as we need dev deps to build the TS source code
FROM apify/actor-node:16 AS builder
# copy all files, install all dependencies (including dev deps) and build the project
COPY . ./
RUN npm install --include=dev \
&& npm run build
# create final image
FROM apify/actor-node:16
# copy only necessary files
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/README.md ./
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/apify.json ./apify.json
COPY --from=builder /usr/src/app/INPUT_SCHEMA.json ./INPUT_SCHEMA.json
# install only prod deps
RUN npm --quiet set progress=false \
&& npm install --only=prod --no-optional \
&& echo "Installed NPM packages:" \
&& (npm list --only=prod --no-optional --all || true) \
&& echo "Node.js version:" \
&& node --version \
&& echo "NPM version:" \
&& npm --version
# run compiled code
CMD npm run start:prod
Browser fingerprints
Previously, we had a magical stealth option in the Puppeteer crawler that enabled several tricks aiming to mimic real users as much as possible. While this worked to a certain degree, we decided to replace it with generated browser fingerprints.
In case we don't want to have dynamic fingerprints, we can disable this behaviour via useFingerprints in browserPoolOptions:
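For example, a minimal sketch based on the option names mentioned above:

import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    browserPoolOptions: {
        // opt out of the generated browser fingerprints
        useFingerprints: false,
    },
    async requestHandler({ request }) {
        console.log(`Visited ${request.url}`);
    },
});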
Previously, if we wanted to get or add cookies for the session that would be used for the request, we had to call session.getPuppeteerCookies() or session.setPuppeteerCookies(). Since this method could be used for any of our crawlers, not just PuppeteerCrawler, the methods have been renamed to session.getCookies() and session.setCookies() respectively. Otherwise, their usage is exactly the same!
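A sketch of the renamed methods, assuming the session object is available on the crawling context and that both methods take the URL the cookies apply to (the cookie itself is just an example value):

import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ session, request }) {
        // previously session.setPuppeteerCookies(cookies, url)
        session?.setCookies([{ name: 'foo', value: 'bar' }], request.url);
        // previously session.getPuppeteerCookies(url)
        const cookies = session?.getCookies(request.url);
        console.log(cookies);
    },
});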
Memory storage
When we store some data or intermediate state (like the one RequestQueue holds), we now use @crawlee/memory-storage by default. It is an alternative to @apify/storage-local that keeps the state in memory (as opposed to the SQLite database used by @apify/storage-local). While the state is kept in memory, it is also dumped to the file system, so we can inspect it, and any existing data stored in the KeyValueStore (e.g. the INPUT.json file) is respected.
When we want to run the crawler on Apify platform, we need to use Actor.init or Actor.main, which will automatically switch the storage client to ApifyClient when on the Apify platform.
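A minimal sketch of this pattern (the URL is only a placeholder):

import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';

await Actor.init(); // switches the storage client to ApifyClient when running on the Apify platform

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, page }) {
        console.log(`Title of ${request.url}: ${await page.title()}`);
    },
});
await crawler.run(['https://crawlee.dev']);

await Actor.exit();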
We can still use @apify/storage-local. To do so, first install it and then pass it to the Actor.init or Actor.main options:
@apify/storage-local v2.1.0+ is required for Crawlee
import { Actor } from 'apify';
import { ApifyStorageLocal } from '@apify/storage-local';
const storage = new ApifyStorageLocal(/* options like `enableWalMode` belong here */);
await Actor.init({ storage });
Purging of the default storage
Previously, the state was preserved between local runs, and we had to use the --purge argument of the apify-cli to clear it. With Crawlee, this is now the default behaviour: the storage is purged automatically on the Actor.init/main call. We can opt out of it via purge: false in the Actor.init options.
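For example, to keep the old behaviour and skip the automatic purge:

import { Actor } from 'apify';

// opt out of purging the default storages on startup
await Actor.init({ purge: false });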
Renamed crawler options and interfaces
Some options were renamed to better reflect what they do. We still support all the old parameter names too, but not at the TS level.
Some utilities previously available under the Apify.utils namespace have been moved to the crawling context and are context aware. This means they have some parameters automatically filled in from the context, such as the current Request instance, the current Page object, or the RequestQueue bound to the crawler.
Enqueuing links
One common helper that received more attention is enqueueLinks. As mentioned above, it is context aware - we no longer need to pass in the requestQueue or page arguments (or the cheerio handle $). In addition to that, it now offers three enqueuing strategies:
EnqueueStrategy.All ('all'): Matches any URLs found
EnqueueStrategy.SameHostname ('same-hostname'): Matches any URLs that have the same subdomain as the base URL (default).
EnqueueStrategy.SameDomain ('same-domain'): Matches any URLs that have the same domain name. For example, https://wow.an.example.com and https://example.com will both be matched for a base URL of https://example.com.
This means we can even call enqueueLinks() without any parameters. By default, it will go through all the links found on the current page and filter only those targeting the same subdomain.
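To pick a different strategy, a sketch using the strategy option (with EnqueueStrategy assumed to be exported from crawlee) could look like this:

import { PlaywrightCrawler, EnqueueStrategy } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        // follow links on the same domain, not just the same hostname
        await enqueueLinks({ strategy: EnqueueStrategy.SameDomain });
    },
});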
Moreover, we can specify patterns the URL should match via globs:
const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            globs: ['https://apify.com/*/*'],
            // we can also use `regexps` and `pseudoUrls` keys here
        });
    },
});
Implicit RequestQueue instance
All crawlers now have a RequestQueue instance automatically available via the crawler.getRequestQueue() method. It will create the instance for you if it does not exist yet. This means we no longer need to create the RequestQueue instance manually, and we can just use the crawler.addRequests() method described below.
We can still create the RequestQueue explicitly, the crawler.getRequestQueue() method will respect that and return the instance provided via crawler options.
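A minimal sketch of feeding the crawler through crawler.addRequests() (the URLs are placeholders):

import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request }) {
        console.log(`Processing ${request.url}`);
    },
});

// the requests end up in the implicit RequestQueue behind the scenes
await crawler.addRequests(['https://crawlee.dev', 'https://apify.com']);
await crawler.run();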