The scalable web crawling and scraping library for JavaScript/Node.js. It enables development of data extraction and web automation jobs, not only with headless Chrome and Puppeteer.
Join us for a free webinar about Crawlee at 9 AM EST on June 12th, 2024, to learn how to build reliable web scrapers fast!
Crawlee covers your crawling and scraping end-to-end and helps you build reliable scrapers. Fast.
Your crawlers will appear human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data, and store it to disk or cloud while staying configurable to suit your project's needs.
We recommend visiting the Introduction tutorial in the Crawlee documentation for more information.
Crawlee requires Node.js 16 or higher.
With Crawlee CLI
The fastest way to try Crawlee out is to use the Crawlee CLI and choose the Getting started example. The CLI will install all the necessary dependencies and add boilerplate code for you to play with.
npx crawlee create my-crawler
cd my-crawler
npm start
Manual installation
If you prefer adding Crawlee into your own project, try the example below. Because it uses PlaywrightCrawler, we also need to install Playwright. It's not bundled with Crawlee, to reduce install size.
npm install crawlee playwright
import { PlaywrightCrawler, Dataset } from 'crawlee';

// PlaywrightCrawler crawls the web using a headless
// browser controlled by the Playwright library.
const crawler = new PlaywrightCrawler({
    // Use the requestHandler to process each of the crawled pages.
    async requestHandler({ request, page, enqueueLinks, log }) {
        const title = await page.title();
        log.info(`Title of ${request.loadedUrl} is '${title}'`);

        // Save results as JSON to ./storage/datasets/default
        await Dataset.pushData({ title, url: request.loadedUrl });

        // Extract links from the current page
        // and add them to the crawling queue.
        await enqueueLinks();
    },
    // Uncomment this option to see the browser window.
    // headless: false,
});

// Add first URL to the queue and start the crawl.
await crawler.run(['https://crawlee.dev']);
By default, Crawlee stores data to ./storage in the current working directory. You can override this directory via the Crawlee configuration. For details, see the Configuration guide, Request storage, and Result storage.
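One way to override it is the CRAWLEE_STORAGE_DIR environment variable, which Crawlee checks for the storage location. A minimal sketch ('./my-storage' is an arbitrary example path):

// Set the variable before importing crawlee so the configuration picks it up.
process.env.CRAWLEE_STORAGE_DIR = './my-storage';

const { Dataset } = await import('crawlee');

// This now lands in ./my-storage/datasets/default instead of ./storage.
await Dataset.pushData({ title: 'Hello Crawlee' });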
🛠 Features
Single interface for HTTP and headless browser crawling (see the sketch after this list)
Persistent queue for URLs to crawl (breadth & depth first)
Pluggable storage of both tabular data and files
Automatic scaling with available system resources
Integrated proxy rotation and session management
Lifecycles customizable with hooks
CLI to bootstrap your projects
Configurable routing, error handling and retries
Dockerfiles ready to deploy
Written in TypeScript with generics
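A sketch of that single interface: the HTTP-based CheerioCrawler below takes the same requestHandler shape as the PlaywrightCrawler example above, and plugs in proxy rotation through ProxyConfiguration (the proxy URL is a placeholder to replace with your own):

import { CheerioCrawler, ProxyConfiguration } from 'crawlee';

const crawler = new CheerioCrawler({
    // Rotate requests through your own proxies (placeholder URL).
    proxyConfiguration: new ProxyConfiguration({
        proxyUrls: ['http://my-proxy.example.com:8000'],
    }),
    // Pages are fetched over plain HTTP and parsed with Cheerio,
    // but the handler looks just like the Playwright one.
    async requestHandler({ request, $, enqueueLinks, log }) {
        log.info(`Title of ${request.loadedUrl} is '${$('title').text()}'`);
        await enqueueLinks();
    },
});

await crawler.run(['https://crawlee.dev']);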
👾 HTTP crawling
Zero-config HTTP/2 support, even for proxies
Automatic generation of browser-like headers
Replication of browser TLS fingerprints
Integrated fast HTML parsers: Cheerio and JSDOM
Yes, you can scrape JSON APIs as well
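Scraping a JSON API takes only a few lines. A minimal sketch using HttpCrawler (the endpoint URL is a placeholder):

import { HttpCrawler, Dataset } from 'crawlee';

const crawler = new HttpCrawler({
    async requestHandler({ request, body, log }) {
        // body holds the raw response; parse it as JSON.
        const items = JSON.parse(body.toString());
        log.info(`Fetched ${request.loadedUrl}`);
        await Dataset.pushData({ url: request.loadedUrl, items });
    },
});

// Placeholder endpoint; replace it with the API you are scraping.
await crawler.run(['https://api.example.com/items']);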
💻 Real browser crawling
JavaScript rendering and screenshots
Headless and headful support
Zero-config generation of human-like fingerprints
Automatic browser management
Use Playwright and Puppeteer with the same interface (see the sketch after this list)
Chrome, Firefox, WebKit, and many others
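Switching browsers is just configuration. A minimal sketch that launches Playwright's Firefox headfully instead of the default headless Chromium:

import { firefox } from 'playwright';
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    // Launch Firefox instead of the default Chromium.
    launchContext: { launcher: firefox },
    // Show the browser window while crawling.
    headless: false,
    async requestHandler({ request, page, log }) {
        log.info(`Loaded ${request.loadedUrl}: ${await page.title()}`);
    },
});

await crawler.run(['https://crawlee.dev']);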
Usage on the Apify platform
Crawlee is open-source and runs anywhere, but since it's developed by Apify, it's easy to set up on the Apify platform and run in the cloud. Visit the Apify SDK website to learn more about deploying Crawlee to the Apify platform.
Contributing
Your code contributions are welcome, and you'll be praised to eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.
License
This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.