The Crawler


Web crawling utility for downloading files from web pages.

Installation

From PyPI

This assumes you have Python 3.10+ installed and pip3 on your PATH:

~$ pip3 install the-crawler
...
~$ the-crawler -h
usage: the-crawler [-h] [--recurse] [--output-directory OUTPUT_DIRECTORY] [--extensions EXTENSIONS [EXTENSIONS ...]] [--max-workers MAX_WORKERS] base_url

Crawls given url for content

positional arguments:
  base_url

options:
  -h, --help            show this help message and exit
  --recurse, -r
  --output-directory OUTPUT_DIRECTORY, -o OUTPUT_DIRECTORY
  --extensions EXTENSIONS [EXTENSIONS ...], -e EXTENSIONS [EXTENSIONS ...]
  --max-workers MAX_WORKERS
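
A typical run points the crawler at a page, restricts downloads to the file types you want, and writes into an existing directory. A minimal sketch using the documented flags (the URL and directory names below are placeholders for illustration, not from the project docs):

~$ mkdir downloads
~$ the-crawler https://example.com/files/ -o downloads -e pdf zip --max-workers 4
...

Here -e accepts one or more extensions, -o names the output directory (which must already exist), and --max-workers caps the number of parallel downloads, defaulting to os.cpu_count().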

From Source

This assumes you have git, Python 3.10+, and poetry installed already.

~$ git clone git@gitlab.com:woodforsheep/the-crawler.git
...
~$ cd the-crawler
the-crawler$ poetry install
...
the-crawler$ poetry run the-crawler -h
usage: the-crawler [-h] [--quiet] [--verbose] [--collect-only] [--force-collection] [--recurse]
                   [--output-directory OUTPUT_DIRECTORY] [--extensions [EXTENSIONS]]
                   [--max-workers MAX_WORKERS]
                   base_url

Crawls given url for content

positional arguments:
  base_url

options:
  -h, --help            show this help message and exit
  --quiet               Changes the console log level from INFO to WARNING; defers to --verbose
  --verbose             Changes the console log level from INFO to DEBUG; takes precedence over
                        --quiet
  --collect-only        Stops after collecting links to be downloaded; useful for checking the
                        cache before continuing
  --force-collection    Forces recollection of links, even if the cache file is present
  --recurse, -r         If specified, will follow links to child pages and search them for
                        content
  --output-directory OUTPUT_DIRECTORY, -o OUTPUT_DIRECTORY
                        The location to store the downloaded content; must already exist
  --extensions [EXTENSIONS], -e [EXTENSIONS]
                        If specified, will restrict the types of files downloaded to those
                        matching the extensions provided; case-insensitive
  --max-workers MAX_WORKERS
                        The maximum number of parallel downloads to support; defaults to
                        os.cpu_count()
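
The extra flags in the development version suggest a two-step workflow: collect links first, inspect the cache, then download. A sketch of that flow, again with a placeholder URL and a pre-created output directory:

the-crawler$ mkdir downloads
the-crawler$ poetry run the-crawler --recurse --collect-only https://example.com/files/
...
the-crawler$ poetry run the-crawler --recurse -o downloads -e pdf https://example.com/files/
...

Add --force-collection to the second run if the cached link list has gone stale.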
