TWStock profiles crawler
A lightweight crawler
A web scraper with a sophisticated toolkit to scrape the world
ebest XingAPI wrapper
Scalable and Compliant Web Crawler
Simple crawler that detects link errors such as 404 and 500.
XSStrike is a Cross-Site Scripting detection suite equipped with four hand-written parsers, an intelligent payload generator, a powerful fuzzing engine, and an incredibly fast crawler. Instead of injecting payloads and checking whether they work, as other tools do, XSStrike analyses the response with multiple parsers and then crafts payloads that are guaranteed to work through context analysis integrated with a fuzzing engine.
A web crawler and content filtering tool using OpenAI.
A CLI client for exporting Elasticsearch data to CSV
A library for crawling websites
News crawler for Naver
General purpose crawler
SQLi Crawler with JavaScript support.
A web crawler based on requests-html, mainly intended for URL validation testing.
Collection of tools to ease Selenium operations
An asyncio + aio-libs crawler that imitates the Scrapy framework
Simple website crawler that asynchronously crawls a website and all subpages that it can find, along with static content that they rely on.
Notion news macro
A crawler for product information of sellers on Ruten.
A lightweight tool to crawl stock data from Yahoo Finance.
A web crawler for GPTs to build knowledge bases
Spider templates for automatic crawlers.
Scrapy Crawlbase Proxy Middleware: Crawlbase interfacing middleware for Scrapy
th2_grpc_crawler_check2
Archive a reddit user's post history. Formatted overview of a profile, JSON containing every post, and picture downloads.
A simple and clear web crawler framework built on Python 3.6+
This Sphinx extension uses Algolia's v1 Crawler API - and may even be run standalone via CLI (without Sphinx).
Python library to crawl user data from Sharkey instances
Darkweb crawler & search engine.
LlamaIndex readers integration for Apify
A simple Cloud Academy course crawling & downloading tool
A crawler for Zhihu keyword search results, trending topics, user profiles, answers, column articles, comments, and more.
Fast Crawler
Basic web crawler
Lazy Crawler is a Python package that simplifies web scraping tasks. It builds upon Scrapy, a powerful web crawling and scraping framework, providing additional utilities and features for easier data extraction. With Lazy Crawler, you can quickly set up and deploy web scraping projects, saving time and effort.
Watchdogs to keep an eye on the world's changes. Read more: https://github.com/ClericPy/watchdogs.
A simple asynchronous web crawler for scraping all URLs within a domain
Web crawler and sitemap generator.
Windy-Web-Crawler is a command-line web crawler that crawls 'www.windy.com' and displays the temperature and wind speed for the next 5 days
Wikicivi WCC (Wikicivi Crawler Client) SDK