
WebCap is an extremely lightweight web screenshot tool. It doesn't require Selenium, Playwright, Puppeteer, or any other browser automation framework; all it needs is a working Chrome installation. It is used by BBOT.
pipx install webcap
webcap server
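Since the only requirement is a working Chrome installation, a custom browser location can be supplied with the documented -c/--chrome option (listed in the full option list further below). The path in this sketch is only an example:
# point webcap at a specific Chrome binary (path is illustrative)
webcap scan http://example.com --chrome /usr/bin/google-chrome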
Demo: https://github.com/user-attachments/assets/a5dea3fb-fa01-41e7-90cd-67c6efa3d6e5
WebCap's most distinctive feature is its ability to capture not only the fully rendered DOM, but also every snippet of parsed JavaScript (whether inline or external) and the full content of every HTTP request and response (including JavaScript API calls, etc.). For convenience, it can output directly to JSON.
# Capture screenshots of all URLs in urls.txt
webcap scan urls.txt -o ./my_screenshots
# Output to JSON, and include the fully-rendered DOM
webcap scan urls.txt --json --dom | jq
# Capture requests and responses
webcap scan urls.txt --json --requests --responses | jq
# Capture javascript
webcap scan urls.txt --json --javascript | jq
# Extract text from screenshots
webcap scan urls.txt --json --ocr | jq
# Start the server
webcap server
# Browse to http://localhost:8000
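Because --json writes results to stdout (the examples above pipe it into jq), the output can also be post-processed with a short script. The sketch below is only illustrative: it assumes one JSON object per captured URL and field names such as "url" and "dom", which may not match webcap's actual output schema.
import json
import sys

# pipe webcap into this script, e.g.:
#   webcap scan urls.txt --json --dom | python summarize.py
# assumes one JSON object per line; the "url" and "dom" keys are assumptions
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    record = json.loads(line)
    url = record.get("url", "<unknown>")
    dom = record.get("dom") or ""
    print(f"{url}: {len(dom)} characters of rendered DOM")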
from webcap import Browser

async def main():
    # create a browser instance
    browser = Browser()
    # start the browser
    await browser.start()
    # take a screenshot
    webscreenshot = await browser.screenshot("http://example.com")
    # save the screenshot to a file
    with open("screenshot.png", "wb") as f:
        f.write(webscreenshot.blob)
    # stop the browser
    await browser.stop()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
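The same calls can be reused for a batch of URLs. The sketch below builds only on what the example above demonstrates (Browser(), start(), screenshot(), .blob, stop()); whether screenshot() may be awaited concurrently is an assumption, and the URL list and filenames are placeholders.
import asyncio
from webcap import Browser

async def capture_all(urls):
    browser = Browser()
    await browser.start()
    try:
        # one screenshot per URL; concurrent awaiting is assumed to be supported
        shots = await asyncio.gather(*(browser.screenshot(url) for url in urls))
        for url, shot in zip(urls, shots):
            # derive a filename from the URL (illustrative only)
            filename = url.split("://", 1)[-1].replace("/", "_") + ".png"
            with open(filename, "wb") as f:
                f.write(shot.blob)
    finally:
        # always shut the browser down, even if a capture fails
        await browser.stop()

if __name__ == "__main__":
    asyncio.run(capture_all(["http://example.com", "http://example.org"]))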
Usage: webcap scan [OPTIONS] URLS
Screenshot URLs
 Arguments:
   *  urls  TEXT    URL(s) to capture, or file(s) containing URLs  [default: None]  [required]

 Options:
   --json    -j                Output JSON
   --chrome  -c  TEXT          Path to Chrome executable  [default: None]
   --output  -o  OUTPUT_DIR    Output directory  [default: /home/bls/Downloads/code/webcap/screenshots]
   --help                      Show this message and exit.

 Screenshots:
   --resolution      -r  RESOLUTION    Resolution to capture  [default: 1440x900]
   --full-page       -f                Capture the full page (larger resolution images)
   --no-screenshots                    Only visit the sites; don't capture screenshots (useful with -j/--json)

 Performance:
   --threads  -t  INTEGER    Number of threads to use  [default: 15]
   --timeout  -T  INTEGER    Timeout before giving up on a web request  [default: 10]
   --delay        SECONDS    Delay before capturing  [default: 3.0]

 HTTP:
   --user-agent  -U  TEXT    User agent to use  [default: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36]
   --headers     -H  TEXT    Additional headers to send in format: 'Header-Name: Header-Value' (multiple supported)
   --proxy       -p  TEXT    HTTP proxy to use  [default: None]

 JSON (only apply when -j/--json is used):
   --base64        -b         Output each screenshot as base64
   --dom           -d         Capture the fully-rendered DOM
   --responses     -rs        Capture the full body of each HTTP response (including API calls etc.)
   --requests      -rq        Capture the full body of each HTTP request (including API calls etc.)
   --javascript    -J         Capture every snippet of Javascript (inline + external)
   --ignore-types      TEXT   Ignore these filetypes  [default: Image, Media, Font, Stylesheet]
   --ocr / --no-ocr           Extract text from screenshots  [default: no-ocr]
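To show how the option groups combine, here is one possible invocation built only from the flags documented above; the header value and proxy address are placeholders:
# 1920x1080 full-page captures through a local proxy, with an extra header,
# 30 threads, and JSON output that includes the rendered DOM
webcap scan urls.txt -r 1920x1080 --full-page \
    -H 'Authorization: Bearer <token>' \
    -p http://127.0.0.1:8080 \
    -t 30 --json --dom | jq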
FAQs
An ultra-lightweight web screenshot tool written in Python
We found that webcap demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.