github.com/nktauserum/crawler-service
v0.0.0-20250604155618-1415acca3ec1 (Go)

Crawler Service

A web service for crawling and parsing content from URLs, supporting both HTML pages and PDF documents.

Features

  • HTML page parsing with title, sitename, and content extraction
  • PDF document downloading and text extraction
  • Content conversion from HTML to Markdown
  • RESTful API endpoint for crawling
  • Performance timing measurements
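
The README does not say which libraries implement these features. As a hedged illustration only, the HTML-to-Markdown step could be handled with the widely used github.com/JohannesKaufmann/html-to-markdown package:

package main

import (
    "fmt"
    "log"

    md "github.com/JohannesKaufmann/html-to-markdown"
)

func main() {
    // Convert an HTML fragment to Markdown. The repository does not name
    // its converter; this library is simply one common choice.
    converter := md.NewConverter("", true, nil)

    markdown, err := converter.ConvertString("<h1>Title</h1><p>Hello, <b>world</b>!</p>")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(markdown)
}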

API Endpoints

POST /crawl

Crawls and parses content from a given URL.

Request Body:

{
    "url": "https://example.com/page"
}

Response:

{
    "url": "https://example.com/page",
    "title": "Page Title",
    "sitename": "Example Site",
    "content": "Markdown formatted content",
    "html": "Original HTML content",
    "time": "1.234"
}
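
A hedged sketch of Go types matching these payloads; the field names follow the JSON above, but the actual type names in the repository may differ:

type CrawlRequest struct {
    URL string `json:"url"`
}

type CrawlResponse struct {
    URL      string `json:"url"`
    Title    string `json:"title"`
    Sitename string `json:"sitename"`
    Content  string `json:"content"` // Markdown-formatted content
    HTML     string `json:"html"`    // original HTML
    Time     string `json:"time"`    // processing time, apparently in seconds
}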

Setup

  • Clone the repository
  • Configure the port in your environment
  • Run the application
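
For example (the PORT variable name is an assumption; check the repository for the exact configuration key):

git clone https://github.com/nktauserum/crawler-service.git
cd crawler-service
export PORT=8080   # assumed environment variable
go run .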

Usage

The service listens on the configured port and accepts POST requests to the /crawl endpoint.

Example curl request:

curl -X POST http://localhost:8080/crawl \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com/page"}'

Error Handling

The service returns appropriate HTTP status codes:

  • 200: Successful crawl
  • 204: Empty URL provided
  • 500: Internal server error with error message
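
A hedged sketch of how a handler might map these cases to status codes; the handler name and the crawl helper are illustrative, not taken from the repository, and CrawlRequest is the type sketched earlier:

import (
    "encoding/json"
    "net/http"
)

func crawlHandler(w http.ResponseWriter, r *http.Request) {
    var req CrawlRequest
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError) // 500
        return
    }
    if req.URL == "" {
        w.WriteHeader(http.StatusNoContent) // 204: empty URL provided
        return
    }

    result, err := crawl(req.URL) // crawl is a hypothetical helper
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError) // 500 with error message
        return
    }

    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(result) // 200 on success
}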
