🕷️ scrapery


A blazing fast, lightweight, and modern parsing library for HTML, XML, and JSON, designed for web scraping and data extraction.
It supports both XPath and CSS selectors, along with seamless DOM navigation, making parsing and extracting data straightforward and intuitive.

✨ Features

  • ⚡ Blazing Fast Performance – Optimized for high-speed HTML, XML, and JSON parsing
  • 🎯 Dual Selector Support – Use XPath or CSS selectors for flexible extraction
  • 🛡 Comprehensive Error Handling – Detailed exceptions for different error scenarios
  • 🧩 Robust Parsing – Encoding detection and content normalization for reliable results
  • 🧑‍💻 Function-Based API – Clean and intuitive interface for ease of use
  • 📦 Multi-Format Support – Parse HTML, XML, and JSON in a single library
  • ⚙️ Versatile File Management – Create directories, list files, and handle paths effortlessly
  • 📝 Smart String Normalization – Clean text by fixing encodings, removing HTML tags, and standardizing whitespace
  • 🔍 Flexible CSV & Excel Handling – Read, filter, save, and append data
  • 🔄 Efficient JSON Streaming & Reading – Stream large JSON files or load fully with encoding detection
  • 💾 Robust File Reading & Writing – Auto-detect encoding, support large files with mmap, and save JSON or plain text cleanly
  • 🌐 URL & Domain Utilities – Extract base domains accurately using industry-standard parsing (see the sketch after this list)
  • 🛡 Input Validation & Error Handling – Custom validations to ensure reliable data processing
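
For example, the base-domain extraction mentioned above is typically backed by the Public Suffix List; here is a minimal sketch using the third-party tldextract package (scrapery's own helper names may differ):

# Illustrative Public Suffix List lookup via the third-party tldextract
# package; scrapery's own helper names may differ.
import tldextract

parts = tldextract.extract("https://forums.news.example.co.uk/page")
print(parts.subdomain)          # ➜ forums.news
print(parts.registered_domain)  # ➜ example.co.uk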

⚡ Performance Comparison

The following benchmarks were run on sample HTML and JSON data to compare scrapery with other popular Python libraries.

Library          HTML Parse Time   JSON Parse Time
scrapery         12 ms             8 ms
Other library    120 ms            N/A

⚠️ Actual performance may vary depending on your environment; these results are for illustration only. The libraries compared are neither affiliated with nor endorsed by scrapery.
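
To gauge performance in your own environment, a minimal timing harness such as the following can be used (a sketch built on the parse_html and parse_json functions shown below; the sample data and iteration counts are arbitrary):

# Minimal benchmark sketch; sample data and iteration counts are arbitrary.
import timeit

from scrapery import parse_html, parse_json

html_sample = "<html><body>" + "<p>row</p>" * 1000 + "</body></html>"
json_sample = '{"users": [{"name": "John", "age": 30}, {"name": "Jane", "age": 25}]}'

runs = 100
html_total = timeit.timeit(lambda: parse_html(html_sample), number=runs)
json_total = timeit.timeit(lambda: parse_json(json_sample), number=runs)

print(f"HTML parse: {html_total / runs * 1000:.2f} ms per call")
print(f"JSON parse: {json_total / runs * 1000:.2f} ms per call")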

📦 Installation

pip install scrapery

# -------------------------------
# HTML Example
# -------------------------------
from scrapery import *

html_content = """
<html>
    <body>
        <h1>Welcome</h1>
        <p>Hello<br>World</p>
        <a href="/about">About Us</a>
        <table>
            <tr><th>Name</th><th>Age</th></tr>
            <tr><td>John</td><td>30</td></tr>
            <tr><td>Jane</td><td>25</td></tr>
        </table>
    </body>
</html>
"""

# Parse HTML content
doc = parse_html(html_content)

# Get all table rows
rows = select_all(doc, "table tr")
print("All table rows:")
for row in rows:
    print(selector_content(row))

# Output:
#   All table rows:
#   NameAge
#   John30
#   Jane25

# Get first paragraph
paragraph = select_one(doc, "p")
print("\nFirst paragraph text:", selector_content(paragraph))
# ➜ First paragraph text: HelloWorld

# CSS selector: First <h1>
print(selector_content(doc, selector="h1"))  
# ➜ Welcome

# XPath: First <h1>
print(selector_content(doc, selector="//h1"))  
# ➜ Welcome

# CSS selector: <a href> attribute
print(selector_content(doc, selector="a", attr="href"))  
# ➜ /about

# XPath: <a> element href
print(selector_content(doc, selector="//a", attr="href"))  
# ➜ /about

# CSS: First <td> in table (John)
print(selector_content(doc, selector="td"))  
# ➜ John

# XPath: Second <td> (//td[2] = 30)
print(selector_content(doc, selector="//td[2]"))  
# ➜ 30

# XPath: Jane's age (//tr[3]/td[2])
print(selector_content(doc, selector="//tr[3]/td[2]"))  
# ➜ 25

# No CSS selector or XPath: full document text
print(selector_content(doc))  
# ➜ Welcome HelloWorld About Us Name Age John 30 Jane 25

# Root attribute (lang, if it existed)
print(selector_content(doc, attr="lang"))  
# ➜ None

# -------------------------------
# DOM navigation
# -------------------------------
# Example 1: parent, children, siblings
p_elem = select_one(doc,"p")
print("Parent tag of <p>:", parent(p_elem).tag)
print("Children of <p>:", [c.tag for c in children(p_elem)])
print("Siblings of <p>:", [s.tag for s in siblings(p_elem)])

# Example 2: next_sibling, prev_sibling
print("Next sibling of <p>:", next_sibling(p_elem).tag)
print("Previous sibling of <p>:", prev_sibling(p_elem).tag)

# Example 3: ancestors and descendants
ancs = ancestors(p_elem)
print("Ancestor tags of <p>:", [a.tag for a in ancs])
desc = descendants(select_one(doc,"table"))
print("Descendant tags of <table>:", [d.tag for d in desc])

# Example 4: class utilities
div_html = '<div class="card primary"></div>'
div_elem = parse_html(div_html)
print("Has class 'card'? ->", has_class(div_elem, "card"))
print("Classes:", get_classes(div_elem))

# -------------------------------
# Resolve relative URLs
# -------------------------------

html = """
<html>
  <body>
    <a href="/about">About</a>
    <img src="/images/logo.png">
  </body>
</html>
"""

doc = parse_html(html)
base = "https://example.com"

# Get <a> links
print(absolute_url(doc, "a", base_url=base))
# → 'https://example.com/about'

# Get <img> sources
print(absolute_url(doc, "img", base_url=base, attr="src"))
# → 'https://example.com/images/logo.png'
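
# Relative-URL resolution follows standard RFC 3986 joining rules; the results
# above can be cross-checked with the standard library:
from urllib.parse import urljoin

print(urljoin("https://example.com", "/about"))            # https://example.com/about
print(urljoin("https://example.com", "/images/logo.png"))  # https://example.com/images/logo.png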

# -------------------------------
# XML Example
# -------------------------------

xml_content = """
<users>
    <user id="1"><name>John</name></user>
    <user id="2"><name>Jane</name></user>
</users>
"""

xml_doc = parse_xml(xml_content)
users = find_xml_all(xml_doc, "//user")
for u in users:
    print(u.attrib, u.xpath("./name/text()")[0])

# Convert XML to dict
xml_dict = xml_to_dict(xml_doc)
print(xml_dict)
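
# For a rough illustration of what an XML-to-dict conversion does, here is a
# minimal recursive converter using only the standard library. This is a
# sketch, not scrapery's internals; the actual output layout may differ.
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    node = dict(elem.attrib)
    text = (elem.text or "").strip()
    if text:
        node["text"] = text
    for child in elem:
        node.setdefault(child.tag, []).append(element_to_dict(child))
    return node

root = ET.fromstring('<users><user id="1"><name>John</name></user></users>')
print({root.tag: element_to_dict(root)})
# ➜ {'users': {'user': [{'id': '1', 'name': [{'text': 'John'}]}]}}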

# -------------------------------
# JSON Example
# -------------------------------

json_content = '{"users":[{"name":"John","age":30},{"name":"Jane","age":25}]}'
data = parse_json(json_content)

# Access using path
john_age = json_get_value(data, "users.0.age")
print("John's age:", john_age)

# Extract all names
names = json_extract_values(data, "name")
print("Names:", names)

# Flatten JSON
flat = json_flatten(data)
print("Flattened JSON:", flat)

# -------------------------------
# Utility Example
# -------------------------------

1. Create a Directory

from scrapery import create_directory
# Creates a directory if it doesn't already exist.


# Example 1: Creating a new directory
create_directory("new_folder")

# Example 2: Creating nested directories
create_directory("parent_folder/sub_folder")

================================================================
2. Standardize a String

from scrapery import standardized_string
# Standardizes the input string: removes whitespace characters like \n, \t, and \r,
# strips HTML tags, collapses multiple spaces, and trims leading/trailing spaces.


# Example 1: Standardize a string with newlines, tabs, and HTML tags
input_string_1 = "<html><body>  Hello \nWorld!  \tThis is a test.  </body></html>"
print("Standardized String 1:", standardized_string(input_string_1))

# Example 2: Input string with multiple spaces and line breaks
input_string_2 = "  This   is   a  \n\n   string   with  spaces and \t tabs.  "
print("Standardized String 2:", standardized_string(input_string_2))

# Example 3: Pass an empty string
input_string_3 = ""
print("Standardized String 3:", standardized_string(input_string_3))

# Example 4: Pass None (invalid input)
input_string_4 = None
print("Standardized String 4:", standardized_string(input_string_4))

================================================================
3. Read CSV

from scrapery import read_csv

csv_file_path = 'data.csv'
get_value_by_col_name = 'URL'
filter_col_name = 'Category'
include_filter_col_values = ['Tech']

result = read_csv(csv_file_path, get_value_by_col_name, filter_col_name, include_filter_col_values)
print(result)

Sample CSV

Category,URL
Tech,https://tech1.com
Tech,https://tech2.com
Science,https://science1.com

Result

['https://tech1.com', 'https://tech2.com']
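
For comparison, the same filtered read using only the standard library's csv module:

# Standard-library equivalent of the filtered read above.
import csv

with open("data.csv", newline="", encoding="utf-8") as f:
    urls = [row["URL"] for row in csv.DictReader(f) if row["Category"] in ["Tech"]]

print(urls)  # ['https://tech1.com', 'https://tech2.com']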

================================================================
4. Save to CSV

from scrapery import save_to_csv

data_list = [[1, 'Alice', 23], [2, 'Bob', 30], [3, 'Charlie', 25]]
headers = ['ID', 'Name', 'Age']
output_file_path = 'output_data.csv'

# Default separator (comma)
save_to_csv(data_list, headers, output_file_path)

# Tab separator
save_to_csv(data_list, headers, output_file_path, sep="\t")

# Semicolon separator
save_to_csv(data_list, headers, output_file_path, sep=";")

Output (default, sep=","):
ID,Name,Age
1,Alice,23
2,Bob,30
3,Charlie,25

Output (sep="\t"):
ID  Name    Age
1   Alice   23
2   Bob 30
3   Charlie 25

================================================================
5. Save to Excel file 

from scrapery import save_to_xls

# Reuses data_list, headers, and output_file_path from the CSV examples above
save_to_xls(data_list, headers, output_file_path)

================================================================
6. List files in a directory

from scrapery import list_files

output_dir = "output"  # assumed output directory; adjust to your setup
files = list_files(directory=output_dir, extension="csv")
print("CSV files in output directory:", files)

================================================================
7. Read back file content

from typing import Generator

from scrapery import read_file_content

# Example 1: Read small JSON file fully
file_path_small_json = 'small_data.json'
content = read_file_content(file_path_small_json, stream_json=False)
print("Small JSON file content (fully loaded):")
print(content)  # content will be a dict or list depending on JSON structure

# Example 2: Read large JSON file by streaming (returns a generator)
file_path_large_json = 'large_data.json'
json_stream: Generator[dict, None, None] = read_file_content(file_path_large_json, stream_json=True)
print("\nLarge JSON file content streamed:")
for item in json_stream:
    print(item)  # process each streamed JSON object one by one

# Example 3: Read a large text file using mmap
file_path_large_txt = 'large_text.txt'
text_content = read_file_content(file_path_large_txt)
print("\nLarge text file content (using mmap):")
print(text_content[:500])  # print first 500 characters

# Example 4: Read a small text file with encoding detection
file_path_small_txt = 'small_text.txt'
text_content = read_file_content(file_path_small_txt)
print("\nSmall text file content (with encoding detection):")
print(text_content)
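
# For comparison only: streaming a large top-level JSON array is commonly done
# with the third-party ijson package; read_file_content(stream_json=True)
# yields items one at a time in the same way.
import ijson

with open("large_data.json", "rb") as f:
    for item in ijson.items(f, "item"):  # "item" = each element of the top-level array
        print(item)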

================================================================
8. Save to file

from scrapery import save_file_content

# Example 1: Save plain text content to a file
text_content = "Hello, this is a sample text file.\nWelcome to file handling in Python!"
save_file_content("output/text_file.txt", text_content)

# Output: Content successfully written to output/text_file.txt

# Example 2: Save JSON content to a file
json_content = {
    "name": "Alice",
    "age": 30,
    "skills": ["Python", "Data Science", "Machine Learning"]
}
save_file_content("output/data.json", json_content)

# Output: JSON content successfully written to output/data.json

# Example 3: Save number (non-string content) to a file
number_content = 12345
save_file_content("output/number.txt", number_content)

# Output: Content successfully written to output/number.txt

# Example 4: Append text content to an existing file
append_text = "\nThis line is appended."
save_file_content("output/text_file.txt", append_text, mode="a")

# Output: Content successfully written to output/text_file.txt

