html-text


html-text - PyPI package: comparing version 0.7.0 to 0.7.1
+63
.gitignore
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
.pytest_cache
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# PyCharm
.idea
[tox]
envlist = py39,py310,py311,py312,py313,py39-parsel,twinecheck
[testenv]
deps =
pytest
pytest-cov >= 7.0.0
py39-parsel: parsel
commands =
pytest --cov=html_text --cov-report=xml --cov-report=term-missing {env:PYTEST_DOC:} {posargs:.}
[testenv:py39-parsel]
setenv =
PYTEST_DOC = --doctest-modules --doctest-glob='*.rst'
[testenv:twinecheck]
basepython = python3
deps =
twine==6.2.0
build==1.3.0
commands =
python -m build --sdist
twine check dist/*
[testenv:pre-commit]
deps = pre-commit
commands = pre-commit run --all-files --show-diff-on-failure
skip_install = true
[testenv:typing]
basepython = python3
deps =
mypy==1.18.2
parsel==1.10.0
pytest==8.4.2
types-lxml==2025.8.25
commands =
mypy {posargs: html_text tests}
+7
-0

@@ -5,2 +5,9 @@ =======

0.7.1 (unreleased)
------------------
* Added support for Python 3.14.
* Explicitly re-export public names.
* Migrated the build system to ``hatchling``.
* CI improvements.
0.7.0 (2025-02-10)

@@ -7,0 +14,0 @@ ------------------

+12
-1

@@ -1,2 +0,2 @@

__version__ = "0.7.0"
__version__ = "0.7.1"

@@ -13,1 +13,12 @@ from .html_text import (

)
__all__ = (
"DOUBLE_NEWLINE_TAGS",
"NEWLINE_TAGS",
"cleaned_selector",
"cleaner",
"etree_to_text",
"extract_text",
"parse_html",
"selector_to_text",
)
+2
-2

@@ -194,3 +194,3 @@ from __future__ import annotations

"""
import parsel
import parsel # noqa: PLC0415

@@ -214,3 +214,3 @@ if isinstance(sel, parsel.SelectorList):

"""Clean parsel.selector."""
import parsel
import parsel # noqa: PLC0415

@@ -217,0 +217,0 @@ try:

@@ -11,2 +11,1 @@

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+10
-133

@@ -1,12 +0,11 @@

Metadata-Version: 2.2
Name: html_text
Version: 0.7.0
Metadata-Version: 2.4
Name: html-text
Version: 0.7.1
Summary: Extract text from HTML
Home-page: https://github.com/zytedata/html-text
Author: Konstantin Lopukhin
Author-email: kostia.lopuhin@gmail.com
License: MIT license
Project-URL: Homepage, https://github.com/zytedata/html-text
Author-email: Konstantin Lopukhin <kostia.lopuhin@gmail.com>
License-Expression: MIT
License-File: LICENSE
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English

@@ -19,15 +18,7 @@ Classifier: Programming Language :: Python :: 3

Classifier: Programming Language :: Python :: 3.13
Description-Content-Type: text/x-rst
License-File: LICENSE
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.9
Requires-Dist: lxml
Requires-Dist: lxml-html-clean
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license
Dynamic: requires-dist
Dynamic: summary
Description-Content-Type: text/x-rst

@@ -163,115 +154,1 @@ ============

`webstruct <http://webstruct.readthedocs.io/en/latest/>`_ library.
=======
History
=======
0.7.0 (2025-02-10)
------------------
* Removed support for Python 3.8.
* Added support for Python 3.13.
* Added type hints and ``py.typed``.
* CI improvements.
0.6.2 (2024-05-01)
------------------
* Support deeper trees by using iteration instead of recursion.
0.6.1 (2024-04-23)
------------------
* Fixed HTML comment and processing instruction handling.
* Use ``lxml-html-clean`` instead of ``lxml[html_clean]`` in setup.py,
to avoid https://github.com/jazzband/pip-tools/issues/2004
0.6.0 (2024-04-04)
------------------
* Moved the Git repository to https://github.com/zytedata/html-text.
* Added official support for Python 3.9-3.12.
* Removed support for Python 2.7 and 3.5-3.7.
* Switched the ``lxml`` dependency to ``lxml[html_clean]`` to support
``lxml >= 5.2.0``.
* Switch from Travis CI to GitHub Actions.
* CI improvements.
0.5.2 (2020-07-22)
------------------
* Handle lxml Cleaner exceptions (a workaround for
https://bugs.launchpad.net/lxml/+bug/1838497 );
* Python 3.8 support;
* testing improvements.
0.5.1 (2019-05-27)
------------------
Fixed whitespace handling when ``guess_punct_space`` is False: html-text was
producing unnecessary spaces after newlines.
0.5.0 (2018-11-19)
------------------
Parsel dependency is removed in this release,
though parsel is still supported.
* ``parsel`` package is no longer required to install and use html-text;
* ``html_text.etree_to_text`` function allows extracting text from lxml Elements;
* ``html_text.cleaner`` is an ``lxml.html.clean.Cleaner`` instance with
options tuned for text extraction speed and quality;
* test and documentation improvements;
* Python 3.7 support.
0.4.1 (2018-09-25)
------------------
Fixed a regression in 0.4.0 release: text was empty when
``html_text.extract_text`` is called with a node with text, but
without children.
0.4.0 (2018-09-25)
------------------
This is a backwards-incompatible release: by default html_text functions
now add newlines after elements, if appropriate, to make the extracted text
look more like how it is rendered in a browser.
To turn it off, pass the ``guess_layout=False`` option to html_text functions.
* ``guess_layout`` option to make extracted text look more like how
it is rendered in a browser.
* Add tests of layout extraction for real webpages.
0.3.0 (2017-10-12)
------------------
* Expose functions that operate on selectors,
use ``.//text()`` to extract text from selector.
0.2.1 (2017-05-29)
------------------
* Packaging fix (include CHANGES.rst)
0.2.0 (2017-05-29)
------------------
* Fix unwanted joins of words with inline tags: spaces are added for inline
tags too, but a heuristic is used to preserve punctuation without extra spaces.
* Accept parsed html trees.
0.1.1 (2017-01-16)
------------------
* Travis-CI and codecov.io integrations added
0.1.0 (2016-09-27)
------------------
* First release on PyPI.

@@ -0,3 +1,38 @@

[build-system]
requires = ["hatchling>=1.27.0"]
build-backend = "hatchling.build"
[project]
name = "html-text"
dynamic = ["version"]
description = "Extract text from HTML"
readme = "README.rst"
license = "MIT"
license-files = ["LICENSE"]
authors = [
{ name = "Konstantin Lopukhin", email = "kostia.lopuhin@gmail.com" },
]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
]
dependencies = [
"lxml",
"lxml-html-clean",
]
requires-python = ">=3.9"
[project.urls]
Homepage = "https://github.com/zytedata/html-text"
[tool.bumpversion]
current_version = "0.7.0"
current_version = "0.7.1"
commit = true

@@ -8,7 +43,2 @@ tag = true

[[tool.bumpversion.files]]
filename = "setup.py"
search = "version=\"{current_version}\""
replace = "version=\"{new_version}\""
[[tool.bumpversion.files]]
filename = "html_text/__init__.py"

@@ -21,7 +51,16 @@ search = "__version__ = \"{current_version}\""

[tool.coverage.report]
exclude_also = [
"if TYPE_CHECKING:",
[tool.hatch.version]
path = "html_text/__init__.py"
[tool.hatch.build.targets.sdist]
include = [
"/html_text",
"/tests",
"/CHANGES.rst",
"/tox.ini",
]
[tool.mypy]
strict = true
[[tool.mypy.overrides]]

@@ -34,2 +73,6 @@ module = "tests.*"

extend-select = [
# flake8-builtins
"A",
# flake8-async
"ASYNC",
# flake8-bugbear

@@ -39,2 +82,4 @@ "B",

"C4",
# flake8-commas
"COM",
# pydocstyle

@@ -94,2 +139,4 @@ "D",

ignore = [
# Trailing comma missing
"COM812",
# Missing docstring in public module

@@ -149,10 +196,8 @@ "D100",

"S101",
# Using lxml to parse untrusted data is known to be vulnerable to XML attacks
"S320",
]
[tool.ruff.lint.per-file-ignores]
"html_text/__init__.py" = ["F401"]
[tool.ruff.lint.isort]
split-on-trailing-comma = false
[tool.ruff.lint.pydocstyle]
convention = "pep257"


Metadata-Version: 2.2
Name: html_text
Version: 0.7.0
Summary: Extract text from HTML
Home-page: https://github.com/zytedata/html-text
Author: Konstantin Lopukhin
Author-email: kostia.lopuhin@gmail.com
License: MIT license
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: lxml
Requires-Dist: lxml-html-clean
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license
Dynamic: requires-dist
Dynamic: summary
============
HTML to Text
============
.. image:: https://img.shields.io/pypi/v/html-text.svg
:target: https://pypi.python.org/pypi/html-text
:alt: PyPI Version
.. image:: https://img.shields.io/pypi/pyversions/html-text.svg
:target: https://pypi.python.org/pypi/html-text
:alt: Supported Python Versions
.. image:: https://github.com/zytedata/html-text/workflows/tox/badge.svg
:target: https://github.com/zytedata/html-text/actions
:alt: Build Status
.. image:: https://codecov.io/github/zytedata/html-text/coverage.svg?branch=master
:target: https://codecov.io/gh/zytedata/html-text
:alt: Coverage report
Extract text from HTML
* Free software: MIT license
How is html_text different from ``.xpath('//text()')`` from LXML
or ``.get_text()`` from Beautiful Soup?
* Text extracted with ``html_text`` does not contain inline styles, JavaScript, comments, or other text that is not normally visible to users;
* ``html_text`` normalizes whitespace, but in a smarter way than ``.xpath('normalize-space()')``, adding spaces around inline elements (which are often used as block elements in HTML markup) and trying to avoid adding extra spaces for punctuation;
* ``html-text`` can add newlines (e.g. after headers or paragraphs), so that the output text looks more like how it is rendered in browsers.
Install
-------
Install with pip::
pip install html-text
The package depends on lxml, so you might need to install additional
packages: http://lxml.de/installation.html
Usage
-----
Extract text from HTML::
>>> import html_text
>>> html_text.extract_text('<h1>Hello</h1> world!')
'Hello\n\nworld!'
>>> html_text.extract_text('<h1>Hello</h1> world!', guess_layout=False)
'Hello world!'
The passed HTML is first cleaned of invisible non-text content, such
as styles, and then the text is extracted.
You can also pass an already parsed ``lxml.html.HtmlElement``:
>>> import html_text
>>> tree = html_text.parse_html('<h1>Hello</h1> world!')
>>> html_text.extract_text(tree)
'Hello\n\nworld!'
If you want, you can handle cleaning manually; use the lower-level
``html_text.etree_to_text`` in this case:
>>> import html_text
>>> tree = html_text.parse_html('<h1>Hello<style>.foo{}</style>!</h1>')
>>> cleaned_tree = html_text.cleaner.clean_html(tree)
>>> html_text.etree_to_text(cleaned_tree)
'Hello!'
parsel.Selector objects are also supported; you can define
a parsel.Selector to extract text only from specific elements:
>>> import html_text
>>> sel = html_text.cleaned_selector('<h1>Hello</h1> world!')
>>> subsel = sel.xpath('//h1')
>>> html_text.selector_to_text(subsel)
'Hello'
Note that parsel.Selector objects are not cleaned automatically; you need to
call ``html_text.cleaned_selector`` first.
Main functions and objects:
* ``html_text.extract_text`` accepts HTML and returns the extracted text.
* ``html_text.etree_to_text`` accepts a parsed lxml Element and returns the extracted text; it is a lower-level function that does not handle cleaning.
* ``html_text.cleaner`` is an ``lxml.html.clean.Cleaner`` instance which can be used with ``html_text.etree_to_text``; its options are tuned for speed and text extraction quality.
* ``html_text.cleaned_selector`` accepts HTML as text or as an ``lxml.html.HtmlElement``, and returns a cleaned ``parsel.Selector``.
* ``html_text.selector_to_text`` accepts a ``parsel.Selector`` and returns the extracted text.
If ``guess_layout`` is True (default), a newline is added before and after
``newline_tags``, and two newlines are added before and after
``double_newline_tags``. This heuristic makes the extracted text
more similar to how it is rendered in a browser. The default newline and
double newline tags can be found in ``html_text.NEWLINE_TAGS``
and ``html_text.DOUBLE_NEWLINE_TAGS``.
It is possible to customize how newlines are added, using the ``newline_tags``
and ``double_newline_tags`` arguments (which are ``html_text.NEWLINE_TAGS`` and
``html_text.DOUBLE_NEWLINE_TAGS`` by default). For example, to not add a
newline after ``<div>`` tags:
>>> newline_tags = html_text.NEWLINE_TAGS - {'div'}
>>> html_text.extract_text('<div>Hello</div> world!',
... newline_tags=newline_tags)
'Hello world!'
Apart from just getting text from the page (e.g. for display or search),
one intended usage of this library is machine learning (feature extraction).
If you want to use the text of an HTML page as a feature (e.g. for classification),
this library gives you plain text that you can later feed into a standard text
classification pipeline.
If you feel that you need html structure as well, check out
`webstruct <http://webstruct.readthedocs.io/en/latest/>`_ library.
lxml
lxml-html-clean
CHANGES.rst
LICENSE
MANIFEST.in
README.rst
pyproject.toml
setup.py
html_text/__init__.py
html_text/html_text.py
html_text/py.typed
html_text.egg-info/PKG-INFO
html_text.egg-info/SOURCES.txt
html_text.egg-info/dependency_links.txt
html_text.egg-info/not-zip-safe
html_text.egg-info/requires.txt
html_text.egg-info/top_level.txt
tests/__init__.py
tests/test_html_text.py
tests/test_webpages/A Light in the Attic | Books to Scrape - Sandbox.html
tests/test_webpages/A Light in the Attic | Books to Scrape - Sandbox.txt
tests/test_webpages/IANA — IANA-managed Reserved Domains.html
tests/test_webpages/IANA — IANA-managed Reserved Domains.txt
tests/test_webpages/Scrapinghub Enterprise Solutions.html
tests/test_webpages/Scrapinghub Enterprise Solutions.txt
tests/test_webpages/Tutorial — Webstruct 0.6 documentation.html
tests/test_webpages/Tutorial — Webstruct 0.6 documentation.txt
tests/test_webpages/Webstruct — Webstruct 0.6 documentation.html
tests/test_webpages/Webstruct — Webstruct 0.6 documentation.txt
include CHANGES.rst
include LICENSE
include README.rst
recursive-include tests *
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
[egg_info]
tag_build =
tag_date = 0
#!/usr/bin/env python
from pathlib import Path
from setuptools import setup
readme = Path("README.rst").read_text(encoding="utf-8")
history = Path("CHANGES.rst").read_text(encoding="utf-8")
setup(
name="html_text",
version="0.7.0",
description="Extract text from HTML",
long_description=readme + "\n\n" + history,
long_description_content_type="text/x-rst",
author="Konstantin Lopukhin",
author_email="kostia.lopuhin@gmail.com",
url="https://github.com/zytedata/html-text",
packages=["html_text"],
package_data={
"html_text": ["py.typed"],
},
include_package_data=True,
install_requires=[
"lxml",
"lxml-html-clean",
],
license="MIT license",
zip_safe=False,
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
],
)