urlstd
urlstd is a Python implementation of the WHATWG URL Living Standard.
This library provides URL class, URLSearchParams class, and low-level APIs that comply with the URL specification.
URL(url: str, base: Optional[str | URL] = None)
    can_parse(url: str, base: Optional[str | URL] = None) -> bool
    __str__() -> str
    readonly property href: str
    readonly property origin: str
    property protocol: str
    property username: str
    property password: str
    property host: str
    property hostname: str
    property port: str
    property pathname: str
    property search: str
    readonly property search_params: URLSearchParams
    property hash: str
    __eq__(other: Any) -> bool and equals(other: URL, exclude_fragments: bool = False) -> bool

URLSearchParams(init: Optional[str | Sequence[Sequence[str | int | float]] | dict[str, str | int | float] | URLRecord | URLSearchParams] = None)
    __len__() -> int
    append(name: str, value: str | int | float) -> None
    delete(name: str, value: Optional[str | int | float] = None) -> None
    get(name: str) -> str | None
    get_all(name: str) -> tuple[str, ...]
    has(name: str, value: Optional[str | int | float] = None) -> bool
    set(name: str, value: str | int | float) -> None
    sort() -> None
    __iter__() -> Iterator[tuple[str, str]]
    __str__() -> str

Low-level APIs
parse_url(urlstring: str, base: Optional[str | URLRecord] = None, encoding: str = "utf-8") -> URLRecord

BasicURLParser
    parse(urlstring: str, base: Optional[URLRecord] = None, encoding: str = "utf-8", url: Optional[URLRecord] = None, state_override: Optional[URLParserState] = None) -> URLRecord

URLRecord
    property scheme: str = ""
    property username: str = ""
    property password: str = ""
    property host: Optional[str | int | tuple[int, ...]] = None
    property port: Optional[int] = None
    property path: list[str] | str = []
    property query: Optional[str] = None
    property fragment: Optional[str] = None
    readonly property origin: Origin | None
    is_special() -> bool
    is_not_special() -> bool
    includes_credentials() -> bool
    has_opaque_path() -> bool
    cannot_have_username_password_port() -> bool
    serialize_url(exclude_fragment: bool = False) -> str
    serialize_host() -> str
    serialize_path() -> str
    __eq__(other: Any) -> bool and equals(other: URLRecord, exclude_fragments: bool = False) -> bool

Hosts (domains and IP addresses)
IDNA
    domain_to_ascii(domain: str, be_strict: bool = False) -> str
    domain_to_unicode(domain: str, be_strict: bool = False) -> str

Host
    parse(host: str, is_not_special: bool = False) -> str | int | tuple[int, ...]
    serialize(host: str | int | Sequence[int]) -> str

string_percent_decode(s: str) -> bytes
string_percent_encode(s: str, safe: str, encoding: str = "utf-8", space_as_plus: bool = False) -> str

application/x-www-form-urlencoded parser
parse_qsl(query: bytes) -> list[tuple[str, str]]

application/x-www-form-urlencoded serializer

urlencode(query: Sequence[tuple[str, str]], encoding: str = "utf-8") -> str

Validation
HostValidator
    is_valid(host: str) -> bool
    is_valid_domain(domain: str) -> bool
    is_valid_ipv4_address(address: str) -> bool
    is_valid_ipv6_address(address: str) -> bool

URLValidator
    is_valid(urlstring: str, base: Optional[str | URLRecord] = None, encoding: str = "utf-8") -> bool
    is_valid_url_scheme(value: str) -> bool

Compatibility with standard library urllib
urlstd.parse.urlparse(urlstring: str, base: str = None, encoding: str = "utf-8", allow_fragments: bool = True) -> urllib.parse.ParseResult
urlstd.parse.urlparse() is an alternative to urllib.parse.urlparse(). It parses a string representation of a URL using the basic URL parser and returns a urllib.parse.ParseResult.
To parse a string into a URL:
from urlstd.parse import URL
URL('http://user:pass@foo:21/bar;par?b#c')
# → <URL(href='http://user:pass@foo:21/bar;par?b#c', origin='http://foo:21', protocol='http:', username='user', password='pass', host='foo:21', hostname='foo', port='21', pathname='/bar;par', search='?b', hash='#c')>
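For contrast, here is how the standard library's urlsplit (not urlstd) handles the same input; unlike the WHATWG basic URL parser, it splits the string without normalizing or percent-encoding any component:

```python
from urllib.parse import urlsplit

# Standard-library split of the same URL: components are returned
# verbatim, with no WHATWG-style normalization.
parts = urlsplit('http://user:pass@foo:21/bar;par?b#c')
parts.scheme    # → 'http'
parts.hostname  # → 'foo'
parts.port      # → 21
parts.path      # → '/bar;par'
parts.query     # → 'b'
parts.fragment  # → 'c'
```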
To parse a string into a URL using a base URL:
url = URL('?ﬃ&🌈', base='http://example.org')
url # → <URL(href='http://example.org/?%EF%AC%83&%F0%9F%8C%88', origin='http://example.org', protocol='http:', username='', password='', host='example.org', hostname='example.org', port='', pathname='/', search='?%EF%AC%83&%F0%9F%8C%88', hash='')>
url.search # → '?%EF%AC%83&%F0%9F%8C%88'
params = url.search_params
params # → URLSearchParams([('ﬃ', ''), ('🌈', '')])
params.sort()
params # → URLSearchParams([('🌈', ''), ('ﬃ', '')])
url.search # → '?%F0%9F%8C%88=&%EF%AC%83='
str(url) # → 'http://example.org/?%F0%9F%8C%88=&%EF%AC%83='
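The escape sequences in the example above are ordinary UTF-8 percent-encodings, which can be checked with the standard library alone (independent of urlstd):

```python
from urllib.parse import quote, unquote

# Decode the escapes seen in the serialized query string.
unquote('%EF%AC%83')     # → 'ﬃ' (U+FB03, LATIN SMALL LIGATURE FFI)
unquote('%F0%9F%8C%88')  # → '🌈' (U+1F308)

# And the reverse direction:
quote('🌈')              # → '%F0%9F%8C%88'
```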
To validate a URL string:
from urlstd.parse import URL, URLValidator, ValidityState
URL.can_parse('https://user:password@example.org/') # → True
URLValidator.is_valid('https://user:password@example.org/') # → False
validity = ValidityState()
URLValidator.is_valid('https://user:password@example.org/', validity=validity) # → False
validity.valid # → False
validity.validation_errors # → 1
validity.descriptions[0] # → "invalid-credentials: input includes credentials: 'https://user:password@example.org/' at position 21"
URL.can_parse('file:///C|/demo') # → True
URLValidator.is_valid('file:///C|/demo') # → False
validity = ValidityState()
URLValidator.is_valid('file:///C|/demo', validity=validity) # → False
validity.valid # → False
validity.validation_errors # → 1
validity.descriptions[0] # → "invalid-URL-unit: code point is found that is not a URL unit: U+007C (|) in 'file:///C|/demo' at position 9"
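HostValidator's IP-address checks have rough standard-library analogues via the ipaddress module. The sketch below is not urlstd's implementation (WHATWG host parsing has its own rules, and domains are not covered), but it illustrates the kind of checks involved:

```python
import ipaddress

def is_valid_ipv4_address(address: str) -> bool:
    """Rough stdlib stand-in for an IPv4 validity check."""
    try:
        ipaddress.IPv4Address(address)
        return True
    except ValueError:
        return False

def is_valid_ipv6_address(address: str) -> bool:
    """Rough stdlib stand-in for an IPv6 validity check."""
    try:
        ipaddress.IPv6Address(address)
        return True
    except ValueError:
        return False

is_valid_ipv4_address('127.0.0.1')  # → True
is_valid_ipv4_address('999.0.0.1')  # → False
is_valid_ipv6_address('::1')        # → True
```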
To parse a string into a urllib.parse.ParseResult using a base URL:
import html
from urllib.parse import unquote
from urlstd.parse import urlparse
pr = urlparse('?aÿb', base='http://example.org/foo/', encoding='utf-8')
pr # → ParseResult(scheme='http', netloc='example.org', path='/foo/', params='', query='a%C3%BFb', fragment='')
unquote(pr.query) # → 'aÿb'
pr = urlparse('?aÿb', base='http://example.org/foo/', encoding='windows-1251')
pr # → ParseResult(scheme='http', netloc='example.org', path='/foo/', params='', query='a%26%23255%3Bb', fragment='')
unquote(pr.query, encoding='windows-1251') # → 'a&#255;b'
html.unescape('a&#255;b') # → 'aÿb'
pr = urlparse('?aÿb', base='http://example.org/foo/', encoding='windows-1252')
pr # → ParseResult(scheme='http', netloc='example.org', path='/foo/', params='', query='a%FFb', fragment='')
unquote(pr.query, encoding='windows-1252') # → 'aÿb'
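The windows-1251 case works this way because ÿ (U+00FF) is not representable in that encoding: per the URL Standard, an unencodable code point is serialized as an HTML numeric character reference and then percent-encoded. The standard library alone can undo both layers:

```python
import html
from urllib.parse import unquote

# '%26%23255%3B' percent-decodes to the ASCII text '&#255;',
# an HTML numeric character reference for U+00FF (ÿ).
s = unquote('a%26%23255%3Bb')  # → 'a&#255;b'
html.unescape(s)               # → 'aÿb'
```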
urlstd uses the standard library logging module to report validation errors.
Change the log level of the urlstd logger if needed:
import logging
logging.getLogger('urlstd').setLevel(logging.ERROR)
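Because urlstd reports through the standard logging machinery, its messages can also be captured programmatically. A minimal sketch using only the stdlib (the logged text below is a stand-in for illustration, not real urlstd output):

```python
import logging

# Collect messages from the 'urlstd' logger in memory instead of
# printing them to the console.
records: list[str] = []

class ListHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        records.append(self.format(record))

logger = logging.getLogger('urlstd')
logger.addHandler(ListHandler())
logger.setLevel(logging.INFO)

# Any validation error urlstd logs would now land in `records`.
logger.info('demo message')  # stands in for a urlstd validation error
records  # ['demo message']
```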
urlstd depends on icupy (Python bindings for ICU).
Configuring environment variables for icupy (ICU):
Windows:
Set the ICU_ROOT environment variable to the root of the ICU installation (default is C:\icu).
For example, if the ICU is located in C:\icu4c:
set ICU_ROOT=C:\icu4c
or in PowerShell:
$env:ICU_ROOT = "C:\icu4c"
To verify the settings using icuinfo (64-bit):
%ICU_ROOT%\bin64\icuinfo
or in PowerShell:
& $env:ICU_ROOT\bin64\icuinfo
Linux/POSIX:
If ICU is installed in a non-standard location, set the PKG_CONFIG_PATH and LD_LIBRARY_PATH environment variables accordingly.
For example, if ICU is installed in /usr/local:
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
To verify settings using pkg-config:
$ pkg-config --cflags --libs icu-uc
-I/usr/local/include -L/usr/local/lib -licuuc -licudata
Installing from PyPI:
pip install urlstd
To run the tests, install tox:
pipx install tox
# or
pip install --user tox
To run tests and generate a report:
git clone https://github.com/miute/urlstd.git
cd urlstd
tox -e wpt
See the results in tests/wpt/report.html.