
language-data
This package is not meant to be used on its own. Please see langcodes for documentation.
language_data is a supplement to the langcodes module, for working with standardized codes for human languages. It stores the more bulky and hard-to-index data about languages, particularly what they are named in various languages.
For example, this stores the data that tells you that the code "en" means "English" in English, or that "francés" is the Spanish (es) name for French (fr).
The functions and test cases for working with this data are in langcodes, because working with the data correctly requires parsing language codes.
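As a minimal sketch (assuming both langcodes and language_data are installed, e.g. via pip), here is how the names stored in this package surface through the langcodes API:

```python
# Minimal sketch: language_data supplies the name tables that
# langcodes queries. Assumes `pip install langcodes language_data`.
import langcodes

# "en" means "English" in English...
print(langcodes.Language.get("en").display_name("en"))  # English

# ...and "francés" is the Spanish name for French.
print(langcodes.Language.get("fr").display_name("es"))  # francés
```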
The data included in this package covers the names of languages (what each language is called in many other languages) and estimates of their speaking and writing populations. These are all extracted from the Unicode CLDR data package, version 40, plus a few additional language names that fill in gaps in CLDR.
The estimates for "writing population" are often overestimates, as described in the CLDR documentation on territory data. In most cases, they are derived from published data about literacy rates in the places where those languages are spoken. This doesn't take into account that many literate people around the world speak a language that isn't typically written, and write in a different language.
The writing systems of Chinese erase most (but not all) of the distinctions between spoken Chinese languages. You'll see separate estimates of the writing population for Cantonese, Mandarin, Wu, and so on, even though you'll likely consider these all to be zh when written.
CLDR doesn't have language population data for sign languages. Sign languages end up with a speaking_population() and writing_population() of 0, and I suppose that is literally true, but there's no data from which we could provide a signing_population() method.
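As a rough illustration, the population estimates discussed above are exposed through langcodes; the exact figures depend on the CLDR data bundled with your installed language_data:

```python
# Sketch of querying the population estimates described above.
# Exact figures vary with the CLDR version shipped in language_data.
import langcodes

yue = langcodes.Language.get("yue")   # Cantonese
print(yue.speaking_population())      # estimated speakers
print(yue.writing_population())       # counted separately from Mandarin

asl = langcodes.Language.get("ase")   # American Sign Language
print(asl.speaking_population())      # 0: CLDR has no sign language data
```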
language_data has a dependency on the marisa-trie package so that it can load a compact, efficient data structure for looking up language names.
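For a sense of why a trie is a good fit, here is a toy marisa-trie example; the key scheme below is invented for illustration and is not language_data's actual storage layout:

```python
# Toy illustration of a marisa-trie lookup. The "lang@name-language"
# key scheme here is hypothetical, not language_data's real layout.
import marisa_trie

names = marisa_trie.BytesTrie([
    ("fr@en", b"French"),
    ("fr@es", "francés".encode("utf-8")),
])

# BytesTrie maps unicode keys to lists of bytes values,
# stored in a compact, memory-mappable structure.
print(names["fr@es"][0].decode("utf-8"))  # francés
```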
language_data is usually installed as a dependency of langcodes, and doesn't make much sense without it. You can pip install language_data anyway if you want.
To install the language_data package in editable mode, run poetry install in the package root. (This is the equivalent of pip install -e ., which will hopefully become compatible again soon via PEP 660.)
To rebuild the data files:

1. Run git submodule update --init.
2. Copy supplemental/languageInfo.xml and supplemental/supplementalData.xml into language_data/data.
3. Run cd language_data && ../.venv/bin/python build_data.py.