This package is not meant to be used on its own. Please see langcodes for documentation.
language_data is a supplement to the langcodes module for working with standardized codes for human languages. It stores the more bulky and hard-to-index data about languages, particularly what they are named in various languages.
For example, this stores the data that tells you that the code "en" means "English" in English, or that "francés" is the Spanish (es) name for French (fr).
The functions and test cases for working with this data are in langcodes, because working with the data correctly requires parsing language codes.
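For instance, through the langcodes API (a minimal sketch, assuming both langcodes and language_data are installed):

```python
import langcodes

# "en" names itself "English"
print(langcodes.Language.get('en').display_name())      # 'English'

# "francés" is the Spanish (es) name for French (fr)
print(langcodes.Language.get('fr').display_name('es'))  # 'francés'
```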
The data included in this package is extracted from the Unicode CLDR data package, version 40, plus a few additional language names that fill in gaps in CLDR.
The estimates for "writing population" are often overestimates, as described in the CLDR documentation on territory data. In most cases, they are derived from published data about literacy rates in the places where those languages are spoken. This doesn't take into account that many literate people around the world speak a language that isn't typically written, and write in a different language.
The writing systems of Chinese erase most (but not all) of the distinctions between spoken Chinese languages. You'll see separate estimates of the writing population for Cantonese, Mandarin, Wu, and so on, even though you'll likely consider these all to be zh when written.
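You can see this in the population estimates that langcodes exposes on top of this data (a sketch; the exact numbers depend on the CLDR version):

```python
import langcodes

# Separate estimates for spoken Chinese languages...
print(langcodes.Language.get('yue').speaking_population())  # Cantonese
print(langcodes.Language.get('wuu').speaking_population())  # Wu

# ...even though they largely share one written form, reported under zh
print(langcodes.Language.get('zh').writing_population())
```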
CLDR doesn't have language population data for sign languages. Sign languages end up with a speaking_population() and writing_population() of 0, and I suppose that is literally true, but there's no data from which we could provide a signing_population() method.
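For example (assuming the tag 'ase', American Sign Language):

```python
import langcodes

print(langcodes.Language.get('ase').speaking_population())  # 0
print(langcodes.Language.get('ase').writing_population())   # 0
```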
language_data has a dependency on the marisa-trie package so that it can load a compact, efficient data structure for looking up language names.
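The name tables aren't meant to be queried directly, but marisa-trie's API gives a sense of the lookup structure involved (the keys and values below are made up for illustration, not language_data's real format):

```python
import marisa_trie

# Hypothetical (language tag, naming language) -> display-name pairs;
# the real key layout inside language_data may differ.
pairs = [
    ('en@en', 'English'.encode('utf-8')),
    ('fr@es', 'francés'.encode('utf-8')),
]
trie = marisa_trie.BytesTrie(pairs)

print(trie['fr@es'][0].decode('utf-8'))  # 'francés'
```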
language_data is usually installed as a dependency of langcodes, and doesn't make much sense without it. You can pip install language_data anyway if you want.
To install the language_data package in editable mode, run poetry install in the package root. (This is the equivalent of pip install -e ., which will hopefully become compatible again soon via PEP 660.)
To rebuild the data files:

- Run git submodule update --init
- Copy supplemental/languageInfo.xml and supplemental/supplementalData.xml into language_data/data
- Run cd language_data && ../.venv/bin/python build_data.py