A fast, robust library to check for offensive language in strings. Drop-in replacement for "profanity-check".
Alt profanity check is a drop-in replacement for the not-so-well-maintained profanity-check library (https://github.com/vzhou842/profanity-check): a fast, robust Python library to check for profanity or offensive language in strings. Read more about how and why profanity-check was built in this blog post.
Our aim is to follow scikit-learn's (our main dependency) versions and publish models trained with the same version number; for example, alt-profanity-check version 1.2.3.4 should be trained with version 1.2.3.4 of the scikit-learn library.
For joblib, our other major dependency, we use the latest version that was available when the models were trained.
Last but not least, we aim to clean up the codebase a bit and maybe introduce some features or datasets.
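As a concrete illustration of the version-matching scheme described above (1.2.3.4 is the hypothetical example from that paragraph, not a real release), a matched install could look like this:

```bash
# Hypothetical pin mirroring the 1.2.3.4 example above: keeping both packages on
# the same version means the bundled model matches the installed scikit-learn.
pip install "alt-profanity-check==1.2.3.4" "scikit-learn==1.2.3.4"
```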
| Learn Python from the Maintainer of alt-profanity-check 🎓🧑‍💻⌨️ |
|---|
| I am teaching Python through Mentorcruise, aimed at both beginners and seasoned developers who want to get to the next level in their learning journey: https://mentorcruise.com/mentor/dimitriosmistriotis/. Please mention that you found me through this repository. |
See CHANGELOG.md
profanity-check uses a linear SVM model trained on 200k human-labeled samples of clean and profane text strings. Its model is simple but surprisingly effective, meaning profanity-check is both robust and extremely performant.
Many profanity detection libraries use a hard-coded list of bad words to detect and filter profanity. For example, profanity uses this wordlist, and even better-profanity still uses a wordlist. There are obviously glaring issues with this approach, and, while they might be performant, these libraries are not accurate at all.
A simple example for which profanity-check does better is a phrase whose swear word is missing from a given wordlist: profanity considers such a phrase clean simply because the exact word does not appear in its list. Other libraries like profanity-filter use more sophisticated methods that are much more accurate, but at the cost of performance. A benchmark (performed December 2018 on a new 2018 MacBook Pro) using a Kaggle dataset of Wikipedia comments yielded roughly the following results:
| Package | 1 Prediction (ms) | 10 Predictions (ms) | 100 Predictions (ms) |
|---|---|---|---|
| profanity-check | 0.2 | 0.5 | 3.5 |
| profanity-filter | 60 | 1200 | 13000 |
| profanity | 0.3 | 1.2 | 24 |
profanity-check is anywhere from 300 to 4,000 times faster than profanity-filter in this benchmark!
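If you want to reproduce this kind of measurement yourself, a minimal timing sketch with Python's standard timeit module and the predict() function documented below might look like this; the sentences and resulting numbers are purely illustrative, not the figures from the original December 2018 benchmark.

```python
# Rough timing sketch (illustrative only; not the original benchmark harness).
from timeit import timeit

from profanity_check import predict

texts_1 = ["an example sentence"]          # placeholder input
texts_100 = ["an example sentence"] * 100  # batch of 100 strings

# Average wall-clock time per call, in milliseconds, over 100 runs each.
ms_1 = timeit(lambda: predict(texts_1), number=100) / 100 * 1000
ms_100 = timeit(lambda: predict(texts_100), number=100) / 100 * 1000

print(f"1 prediction:    {ms_1:.2f} ms")
print(f"100 predictions: {ms_100:.2f} ms")
```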
This table speaks for itself:
| Package | Test Accuracy | Balanced Test Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|---|
| profanity-check | 95.0% | 93.0% | 86.1% | 89.6% | 0.88 |
| profanity-filter | 91.8% | 83.6% | 85.4% | 70.2% | 0.77 |
| profanity | 85.6% | 65.1% | 91.7% | 30.8% | 0.46 |
See the How section below for more details on the dataset used for these results.
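As a sketch of how metrics like the ones in this table can be computed, scikit-learn's metrics module covers all five columns; test_texts and test_labels below are hypothetical placeholders, not the evaluation data used for the results above.

```python
# Evaluation sketch: test_texts / test_labels are placeholder assumptions.
from sklearn.metrics import (
    accuracy_score,
    balanced_accuracy_score,
    f1_score,
    precision_score,
    recall_score,
)

from profanity_check import predict

test_texts = ["a perfectly polite sentence", "go to hell, you scum"]  # placeholders
test_labels = [0, 1]                                                  # 1 = offensive

preds = predict(test_texts)

print("Accuracy:         ", accuracy_score(test_labels, preds))
print("Balanced accuracy:", balanced_accuracy_score(test_labels, preds))
print("Precision:        ", precision_score(test_labels, preds))
print("Recall:           ", recall_score(test_labels, preds))
print("F1 score:         ", f1_score(test_labels, preds))
```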
pip install alt-profanity-check
Reference: https://scikit-learn.org/stable/install.html
Scikit-learn 0.20 was the last version to support Python 2.7 and Python 3.4. Scikit-learn 0.21 supported Python 3.5-3.7. Scikit-learn 0.22 supported Python 3.5-3.8. Scikit-learn 0.23-0.24 required Python 3.6 or newer. Scikit-learn 1.0 supported Python 3.7-3.10. Scikit-learn 1.1, 1.2 and 1.3 support Python 3.8-3.12. Scikit-learn 1.4 requires Python 3.9 or newer.
That said, the 1.4.* branch seems to have worked with Python 3.8; with that in mind, the last version of this library that supports Python 3.8 is 1.4.2.
From 1.1.2 onwards, Python 3.7 is not supported, hence if you are using 3.7, pin alt-profanity-check to 1.0.2.1.
Following scikit-learn, Python 3.6 is not supported after its 1.0 version; if you are using 3.6, pin alt-profanity-check to 0.24.2.
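If a project has to straddle several Python versions, one way to express these pins is a requirements file with standard pip environment markers; this is only a sketch, with the version pairs taken from the compatibility notes above.

```text
# requirements.txt sketch; pins taken from the compatibility notes above
alt-profanity-check==0.24.2; python_version == "3.6"
alt-profanity-check==1.0.2.1; python_version == "3.7"
alt-profanity-check==1.4.2; python_version == "3.8"
alt-profanity-check; python_version >= "3.9"
```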
You can test from the command line:
profanity_check "Check something" "Check something else"
from profanity_check import predict, predict_prob
predict(['predict() takes an array and returns a 1 for each string if it is offensive, else 0.'])
# [0]
predict(['fuck you'])
# [1]
predict_prob(['predict_prob() takes an array and returns the probability each string is offensive'])
# [0.08686173]
predict_prob(['go to hell, you scum'])
# [0.7618861]
Note that both predict() and predict_prob() return numpy arrays.
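Because the outputs are numpy arrays, follow-up logic can be vectorized directly on them. For example, here is a sketch that keeps only comments the model considers unlikely to be offensive; the 0.9 threshold is an arbitrary illustrative choice, not a recommendation from the library.

```python
import numpy as np

from profanity_check import predict_prob

comments = ["first comment", "second comment", "third comment"]  # placeholder data

probs = predict_prob(comments)   # numpy array of offensiveness probabilities
mask = probs < 0.9               # boolean numpy array; 0.9 is an arbitrary cutoff
clean_comments = np.array(comments)[mask]

print(clean_comments)
```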
Special thanks to the authors of the datasets used in this project. profanity-check, and hence also alt-profanity-check, is trained on a combined dataset from 2 sources:
profanity-check relies heavily on the excellent scikit-learn library. It's mostly powered by the scikit-learn classes CountVectorizer, LinearSVC, and CalibratedClassifierCV. It uses a Bag-of-words model to vectorize input strings before feeding them to a linear classifier.
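To make that concrete, here is a minimal sketch of a pipeline built from those three classes. The tiny inline dataset is a placeholder rather than the 200k-sample training corpus, and the real model's settings (n-grams, regularization, calibration) may well differ.

```python
# Minimal sketch of the described approach: bag-of-words features feeding a
# calibrated linear SVM. Placeholder data; not the actual training setup.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

texts = [
    "have a nice day",
    "you are wonderful",
    "what a lovely morning",
    "go to hell, you scum",
    "f off",
    "you are an idiot",
]
labels = [0, 0, 0, 1, 1, 1]  # 1 = offensive

vectorizer = CountVectorizer()        # bag-of-words: token counts per string
X = vectorizer.fit_transform(texts)

# LinearSVC has no predict_proba, so it is wrapped in CalibratedClassifierCV to
# obtain calibrated probabilities (this is what makes predict_prob possible).
# cv=3 only because the placeholder dataset is tiny.
clf = CalibratedClassifierCV(LinearSVC(), cv=3)
clf.fit(X, labels)

new = vectorizer.transform(["have a wonderful day"])
print(clf.predict(new))        # array with 0 (clean) or 1 (offensive)
print(clf.predict_proba(new))  # [[P(clean), P(offensive)]]
```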
One simplified way you could think about why profanity-check
works is this:
during the training process, the model learns which words are "bad" and how "bad" they are
because those words will appear more often in offensive texts. Thus, it's as if the training
process is picking the "bad" words out of all possible words and using those to make future
predictions. This is better than just relying on arbitrary word blacklists chosen by humans!
This library is far from perfect. For example, it has a hard time picking up on less common variants of swear words like "f4ck you" or "you b1tch" because they don't appear often enough in the training corpus. Never treat any prediction from this library as unquestionable truth, because it does and will make mistakes. Instead, use this library as a heuristic.
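One way to act on that advice is to normalize common character substitutions before scoring and to treat the returned probability as a signal rather than a verdict. The sketch below is purely illustrative; neither the substitution map nor this preprocessing step is part of the library.

```python
# Illustrative heuristic only: a naive character-substitution normalizer applied
# before scoring. The mapping is an ad-hoc example, not part of the library.
from profanity_check import predict_prob

LEET_MAP = str.maketrans({"4": "a", "1": "i", "3": "e", "0": "o", "$": "s"})

def offensiveness(text: str) -> float:
    """Score both the raw and a normalized variant, keep the higher probability."""
    normalized = text.translate(LEET_MAP)
    return float(max(predict_prob([text, normalized])))

# "you b1tch" normalizes to "you bitch", which the model is more likely to flag.
print(offensiveness("you b1tch"))
```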
pip install -r development_requirements.txt
With the above in place:
cd profanity_check/data
python train_model.py
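The actual training code lives in profanity_check/data/train_model.py. As a rough sketch of the general shape such a script takes (the file path, column names, and settings below are assumptions for illustration, not necessarily the repository's real ones), training boils down to fitting the vectorizer and the calibrated classifier and persisting both with joblib:

```python
# Sketch only: not the repository's train_model.py. The path, column names, and
# settings are placeholder assumptions.
import pandas as pd
from joblib import dump
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

data = pd.read_csv("training_data.csv")        # assumed CSV of labeled strings
texts, labels = data["text"], data["is_offensive"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

model = CalibratedClassifierCV(LinearSVC())
model.fit(X, labels)

# Persist both artifacts so predict()/predict_prob() can load them at runtime.
dump(vectorizer, "vectorizer.joblib")
dump(model, "model.joblib")
```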
We are currently trying to automate this using GitHub Actions; see .github/workflows/package_release_dry_run.yml.
Setup:
pip install -r requirements_for_uploading.txt
which installs twine.
New version:
With x.y.z as the version to be uploaded:
First tag:
git tag -a vx.y.z -m "Version x.y.z"
git push --tags
Then upload:
python setup.py sdist
twine upload dist/alt-profanity-check-x.y.z.tar.gz