glin-profanity
Glin-Profanity is a lightweight and efficient Python package designed to detect and filter profane language in text inputs across multiple languages.
ML-Powered Profanity Detection for the Modern Web
```bash
pip install glin-profanity
```
```python
from glin_profanity import Filter

# Basic usage
filter = Filter()

# Quick check
if filter.is_profane("This is a damn example"):
    print("Profanity detected!")

# Detailed results
result = filter.check_profanity("This is a damn example")
print(result["profane_words"])       # ['damn']
print(result["contains_profanity"])  # True
```
```python
from glin_profanity import Filter, SeverityLevel

filter = Filter({
    "languages": ["english", "spanish"],
    "case_sensitive": False,
    "word_boundaries": True,
    "replace_with": "***",
    "severity_levels": True,
    "custom_words": ["badword"],
    "ignore_words": ["exception"],
    "allow_obfuscated_match": True,
    "fuzzy_tolerance_level": 0.8,
})

result = filter.check_profanity("bad content here")
```
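The `allow_obfuscated_match` and `fuzzy_tolerance_level` options govern approximate matching against obfuscated spellings. As an illustrative sketch of what a 0.8 similarity threshold means (this is not the package's internal algorithm), the standard library's `difflib` can score string similarity:

```python
from difflib import SequenceMatcher

def fuzzy_match(word: str, target: str, tolerance: float = 0.8) -> bool:
    """Return True when `word` is at least `tolerance` similar to `target`."""
    return SequenceMatcher(None, word.lower(), target.lower()).ratio() >= tolerance

print(fuzzy_match("profan1ty", "profanity"))  # True: ratio ~0.89 clears 0.8
print(fuzzy_match("pr0f4n1ty", "profanity"))  # False: heavier obfuscation, ratio ~0.67
```

Lowering the tolerance catches more obfuscated variants at the cost of more false positives.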
| Feature | Description |
|---|---|
| Multi-language | 23 languages supported |
| Context-aware | Reduces false positives |
| Configurable | Custom word lists, severity levels |
| High performance | Optimized for speed |
| TypeScript parity | Same API as JS package |
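The `word_boundaries` option is what keeps substrings inside innocent words from triggering matches (the classic "Scunthorpe problem"). A minimal sketch of the idea using `re` word boundaries, not the package's actual matching logic:

```python
import re

def contains_word(text: str, word: str, word_boundaries: bool = True) -> bool:
    """Search for `word` in `text`, optionally only at word boundaries."""
    pattern = rf"\b{re.escape(word)}\b" if word_boundaries else re.escape(word)
    return re.search(pattern, text, re.IGNORECASE) is not None

print(contains_word("a class assignment", "ass"))                         # False
print(contains_word("a class assignment", "ass", word_boundaries=False))  # True
```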
```python
class Filter:
    def __init__(self, config: Optional[FilterConfig] = None): ...
    def is_profane(self, text: str) -> bool: ...
    def check_profanity(self, text: str) -> CheckProfanityResult: ...
    def matches(self, word: str) -> bool: ...
    def check_profanity_with_min_severity(
        self, text: str, min_severity: SeverityLevel
    ) -> dict: ...
```
```python
{
    "contains_profanity": bool,
    "profane_words": List[str],
    "processed_text": Optional[str],   # if replace_with is set
    "severity_map": Optional[Dict],    # if severity_levels is True
    "matches": Optional[List[Match]],
    "context_score": Optional[float],
    "reason": Optional[str],
}
```
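Since several result fields are only populated under certain configurations, a guarded access pattern avoids `None` surprises. The result literal below is hypothetical, shaped like the output above for illustration only:

```python
# Hypothetical check_profanity result, as if replace_with="***" were configured
result = {
    "contains_profanity": True,
    "profane_words": ["damn"],
    "processed_text": "This is a *** example",
    "severity_map": None,
}

if result["contains_profanity"]:
    # Fall back to an empty string when replace_with was not configured
    cleaned = result.get("processed_text") or ""
    print(f"Found {len(result['profane_words'])} word(s); cleaned: {cleaned!r}")
```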
```python
SeverityLevel.EXACT  # Exact word match
SeverityLevel.FUZZY  # Fuzzy/approximate match
```
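`check_profanity_with_min_severity` presumably drops matches below a severity threshold. A self-contained sketch of that filtering step, assuming EXACT outranks FUZZY (my assumption; the ordering is not documented here):

```python
from enum import IntEnum

class SeverityLevel(IntEnum):
    FUZZY = 1  # approximate/obfuscated match
    EXACT = 2  # literal dictionary match (assumed to outrank FUZZY)

def filter_by_min_severity(severity_map: dict, min_severity: SeverityLevel) -> dict:
    """Keep only words whose severity meets the threshold."""
    return {word: sev for word, sev in severity_map.items() if sev >= min_severity}

hits = {"damn": SeverityLevel.EXACT, "d4mn": SeverityLevel.FUZZY}
print(list(filter_by_min_severity(hits, SeverityLevel.EXACT)))  # ['damn']
```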
23 languages: Arabic, Chinese, Czech, Danish, Dutch, English, Esperanto, Finnish, French, German, Hindi, Hungarian, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish
| Resource | Link |
|---|---|
| Getting Started | docs/getting-started.md |
| API Reference | docs/api-reference.md |
| Advanced Features | docs/advanced-features.md |
| Main README | README.md |
```bash
# Clone and set up
git clone https://github.com/GLINCKER/glin-profanity
cd glin-profanity/packages/py
pip install -e ".[dev]"

# Testing
pytest
pytest --cov=glin_profanity

# Code quality
black glin_profanity tests
isort glin_profanity tests
mypy glin_profanity
ruff check glin_profanity tests
```
MIT License - see LICENSE