Glin Profanity (Python)

ML-Powered Profanity Detection for the Modern Web

Glin-Profanity is a lightweight and efficient Python package designed to detect and filter profane language in text inputs across multiple languages. The current release is 3.0.0 on PyPI, under the MIT license.

Installation

pip install glin-profanity

Quick Start

from glin_profanity import Filter

# Basic usage
filter = Filter()

# Quick check
if filter.is_profane("This is a damn example"):
    print("Profanity detected!")

# Detailed results
result = filter.check_profanity("This is a damn example")
print(result["profane_words"])       # ['damn']
print(result["contains_profanity"])  # True

Configuration

from glin_profanity import Filter

filter = Filter({
    "languages": ["english", "spanish"],
    "case_sensitive": False,
    "word_boundaries": True,
    "replace_with": "***",
    "severity_levels": True,
    "custom_words": ["badword"],
    "ignore_words": ["exception"],
    "allow_obfuscated_match": True,
    "fuzzy_tolerance_level": 0.8,
})

result = filter.check_profanity("bad content here")
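
To censor matches instead of only reporting them, replace_with can be combined with check_profanity. A minimal sketch, assuming replace_with populates the processed_text field described under Return Type below:

from glin_profanity import Filter

# Replace matched words with a mask string
censor = Filter({"replace_with": "***"})
result = censor.check_profanity("This is a damn example")
print(result["contains_profanity"])  # True
print(result["processed_text"])      # e.g. "This is a *** example"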

Features

Feature             Description
Multi-language      23 languages supported
Context-aware       Reduces false positives
Configurable        Custom word lists, severity levels
High performance    Optimized for speed
TypeScript parity   Same API as the JS package

API Reference

Filter Class

class Filter:
    def __init__(self, config: Optional[FilterConfig] = None)
    def is_profane(self, text: str) -> bool
    def check_profanity(self, text: str) -> CheckProfanityResult
    def matches(self, word: str) -> bool
    def check_profanity_with_min_severity(self, text: str, min_severity: SeverityLevel) -> dict
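
As a small usage sketch (assuming matches tests a single word against the active word lists, as its signature suggests):

from glin_profanity import Filter

f = Filter()
print(f.matches("damn"))   # True for a listed word
print(f.matches("hello"))  # False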

Return Type

{
    "contains_profanity": bool,
    "profane_words": List[str],
    "processed_text": Optional[str],      # If replace_with is set
    "severity_map": Optional[Dict],       # If severity_levels is True
    "matches": Optional[List[Match]],
    "context_score": Optional[float],
    "reason": Optional[str]
}
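
Several of these fields are optional and only populated under certain configurations, so reading them defensively is safest. A short sketch, assuming the result behaves like a plain dict at runtime:

from glin_profanity import Filter

f = Filter({"severity_levels": True})
result = f.check_profanity("This is a damn example")

for word in result["profane_words"]:
    # severity_map is only present when severity_levels is enabled
    severity = (result.get("severity_map") or {}).get(word)
    print(word, severity)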

SeverityLevel

SeverityLevel.EXACT  # Exact word match
SeverityLevel.FUZZY  # Fuzzy/approximate match
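
These levels pair with check_profanity_with_min_severity from the API reference above. A hedged sketch, assuming EXACT ranks above FUZZY so that fuzzy hits are filtered out:

from glin_profanity import Filter, SeverityLevel

f = Filter({"severity_levels": True})
# Report only exact word matches; skip fuzzy/approximate hits
result = f.check_profanity_with_min_severity(
    "This is a damn example", SeverityLevel.EXACT
)
print(result["contains_profanity"])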

Supported Languages

23 languages: Arabic, Chinese, Czech, Danish, Dutch, English, Esperanto, Finnish, French, German, Hindi, Hungarian, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish
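
Checks can be restricted to any subset of these via the languages option shown under Configuration. A minimal sketch, assuming language names are passed in lowercase as in that example:

from glin_profanity import Filter

# Load only the English and French word lists
f = Filter({"languages": ["english", "french"]})
result = f.check_profanity("texte à vérifier")  # French text to check
print(result["contains_profanity"])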

Documentation

Resource            Link
Getting Started     docs/getting-started.md
API Reference       docs/api-reference.md
Advanced Features   docs/advanced-features.md
Main README         README.md

Development

# Clone and setup
git clone https://github.com/GLINCKER/glin-profanity
cd glin-profanity/packages/py
pip install -e ".[dev]"

# Testing
pytest
pytest --cov=glin_profanity

# Code quality
black glin_profanity tests
isort glin_profanity tests
mypy glin_profanity
ruff check glin_profanity tests

License

MIT License - see LICENSE

Built by GLINCKER
